Updates from: 01/11/2023 02:10:59
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md
Title: Best practices for Azure AD B2C
description: Recommendations and best practices to consider when working with Azure Active Directory B2C (Azure AD B2C). --++ Previously updated : 12/01/2022 Last updated : 12/29/2022
Manage your Azure AD B2C environment.
| Use version control for your custom policies | Consider using GitHub, Azure Repos, or another cloud-based version control system for your Azure AD B2C custom policies. |
| Use the Microsoft Graph API to automate the management of your B2C tenants | Microsoft Graph APIs (see the sketch after this table):<br/>Manage [Identity Experience Framework](/graph/api/resources/trustframeworkpolicy?preserve-view=true&view=graph-rest-beta) (custom policies)<br/>[Keys](/graph/api/resources/trustframeworkkeyset?preserve-view=true&view=graph-rest-beta)<br/>[User Flows](/graph/api/resources/identityuserflow?preserve-view=true&view=graph-rest-beta) |
| Integrate with Azure DevOps | A [CI/CD pipeline](deploy-custom-policies-devops.md) makes moving code between different environments easy and always ensures production readiness. |
+| Custom policy deployment | Azure AD B2C relies on caching to deliver performance to your end users. When you deploy a custom policy by any method, expect a delay of up to **30 minutes** for your users to see the changes. As a result of this behavior, consider the following practices when you deploy your custom policies: <br> - If you're deploying to a development environment, set the `DeploymentMode` attribute to `Development` in your custom policy file's `<TrustFrameworkPolicy>` element. <br> - Deploy your updated policy files to a production environment when traffic in your app is low. <br> - When you deploy to a production environment to update existing policy files, upload the updated files with new name(s), and then update your app reference to the new name(s). You can then remove the old policy files.<br> - You can set the `DeploymentMode` to `Development` in a production environment to bypass the caching behavior. However, we don't recommend this practice. If you [Collect Azure AD B2C logs with Application Insights](troubleshoot-with-application-insights.md), all claims sent to and from identity providers are collected, which is a security and performance risk. |
| Integrate with Azure Monitor | [Audit log events](view-audit-logs.md) are only retained for seven days. [Integrate with Azure Monitor](azure-monitor.md) to retain the logs for long-term use, or integrate with third-party security information and event management (SIEM) tools to gain insights into your environment. |
| Set up active alerting and monitoring | [Track user behavior](./analytics-with-application-insights.md) in Azure AD B2C using Application Insights. |
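To illustrate the Graph automation row above, here's a minimal sketch (not from the source article) of updating an existing custom policy's content with `az rest`. The tenant name, the policy ID `B2C_1A_SIGNUP_SIGNIN`, and the local file name are placeholders, and it assumes the signed-in identity is allowed to manage trust framework policies in the B2C tenant:

```azurecli
# Sketch only: push an updated custom policy to the B2C tenant through Microsoft Graph (beta).
# The tenant, policy ID, and file name below are placeholders.
az login --tenant contoso.onmicrosoft.com --allow-no-subscriptions

az rest --method PUT \
  --url "https://graph.microsoft.com/beta/trustFramework/policies/B2C_1A_SIGNUP_SIGNIN/\$value" \
  --headers "Content-Type=application/xml" \
  --body @SignUpOrSignin.xml
```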
active-directory-b2c Configure Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-tokens.md
The following values are set in the previous example:
- **token_lifetime_secs** - Access token lifetimes (seconds). The default is 3,600 (1 hour). The minimum is 300 (5 minutes). The maximum is 86,400 (24 hours). - **id_token_lifetime_secs** - ID token lifetimes (seconds). The default is 3,600 (1 hour). The minimum is 300 (5 minutes). The maximum is 86,400 (24 hours). -- **refresh_token_lifetime_secs** Refresh token lifetimes (seconds). The default is 120,9600 (14 days). The minimum is 86,400 (24 hours). The maximum is 7,776,000 (90 days).
+- **refresh_token_lifetime_secs** - Refresh token lifetimes (seconds). The default is 1,209,600 (14 days). The minimum is 86,400 (24 hours). The maximum is 7,776,000 (90 days).
- **rolling_refresh_token_lifetime_secs** - Refresh token sliding window lifetime (seconds). The default is 7,776,000 (90 days). The minimum is 86,400 (24 hours). The maximum is 31,536,000 (365 days). If you don't want to enforce a sliding window lifetime, set the value of `allow_infinite_rolling_refresh_token` to `true`. - **allow_infinite_rolling_refresh_token** - Refresh token sliding window lifetime never expires.
When using the [OAuth 2.0 authorization code flow](authorization-code-flow.md),
## Next steps - Learn more about how to [request access tokens](access-tokens.md).-- Learn how to build [Resilience through developer best practices](../active-directory/fundamentals/resilience-b2c-developer-best-practices.md?bc=%2fazure%2factive-directory-b2c%2fbread%2ftoc.json&toc=%2fazure%2factive-directory-b2c%2fTOC.json).
+- Learn how to build [Resilience through developer best practices](../active-directory/fundamentals/resilience-b2c-developer-best-practices.md?bc=%2fazure%2factive-directory-b2c%2fbread%2ftoc.json&toc=%2fazure%2factive-directory-b2c%2fTOC.json).
active-directory Concept Authentication Methods Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods-manage.md
Previously updated : 01/07/2023 Last updated : 01/10/2023
Tenants are set to either Pre-migration or Migration in Progress by default, dep
> > In the future, both of these features will be integrated with the Authentication methods policy.
+## Known issues
+Some customers may see the control to enable Voice call grayed out due to a licensing requirement, despite having a premium license. This is a known issue that we are actively working to fix.
+ ## Next steps - [How to migrate MFA and SSPR policy settings to the Authentication methods policy](how-to-authentication-methods-manage.md)
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Some OATH TOTP hardware tokens are programmable, meaning they don't come with a
## OATH hardware tokens (Preview)
-Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice.
+Azure AD supports the use of OATH-TOTP SHA-1 tokens that refresh codes every 30 or 60 seconds. Customers can purchase these tokens from the vendor of their choice. Hardware OATH tokens are available for users with an Azure AD Premium P1 or P2 license.
OATH TOTP hardware tokens typically come with a secret key, or seed, pre-programmed in the token. These keys must be input into Azure AD as described in the following steps. Secret keys are limited to 128 characters, which may not be compatible with all tokens. The secret key can only contain the characters *a-z* or *A-Z* and digits *2-7*, and must be encoded in *Base32*.
active-directory Reference Aadsts Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-aadsts-error-codes.md
The `error` field has several possible values - review the protocol documentatio
| AADSTS50194 | Application '{appId}'({appName}) isn't configured as a multi-tenant application. Usage of the /common endpoint isn't supported for such applications created after '{time}'. Use a tenant-specific endpoint or configure the application to be multi-tenant. | | AADSTS50196 | LoopDetected - A client loop has been detected. Check the app's logic to ensure that token caching is implemented, and that error conditions are handled correctly. The app has made too many of the same request in too short a period, indicating that it is in a faulty state or is abusively requesting tokens. | | AADSTS50197 | ConflictingIdentities - The user could not be found. Try signing in again. |
-| AADSTS50199 | CmsiInterrupt - For security reasons, user confirmation is required for this request. Because this is an "interaction_required" error, the client should do interactive auth. This occurs because a system webview has been used to request a token for a native application - the user must be prompted to ask if this was actually the app they meant to sign into. To avoid this prompt, the redirect URI should be part of the following safe list: <br />http://<br />https://<br />msauth://(iOS only)<br />msauthv2://(iOS only)<br />chrome-extension:// (desktop Chrome browser only) |
+| AADSTS50199 | CmsiInterrupt - For security reasons, user confirmation is required for this request. Because this is an "interaction_required" error, the client should do interactive auth. This occurs because a system webview has been used to request a token for a native application - the user must be prompted to ask if this was actually the app they meant to sign into. To avoid this prompt, the redirect URI should be part of the following safe list: <br />http://<br />https://<br />chrome-extension:// (desktop Chrome browser only) |
| AADSTS51000 | RequiredFeatureNotEnabled - The feature is disabled. | | AADSTS51001 | DomainHintMustbePresent - Domain hint must be present with on-premises security identifier or on-premises UPN. | | AADSTS1000104| XCB2BResourceCloudNotAllowedOnIdentityTenant - Resource cloud {resourceCloud} isn't allowed on identity tenant {identityTenant}. {resourceCloud} - cloud instance which owns the resource. {identityTenant} - is the tenant where signing-in identity is originated from. |
active-directory Concept Azure Ad Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-join.md
Any organization can deploy Azure AD joined devices no matter the size or indust
| | Applicable to all users in an organization | | **Device ownership** | Organization | | **Operating Systems** | All Windows 11 and Windows 10 devices except Home editions |
-| | [Windows Server 2019 Virtual Machines running in Azure](howto-vm-sign-in-azure-ad-windows.md) (Server core isn't supported) |
+| | [Windows Server 2019 and newer Virtual Machines running in Azure](howto-vm-sign-in-azure-ad-windows.md) (Server core isn't supported) |
| **Provisioning** | Self-service: Windows Out of Box Experience (OOBE) or Settings | | | Bulk enrollment | | | Windows Autopilot |
active-directory Hybrid Azuread Join Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/hybrid-azuread-join-manual.md
The **$scp.Keywords** output shows the Azure AD tenant information. Here's an ex
azureADId:72f988bf-86f1-41af-91ab-2d7cd011db47 ```
-If the service connection point doesn't exist, you can create it by running the `Initialize-ADSyncDomainJoinedComputerSync` cmdlet on your Azure AD Connect server. Enterprise admin credentials are required to run this cmdlet.
-
-The `Initialize-ADSyncDomainJoinedComputerSync` cmdlet:
-
-* Creates the service connection point in the Active Directory forest that Azure AD Connect is connected to.
-* Requires you to specify the `AdConnectorAccount` parameter. This account is configured as the Active Directory connector account in Azure AD Connect.
--
-The following script shows an example for using the cmdlet. In this script, `$aadAdminCred = Get-Credential` requires you to type a user name. Provide the user name in the user principal name (UPN) format (`user@example.com`).
-
- ```PowerShell
- Import-Module -Name "C:\Program Files\Microsoft Azure Active Directory Connect\AdPrep\AdSyncPrep.psm1";
-
- $aadAdminCred = Get-Credential;
-
- Initialize-ADSyncDomainJoinedComputerSync -AdConnectorAccount [connector account name] -AzureADCredentials $aadAdminCred;
- ```
-
-The `Initialize-ADSyncDomainJoinedComputerSync` cmdlet:
-
-* Uses the Active Directory PowerShell module and Active Directory Domain Services (AD DS) tools. These tools rely on Active Directory Web Services running on a domain controller. Active Directory Web Services is supported on domain controllers running Windows Server 2008 R2 and later.
-* Is only supported by the MSOnline PowerShell module version 1.1.166.0. To download this module, use [this link](https://www.powershellgallery.com/packages/MSOnline/1.1.166.0).
-* If the AD DS tools aren't installed, `Initialize-ADSyncDomainJoinedComputerSync` will fail. You can install the AD DS tools through Server Manager under **Features** > **Remote Server Administration Tools** > **Role Administration Tools**.
- ### Set up issuance of claims In a federated Azure AD configuration, devices rely on AD FS or an on-premises federation service from a Microsoft partner to authenticate to Azure AD. Devices authenticate to get an access token to register against the Azure Active Directory Device Registration Service (Azure DRS).
active-directory Auth Header Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-header-based.md
Previously updated : 08/19/2022 Last updated : 01/10/2023
active-directory Auth Ldap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ldap.md
Previously updated : 08/19/2022 Last updated : 01/10/2023
active-directory Auth Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oauth2.md
Previously updated : 08/19/2022 Last updated : 01/10/2023
active-directory Auth Oidc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-oidc.md
Previously updated : 08/19/2022 Last updated : 01/10/2023
active-directory Auth Password Based Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-password-based-sso.md
Previously updated : 08/19/2022 Last updated : 01/10/2023
active-directory Auth Radius https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-radius.md
Previously updated : 08/19/2022 Last updated : 01/10/2023
active-directory Auth Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-saml.md
Previously updated : 08/19/2022 Last updated : 01/10/2023
active-directory Auth Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ssh.md
Previously updated : 06/22/2022 Last updated : 01/10/2023
active-directory Auth Sync Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-sync-overview.md
Previously updated : 8/19/2022 Last updated : 1/10/2023
active-directory Sync Scim https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/sync-scim.md
Previously updated : 08/19/2022 Last updated : 01/10/2023
active-directory Access Reviews Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/access-reviews-overview.md
na Previously updated : 12/27/2022 Last updated : 1/10/2023
Here are some example license scenarios to help you determine the number of lice
\* Azure AD External Identities (guest user) pricing is based on monthly active users (MAU), which is the count of unique users with authentication activity within a calendar month. This model replaces the 1:5 ratio billing model, which allowed up to five guest users for each Azure AD Premium license in your tenant. When your tenant is linked to a subscription and you use External Identities features to collaborate with guest users, you'll be automatically billed using the MAU-based billing model. For more information, see [Billing model for Azure AD External Identities](../external-identities/external-identities-pricing.md).
+> [!NOTE]
+> Access Reviews for Service Principals requires an Entra Workload Identities Premium plan in addition to an Azure AD Premium P2 license. You can view and acquire licenses on the [Workload Identities blade](https://portal.azure.com/#view/Microsoft_Azure_ManagedServiceIdentity/WorkloadIdentitiesBlade) in the Azure portal.
+ ## Next steps - [Prepare for an access review of users' access to an application](access-reviews-application-preparation.md)
active-directory Concept Usage Insights Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-usage-insights-report.md
Previously updated : 11/23/2022 Last updated : 01/10/2023
There are currently three reports available in Azure AD Usage & insights. All th
### Azure AD application activity (preview)
-The **Azure AD application activity (preview)** report shows the list of applications with one or more sign-in attempts. Any application activity during the selected date range appears in the report. It's possible that activity for a deleted application may appear in the report, if the activity took place during the selected date range and before the application was deleted. The report allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate.
+The **Azure AD application activity (preview)** report shows the list of applications with one or more sign-in attempts. Any application activity during the selected date range appears in the report. The report allows you to sort by the number of successful sign-ins, failed sign-ins, and the success rate.
+
+It's possible that activity for a deleted application may appear in the report if the activity took place during the selected date range and before the application was deleted. Other scenarios could include a user attempting to sign in to an application that doesn't have an associated service principal. For these scenarios, you may need to review the audit logs or sign-in logs to investigate further.
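As one optional check (a sketch, not part of the article), you can use the Azure CLI to confirm whether an application still has a service principal in the tenant; the app ID below is a placeholder:

```azurecli
# Sketch: returns the service principal for the given app ID, or an error if none exists.
az ad sp show --id 00000000-0000-0000-0000-000000000000 --query "{displayName:displayName, appId:appId}"
```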
Select the **View sign in activity** link for an application to view more details. The sign-in graph per application counts interactive user sign-ins. The details of any sign-in failures appears below the table.
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users in this role can enable, disable, and delete devices in Azure AD and read
## Compliance Administrator
-Users with this role have permissions to manage compliance-related features in the Microsoft Purview compliance portal, Microsoft 365 admin center, Azure, and Office 365 Security & Compliance Center. Assignees can also manage all features within the Exchange admin center and Teams & Skype for Business admin centers and create support tickets for Azure and Microsoft 365. More information is available at [About Microsoft 365 admin roles](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d).
+Users with this role have permissions to manage compliance-related features in the Microsoft Purview compliance portal, Microsoft 365 admin center, Azure, and Office 365 Security & Compliance Center. Assignees can also manage all features within the Exchange admin center and create support tickets for Azure and Microsoft 365. More information is available at [About Microsoft 365 admin roles](https://support.office.com/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d).
In | Can do -- | -
active-directory Agiloft Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/agiloft-tutorial.md
Previously updated : 11/21/2022 Last updated : 01/09/2023 # Tutorial: Azure Active Directory integration with Agiloft Contract Management Suite
Follow these steps to enable Azure AD SSO in the Azure portal.
b. In the **Reply URL** text box, type a URL using the following pattern: `https://<SUBDOMAIN>.agiloft.com:443/gui2/spsamlsso?project=<KB_NAME>`
+ > [!NOTE]
+ > The Identifier value should match the entry in the Agiloft SAML Configuration Entity ID field. That field in Agiloft may need to be updated as follows:
+ > 1. Add https:// to the beginning.
+ > 1. If there are any spaces in the URL, replace each one with an underscore (_).
+ 5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type the URL:
active-directory Issuance Request Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/issuance-request-api.md
The Request Service REST API issuance request supports the following HTTP method
The Request Service REST API issuance request requires the following HTTP headers:
-| Method |Value |
+| Name |Value |
||| |`Authorization`| Attach the access token as a bearer token to the authorization header in an HTTP request. For example, `Authorization: Bearer <token>`.|
-|`Content-Type`| `Application/json`|
+|`Content-Type`| `application/json`|
Construct an HTTP POST request to the Request Service REST API.
The following HTTP request demonstrates a request to the Request Service REST AP
```http POST https://verifiedid.did.msidentity.com/v1.0/verifiableCredentials/createIssuanceRequest Content-Type: application/json
-Authorization: Bearer <token>
+Authorization: Bearer <token>
{
- "includeQRCode": true,
- "callback":ΓÇ»{
- "url":ΓÇ»"https://wwww.contoso.com/vc/callback",
- "state": "Aaaabbbb11112222",
- "headers":ΓÇ»{
- "api-key":ΓÇ»"an-api-key-can-go-here"
-   }
- },
- ...
+ "includeQRCode": true,
+ "callback":ΓÇ»{
+ "url":ΓÇ»"https://wwww.contoso.com/vc/callback",
+ "state": "Aaaabbbb11112222",
+ "headers":ΓÇ»{
+ "api-key":ΓÇ»"an-api-key-can-go-here"
+ }
+ },
+ ...
} ```
When your app receives the response, the app needs to present the QR code to the
## Error response
-If there is an error with the request, an [error responses](error-codes.md) will be returned and should be handled appropriately by the app.
+If there is an error with the request, an [error response](error-codes.md) will be returned and should be handled appropriately by the app.
## Callback events
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
az provider register --namespace Microsoft.ContainerService
## Set up overlay clusters
-The following steps create a new virtual network with a subnet for the cluster nodes and an AKS cluster that uses Azure CNI Overlay.
-
-1. Create a virtual network with a subnet for the cluster nodes. Replace the values for the variables `resourceGroup`, `vnet` and `location`.
-
- ```azurecli-interactive
- resourceGroup="myResourceGroup"
- vnet="myVirtualNetwork"
- location="westcentralus"
-
- # Create the resource group
- az group create --name $resourceGroup --location $location
-
- # Create a VNet and a subnet for the cluster nodes
- az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
- az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefix 10.10.0.0/16 -o none
- ```
-
-2. Create a cluster with Azure CNI Overlay. Use the argument `--network-plugin-mode` to specify that this is an overlay cluster. If the pod CIDR is not specified then AKS assigns a default space, viz. 10.244.0.0/16. Replace the values for the variables `clusterName` and `subscription`.
-
- ```azurecli-interactive
- clusterName="myOverlayCluster"
- subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
-
- az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet
- ```
+Create a cluster with Azure CNI Overlay. Use the argument `--network-plugin-mode` to specify that this is an overlay cluster. If the pod CIDR isn't specified, AKS assigns a default space of 10.244.0.0/16. Replace the values for the variables `clusterName`, `resourceGroup`, and `location`.
+
+```azurecli-interactive
+clusterName="myOverlayCluster"
+resourceGroup="myResourceGroup"
+location="westcentralus"
+
+az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
+```
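As an optional follow-up (a sketch, not part of the original steps), you can confirm the overlay settings took effect by inspecting the cluster's network profile. The `networkPluginMode` property name is an assumption based on the preview API surface:

```azurecli
# Sketch: check the network plugin, plugin mode, and pod CIDR of the new cluster.
az aks show -n $clusterName -g $resourceGroup \
  --query "{plugin:networkProfile.networkPlugin, mode:networkProfile.networkPluginMode, podCidr:networkProfile.podCidr}" \
  -o table
```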
## Next steps
To learn how to utilize AKS with your own Container Network Interface (CNI) plug
<!-- LINKS - internal --> [az-provider-register]: /cli/azure/provider#az-provider-register [az-feature-register]: /cli/azure/feature#az-feature-register
-[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-feature-show]: /cli/azure/feature#az-feature-show
aks Azure Cni Powered By Cilium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-powered-by-cilium.md
Create the cluster using `--enable-cilium-dataplane`:
```azurecli-interactive az aks create -n <clusterName> -g <resourceGroupName> -l <location> \ --max-pods 250 \
- --node-count 2 \
--network-plugin azure \ --vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/nodesubnet \ --pod-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/podsubnet \
az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
### Option 2: Assign IP addresses from an overlay network
-Run these commands to create a resource group and VNet with a single subnet:
-
-```azurecli-interactive
-# Create the resource group
-az group create --name <resourceGroupName> --location <location>
-```
-
-```azurecli-interactive
-# Create a VNet with a subnet for nodes and a subnet for pods
-az network vnet create -g <resourceGroupName> --location <location> --name <vnetName> --address-prefixes <address prefix, example: 10.0.0.0/8> -o none
-az network vnet subnet create -g <resourceGroupName> --vnet-name <vnetName> --name nodesubnet --address-prefixes <address prefix, example: 10.240.0.0/16> -o none
-```
-
-Then create the cluster using `--enable-cilium-dataplane`:
+Run this command to create a cluster with an overlay network and Cilium. Replace the values for `<clusterName>`, `<resourceGroupName>`, and `<location>`:
```azurecli-interactive az aks create -n <clusterName> -g <resourceGroupName> -l <location> \
- --max-pods 250 \
- --node-count 2 \
--network-plugin azure \ --network-plugin-mode overlay \ --pod-cidr 192.168.0.0/16 \
- --vnet-subnet-id /subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.Network/virtualNetworks/<vnetName>/subnets/nodesubnet \
--enable-cilium-dataplane ```
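After the cluster is created, a quick sanity check (a sketch, not from the article; the pod naming is an assumption) is to pull credentials and look for the Cilium pods:

```azurecli
# Sketch: fetch kubeconfig and list the Cilium dataplane pods in kube-system.
az aks get-credentials -n <clusterName> -g <resourceGroupName>
kubectl get pods -n kube-system | grep -i cilium
```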
aks Cis Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-kubernetes.md
Title: Center for Internet Security (CIS) Kubernetes benchmark
description: Learn how AKS applies the CIS Kubernetes benchmark Previously updated : 10/04/2022 Last updated : 12/20/2022 # Center for Internet Security (CIS) Kubernetes benchmark
As a secure service, Azure Kubernetes Service (AKS) complies with SOC, ISO, PCI
## Kubernetes CIS benchmark
-The following are the results from the [CIS Kubernetes V1.20 Benchmark v1.0.0][cis-benchmark-kubernetes] recommendations on AKS.
+The following are the results from the [CIS Kubernetes V1.24 Benchmark v1.0.0][cis-benchmark-kubernetes] recommendations on AKS. These are applicable to AKS 1.21.x through AKS 1.24.x.
*Scored* recommendations affect the benchmark score if they are not applied, while *Not Scored* recommendations don't.
Recommendations can have one of the following statuses:
|||||| |1|Control Plane Components|||| |1.1|Control Plane Node Configuration Files||||
-|1.1.1|Ensure that the API server pod specification file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.1|Ensure that the API server pod specification file permissions are set to 600 or more restrictive|Scored|L1|N/A|
|1.1.2|Ensure that the API server pod specification file ownership is set to root:root|Scored|L1|N/A|
-|1.1.3|Ensure that the controller manager pod specification file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.3|Ensure that the controller manager pod specification file permissions are set to 600 or more restrictive|Scored|L1|N/A|
|1.1.4|Ensure that the controller manager pod specification file ownership is set to root:root|Scored|L1|N/A|
-|1.1.5|Ensure that the scheduler pod specification file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.5|Ensure that the scheduler pod specification file permissions are set to 600 or more restrictive|Scored|L1|N/A|
|1.1.6|Ensure that the scheduler pod specification file ownership is set to root:root|Scored|L1|N/A|
-|1.1.7|Ensure that the etcd pod specification file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.7|Ensure that the etcd pod specification file permissions are set to 600 or more restrictive|Scored|L1|N/A|
|1.1.8|Ensure that the etcd pod specification file ownership is set to root:root|Scored|L1|N/A|
-|1.1.9|Ensure that the Container Network Interface file permissions are set to 644 or more restrictive|Not Scored|L1|N/A|
+|1.1.9|Ensure that the Container Network Interface file permissions are set to 600 or more restrictive|Not Scored|L1|N/A|
|1.1.10|Ensure that the Container Network Interface file ownership is set to root:root|Not Scored|L1|N/A| |1.1.11|Ensure that the etcd data directory permissions are set to 700 or more restrictive|Scored|L1|N/A| |1.1.12|Ensure that the etcd data directory ownership is set to etcd:etcd|Scored|L1|N/A|
-|1.1.13|Ensure that the admin.conf file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.13|Ensure that the admin.conf file permissions are set to 600 or more restrictive|Scored|L1|N/A|
|1.1.14|Ensure that the admin.conf file ownership is set to root:root|Scored|L1|N/A|
-|1.1.15|Ensure that the scheduler.conf file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.15|Ensure that the scheduler.conf file permissions are set to 600 or more restrictive|Scored|L1|N/A|
|1.1.16|Ensure that the scheduler.conf file ownership is set to root:root|Scored|L1|N/A|
-|1.1.17|Ensure that the controller-manager.conf file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.17|Ensure that the controller-manager.conf file permissions are set to 600 or more restrictive|Scored|L1|N/A|
|1.1.18|Ensure that the controller-manager.conf file ownership is set to root:root|Scored|L1|N/A| |1.1.19|Ensure that the Kubernetes PKI directory and file ownership is set to root:root|Scored|L1|N/A|
-|1.1.20|Ensure that the Kubernetes PKI certificate file permissions are set to 644 or more restrictive|Scored|L1|N/A|
+|1.1.20|Ensure that the Kubernetes PKI certificate file permissions are set to 600 or more restrictive|Scored|L1|N/A|
|1.1.21|Ensure that the Kubernetes PKI key file permissions are set to 600|Scored|L1|N/A| |1.2|API Server|||| |1.2.1|Ensure that the `--anonymous-auth` argument is set to false|Not Scored|L1|Pass|
-|1.2.2|Ensure that the `--basic-auth-file` argument is not set|Scored|L1|Pass|
-|1.2.3|Ensure that the `--token-auth-file` parameter is not set|Scored|L1|Fail|
-|1.2.4|Ensure that the `--kubelet-https` argument is set to true|Scored|L1|Equivalent Control |
-|1.2.5|Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate|Scored|L1|Pass|
-|1.2.6|Ensure that the `--kubelet-certificate-authority` argument is set as appropriate|Scored|L1|Equivalent Control|
-|1.2.7|Ensure that the `--authorization-mode` argument is not set to AlwaysAllow|Scored|L1|Pass|
-|1.2.8|Ensure that the `--authorization-mode` argument includes Node|Scored|L1|Pass|
-|1.2.9|Ensure that the `--authorization-mode` argument includes RBAC|Scored|L1|Pass|
-|1.2.10|Ensure that the admission control plugin EventRateLimit is set|Not Scored|L1|Fail|
-|1.2.11|Ensure that the admission control plugin AlwaysAdmit is not set|Scored|L1|Pass|
-|1.2.12|Ensure that the admission control plugin AlwaysPullImages is set|Not Scored|L1|Fail|
-|1.2.13|Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used|Not Scored|L1|Fail|
-|1.2.14|Ensure that the admission control plugin ServiceAccount is set|Scored|L1|Pass|
-|1.2.15|Ensure that the admission control plugin NamespaceLifecycle is set|Scored|L1|Pass|
-|1.2.16|Ensure that the admission control plugin PodSecurityPolicy is set|Scored|L1|Fail|
-|1.2.17|Ensure that the admission control plugin NodeRestriction is set|Scored|L1|Fail|
-|1.2.18|Ensure that the `--insecure-bind-address` argument is not set|Scored|L1|Fail|
-|1.2.19|Ensure that the `--insecure-port` argument is set to 0|Scored|L1|Pass|
-|1.2.20|Ensure that the `--secure-port` argument is not set to 0|Scored|L1|Pass|
-|1.2.21|Ensure that the `--profiling` argument is set to false|Scored|L1|Pass|
-|1.2.22|Ensure that the `--audit-log-path` argument is set|Scored|L1|Pass|
-|1.2.23|Ensure that the `--audit-log-maxage` argument is set to 30 or as appropriate|Scored|L1|Equivalent Control|
-|1.2.24|Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate|Scored|L1|Equivalent Control|
-|1.2.25|Ensure that the `--audit-log-maxsize` argument is set to 100 or as appropriate|Scored|L1|Pass|
-|1.2.26|Ensure that the `--request-timeout` argument is set as appropriate|Scored|L1|Pass|
-|1.2.27|Ensure that the `--service-account-lookup` argument is set to true|Scored|L1|Pass|
-|1.2.28|Ensure that the `--service-account-key-file` argument is set as appropriate|Scored|L1|Pass|
-|1.2.29|Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate|Scored|L1|Pass|
-|1.2.30|Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate|Scored|L1|Pass|
-|1.2.31|Ensure that the `--client-ca-file` argument is set as appropriate|Scored|L1|Pass|
-|1.2.32|Ensure that the `--etcd-cafile` argument is set as appropriate|Scored|L1|Pass|
-|1.2.33|Ensure that the `--encryption-provider-config` argument is set as appropriate|Scored|L1|Fail|
-|1.2.34|Ensure that encryption providers are appropriately configured|Scored|L1|Fail|
-|1.2.35|Ensure that the API Server only makes use of Strong Cryptographic Ciphers|Not Scored|L1|Pass|
+|1.2.2|Ensure that the `--token-auth-file` parameter is not set|Scored|L1|Fail|
+|1.2.3|Ensure that `--DenyServiceExternalIPs` is not set|Scored|L1|Pass|
+|1.2.4|Ensure that the `--kubelet-client-certificate` and `--kubelet-client-key` arguments are set as appropriate|Scored|L1|Pass|
+|1.2.5|Ensure that the `--kubelet-certificate-authority` argument is set as appropriate|Scored|L1|Fail|
+|1.2.6|Ensure that the `--authorization-mode` argument is not set to AlwaysAllow|Scored|L1|Pass|
+|1.2.7|Ensure that the `--authorization-mode` argument includes Node|Scored|L1|Pass|
+|1.2.8|Ensure that the `--authorization-mode` argument includes RBAC|Scored|L1|Pass|
+|1.2.9|Ensure that the admission control plugin EventRateLimit is set|Not Scored|L1|Fail|
+|1.2.10|Ensure that the admission control plugin AlwaysAdmit is not set|Scored|L1|Pass|
+|1.2.11|Ensure that the admission control plugin AlwaysPullImages is set|Not Scored|L1|Fail|
+|1.2.12|Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used|Not Scored|L1|Fail|
+|1.2.13|Ensure that the admission control plugin ServiceAccount is set|Scored|L1|Pass|
+|1.2.14|Ensure that the admission control plugin NamespaceLifecycle is set|Scored|L1|Pass|
+|1.2.15|Ensure that the admission control plugin NodeRestriction is set|Scored|L1|Pass|
+|1.2.16|Ensure that the `--secure-port` argument is not set to 0|Scored|L1|Pass|
+|1.2.17|Ensure that the `--profiling` argument is set to false|Scored|L1|Pass|
+|1.2.18|Ensure that the `--audit-log-path` argument is set|Scored|L1|Pass|
+|1.2.19|Ensure that the `--audit-log-maxage` argument is set to 30 or as appropriate|Scored|L1|Equivalent Control|
+|1.2.20|Ensure that the `--audit-log-maxbackup` argument is set to 10 or as appropriate|Scored|L1|Equivalent Control|
+|1.2.21|Ensure that the `--audit-log-maxsize` argument is set to 100 or as appropriate|Scored|L1|Pass|
+|1.2.22|Ensure that the `--request-timeout` argument is set as appropriate|Scored|L1|Pass|
+|1.2.23|Ensure that the `--service-account-lookup` argument is set to true|Scored|L1|Pass|
+|1.2.24|Ensure that the `--service-account-key-file` argument is set as appropriate|Scored|L1|Pass|
+|1.2.25|Ensure that the `--etcd-certfile` and `--etcd-keyfile` arguments are set as appropriate|Scored|L1|Pass|
+|1.2.26|Ensure that the `--tls-cert-file` and `--tls-private-key-file` arguments are set as appropriate|Scored|L1|Pass|
+|1.2.27|Ensure that the `--client-ca-file` argument is set as appropriate|Scored|L1|Pass|
+|1.2.28|Ensure that the `--etcd-cafile` argument is set as appropriate|Scored|L1|Pass|
+|1.2.29|Ensure that the `--encryption-provider-config` argument is set as appropriate|Scored|L1|Depends on Environment|
+|1.2.30|Ensure that encryption providers are appropriately configured|Scored|L1|Depends on Environment|
+|1.2.31|Ensure that the API Server only makes use of Strong Cryptographic Ciphers|Not Scored|L1|Pass|
|1.3|Controller Manager|||| |1.3.1|Ensure that the `--terminated-pod-gc-threshold` argument is set as appropriate|Scored|L1|Pass| |1.3.2|Ensure that the `--profiling` argument is set to false|Scored|L1|Pass| |1.3.3|Ensure that the `--use-service-account-credentials` argument is set to true|Scored|L1|Pass| |1.3.4|Ensure that the `--service-account-private-key-file` argument is set as appropriate|Scored|L1|Pass| |1.3.5|Ensure that the `--root-ca-file` argument is set as appropriate|Scored|L1|Pass|
-|1.3.6|Ensure that the RotateKubeletServerCertificate argument is set to true|Scored|L2|Pass|
-|1.3.7|Ensure that the `--bind-address` argument is set to 127.0.0.1|Scored|L1|Fail|
+|1.3.6|Ensure that the RotateKubeletServerCertificate argument is set to true|Scored|L2|Fail|
+|1.3.7|Ensure that the `--bind-address` argument is set to 127.0.0.1|Scored|L1|Equivalent Control|
|1.4|Scheduler|||| |1.4.1|Ensure that the `--profiling` argument is set to false|Scored|L1|Pass|
-|1.4.2|Ensure that the `--bind-address` argument is set to 127.0.0.1|Scored|L1|Fail|
+|1.4.2|Ensure that the `--bind-address` argument is set to 127.0.0.1|Scored|L1|Equivalent Control|
|2|etcd|||| |2.1|Ensure that the `--cert-file` and `--key-file` arguments are set as appropriate|Scored|L1|Pass| |2.2|Ensure that the `--client-cert-auth` argument is set to true|Scored|L1|Pass|
Recommendations can have one of the following statuses:
|3.2.2|Ensure that the audit policy covers key security concerns|Not Scored|L2|Pass| |4|Worker Nodes|||| |4.1|Worker Node Configuration Files||||
-|4.1.1|Ensure that the kubelet service file permissions are set to 644 or more restrictive|Scored|L1|Pass|
+|4.1.1|Ensure that the kubelet service file permissions are set to 600 or more restrictive|Scored|L1|Pass|
|4.1.2|Ensure that the kubelet service file ownership is set to root:root|Scored|L1|Pass|
-|4.1.3|Ensure that the proxy kubeconfig file permissions are set to 644 or more restrictive|Scored|L1|Pass|
-|4.1.4|Ensure that the proxy kubeconfig file ownership is set to root:root|Scored|L1|Pass|
-|4.1.5|Ensure that the kubelet.conf file permissions are set to 644 or more restrictive|Scored|L1|Pass|
-|4.1.6|Ensure that the kubelet.conf file ownership is set to root:root|Scored|L1|Pass|
-|4.1.7|Ensure that the certificate authorities file permissions are set to 644 or more restrictive|Scored|L1|Pass|
+|4.1.3|If a proxy kubeconfig file exists, ensure permissions are set to 600 or more restrictive|Scored|L1|N/A|
+|4.1.4|If a proxy kubeconfig file exists, ensure ownership is set to root:root|Scored|L1|N/A|
+|4.1.5|Ensure that the `--kubeconfig` kubelet.conf file permissions are set to 600 or more restrictive|Scored|L1|Pass|
+|4.1.6|Ensure that the `--kubeconfig` kubelet.conf file ownership is set to root:root|Scored|L1|Pass|
+|4.1.7|Ensure that the certificate authorities file permissions are set to 600 or more restrictive|Scored|L1|Pass|
|4.1.8|Ensure that the client certificate authorities file ownership is set to root:root|Scored|L1|Pass|
-|4.1.9|Ensure that the kubelet configuration file has permissions set to 644 or more restrictive|Scored|L1|Pass|
-|4.1.10|Ensure that the kubelet configuration file ownership is set to root:root|Scored|L1|Pass|
+|4.1.9|If the kubelet config.yaml configuration file is being used, ensure permissions set to 600 or more restrictive|Scored|L1|Pass|
+|4.1.10|If the kubelet config.yaml configuration file is being used, ensure file ownership is set to root:root|Scored|L1|Pass|
|4.2|Kubelet|||| |4.2.1|Ensure that the `--anonymous-auth` argument is set to false|Scored|L1|Pass| |4.2.2|Ensure that the `--authorization-mode` argument is not set to AlwaysAllow|Scored|L1|Pass|
Recommendations can have one of the following statuses:
|4.2.6|Ensure that the `--protect-kernel-defaults` argument is set to true|Scored|L1|Pass| |4.2.7|Ensure that the `--make-iptables-util-chains` argument is set to true|Scored|L1|Pass| |4.2.8|Ensure that the `--hostname-override` argument is not set|Not Scored|L1|Pass|
-|4.2.9|Ensure that the `--event-qps` argument is set to 0 or a level which ensures appropriate event capture|Not Scored|L2|Pass|
-|4.2.10|Ensure that the `--tls-cert-file`and `--tls-private-key-file` arguments are set as appropriate|Scored|L1|Equivalent Control|
+|4.2.9|Ensure that the eventRecordQPS argument is set to a level which ensures appropriate event capture|Not Scored|L2|Pass|
+|4.2.10|Ensure that the `--tls-cert-file`and `--tls-private-key-file` arguments are set as appropriate|Scored|L1|Pass|
|4.2.11|Ensure that the `--rotate-certificates` argument is not set to false|Scored|L1|Pass|
-|4.2.12|Ensure that the RotateKubeletServerCertificate argument is set to true|Scored|L1|Fail|
+|4.2.12|Ensure that the RotateKubeletServerCertificate argument is set to true|Scored|L1|Pass|
|4.2.13|Ensure that the Kubelet only makes use of Strong Cryptographic Ciphers|Not Scored|L1|Pass| |5|Policies|||| |5.1|RBAC and Service Accounts||||
For more information about AKS security, see the following articles:
* [AKS security considerations](./concepts-security.md) * [AKS best practices](./best-practices.md) - [azure-update-management]: ../automation/update-management/overview.md [azure-file-integrity-monotoring]: ../security-center/security-center-file-integrity-monitoring.md [azure-time-sync]: ../virtual-machines/linux/time-sync.md
aks Dapr Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md
+
+ Title: Configure the Dapr extension for your Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes project
+description: Learn how to configure the Dapr extension specifically for your Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes project
++++ Last updated : 01/09/2023++
+# Configure the Dapr extension for your Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes project
+
+Once you've [created the Dapr extension](./dapr.md), you can configure the [Dapr](https://dapr.io/) extension to work best for you and your project using various configuration options, like:
+
+- Limiting which of your nodes use the Dapr extension
+- Setting automatic CRD updates
+- Configuring the Dapr release namespace
+
+The extension enables you to set Dapr configuration options by using the `--configuration-settings` parameter. For example, to provision Dapr with high availability (HA) enabled, set the `global.ha.enabled` parameter to `true`:
+
+```azurecli
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version true \
+--configuration-settings "global.ha.enabled=true" \
+--configuration-settings "dapr_operator.replicaCount=2"
+```
+
+> [!NOTE]
+> If configuration settings are sensitive and need to be protected, for example certificate-related information, pass the `--configuration-protected-settings` parameter and the value will be protected from being read (see the sketch below).
+
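As a hedged illustration of the note above (the setting key below is a placeholder, not a documented Dapr chart key):

```azurecli
# Sketch: pass a sensitive value as a protected setting so it can't be read back later.
az k8s-extension create --cluster-type managedClusters \
--cluster-name myAKSCluster \
--resource-group myResourceGroup \
--name dapr \
--extension-type Microsoft.Dapr \
--auto-upgrade-minor-version true \
--configuration-settings "global.ha.enabled=true" \
--configuration-protected-settings "some.sensitive.setting=<secret-value>"
```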
+If no configuration-settings are passed, the Dapr configuration defaults to:
+
+```yaml
+ ha:
+ enabled: true
+ replicaCount: 3
+ disruption:
+ minimumAvailable: ""
+ maximumUnavailable: "25%"
+ prometheus:
+ enabled: true
+ port: 9090
+ mtls:
+ enabled: true
+ workloadCertTTL: 24h
+ allowedClockSkew: 15m
+```
+
+For a list of available options, see [Dapr configuration][dapr-configuration-options].
+
+## Limiting the extension to certain nodes
+
+In some configurations, you may only want to run Dapr on certain nodes. You can limit the extension by passing a `nodeSelector` in the extension configuration. If the desired `nodeSelector` contains `.`, you must escape it from the shell and the extension. For example, the following configuration installs Dapr only to nodes with `topology.kubernetes.io/zone: "us-east-1c"`:
+
+```azurecli
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version true \
+--configuration-settings "global.ha.enabled=true" \
+--configuration-settings "dapr_operator.replicaCount=2" \
+--configuration-settings "global.nodeSelector.kubernetes\.io/zone: us-east-1c"
+```
+
+For managing OS and architecture, use the [supported versions](https://github.com/dapr/dapr/blob/b8ae13bf3f0a84c25051fcdacbfd8ac8e32695df/docker/docker.mk#L50) of the `global.daprControlPlaneOs` and `global.daprControlPlaneArch` configuration:
+
+```azurecli
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version true \
+--configuration-settings "global.ha.enabled=true" \
+--configuration-settings "dapr_operator.replicaCount=2" \
+--configuration-settings "global.daprControlPlaneOs=linuxΓÇ¥ \
+--configuration-settings "global.daprControlPlaneArch=amd64ΓÇ¥
+```
+## Configure the Dapr release namespace
+
+You can configure the release namespace. The Dapr extension gets installed in the `dapr-system` namespace by default. To override it, use `--release-namespace`. Include `--scope cluster` to redefine the namespace.
+
+```azurecli
+az k8s-extension create \
+--cluster-type managedClusters \
+--cluster-name dapr-aks \
+--resource-group dapr-rg \
+--name my-dapr-ext \
+--extension-type microsoft.dapr \
+--release-train stable \
+--auto-upgrade false \
+--version 1.9.2 \
+--scope cluster \
+--release-namespace dapr-custom
+```
+
+[Learn how to configure the Dapr release namespace if you already have Dapr installed](./dapr-migration.md).
+
+## Show current configuration settings
+
+Use the `az k8s-extension show` command to show the current Dapr configuration settings:
+
+```azurecli
+az k8s-extension show --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr
+```
+
+## Update configuration settings
+
+> [!IMPORTANT]
+> Some configuration options cannot be modified post-creation. Adjustments to these options require deletion and recreation of the extension, applicable to the following settings:
+> * `global.ha.*`
+> * `dapr_placement.*`
+>
+> HA is enabled by default. Disabling it requires deletion and recreation of the extension.
+
+To update your Dapr configuration settings, recreate the extension with the desired state. For example, assume we've previously created and installed the extension using the following configuration:
+
+```azurecli-interactive
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version true \
+--configuration-settings "global.ha.enabled=true" \
+--configuration-settings "dapr_operator.replicaCount=2"
+```
+
+To update the `dapr_operator.replicaCount` from two to three, use the following command:
+
+```azurecli-interactive
+az k8s-extension create --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version true \
+--configuration-settings "global.ha.enabled=true" \
+--configuration-settings "dapr_operator.replicaCount=3"
+```
+
+## Set the outbound proxy for Dapr extension for Azure Arc on-premises
+
+If you want to use an outbound proxy with the Dapr extension for AKS, you can do so by:
+
+1. Setting the proxy environment variables using the [`dapr.io/env` annotations](https://docs.dapr.io/reference/arguments-annotations-overview/):
+ - `HTTP_PROXY`
+ - `HTTPS_PROXY`
+ - `NO_PROXY`
+1. [Installing the proxy certificate in the sidecar](https://docs.dapr.io/operations/configuration/install-certificates/).
+
+## Disable automatic CRD updates
+
+Starting with Dapr version 1.9.2, CRDs are automatically upgraded when the extension upgrades. To disable this behavior, set `hooks.applyCrds` to `false`.
+
+```azurecli
+az k8s-extension upgrade --cluster-type managedClusters \
+--cluster-name myAKSCluster \
+--resource-group myResourceGroup \
+--name dapr \
+--extension-type Microsoft.Dapr \
+--auto-upgrade-minor-version true \
+--configuration-settings "global.ha.enabled=true" \
+--configuration-settings "dapr_operator.replicaCount=2" \
+--configuration-settings "global.daprControlPlaneOs=linuxΓÇ¥ \
+--configuration-settings "global.daprControlPlaneArch=amd64ΓÇ¥ \
+--configuration-settings "hooks.applyCrds=false"
+```
+
+> [!NOTE]
+> CRDs are only applied in case of upgrades and are skipped during downgrades.
++
+## Meet network requirements
+
+The Dapr extension for AKS and Arc for Kubernetes requires outbound access to a set of URLs over HTTPS (port 443) to function. In addition to the `https://mcr.microsoft.com/daprio` URL for pulling Dapr artifacts, verify you've included the [outbound URLs required for AKS or Arc for Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
+
+## Next Steps
+
+Once you have successfully provisioned Dapr in your AKS cluster, try deploying a [sample application][sample-application].
+
+<!-- LINKS INTERNAL -->
+[deploy-cluster]: ./tutorial-kubernetes-deploy-cluster.md
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-provider-register]: /cli/azure/provider#az-provider-register
+[sample-application]: ./quickstart-dapr.md
+[k8s-version-support-policy]: ./supported-kubernetes-versions.md?tabs=azure-cli#kubernetes-version-support-policy
+[arc-k8s-cluster]: ../azure-arc/kubernetes/quickstart-connect-cluster.md
+[update-extension]: ./cluster-extensions.md#update-extension-instance
+[install-cli]: /cli/azure/install-azure-cli
+[dapr-migration]: ./dapr-migration.md
+[dapr-settings]: ./dapr-settings.md
+
+<!-- LINKS EXTERNAL -->
+[kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production
+[building-blocks-concepts]: https://docs.dapr.io/developing-applications/building-blocks/
+[dapr-configuration-options]: https://github.com/dapr/dapr/blob/master/charts/dapr/README.md#configuration
+[sample-application]: https://github.com/dapr/quickstarts/tree/master/hello-kubernetes#step-2create-and-configure-a-state-store
+[dapr-security]: https://docs.dapr.io/concepts/security-concept/
+[dapr-deployment-annotations]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-overview/#adding-dapr-to-a-kubernetes-deployment
+[dapr-oss-support]: https://docs.dapr.io/operations/support/support-release-policy/
+[dapr-supported-version]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions
+[dapr-troubleshooting]: https://docs.dapr.io/operations/troubleshooting/common_issues/
+[supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
aks Dapr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr.md
Previously updated : 12/12/2022 Last updated : 01/06/2023
For example:
--release-train stable ```
-## Configuration settings
-
-The extension enables you to set Dapr configuration options by using the `--configuration-settings` parameter. For example, to provision Dapr with high availability (HA) enabled, set the `global.ha.enabled` parameter to `true`:
-
-```azurecli
-az k8s-extension create --cluster-type managedClusters \
cluster-name myAKSCluster \resource-group myResourceGroup \name dapr \extension-type Microsoft.Dapr \auto-upgrade-minor-version true \configuration-settings "global.ha.enabled=true" \configuration-settings "dapr_operator.replicaCount=2"
-```
-
-> [!NOTE]
-> If configuration settings are sensitive and need to be protected, for example cert related information, pass the `--configuration-protected-settings` parameter and the value will be protected from being read.
-
-If no configuration-settings are passed, the Dapr configuration defaults to:
-
-```yaml
- ha:
- enabled: true
- replicaCount: 3
- disruption:
- minimumAvailable: ""
- maximumUnavailable: "25%"
- prometheus:
- enabled: true
- port: 9090
- mtls:
- enabled: true
- workloadCertTTL: 24h
- allowedClockSkew: 15m
-```
-
-For a list of available options, see [Dapr configuration][dapr-configuration-options].
- ## Targeting a specific Dapr version > [!NOTE]
az k8s-extension create --cluster-type managedClusters \
--version X.X.X ```
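For completeness, a sketch of a full pinned-version install (the version shown is only an example, and it assumes auto-upgrade is disabled when a version is pinned):

```azurecli
# Sketch: install the Dapr extension at a specific version with auto-upgrade turned off.
az k8s-extension create --cluster-type managedClusters \
--cluster-name myAKSCluster \
--resource-group myResourceGroup \
--name dapr \
--extension-type Microsoft.Dapr \
--auto-upgrade-minor-version false \
--version 1.9.2
```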
-## Limiting the extension to certain nodes
-
-In some configurations, you may only want to run Dapr on certain nodes. You can limit the extension by passing a `nodeSelector` in the extension configuration. If the desired `nodeSelector` contains `.`, you must escape them from the shell and the extension. For example, the following configuration will install Dapr to only nodes with `topology.kubernetes.io/zone: "us-east-1c"`:
-
-```azurecli
-az k8s-extension create --cluster-type managedClusters \
cluster-name myAKSCluster \resource-group myResourceGroup \name dapr \extension-type Microsoft.Dapr \auto-upgrade-minor-version true \configuration-settings "global.ha.enabled=true" \configuration-settings "dapr_operator.replicaCount=2" \configuration-settings "global.nodeSelector.kubernetes\.io/zone: us-east-1c"
-```
-
-For managing OS and architecture, use the [supported versions](https://github.com/dapr/dapr/blob/b8ae13bf3f0a84c25051fcdacbfd8ac8e32695df/docker/docker.mk#L50) of the `global.daprControlPlaneOs` and `global.daprControlPlaneArch` configuration:
-
-```azurecli
-az k8s-extension create --cluster-type managedClusters \
cluster-name myAKSCluster \resource-group myResourceGroup \name dapr \extension-type Microsoft.Dapr \auto-upgrade-minor-version true \configuration-settings "global.ha.enabled=true" \configuration-settings "dapr_operator.replicaCount=2" \configuration-settings "global.daprControlPlaneOs=linuxΓÇ¥ \configuration-settings "global.daprControlPlaneArch=amd64ΓÇ¥
-```
-
-## Set automatic CRD updates
-
-Starting with Dapr version 1.9.2, CRDs are automatically upgraded when the extension upgrades. To disable this setting, you can set `hooks.applyCrds` to `false`.
-
-```azurecli
-az k8s-extension upgrade --cluster-type managedClusters \
cluster-name myAKSCluster \resource-group myResourceGroup \name dapr \extension-type Microsoft.Dapr \auto-upgrade-minor-version true \configuration-settings "global.ha.enabled=true" \configuration-settings "dapr_operator.replicaCount=2" \configuration-settings "global.daprControlPlaneOs=linuxΓÇ¥ \configuration-settings "global.daprControlPlaneArch=amd64ΓÇ¥ \configuration-settings "hooks.applyCrds=false"
-```
-
-> [!NOTE]
-> CRDs are only applied in case of upgrades and are skipped during downgrades.
-
-## Configure the Dapr release namespace
-
-You can configure the release namespace. The Dapr extension gets installed in the `dapr-system` namespace by default. To override it, use `--release-namespace`. Include the cluster `--scope` to redefine the namespace.
-
-```azurecli
-az k8s-extension create \
cluster-type managedClusters \cluster-name dapr-aks \resource-group dapr-rg \name my-dapr-ext \extension-type microsoft.dapr \release-train stable \auto-upgrade false \version 1.9.2 \scope cluster \release-namespace dapr-custom
-```
-
-## Show current configuration settings
-
-Use the `az k8s-extension show` command to show the current Dapr configuration settings:
-
-```azurecli
-az k8s-extension show --cluster-type managedClusters \
cluster-name myAKSCluster \resource-group myResourceGroup \name dapr
-```
-
-## Update configuration settings
-
-> [!IMPORTANT]
-> Some configuration options cannot be modified post-creation. Adjustments to these options require deletion and recreation of the extension, applicable to the following settings:
-> * `global.ha.*`
-> * `dapr_placement.*`
->
-> HA is enabled by default. Disabling it requires deletion and recreation of the extension.
-
-To update your Dapr configuration settings, recreate the extension with the desired state. For example, assume we've previously created and installed the extension using the following configuration:
-
-```azurecli-interactive
-az k8s-extension create --cluster-type managedClusters \
  --cluster-name myAKSCluster \
  --resource-group myResourceGroup \
  --name dapr \
  --extension-type Microsoft.Dapr \
  --auto-upgrade-minor-version true \
  --configuration-settings "global.ha.enabled=true" \
  --configuration-settings "dapr_operator.replicaCount=2"
-```
-
-To update the `dapr_operator.replicaCount` from two to three, use the following command:
-
-```azurecli-interactive
-az k8s-extension create --cluster-type managedClusters \
  --cluster-name myAKSCluster \
  --resource-group myResourceGroup \
  --name dapr \
  --extension-type Microsoft.Dapr \
  --auto-upgrade-minor-version true \
  --configuration-settings "global.ha.enabled=true" \
  --configuration-settings "dapr_operator.replicaCount=3"
-```
-
-## Set the outbound proxy for Dapr extension for Azure Arc on-premises
-
-If you want to use an outbound proxy with the Dapr extension for AKS, you can do so by:
-
-1. Setting the proxy environment variables using the [`dapr.io/env` annotations](https://docs.dapr.io/reference/arguments-annotations-overview/):
- - `HTTP_PROXY`
- - `HTTPS_PROXY`
- - `NO_PROXY`
-1. [Installing the proxy certificate in the sidecar](https://docs.dapr.io/operations/configuration/install-certificates/).
-
-## Meet network requirements
-
-The Dapr extension for AKS and Arc for Kubernetes requires outbound URLs on `https://:443` to function. In addition to the `https://mcr.microsoft.com/daprio` URL for pulling Dapr artifacts, verify you've included the [outbound URLs required for AKS or Arc for Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
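As a quick sanity check, you can verify that an endpoint such as `mcr.microsoft.com` is reachable over port 443 from a machine that's subject to the same outbound rules. This is only a spot check under that assumption, not a validation of every required URL:

```powershell
# Spot-check outbound HTTPS reachability to the Dapr artifact registry.
# Run this from a machine governed by the same outbound network rules.
Test-NetConnection -ComputerName mcr.microsoft.com -Port 443
```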
- ## Troubleshooting extension errors

If the extension fails to create or update, try suggestions and solutions in the [Dapr extension troubleshooting guide](./dapr-troubleshooting.md).
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
## Next Steps
+- Learn more about [additional settings and preferences you can set on the Dapr extension][dapr-settings].
- Once you have successfully provisioned Dapr in your AKS cluster, try deploying a [sample application][sample-application]. <!-- LINKS INTERNAL -->
az k8s-extension delete --resource-group myResourceGroup --cluster-name myAKSClu
[update-extension]: ./cluster-extensions.md#update-extension-instance [install-cli]: /cli/azure/install-azure-cli [dapr-migration]: ./dapr-migration.md
+[dapr-settings]: ./dapr-settings.md
<!-- LINKS EXTERNAL --> [kubernetes-production]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-production
aks Enable Fips Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-fips-nodes.md
Title: Enable Federal Information Processing Standard (FIPS) for Azure Kubernetes S
description: Learn how to enable Federal Information Processing Standard (FIPS) for Azure Kubernetes Service (AKS) node pools. -+ Last updated 07/19/2022
aks Use Kms Etcd Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md
Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Serv
description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS) Previously updated : 12/17/2022 Last updated : 01/09/2023 # Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster
The following limitations apply when you integrate KMS etcd encryption with AKS:
* Deletion of the key, Key Vault, or the associated identity isn't supported. * KMS etcd encryption doesn't work with system-assigned managed identity. The key vault access policy must be set before the feature is enabled. In addition, system-assigned managed identity isn't available until cluster creation, so there's a circular dependency.
+* Azure Key Vault with Firewall enabled to allow public access isn't supported because it blocks traffic from the KMS plugin to the key vault.
* The maximum number of secrets that a cluster enabled with KMS supports is 2,000. * Bring your own (BYO) Azure Key Vault from another tenant isn't supported. * With KMS enabled, you can't change associated Azure Key Vault model (public, private). To [change associated key vault mode][changing-associated-key-vault-mode], you need to disable and enable KMS again.
After changing the key ID (including key name and key version), you can use [az
> [!WARNING] > Remember to update all secrets after key rotation. Otherwise, the secrets will be inaccessible if the old keys are not existing or working.
+>
+> Once you rotate the key, the old key (key1) is still cached and shouldn't be deleted. If you want to delete the old key (key1) immediately, you need to rotate the key twice. Then key2 and key3 are cached, and key1 can be deleted without impacting the existing cluster.
```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $NewKEY_ID --azure-keyvault-kms-key-vault-network-access "Private" --azure-keyvault-kms-key-vault-resource-id $KEYVAULT_RESOURCE_ID
aks Use Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-labels.md
Title: Use labels in an Azure Kubernetes Service (AKS) cluster
description: Learn how to use labels in an Azure Kubernetes Service (AKS) cluster. -+ Last updated 03/03/2022
analysis-services Analysis Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/analysis-services-logging.md
In the query builder, expand **LogManagement** > **AzureDiagnostics**. AzureDiag
#### Example 1
-The following query returns durations for each query end/refresh end event for a model database and server. If scaled out, the results are broken out by replica because the replica number is included in ServerName_s. Grouping by RootActivityId_g reduces the row count retrieved from the Azure Diagnostics REST API and helps stay within the limits as described in [Log Analytics Rate limits](https://dev.loganalytics.io/documentation/Using-the-API/Limits).
+The following query returns durations for each query end/refresh end event for a model database and server. If scaled out, the results are broken out by replica because the replica number is included in ServerName_s. Grouping by RootActivityId_g reduces the row count retrieved from the Azure Diagnostics REST API and helps stay within the limits as described in Log Analytics Rate limits.
```Kusto let window = AzureDiagnostics
automation Automation Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-role-based-access-control.md
Title: Manage role permissions and security in Azure Automation
description: This article describes how to use Azure role-based access control (Azure RBAC), which enables access management and role permissions for Azure resources. Previously updated : 09/10/2021 Last updated : 01/09/2023 #Customer intent: As an administrator, I want to understand permissions so that I use the least necessary set of permissions.
Update Management can be used to assess and schedule update deployments to machi
|Create update schedule ([Software Update Configurations](/rest/api/automation/softwareupdateconfigurations)) |Microsoft.Compute/virtualMachines/write |For static VM list and resource groups | |Create update schedule ([Software Update Configurations](/rest/api/automation/softwareupdateconfigurations)) |Microsoft.OperationalInsights/workspaces/analytics/query/action |For workspace resource ID when using non-Azure dynamic list.|
+>[!NOTE]
+>When you use Update management, ensure that the execution policy for scripts is *RemoteSigned*.
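For example, you can review and set the policy from an elevated PowerShell session. This is a minimal sketch; choose the scope that matches your organization's policy:

```powershell
# List the execution policy in effect at each scope.
Get-ExecutionPolicy -List

# Set the machine-wide policy to RemoteSigned (requires an elevated session).
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine
```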
+ ## Configure Azure RBAC for your Automation account

The following section shows you how to configure Azure RBAC on your Automation account through the [Azure portal](#configure-azure-rbac-using-the-azure-portal) and [PowerShell](#configure-azure-rbac-using-powershell).
automation Automation Runbook Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-authoring.md
+
+ Title: Runbook authoring using VS code in Azure Automation
+description: This article provides an overview of authoring runbooks in Azure Automation using Visual Studio Code.
++ Last updated : 01/10/2023++++
+# Runbook authoring through VS Code in Azure Automation
+
+This article explains the Visual Studio Code extension that you can use to create and manage runbooks.
+
+Azure Automation provides a new extension for VS Code to create and manage runbooks. Using this extension, you can perform all runbook management operations such as creating and editing runbooks, triggering a job, tracking recent job output, linking a schedule, asset management, and local debugging.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- [Visual Studio Code](https://code.visualstudio.com/).
+- PowerShell modules and Python packages used by the runbook must be installed locally on the machine to run the runbook locally (see the sketch after this list).
+
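For example, if a runbook imports Az modules, the local machine might be prepared as follows. This is a minimal sketch; `Az.Accounts` and `Az.Compute` are placeholders for whatever modules and packages your runbooks actually import:

```powershell
# Install the PowerShell modules that the runbook imports so it can run locally.
Install-Module -Name Az.Accounts -Scope CurrentUser -Force
Install-Module -Name Az.Compute -Scope CurrentUser -Force

# For Python runbooks, install the packages the runbook imports with pip, for example:
# python -m pip install requests
```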
+## Supported operating systems
+
+The test matrix includes the following operating systems:
+1. **Windows Server 2022** with Windows PowerShell 5.1 and PowerShell Core 7.2.7
+1. **Windows Server 2019** with Windows PowerShell 5.1 and PowerShell Core 7.2.7
+1. **macOS 11** with PowerShell Core 7.2.7
+1. **Ubuntu** 20.04 with PowerShell Core 7.2.7
+
+>[!NOTE]
+>- The extension should work anywhere in VS Code and it supports [PowerShell 7.2 or higher](https://learn.microsoft.com/powershell/scripting/install/PowerShell-Support-Lifecycle?view=powershell-7.3). For Windows PowerShell, only version 5.1 is supported.
+>- PowerShell Core 6 is end-of-life and not supported.
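To confirm which PowerShell version a session is running, you can check the version table from the VS Code integrated terminal (a quick sketch):

```powershell
# Display the PowerShell version of the current session.
$PSVersionTable.PSVersion
```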
++
+## Key Features
+
+- **Simplified onboarding** - You can sign in using an Azure account in a simple and secure way.
+- **Multiple languages** - Supports all Automation runtime stacks, such as PowerShell 5, PowerShell 7, Python 2, and Python 3 runbooks.
+- **Supportability** - Supports test execution of jobs, publishing Automation jobs, and triggering jobs in Azure and on Hybrid Workers. You can also execute runbooks locally.
+- Supports Python positional parameters and PowerShell parameters to trigger a job.
+- **Webhooks simplified** - You can create a webhook and start a job through a webhook in a simpler way. Also supports linking a schedule to a runbook.
+- **Manage Automation Assets** - You can perform create, update, and delete operations against assets, including certificates, variables, credentials, and connections.
+- **View properties** - You can view runbook properties, select a Hybrid Worker group to execute hybrid jobs, and view the last 10 executed jobs.
+- **Debug locally** - You can debug your PowerShell scripts locally.
+- **Runbook comparison** - You can compare the local runbook to the published or the draft runbook copy.
+
+## Key Features of v1.0.8
+
+- **Local directory configuration settings** - You can define the local working directory where you want to save runbooks.
+ - **Change Directory:Base Path** - The changed directory path is used when you reopen the Visual Studio Code IDE. To change the directory using the Command Palette, use **Ctrl+Shift+P -> select Change Directory**. To change the base path from extension configuration settings, select the **Manage** icon in the activity bar on the left and go to **Settings > Extensions > Azure Automation > Directory:Base Path**.
+ - **Change Directory:Folder Structure** - You can change the local directory folder structure from *vscodeAutomation/accHash* to *subscription/resourceGroup/automationAccount*. Select the **Manage** icon in the activity bar on the left and go to **Settings > Extensions > Azure Automation > Directory:Folder Structure**. You can change the default configuration setting from *vscodeAutomation/accHash* to *subscription/resourceGroup/automationAccount* format.
+ >[!NOTE]
+ >If your automation account is integrated with source control you can provide the runbook folder path of your GitHub repo as the directory path. For example: changing directory to *C:\abc* would store runbooks in *C:\abc\vscodeAutomation..* or *C:\abc//subscriptionName//resourceGroupName//automationAccountName//runbookname.ps1*.
+- **Runbook management operations** - You can create runbook, fetch draft runbook, fetch published runbook, open local runbook in the editor, compare local runbook with a published or draft runbook copy, upload as draft, publish runbook, and delete runbook from your Automation account.
+- **Runbook execution operations** - You can run a local version of Automation jobs, start Automation jobs, start Automation test jobs, view job outputs, and run a local version of a PowerShell runbook in debug mode by adding breakpoints in the script.
+ >[!NOTE]
+ > Currently, we support the use of internal cmdlets like `Get-AutomationVariable` only with non-encrypted assets (see the sketch after this list).
+
+- **Work with schedules, assets and webhooks** - You can view the properties of a schedule, delete a schedule, and link a schedule to a runbook.
+- **Add webhook** - You can add a webhook to the runbook.
+- **Update properties of assets** - You can create, update, delete, and view the properties of assets such as certificates, connections, credentials, and variables from the extension.
++
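As an illustration of the local-run behavior, the following sketch reads a non-encrypted variable asset from a runbook. `MyAppSetting` is a hypothetical asset name used only for this example:

```powershell
# Read a non-encrypted Automation variable asset. Encrypted assets aren't
# supported in local runs, per the note above.
$setting = Get-AutomationVariable -Name 'MyAppSetting'
Write-Output "MyAppSetting is currently set to: $setting"
```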
+## Limitations
+Currently, the following features aren't supported:
+
+- Creation of new schedules.
+- Adding new Certificates in Assets.
+- Upload Modules (PowerShell and Python) packages from the extension.
+- Auto-sync of local runbooks to the Azure Automation account. You have to **Fetch** or **Publish** the runbook manually.
+- Management of Hybrid worker groups.
+- Graphical runbook and workflows.
+- For Python, we don't provide any debug options. We recommend that you use a debugger extension for your Python scripts.
+- Currently, only unencrypted assets are supported in local runs.
+
+## Next steps
+
+- For Runbook management operations and to test runbook and jobs, see [Use Azure Automation extension for Visual Studio Code](../automation/how-to/runbook-authoring-extension-for-vscode.md)
+
automation Runbook Authoring Extension For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/runbook-authoring-extension-for-vscode.md
+
+ Title: Azure Automation extension for Visual Studio Code
+description: Learn how to use the Azure Automation extension for Visual Studio Code to author runbooks.
Last updated : 01/10/2023+++
+# Use Azure Automation extension for Visual Studio Code
+
+This article explains how to use the Azure Automation extension for Visual Studio Code to create and manage runbooks. You can perform all runbook management operations such as creating runbooks, editing runbooks, triggering a job, tracking recent job outputs, linking a schedule, asset management, and local debugging.
++
+## Prerequisites
+
+The following items are required for completing the steps in this article:
+
+- An Azure subscription. If you don't have an Azure subscription, create a
+ [free account](https://azure.microsoft.com/free/)
+- [Visual Studio Code](https://code.visualstudio.com).
+- PowerShell modules and Python packages used by the runbook must be installed locally on the machine to run the runbook locally.
+
+## Install and configure the Azure Automation extension
+
+After you meet the prerequisites, you can install the [Azure Automation extension for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=azure-automation.vscode-azureautomation&ssr=false#overview) by
+following these steps:
+
+1. Open Visual Studio Code.
+1. From the menu bar, go to **View** > **Extensions**.
+1. In the search box, enter **Azure Automation**.
+1. Select **Azure Automation** from the search results, and then select **Install**.
+1. Select **Reload** when necessary.
+
+## Using the Azure Automation extension
+
+The extension simplifies the process of creating and editing runbooks. You can now test them locally without logging into the Azure portal. The various actions that you can perform are listed below:
+
+### Create a runbook
+
+To create a runbook in the Automation account, follow these steps:
+
+1. Sign in to Azure from the Azure Automation extension.
+1. Select **Runbooks**.
+1. Right-click and select **Create Runbook** to create a new runbook in the Automation account.
+
+ :::image type="content" source="media/runbook-authoring-extension-for-vscode/create-runbook-inline.png" alt-text="Screenshot on how to create runbook using the Azure Automation extension." lightbox="media/runbook-authoring-extension-for-vscode/create-runbook-expanded.png":::
+
+### Publish a runbook
+
+To publish a runbook in the Automation account, follow these steps:
+
+1. In the Automation account, select the runbook.
+1. Right-click and select **Publish runbook** to publish the runbook.
+
+ A notification appears that the runbook is successfully published.
+
+ :::image type="content" source="media/runbook-authoring-extension-for-vscode/publish-runbook-inline.png" alt-text="Screenshot on how to publish runbook using the Azure Automation extension." lightbox="media/runbook-authoring-extension-for-vscode/publish-runbook-expanded.png":::
+
+
+### Run local version of Automation job
+
+To run a local version of an Automation job, follow these steps:
+
+1. In the Automation account, select the runbook.
+1. Right-click and select **Run Local** to run a local version of the Automation job.
+
+ :::image type="content" source="media/runbook-authoring-extension-for-vscode/run-local-job-inline.png" alt-text="Screenshot on how to run local version of job using the Azure Automation extension." lightbox="media/runbook-authoring-extension-for-vscode/run-local-job-expanded.png":::
++
+### Run Automation job
+
+To run the Automation job, follow these steps:
+
+1. In the Automation account, select the runbook.
+1. Right-click and select **Start Automation job** to run the Automation job.
+
+ :::image type="content" source="media/runbook-authoring-extension-for-vscode/start-automation-job-inline.png" alt-text="Screenshot on how to run Automation job using the Azure Automation extension." lightbox="media/runbook-authoring-extension-for-vscode/start-automation-job-expanded.png":::
+
+### Add new webhook
+
+To add a webhook to the runbook, follow these steps:
+
+1. In the Automation account, select the runbook.
+1. Right-click and select **Add New Webhook**.
+1. Select and copy the Webhook URI.
+1. Use the command palette and select **Azure Automation Trigger Webhook**
+1. Paste the Webhook URI.
+
+ A notification appears that JobId is created successfully.
+
+ :::image type="content" source="media/runbook-authoring-extension-for-vscode/add-new-webhook-inline.png" alt-text="Screenshot that shows the notification after successfully adding a new webhook." lightbox="media/runbook-authoring-extension-for-vscode/add-new-webhook-expanded.png":::
+
+
+### Link a schedule
+
+1. In the Automation account, go to **Schedules** and select your schedule.
+1. Go to **Runbooks** and select your runbook.
+1. Right-click and select **Link Schedule**, and confirm the schedule.
+1. In the drop-down, select **Azure**.
+
+ A notification appears that the schedule is linked.
++
+### Manage Assets
+1. In the Automation account, go to **Assets** > **fx Variables**.
+1. Right-click and select **Create or Update**.
+1. Provide a name in the text box.
+
+ A notification appears that the variable is created. You can view the new variable under the **fx Variables** option.
+
+### Run local in debug mode
+1. In the Automation account, go to **Runbooks** and select a runbook.
+1. In the edit pane, add the breakpoint.
+1. Right-click the runbook and select **Run local in Debug Mode**.
+
+ :::image type="content" source="media/runbook-authoring-extension-for-vscode/run-local-debug-mode-inline.png" alt-text="Screenshot that shows the running of local runbook in debug mode." lightbox="media/runbook-authoring-extension-for-vscode/add-new-webhook-expanded.png":::
+
+### Compare local runbook
+1. In the Automation account, go to **Runbooks** and select a runbook.
+1. Right-click the runbook and select **Compare local runbook**.
+1. In the edit pane, you see the information in two layouts: the local runbook copy and the published or draft copy.
+ >[!NOTE]
+ >If the runbook is in **InEdit** mode, you have to select either the Compare Published content or the Compare Draft content option to compare.
+
+ :::image type="content" source="media/runbook-authoring-extension-for-vscode/compare-local-runbook-inline.png" alt-text="Screenshot that shows how to compare local runbook." lightbox="media/runbook-authoring-extension-for-vscode/compare-local-runbook-expanded.png":::
+
+## Next steps
+
+- For information on key features and limitations of the Azure Automation extension, see [Runbook authoring through VS Code in Azure Automation](../automation-runbook-authoring.md).
azure-app-configuration Quickstart Aspnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-aspnet-core-app.md
ms.devlang: csharp-+ Previously updated : 9/29/2022 Last updated : 01/04/2023 #Customer intent: As an ASP.NET Core developer, I want to learn how to manage all my app settings in one place.
Use the [.NET Core command-line interface (CLI)](/dotnet/core/tools) to create a
Run the following command to create an ASP.NET Core web app in a new *TestAppConfig* folder:

#### [.NET 6.x](#tab/core6x)

```dotnetcli
dotnet new webapp --output TestAppConfig --framework net6.0
```

#### [.NET Core 3.x](#tab/core3x)

```dotnetcli
dotnet new webapp --output TestAppConfig --framework netcoreapp3.1
```

## Connect to the App Configuration store
dotnet new webapp --output TestAppConfig --framework netcoreapp3.1
Secret Manager stores the secret outside of your project tree, which helps prevent the accidental sharing of secrets within source code. It's used only to test the web app locally. When the app is deployed to Azure like [App Service](../app-service/overview.md), use the *Connection strings*, *Application settings* or environment variables to store the connection string. Alternatively, to avoid connection strings altogether, you can [connect to App Configuration using managed identities](./howto-integrate-azure-managed-service-identity.md) or your other [Azure AD identities](./concept-enable-rbac.md).
-1. Open *Program.cs*, and add Azure App Configuration as an extra configuration source by calling the `AddAzureAppConfiguration` method.
+1. Open *Program.cs* and add Azure App Configuration as an extra configuration source by calling the `AddAzureAppConfiguration` method.
#### [.NET 6.x](#tab/core6x)+ ```csharp var builder = WebApplication.CreateBuilder(args);
dotnet new webapp --output TestAppConfig --framework netcoreapp3.1
// The rest of existing code in program.cs // ... ... ```
-
+ #### [.NET Core 3.x](#tab/core3x)+ Update the `CreateHostBuilder` method.
-
+ ```csharp public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args)
dotnet new webapp --output TestAppConfig --framework netcoreapp3.1
webBuilder.UseStartup<Startup>(); }); ```+ This code will connect to your App Configuration store using a connection string and load *all* key-values that have *no labels*. For more information on the App Configuration provider, see the [App Configuration provider API reference](/dotnet/api/Microsoft.Extensions.Configuration.AzureAppConfiguration).
dotnet new webapp --output TestAppConfig --framework netcoreapp3.1
In this example, you'll update a web page to display its content using the settings you configured in your App Configuration store.
-1. Add a *Settings.cs* file at the root of your project directory. It defines a strongly typed `Settings` class for the configuration you're going to use. Replace the namespace with the name of your project.
+1. Add a *Settings.cs* file at the root of your project directory. It defines a strongly typed `Settings` class for the configuration you're going to use. Replace the namespace with the name of your project.
```csharp namespace TestAppConfig
In this example, you'll update a web page to display its content using the setti
1. Bind the `TestApp:Settings` section in configuration to the `Settings` object. #### [.NET 6.x](#tab/core6x)
- Update *Program.cs* with the following code.
+
+ Update *Program.cs* with the following code and add the `TestAppConfig` namespace at the beginning of the file.
```csharp
+ using TestAppConfig;
+ // Existing code in Program.cs // ... ...
In this example, you'll update a web page to display its content using the setti
// The rest of existing code in program.cs // ... ... ```
-
+ #### [.NET Core 3.x](#tab/core3x)+ Open *Startup.cs* and update the `ConfigureServices` method.
-
+ ```csharp public void ConfigureServices(IServiceCollection services) {
In this example, you'll update a web page to display its content using the setti
services.Configure<Settings>(Configuration.GetSection("TestApp:Settings")); } ```+
-1. Open *Index.cshtml.cs* in the *Pages* directory, and update the `IndexModel` class with the following code. Add `using Microsoft.Extensions.Options` namespace at the beginning of the file, if it's not already there.
+1. Open *Index.cshtml.cs* in the *Pages* directory, and update the `IndexModel` class with the following code. Add the `using Microsoft.Extensions.Options` namespace at the beginning of the file, if it's not already there.
```csharp public class IndexModel : PageModel
In this example, you'll update a web page to display its content using the setti
dotnet run ```
-1. Open a browser and navigate to the URL the app is listening on, as specified in the command output. It looks like `https://localhost:5001`.
+1. The output of the `dotnet run` command contains two URLs. Open a browser and navigate to either one of these URLs to access your application. For example: `https://localhost:5001`.
If you're working in the Azure Cloud Shell, select the *Web Preview* button followed by *Configure*. When prompted to configure the port for preview, enter *5000*, and select *Open and browse*.
- ![Locate the Web Preview button](./media/quickstarts/cloud-shell-web-preview.png)
+ :::image type="content" source="./media/quickstarts/cloud-shell-web-preview.png" alt-text="Screenshot of Azure Cloud Shell. Locate Web Preview.":::
- The web page will look like this:
- ![Launching quickstart app locally](./media/quickstarts/aspnet-core-app-launch-local-before.png)
+ The web page looks like this:
+ :::image type="content" source="./media/quickstarts/aspnet-core-app-launch-local-navbar.png" alt-text="Screenshot of the browser. Launching quickstart app locally.":::
## Clean up resources
In this quickstart, you:
To learn how to configure your ASP.NET Core web app to dynamically refresh configuration settings, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Enable dynamic configuration](./enable-dynamic-configuration-aspnet-core.md)
+> [Enable dynamic configuration](./enable-dynamic-configuration-aspnet-core.md)
azure-arc Managed Instance Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-features.md
Azure Arc-enabled SQL Managed Instance supports various data tools that can help
### <a name="Unsupported"></a> Unsupported Features & Services
-The following features and services are not available for Azure Arc-enabled SQL Managed Instance. The support of these features will be increasingly enabled over time.
+The following features and services are not available for Azure Arc-enabled SQL Managed Instance.
| Area | Unsupported feature or service | |--|--|
The following features and services are not available for Azure Arc-enabled SQL
| &nbsp; | FileTable, FILESTREAM | | &nbsp; | CLR assemblies with the EXTERNAL_ACCESS or UNSAFE permission set | | &nbsp; | Buffer Pool Extension |
-| **SQL Server Agent** | Subsystems: CmdExec, PowerShell, Queue Reader, SSIS, SSAS, SSRS |
-| &nbsp; | Alerts |
-| &nbsp; | Managed Backup |
+| **SQL Server Agent** | SQL Server Agent is supported, but the following specific capabilities are not supported: Subsystems (CmdExec, PowerShell, Queue Reader, SSIS, SSAS, SSRS), Alerts, Managed Backup |
| **High Availability** | Database mirroring | | **Security** | Extensible Key Management | | &nbsp; | AD Authentication for Linked Servers |
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| - | :--: | :--: | :--: | :--: | :--: |
| [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/cache/v1_0/) |-|✔|✔|✔|✔|
| Data encryption in transit |✔|✔|✔|✔|✔|
-| [Network isolation](cache-how-to-premium-vnet.md) |✔|✔|✔|✔|✔|
+| [Network isolation](cache-private-link.md) |✔|✔|✔|✔|✔|
| [Scaling](cache-how-to-scale.md) |✔|✔|✔|-|-|
| [OSS clustering](cache-how-to-premium-clustering.md) |-|-|✔|✔|✔|
| [Data persistence](cache-how-to-premium-persistence.md) |-|-|✔|Preview|Preview|
azure-functions Create First Function Vs Code Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md
adobe-target-content: ./create-first-function-vs-code-csharp-ieux
# Quickstart: Create a C# function in Azure using Visual Studio Code
-This article creates an HTTP triggered function that runs on .NET 6, either in-process or isolated worker process. .NET Functions isolated worker process also lets you run on .NET 7 (in preview). For information about all .NET versions supported by isolated worker process, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions).
-
-There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
-
-By default, this article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions, such as .NET 6. When creating your project, you can choose to instead create a function that runs on .NET 6 in an [isolated worker process](dotnet-isolated-process-guide.md). [Isolated worker process](dotnet-isolated-process-guide.md) supports both LTS and Standard Term Support (STS) versions of .NET. For more information, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions) in the .NET Functions isolated worker process guide.
+This article shows you how to create C# functions that run on .NET 6 [in the same process as the Functions host](functions-dotnet-class-library.md). These _in-process_ C# functions are only supported on [Long Term Support (LTS)](https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core) .NET versions, such as .NET 6. When creating your project, you can choose to instead create a function that runs on .NET 6 in an [isolated worker process](dotnet-isolated-process-guide.md). [Isolated worker process](dotnet-isolated-process-guide.md) supports both LTS and Standard Term Support (STS) versions of .NET. For more information, see [Supported versions](dotnet-isolated-process-guide.md#supported-versions) in the .NET Functions isolated worker process guide. There's also a [CLI-based version](create-first-function-cli-csharp.md) of this article.
Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
def main(req: func.HttpRequest,
``` When the function is invoked, the HTTP request is passed to the function as `req`. An entry will be retrieved from the Azure Blob Storage account based on the _ID_ in the route URL and made available as `obj` in the function body. Here, the specified storage account is the connection string that's found in the AzureWebJobsStorage app setting, which is the same storage account that's used by the function app.+
+For data intensive binding operations, you may want to use a separate storage account. For more information, see [Storage account guidance](storage-considerations.md#storage-account-guidance).
At this time, only specific triggers and bindings are supported by the Python v2 programming model. Supported triggers and bindings are as follows: | Type | Trigger | Input binding | Output binding |
azure-monitor Azure Monitor Agent Windows Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-windows-client.md
description: This article describes the instructions to install the agent on Win
Previously updated : 10/18/2022 Last updated : 1/9/2023
Now we associate the Data Collection Rules (DCR) to the Monitored Object by crea
**Request URI** ```HTTP
-PUT https://management.azure.com/{MOResourceId}/providers/microsoft.insights/datacollectionruleassociations/assoc?api-version=2021-04-01
+PUT https://management.azure.com/{MOResourceId}/providers/microsoft.insights/datacollectionruleassociations/{associationName}?api-version=2021-09-01-preview
``` **Sample Request URI** ```HTTP
-PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{AADTenantId}/providers/microsoft.insights/datacollectionruleassociations/assoc?api-version=2021-04-01
+PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{AADTenantId}/providers/microsoft.insights/datacollectionruleassociations/{associationName}?api-version=2021-09-01-preview
``` **URI Parameters**
PUT https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/{
| `dataCollectionRuleID` | The resource ID of an existing Data Collection Rule that you created in the **same region** as the Monitored Object. |
-### Using PowerShell
+### Using PowerShell for onboarding
```PowerShell
$TenantID = "xxxxxxxxx-xxxx-xxx" #Your Tenant ID
$SubscriptionID = "xxxxxx-xxxx-xxxxx" #Your Subscription ID
$ResourceGroup = "rg-yourResourceGroup" #Your resource group
-$DCRName = "CollectWindowsOSlogs" #Your Data collection rule name
Connect-AzAccount -Tenant $TenantID
$body = @"
} "@
-$request = "https://management.azure.com/providers/microsoft.insights/providers/microsoft.authorization/roleassignments/$newguid`?api-version=2021-04-01-preview"
+$requestURL = "https://management.azure.com/providers/microsoft.insights/providers/microsoft.authorization/roleassignments/$newguid`?api-version=2021-04-01-preview"
-Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body $body
+Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method PUT -Body $body
##########################
Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body
#2. Create Monitored Object # "location" property value under the "body" section should be the Azure region where the MO object would be stored. It should be the "same region" where you created the Data Collection Rule. This is the location of the region from where agent communications would happen.-
-$request = "https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/$TenantID`?api-version=2021-09-01-preview"
-$body = @'
+$Location = "eastus" #Use your own location
+$requestURL = "https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/$TenantID`?api-version=2021-09-01-preview"
+$body = @"
{ "properties":{
- "location":"eastus"
+ "location":`"$Location`"
} }
-'@
+"@
-$Respond = Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body $body -Verbose
+$Respond = Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method PUT -Body $body -Verbose
$RespondID = $Respond.id ########################## #3. Associate DCR to Monitored Object
+#See reference documentation https://learn.microsoft.com/en-us/rest/api/monitor/data-collection-rule-associations/create?tabs=HTTP
+$associationName = "assoc01" #You can define your own association name. You must use a unique name if you want to associate multiple DCRs with the monitored object
+$DCRName = "dcr-WindowsClientOS" #Your Data collection rule name
+
+$requestURL = "https://management.azure.com$RespondId/providers/microsoft.insights/datacollectionruleassociations/$associationName`?api-version=2021-09-01-preview"
+$body = @"
+ {
+ "properties": {
+ "dataCollectionRuleId": "/subscriptions/$SubscriptionID/resourceGroups/$ResourceGroup/providers/Microsoft.Insights/dataCollectionRules/$DCRName"
+ }
+ }
+
+"@
+
+Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method PUT -Body $body
-$request = "https://management.azure.com$RespondId/providers/microsoft.insights/datacollectionruleassociations/assoc?api-version=2021-04-01"
+#(Optional example). Associate another DCR to Monitored Object
+#See reference documentation https://learn.microsoft.com/en-us/rest/api/monitor/data-collection-rule-associations/create?tabs=HTTP
+$associationName = "assoc02" #You must change the association name to a unique name if you want to associate multiple DCRs with the monitored object
+$DCRName = "dcr-PAW-WindowsClientOS" #Your Data collection rule name
+
+$requestURL = "https://management.azure.com$RespondId/providers/microsoft.insights/datacollectionruleassociations/$associationName`?api-version=2021-09-01-preview"
$body = @" { "properties": {
$body = @"
"@
-Invoke-RestMethod -Uri $request -Headers $AuthenticationHeader -Method PUT -Body $body
+Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method PUT -Body $body
+
+#4. (Optional) Get all the associations.
+$requestURL = "https://management.azure.com$RespondId/providers/microsoft.insights/datacollectionruleassociations?api-version=2021-09-01-preview"
+(Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method get).value
++ ```
+### Using PowerShell for offboarding
+```PowerShell
+#This will remove the monitor object
+$TenantID = "xxxxxxxxx-xxxx-xxx" #Your Tenant ID
+$SubscriptionID = "xxxxxx-xxxx-xxxxx" #Your Subscription ID
+$ResourceGroup = "rg-yourResourceGroup" #Your resource group
+Connect-AzAccount -Tenant $TenantID
+
+#Select the subscription
+Select-AzSubscription -SubscriptionId $SubscriptionID
+
+#Delete monitored object
+$requestURL = "https://management.azure.com/providers/Microsoft.Insights/monitoredObjects/$TenantID`?api-version=2021-09-01-preview"
+#Invoke-RestMethod -Uri $requestURL -Headers $AuthenticationHeader -Method Delete
+
+```
## Verify successful setup

Check the 'Heartbeat' table (and other tables you configured in the rules) in the Log Analytics workspace that you specified as a destination in the data collection rule(s).
In order to update the version, install the new version you wish to update to.
3. The 'ServiceLogs' folder contains log from AMA Windows Service, which launches and manages AMA processes 4. 'AzureMonitorAgent.MonitoringDataStore' contains data/logs from AMA processes.
-### Common issues
+### Common installation issues
#### Missing DLL - Error message: "There's a problem with this Windows Installer package. A DLL required for this installer to complete could not be run. …" - Ensure you have installed [C++ Redistributable (>2015)](/cpp/windows/latest-supported-vc-redist?view=msvc-170&preserve-view=true) before installing AMA:
+#### Not AAD joined
+Error message: "Tenant and device ids retrieval failed"
+1. Run the command `dsregcmd /status`. This should produce the output as `AzureAdJoined : YES` in the 'Device State' section. If not, join the device with an AAD tenant and try installation again.
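To check just the join state without scanning the full output, you can filter it (a small sketch):

```powershell
# Show only the Azure AD join state line from the device registration status output.
dsregcmd /status | Select-String "AzureAdJoined"
```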
+ #### Silent install from command prompt fails Make sure to start the installer on administrator command prompt. Silent install can only be initiated from the administrator command prompt.
Make sure to start the installer on administrator command prompt. Silent install
- If there's an option to try again, do try it again
- If retry from the uninstaller doesn't work, cancel the uninstall and stop the Azure Monitor Agent service from Services (Desktop Application)
- Retry uninstall

#### Force uninstall manually when uninstaller doesn't work
- Stop the Azure Monitor Agent service. Then try uninstalling again. If it fails, proceed with the following steps
- Delete the AMA service with "sc delete AzureMonitorAgent" from admin cmd
Make sure to start the installer on administrator command prompt. Silent install
- Delete AMA data/logs. They're stored in `C:\Resources\Azure Monitor Agent` by default - Open Registry. Check `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Azure Monitor Agent`. If it exists, delete the key.
+### Post installation/Operational issues
+If the agent is installed successfully (that is, you see the agent service running) but you don't see data as expected, follow the standard troubleshooting steps listed for [Windows VM](./azure-monitor-agent-troubleshoot-windows-vm.md) and [Windows Arc-enabled server](azure-monitor-agent-troubleshoot-windows-arc.md) respectively.
## Questions and feedback

Take this [quick survey](https://forms.microsoft.com/r/CBhWuT1rmM) or share your feedback/questions regarding the client installer.
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
Metric alerts are stateful by default, so other alerts aren't fired if there's a
- If you create the alert rule programmatically, for example, via [Azure Resource Manager](./alerts-metric-create-templates.md), [PowerShell](/powershell/module/az.monitor/), [REST](/rest/api/monitor/metricalerts/createorupdate), or the [Azure CLI](/cli/azure/monitor/metrics/alert), set the `autoMitigate` property to `False`. - If you create the alert rule via the Azure portal, clear the **Automatically resolve alerts** option under the **Alert rule details** section.
-<sup>1</sup> For stateless metric alert rules, an alert triggers once every 10 minutes at a minimum, even if the frequency of evaluation is equal or less than 5 minutes and the condition is still being met.
+<sup>1</sup>The frequency of notifications for stateless metric alerts differs based on the alert rule's configured frequency:
+
+- **Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent somewhere between one and six minutes.
+- **Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent between the configured frequency and double the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent somewhere between 15 to 30 minutes.
> [!NOTE] > Making a metric alert rule stateless prevents fired alerts from becoming resolved. So, even after the condition isn't met anymore, the fired alerts remain in a fired state until the 30-day retention period.
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
Title: Dependency tracking in Application Insights | Microsoft Docs description: Monitor dependency calls from your on-premises or Azure web application with Application Insights. Previously updated : 12/13/2022 Last updated : 01/09/2023 ms.devlang: csharp
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Title: Create and run custom availability tests by using Azure Functions description: This article explains how to create an Azure function with TrackAvailability() that will run periodically according to the configuration given in a TimerTrigger function. Previously updated : 05/06/2021 Last updated : 01/06/2023 ms.devlang: csharp
azure-monitor Azure Web Apps Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/azure-web-apps-net-core.md
Title: Monitor Azure App Service performance in .NET Core | Microsoft Docs description: Application performance monitoring for Azure App Service using ASP.NET Core. Chart load and response time, dependency information, and set alerts on performance. Previously updated : 11/15/2022 Last updated : 01/09/2023 ms.devlang: csharp
azure-monitor Codeless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/codeless-overview.md
Title: Monitor your apps without code changes - auto-instrumentation for Azure Monitor Application Insights | Microsoft Docs description: Overview of auto-instrumentation for Azure Monitor Application Insights - codeless application performance management Previously updated : 10/19/2022 Last updated : 01/06/2023
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
For more information, see [Monitoring Azure Functions with Azure Monitor Applica
## Spring Boot
-Read the Spring Boot documentation [on this website](../app/java-in-process-agent.md).
+For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
## Third-party container images
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
Title: Diagnose with Live Metrics - Application Insights - Azure Monitor description: Monitor your web app in real time with custom metrics, and diagnose issues with a live feed of failures, traces, and events. Previously updated : 11/15/2022 Last updated : 01/06/2023 ms.devlang: csharp
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
Title: Performance counters in Application Insights | Microsoft Docs description: Monitor system and custom .NET performance counters in Application Insights. Previously updated : 06/30/2022 Last updated : 01/06/2023 ms.devlang: csharp
azure-monitor Pre Aggregated Metrics Log Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md
Title: Log-based and pre-aggregated metrics in Application Insights | Microsoft Docs description: This article explains when to use log-based versus pre-aggregated metrics in Application Insights. Previously updated : 09/18/2018 Last updated : 01/06/2023
azure-monitor Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sampling.md
Title: Telemetry sampling in Azure Application Insights | Microsoft Docs description: How to keep the volume of telemetry under control. Previously updated : 11/15/2022 Last updated : 01/06/2023
In [`ApplicationInsights.config`](./configuration-with-applicationinsights-confi
* `<ExcludedTypes>type;type</ExcludedTypes>`
- A semi-colon delimited list of types that you don't want to be subject to sampling. Recognized types are: `Dependency`, `Event`, `Exception`, `PageView`, `Request`, `Trace`. All telemetry of the specified types is transmitted; the types that aren't specified will be sampled.
+ A semi-colon delimited list of types that you don't want to be subject to sampling. Recognized types are: [`Dependency`](data-model-dependency-telemetry.md), [`Event`](data-model-event-telemetry.md), [`Exception`](data-model-exception-telemetry.md), [`PageView`](data-model-pageview-telemetry.md), [`Request`](data-model-request-telemetry.md), [`Trace`](data-model-trace-telemetry.md). All telemetry of the specified types is transmitted; the types that aren't specified will be sampled.
* `<IncludedTypes>type;type</IncludedTypes>`
- A semi-colon delimited list of types that you do want to subject to sampling. Recognized types are: `Dependency`, `Event`, `Exception`, `PageView`, `Request`, `Trace`. The specified types will be sampled; all telemetry of the other types will always be transmitted.
+ A semi-colon delimited list of types that you do want to subject to sampling. Recognized types are: [`Dependency`](data-model-dependency-telemetry.md), [`Event`](data-model-event-telemetry.md), [`Exception`](data-model-exception-telemetry.md), [`PageView`](data-model-pageview-telemetry.md), [`Request`](data-model-request-telemetry.md), [`Trace`](data-model-trace-telemetry.md). The specified types will be sampled; all telemetry of the other types will always be transmitted.
**To switch off** adaptive sampling, remove the `AdaptiveSamplingTelemetryProcessor` node(s) from `ApplicationInsights.config`.
azure-monitor Status Monitor V2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/status-monitor-v2-overview.md
Title: Application Insights Agent overview | Microsoft Docs description: Learn how to use Application Insights Agent to monitor website performance without redeploying the website. It works with ASP.NET web apps hosted on-premises, in VMs, or on Azure. Previously updated : 11/15/2022 Last updated : 01/09/2023
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
This latest update adds a new column and reorders the metrics to be alphabetical
|BackendLastByteResponseTime|No|Backend Last Byte Response Time|MilliSeconds|Average|Time interval between start of establishing a connection to backend server and receiving the last byte of the response body|Listener, BackendServer, BackendPool, BackendHttpSetting| |BackendResponseStatus|Yes|Backend Response Status|Count|Total|The number of HTTP response codes generated by the backend members. This does not include any response codes generated by the Application Gateway.|BackendServer, BackendPool, BackendHttpSetting, HttpStatusGroup| |BlockedCount|Yes|Web Application Firewall Blocked Requests Rule Distribution|Count|Total|Web Application Firewall blocked requests rule distribution|RuleGroup, RuleId|
-|BlockedReqCount|Yes|Web Application Firewall Blocked Requests Count|Count|Total|Web Application Firewall blocked requests count|No Dimensions|
|BytesReceived|Yes|Bytes Received|Bytes|Total|The total number of bytes received by the Application Gateway from the clients|Listener| |BytesSent|Yes|Bytes Sent|Bytes|Total|The total number of bytes sent by the Application Gateway to the clients|Listener| |CapacityUnits|No|Current Capacity Units|Count|Average|Capacity Units consumed|No Dimensions|
azure-monitor Log Query Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-query-overview.md
Areas in Azure Monitor where you'll use queries include:
- [Azure dashboards](../visualize/tutorial-logs-dashboards.md): Pin the results of any query into an Azure dashboard, which allows you to visualize log and metric data together and optionally share with other Azure users.
- [Azure Logic Apps](../logs/logicapp-flow-connector.md): Use the results of a log query in an automated workflow by using Logic Apps.
- [PowerShell](/powershell/module/az.operationalinsights/invoke-azoperationalinsightsquery): Use the results of a log query in a PowerShell script from a command line or an Azure Automation runbook that uses `Invoke-AzOperationalInsightsQuery` (see the sketch after this list).
-- [Azure Monitor Logs API](https://dev.loganalytics.io): Retrieve log data from the workspace from any REST API client. The API request includes a query that's run against Azure Monitor to determine the data to retrieve.
+- [Azure Monitor Logs API](/rest/api/loganalytics/): Retrieve log data from the workspace from any REST API client. The API request includes a query that's run against Azure Monitor to determine the data to retrieve.
- **Azure Monitor Query SDK**: Retrieve log data from the workspace via an idiomatic client library for the following ecosystems: - [.NET](/dotnet/api/overview/azure/Monitor.Query-readme) - [Go](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/monitor/azquery)
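As a quick illustration of the PowerShell option above, the following sketch runs a query against a workspace. The workspace ID and the query are placeholders; it assumes the Az.OperationalInsights module is installed and that you've already signed in with `Connect-AzAccount`:

```powershell
# Run a KQL query against a Log Analytics workspace and display the results.
# Replace the workspace ID with your own; the query here is only an example.
$results = Invoke-AzOperationalInsightsQuery `
    -WorkspaceId "00000000-0000-0000-0000-000000000000" `
    -Query "Heartbeat | summarize count() by Computer | top 10 by count_"
$results.Results
```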
azure-portal Azure Portal Safelist Urls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-safelist-urls.md
The URL endpoints to allow for the Azure portal are specific to the Azure cloud
``` *.login.microsoftonline.com *.aadcdn.msftauth.net
+*.aadcdn.msftauthimages.net
+*.aadcdn.msauthimages.net
*.logincdn.msftauth.net *.login.live.com *.msauth.net
azure-resource-manager Add Template To Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/add-template-to-azure-pipelines.md
Title: CI/CD with Azure Pipelines and Bicep files description: In this quickstart, you learn how to configure continuous integration in Azure Pipelines by using Bicep files. It shows how to use an Azure CLI task to deploy a Bicep file. Previously updated : 08/03/2022 Last updated : 01/10/2023 # Quickstart: Integrate Bicep with Azure Pipelines
steps:
azureSubscription: $(azureServiceConnection) scriptType: bash scriptLocation: inlineScript
+ useGlobalConfig: false
inlineScript: | az --version az group create --name $(resourceGroupName) --location $(location) az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile) ```
-For the descriptions of the task inputs, see [Azure CLI task](/azure/devops/pipelines/tasks/deploy/azure-cli).
+For the descriptions of the task inputs, see [Azure CLI task](/azure/devops/pipelines/tasks/reference/azure-cli-v2). When using the task in an air-gapped cloud, you must set the `useGlobalConfig` property of the task to `true`. The default value is `false`.
Select **Save**. The build pipeline automatically runs. Go back to the summary for your build pipeline, and watch the status.
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/install.md
Title: Set up Bicep development and deployment environments description: How to configure Bicep development and deployment environments Previously updated : 11/03/2022 Last updated : 01/10/2023
The `bicep install` and `bicep upgrade` commands don't work in an air-gapped env
1. Download **bicep-win-x64.exe** from the [Bicep release page](https://github.com/Azure/bicep/releases/latest/) in a non-air-gapped environment. 1. Copy the executable to the **%UserProfile%/.azure/bin** directory on an air-gapped machine. Rename file to **bicep.exe**.
+When using the [Azure CLI task](/azure/devops/pipelines/tasks/reference/azure-cli-v2) in an air-gapped cloud, you must set the `useGlobalConfig` property of the task to `true`. The default value is `false`. See [CI/CD with Azure Pipelines and Bicep files](./add-template-to-azure-pipelines.md) for an example.
+ ## Install the nightly builds If you'd like to try the latest pre-release bits of Bicep before they're released, see [Install nightly builds](https://github.com/Azure/bicep/blob/main/docs/installing-nightly.md).
azure-resource-manager Azure Subscription Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-subscription-service-limits.md
Last updated 12/13/2022
This document lists some of the most common Microsoft Azure limits, which are also sometimes called quotas.
-To learn more about Azure pricing, see [Azure pricing overview](https://azure.microsoft.com/pricing/). There, you can estimate your costs by using the [pricing calculator](https://azure.microsoft.com/pricing/calculator/). You also can go to the pricing details page for a particular service, for example, [Windows VMs](https://azure.microsoft.com/pricing/details/virtual-machines/#Windows). For tips to help manage your costs, see [Prevent unexpected costs with Azure billing and cost management](../../cost-management-billing/cost-management-billing-overview.md).
+To learn more about Azure pricing, see [Azure pricing overview](https://azure.microsoft.com/pricing/). There, you can estimate your costs by using the [pricing calculator](https://azure.microsoft.com/pricing/calculator/). You also can go to the pricing details page for a particular service, for example, [Windows VMs](https://azure.microsoft.com/pricing/details/virtual-machines/Windows/). For tips to help manage your costs, see [Prevent unexpected costs with Azure billing and cost management](../../cost-management-billing/cost-management-billing-overview.md).
## Managing limits
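The tables in this article describe maximums; to see how close a subscription is to a given quota, the Azure CLI exposes per-service usage commands. A minimal, illustrative example (not part of the limits tables themselves) for compute usage in one region:

```azurecli
# List current vCPU and other compute usage against the subscription's quotas in one region.
az vm list-usage --location eastus --output table
```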
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
-description: Learn how to create Azure NetApp Files-based NSF datastores for Azure VMware Solution hosts.
+description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts.
Last updated 01/09/2023
There are some important best practices to follow for optimal performance of NFS
- For optimized performance, choose either **UltraPerformance** gateway or **ErGw3Az** gateway, and enable [FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md).
- Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. See [Service levels for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-service-levels.md) to understand the throughput allowed per provisioned TiB for each service level.
- Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type will determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 64, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).
-- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within same [availability zone](../availability-zones/az-overview.md#availability-zones).
+- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones).
> [!IMPORTANT] >Changing the Azure NetApp Files volumes tier after creating the datastore will result in unexpected behavior in the portal and API due to metadata mismatch. Set the performance tier of the Azure NetApp Files volume when you create the datastore. If you need to change the tier during run time, detach the datastore, change the performance tier of the volume, and then reattach the datastore. We are working on improvements to make this seamless.
azure-vmware Concepts Network Design Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md
Title: Concepts - Network design considerations
description: Learn about network design considerations for Azure VMware Solution Previously updated : 12/22/2022 Last updated : 1/10/2023 # Azure VMware Solution network design considerations
Azure VMware Solution offers a VMware private cloud environment accessible for u
## Azure VMware Solution compatibility with AS-Path Prepend
-Azure VMware Solution is incompatible with AS-Path Prepend for redundant ExpressRoute configurations and doesn't honor the outbound path selection from Azure towards on-premises. If you're running two or more ExpressRoute paths between on-premises and Azure, and the listed [Prerequisites](#prerequisites) are not met, you may experience impaired connectivity or no connectivity between your on-premises networks and Azure VMware Solution. The connectivity issue is caused when Azure VMware Solution doesn't see the AS-Path Prepend and uses equal cost multi-pathing (ECMP) to send traffic towards your environment over both ExpressRoute circuits. That action causes issues with stateful firewall inspection.
+Azure VMware Solution is compatible with AS-Path Prepend for redundant ExpressRoute configurations, with the caveat that it doesn't honor the outbound path selection from Azure towards on-premises. If you're running two or more ExpressRoute paths between on-premises and Azure, and the listed [Prerequisites](#prerequisites) are not met, you may experience impaired connectivity or no connectivity between your on-premises networks and Azure VMware Solution. The connectivity issue is caused when Azure VMware Solution doesn't see the AS-Path Prepend and uses equal cost multi-pathing (ECMP) to send traffic towards your environment over both ExpressRoute circuits. That action causes issues with stateful firewall inspection.
### Prerequisites
backup Backup Reports Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-reports-email.md
Title: Email Azure Backup Reports
description: Create automated tasks to receive periodic reports via email Last updated 04/06/2022-+ -+ # Email Azure Backup Reports
cloudfoundry Cloudfoundry Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloudfoundry/cloudfoundry-get-started.md
Microsoft provides best-effort support for OSS CF through the following communit
Pivotal Cloud Foundry includes the same core platform as the OSS distribution, along with a set of proprietary management tools and enterprise support. To run PCF on Azure, you must acquire a license from Pivotal. The PCF offer from the Azure marketplace includes a 90-day trial license.
-The tools include [Pivotal Operations Manager](https://docs.pivotal.io/ops-manager/2-10/install/), a web application that simplifies deployment and management of a Cloud Foundry foundation, and [Pivotal Apps Manager](https://docs.pivotal.io/pivotalcf/console/), a web application for managing users and applications.
+The tools include [Pivotal Operations Manager](https://docs.pivotal.io/ops-manager/2-10/install/), a web application that simplifies deployment and management of a Cloud Foundry foundation, and [Pivotal Apps Manager](https://docs.pivotal.io/application-service/2-7/console/index.html), a web application for managing users and applications.
In addition to the support channels listed for OSS CF above, a PCF license entitles you to contact Pivotal for support. Microsoft and Pivotal have also enabled support workflows that allow you to contact either party for assistance and have your inquiry routed appropriately depending on where the issue lies.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/overview.md
Azure Cognitive Service for Language is a cloud-based service that provides Natu
## Available features
-This Language service unifies Text Analytics, QnA Maker, and LUIS and provides several new features as well. These features can either be:
+This Language service unifies the following previously available Cognitive
+
+The Language service also provides several new features, which can either be:
* Pre-configured, which means the AI models that the feature uses are not customizable. You just send your data, and use the feature's output in your applications.
* Customizable, which means you'll train an AI model using our tools to fit your data specifically.
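For orientation, a pre-configured feature can be called with a single REST request. The following is a hedged sketch only: the endpoint shape, `api-version`, and environment variables are assumptions for illustration rather than values taken from this overview.

```bash
# Minimal sketch: call the unified Language endpoint for entity recognition.
# LANGUAGE_ENDPOINT and LANGUAGE_KEY are placeholders for your resource's values.
curl -X POST "$LANGUAGE_ENDPOINT/language/:analyze-text?api-version=2022-05-01" \
  -H "Ocp-Apim-Subscription-Key: $LANGUAGE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "kind": "EntityRecognition",
        "analysisInput": {
          "documents": [
            { "id": "1", "language": "en", "text": "Microsoft was founded by Bill Gates and Paul Allen in 1975." }
          ]
        }
      }'
```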
This Language service unifies Text Analytics, QnA Maker, and LUIS and provides s
:::image type="content" source="media/studio-examples/named-entity-recognition.png" alt-text="A screenshot of a named entity recognition example." lightbox="media/studio-examples/named-entity-recognition.png"::: :::column-end::: :::column span="":::
- [Named entity recognition](./named-entity-recognition/overview.md) is a pre-configured feature that identifies entities in unstructured text across several pre-defined categories. For example: people, events, places, dates, [and more](./named-entity-recognition/concepts/named-entity-categories.md).
+ [Named entity recognition](./named-entity-recognition/overview.md) is a pre-configured feature that categorizes entities (words or phrases) in unstructured text across several pre-defined category groups. For example: people, events, places, dates, [and more](./named-entity-recognition/concepts/named-entity-categories.md).
:::column-end::: :::row-end:::
This Language service unifies Text Analytics, QnA Maker, and LUIS and provides s
:::image type="content" source="media/studio-examples/entity-linking.png" alt-text="A screenshot of an entity linking example." lightbox="media/studio-examples/entity-linking.png"::: :::column-end::: :::column span="":::
- [Entity linking](./entity-linking/overview.md) is a pre-configured feature that disambiguates the identity of entities found in unstructured text and returns links to Wikipedia.
+ [Entity linking](./entity-linking/overview.md) is a pre-configured feature that disambiguates the identity of entities (words or phrases) found in unstructured text and returns links to Wikipedia.
:::column-end::: :::row-end:::
This Language service unifies Text Analytics, QnA Maker, and LUIS and provides s
:::image type="content" source="media/studio-examples/single-classification.png" alt-text="A screenshot of a custom text classification example." lightbox="media/studio-examples/single-classification.png"::: :::column-end::: :::column span="":::
- [Custom text classification](./custom-text-classification/overview.md) enables you to build custom AI models to classify text into custom classes you define.
+ [Custom text classification](./custom-text-classification/overview.md) enables you to build custom AI models to classify unstructured text documents into custom classes you define.
:::column-end::: :::row-end:::
This Language service unifies Text Analytics, QnA Maker, and LUIS and provides s
:::image type="content" source="media/studio-examples/custom-named-entity-recognition.png" alt-text="A screenshot of a custom NER example." lightbox="media/studio-examples/custom-named-entity-recognition.png"::: :::column-end::: :::column span="":::
- [Custom NER](custom-named-entity-recognition/overview.md) enables you to build custom AI models to extract custom entity categories, using unstructured text that you provide.
+ [Custom NER](custom-named-entity-recognition/overview.md) enables you to build custom AI models to extract custom entity categories (labels for words or phrases), using unstructured text that you provide.
:::column-end::: :::row-end:::
cognitive-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/tutorials/embeddings.md
Previously updated : 12/14/2022 Last updated : 01/10/2023 recommendations: false
# Tutorial: Explore Azure OpenAI embeddings and document search
-This tutorial will walk you through using the Azure OpenAI embeddings API to perform **document search** where you'll query a knowledge base to find the most relevant document.
+This tutorial will walk you through using the Azure OpenAI [embeddings](../concepts/understand-embeddings.md) API to perform **document search** where you'll query a knowledge base to find the most relevant document.
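Under the hood, the search relies on requesting an embedding vector for each piece of text. The following `curl` call is a minimal sketch, assuming placeholder resource and deployment names and an `api-version` that may differ from the one used in the tutorial:

```bash
# Minimal sketch: request an embedding vector for one piece of text.
# Replace <your-resource> and <your-deployment> with your own values, and set AZURE_OPENAI_KEY.
curl "https://<your-resource>.openai.azure.com/openai/deployments/<your-deployment>/embeddings?api-version=2022-12-01" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_KEY" \
  -d '{"input": "Sample document text to embed"}'
```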
In this tutorial, you learn how to:
communication-services Recording Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/recording-logs.md
Title: Azure Communication Services - Recording Analytics Public Preview-
-description: About using Log Analytics for recording logs
-
+ Title: Azure Communication Services - Call Recording summary logs
+
+description: Learn about the properties of summary logs for the Call Recording feature.
+
-# Call Recording Summary Log
-Call recording summary logs provide details about the call duration, media content (e.g., Audio-Video, Unmixed, Transcription, etc.), the format types used for the recording (e.g., WAV, MP4, etc.), as well as the reason of why the recording ended.
+# Call Recording summary logs
+In Azure Communication Services, summary logs for the Call Recording feature provide details about:
-Recording file is generated at the end of a call or meeting. The recording can be initiated and stopped by either a user or an app (bot) or ended due to a system failure.
+- Call duration.
+- Media content (for example, audio/video, unmixed, or transcription).
+- Format types used for the recording (for example, WAV or MP4).
+- The reason why the recording ended.
-> [!IMPORTANT]
+A recording file is generated at the end of a call or meeting. The recording can be initiated and stopped by either a user or an app (bot). It can also end because of a system failure.
-> Please note the call recording logs will be published once the call recording is ready to be downloaded. The log will be published within the standard latency time for Azure Monitor Resource logs see [Log data ingestion time in Azure Monitor](../../../azure-monitor/logs/data-ingestion-time.md#azure-metrics-resource-logs-activity-log)
+Summary logs are published after a recording is ready to be downloaded. The logs are published within the standard latency time for Azure Monitor resource logs. See [Log data ingestion time in Azure Monitor](../../../azure-monitor/logs/data-ingestion-time.md#azure-metrics-resource-logs-activity-log).
+## Properties
-## Properties Description
-
-| Field Name | DataType | Description |
+| Property name | Data type | Description |
|- |--|--|
-|timeGenerated|DateTime|The timestamp (UTC) of when the log was generated|
-|operationName| String | The operation associated with log record|
-|correlationId |String |`CallID` is used to correlate events between multiple tables|
-|recordingID| String | The ID given to the recording this log refers to|
-|category| String | The log category of the event. Logs with the same log category and resource type will have the same properties fields|
-|resultType| String| The status of the operation |
-|level |String |The severity level of the operation |
-|chunkCount |Integer|The total number of chunks created for the recording|
-|channelType| String |The recording's channel type, i.e., mixed, unmixed|
-|recordingStartTime| DateTime|The time that the recording started |
-|contentType| String | The recording's content, i.e., Audio Only, Audio - Video, Transcription, etc.|
-|formatType| String | The recording's file format |
-|recordingLength| Double | Duration of the recording in seconds |
-|audioChannelsCount| Integer | Total number of audio channels in the recording|
-|recordingEndReason| String | The reason why the recording ended |
--
-## Call recording and sample data
+|`timeGenerated`|DateTime|Time stamp (UTC) of when the log was generated.|
+|`operationName`|String|Operation associated with a log record.|
+|`correlationId`|String|ID that's used to correlate events between tables.|
+|`recordingID`|String|ID for the recording that this log refers to.|
+|`category`|String|Log category of the event. Logs with the same log category and resource type have the same property fields.|
+|`resultType`|String| Status of the operation.|
+|`level`|String |Severity level of the operation.|
+|`chunkCount`|Integer|Total number of chunks created for the recording.|
+|`channelType`|String|Channel type of the recording, such as mixed or unmixed.|
+|`recordingStartTime`|DateTime|Time that the recording started.|
+|`contentType`|String|Content of the recording, such as audio only, audio/video, or transcription.|
+|`formatType`|String|File format of the recording.|
+|`recordingLength`|Double|Duration of the recording in seconds.|
+|`audioChannelsCount`|Integer|Total number of audio channels in the recording.|
+|`recordingEndReason`|String|Reason why the recording ended.|
+
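If these logs are routed to a Log Analytics workspace, they can be queried from the command line. The following is a rough sketch only; the table name `ACSCallRecordingSummary` and the time window are assumptions, so confirm the table name against your own workspace schema.

```azurecli
# Hypothetical query: list recent Call Recording summary entries from a Log Analytics workspace.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "ACSCallRecordingSummary | where TimeGenerated > ago(7d) | take 20" \
  --output table
```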
+## Call Recording and example data
+ ```json "operationName": "Call Recording Summary", "operationVersion": "1.0", "category": "RecordingSummaryPUBLICPREVIEW", ```
-A call can have one recording or many recordings depending on how many times a recording event is triggered.
+A call can have one recording or many recordings, depending on how many times a recording event is triggered.
+
+For example, if an agent initiates an outbound call on a recorded line and the call drops because of a poor network signal, `callid` will have one `recordingid` value. If the agent calls back the customer, the system generates a new `callid` instance and a new `recordingid` value.
-For example, if an agent initiates an outbound call in a recorded line and the call drops due to poor network signal, the `callid` will have one `recordingid`. If the agent calls back the customer, the system will generate a new `callid` as well as a new `recordingid`.
+#### Example: Call Recording for one call to one recording
-#### Example1: Call recording for "One call to one recording"
```json "properties" {
For example, if an agent initiates an outbound call in a recorded line and the c
} ```
-If the agent initiated a recording and stopped and restarted the recording multiple times while the call is still on, the `callid` will have many `recordingid` depending on how many times the recording events were triggered.
+If the agent initiates a recording and then stops and restarts the recording multiple times while the call is still on, `callid` will have many `recordingid` values, depending on how many times the recording events were triggered.
+
+#### Example: Call Recording for one call to many recordings
-#### Example2: Call recording for "One call to many recordings"
```json {
If the agent initiated a recording and stopped and restarted the recording mult
"AudioChannelsCount": 1 } ```
-See also call recording for more info
-[Azure Communication Services Call Recording overview](../../../communication-services/concepts/voice-video-calling/call-recording.md)
+
+## Next steps
+
+For more information about Call Recording, see [Call Recording overview](../../../communication-services/concepts/voice-video-calling/call-recording.md).
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
Additional details on eligible subscription types are as follows:
\* Allowing the purchase of Italian phone numbers for CSP and LSP customers is planned only for General Availability launch.
-\** Applications from all other subscription types will be reviewed and approved on a case-by-case bases. Please reach out to phone@microsoft.com for assistance with your application.
+\** Applications from all other subscription types will be reviewed and approved on a case-by-case basis. Please reach out to acstns@microsoft.com for assistance with your application.
## Number capabilities
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
zone_pivot_groups: acs-azcli-js-csharp-java-python
> SMS capabilities depend on the phone number you use and the country that you're operating within as determined by your Azure billing address. For more information, visit the [Subscription eligibility](../../concepts/numbers/sub-eligibility-number-capability.md) documentation. > > Currently, SMS messages can only be sent to and received from United States phone numbers. For more information, see [Phone number types](../../concepts/telephony/plan-solution.md).+ <br/>
-<br/>
+ >[!VIDEO https://www.youtube.com/embed/YEyxSZqzF4o] ::: zone pivot="platform-azcli"
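For orientation, sending a message from the Azure CLI looks roughly like the following sketch. The command name, parameters, and the `communication` extension requirement are assumptions here, not steps taken from this quickstart; confirm them against the CLI reference.

```azurecli
# Hypothetical sketch: send an SMS from a purchased number to a US recipient.
# Assumes the communication extension (az extension add --name communication) and a
# connection string exported as AZURE_COMMUNICATION_CONNECTION_STRING.
az communication sms send \
  --sender "+14255550100" \
  --recipient "+14255550199" \
  --message "Hello from Azure Communication Services"
```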
connectors Connectors Create Api Mq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-mq.md
Title: Connect to IBM MQ server
-description: Connect to an MQ server on premises or in Azure from a workflow using Azure Logic Apps.
+ Title: Connect to IBM MQ
+description: Connect to an MQ server on premises or in Azure from a workflow in Azure Logic Apps.
ms.suite: integration-- Previously updated : 03/14/2022 Last updated : 01/10/2023+ tags: connectors
tags: connectors
[!INCLUDE [logic-apps-sku-consumption-standard](../../includes/logic-apps-sku-consumption-standard.md)]
-The MQ connector helps you connect your logic app workflows to an IBM MQ server that's either on premises or in Azure. You can then have your workflows receive and send messages stored in your MQ server. This article provides a get started guide to using the MQ connector by showing how to connect to your MQ server and add an MQ action to your workflow. For example, you can start by browsing a single message in a queue and then try other actions.
+This article shows how to access an MQ server that's either on premises or in Azure from a workflow in Azure Logic Apps with the MQ connector. You can then create automated workflows that receive and send messages stored in your MQ server. For example, your workflow can browse for a single message in a queue and then run other actions. The MQ connector includes a Microsoft MQ client that communicates with a remote MQ server across a TCP/IP network.
-This connector includes a Microsoft MQ client that communicates with a remote MQ server across a TCP/IP network. You can connect to the following IBM WebSphere MQ versions:
+## Supported IBM WebSphere MQ versions
* MQ 7.5
* MQ 8.0
* MQ 9.0, 9.1, and 9.2
-<a name="available-operations"></a>
+## Connector technical reference
-## Available operations
+The MQ connector has different versions, based on [logic app type and host environment](../logic-apps/logic-apps-overview.md#resource-environment-differences).
-* Consumption logic app: You can connect to an MQ server only by using the *managed* MQ connector. This connector provides only actions, no triggers.
+| Logic app | Environment | Connection version |
+|--|-|--|
+| **Consumption** | Multi-tenant Azure Logic Apps | Managed connector, which appears in the designer under the **Enterprise** label. This connector provides only actions, not triggers. For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Consumption** | Integration service environment (ISE) | Managed connector, which appears in the designer under the **Enterprise** label. For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [Managed connectors in Azure Logic Apps](managed.md) |
+| **Standard** | Single-tenant Azure Logic Apps and App Service Environment v3 (Windows plans only) | Managed connector, which appears in the designer under the **Azure** label, and built-in connector, which appears in the designer under the **Built-in** label and is service provider based. The built-in version differs in the following ways: <br><br>- The built-in version includes actions *and* triggers. <br><br>- The built-in version can connect directly to an MQ server and access Azure virtual networks. You don't need an on-premises data gateway. <br><br>- The built-in version supports Transport Layer Security (TLS) encryption for data in transit, message encoding for both the send and receive operations, and Azure virtual network integration when your logic app uses the Azure Functions Premium plan <br><br>For more information, review the following documentation: <br><br>- [MQ managed connector reference](/connectors/mq) <br>- [MQ built-in connector reference](/azure/logic-apps/connectors/built-in/reference/mq/) <br>- [Built-in connectors in Azure Logic Apps](built-in.md) |
-* Standard logic app: You can connect to an MQ server by using either the managed MQ connector, which includes *only* actions, or the *built-in* MQ operations, which include triggers *and* actions.
+## Limitations
-For more information about the difference between a managed connector and built-in operations, review [key terms in Logic Apps](../logic-apps/logic-apps-overview.md#logic-app-concepts).
+* The MQ connector doesn't support segmented messages.
-#### [Managed](#tab/managed)
+* The MQ connector doesn't use the message's **Format** field and doesn't make any character set conversions. The connector only puts whatever data appears in the message field into a JSON message and sends the message along.
-The following list describes only some of the managed operations available for MQ:
+For more information, review the [MQ managed connector reference](/connectors/mq) or the [MQ built-in connector reference](/azure/logic-apps/connectors/built-in/reference/mq/).
-* Browse a single message or an array of messages without deleting from the MQ server. For multiple messages, you can specify the maximum number of messages to return per batch. Otherwise, all messages are returned.
-* Delete a single or an array of messages from the MQ server.
-* Receive a single message or an array of messages and then delete from the MQ server.
-* Send a single message to the MQ server.
+## Prerequisites
-For all the managed connector operations and other technical information, such as properties, limits, and so on, review the [MQ connector's reference page](/connectors/mq/).
+* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-#### [Built-in](#tab/built-in)
+* If you're using an on-premises MQ server, [install the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a server within your network. For the MQ connector to work, the server with the on-premises data gateway also must have .NET Framework 4.6 installed.
-The following list describes only some of the built-in operations available for MQ:
+ After you install the gateway, you must also create a data gateway resource in Azure. The MQ connector uses this resource to access your MQ server. For more information, review [Set up the data gateway connection](../logic-apps/logic-apps-gateway-connection.md).
-* When a message is available in a queue, take some action.
-* When one or more messages are received from a queue (auto-complete), take some action.
-* When one or more messages are received from a queue (peek-lock), take some action.
-* Receive a single message or an array of messages from a queue. For multiple messages, you can specify the maximum number of messages to return per batch and the maximum batch size in KB.
-* Send a single message or an array of messages to the MQ server.
+ > [!NOTE]
+ >
+ > You don't need the gateway in the following scenarios:
+ >
+ > * Your MQ server is publicly available or available in Azure.
+ > * You're going to use the MQ built-in connector, not the managed connector.
-These built-in MQ operations also have the following capabilities plus the benefits from all the other capabilities for logic apps in the [single-tenant Logic Apps service](../logic-apps/single-tenant-overview-compare.md):
+* The logic app workflow where you want to access your MQ server.
-* Transport Layer Security (TLS) encryption for data in transit
-* Message encoding for both the send and receive operations
-* Support for Azure virtual network integration when your logic app uses the Azure Functions Premium plan
+ * If you're using the MQ managed connector, which doesn't provide any triggers, make sure that your workflow already starts with a trigger or that you first add a trigger to your workflow. For example, you can use the [Recurrence trigger](../connectors/connectors-native-recurrence.md).
-
+ * If you're using a trigger from the MQ built-in connector, make sure that you start with a blank workflow.
-## Limitations
+ * If you're using the on-premises data gateway, your logic app resource must use the same location as your gateway resource in Azure.
-* The MQ connector doesn't support segmented messages.
+<a name="add-trigger"></a>
-* The MQ connector doesn't use the message's **Format** field and doesn't make any character set conversions. The connector only puts whatever data appears in the message field into a JSON message and sends the message along.
+## Add an MQ trigger (Standard logic app only)
-## Prerequisites
+The following steps apply only to Standard logic app workflows, which can use triggers provided by the MQ built-in connector. The MQ managed connector doesn't include any triggers.
-* An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+These steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md) to create a Standard logic app workflow.
-* If you're using an on-premises MQ server, [install the on-premises data gateway](../logic-apps/logic-apps-gateway-install.md) on a server within your network. For the MQ connector to work, the server with the on-premises data gateway also must have .NET Framework 4.6 installed.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
- After you install the gateway, you must also create a data gateway resource in Azure. The MQ connector uses this resource to access your MQ server. For more information, review [Set up the data gateway connection](../logic-apps/logic-apps-gateway-connection.md).
+1. On the designer, select **Choose an operation**, if not already selected.
- > [!NOTE]
- > You don't need the gateway in the following scenarios:
- >
- > * You're going to use the built-in operations, not the managed connector.
- > * Your MQ server is publicly available or available in Azure.
+1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **mq**.
-* The logic app workflow where you want to access your MQ server. Your logic app resource must have the same location as your gateway resource in Azure.
+1. From the triggers list, select the [MQ trigger](/azure/logic-apps/connectors/built-in/reference/mq/#triggers) that you want to use.
- The MQ connector doesn't have any triggers, so either your workflow must already start with a trigger, or you first have to add a trigger to your workflow. For example, you can use the [Recurrence trigger](../connectors/connectors-native-recurrence.md).
+1. Provide the [information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**.
- If you're new to Azure Logic Apps, try this [quickstart to create an example logic app workflow](../logic-apps/quickstart-create-first-logic-app-workflow.md), which runs in the multi-tenant Logic Apps service.
+1. When the trigger information box appears, provide the required [information for your trigger](/azure/logic-apps/connectors/built-in/reference/mq/#triggers).
-<a name="create-connection"></a>
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-## Create an MQ connection
+<a name="add-action"></a>
-When you add an MQ action for the first time, you're prompted to create a connection to your MQ server.
+## Add an MQ action
-> [!NOTE]
-> The MQ connector currently supports only server authentication, not client authentication.
-> For more information, see [Connection and authentication problems](#connection-problems).
+A Consumption logic app workflow can use only the MQ managed connector. However, a Standard logic app workflow can use the MQ managed connector and the MQ built-in connector. Each version has multiple actions. For example, both managed and built-in connector versions have their own actions to browse a message.
-#### [Managed](#tab/managed)
+* Managed connector actions: These actions run in a Consumption or Standard logic app workflow.
-1. If you're connecting to an on-premises MQ server, select **Connect via on-premises data gateway**.
+* Built-in connector actions: These actions run only in a Standard logic app workflow.
-1. Provide the connection information for your MQ server.
+The following steps use the Azure portal, but with the appropriate Azure Logic Apps extension, you can also use the following tools to create logic app workflows:
- | Property | On-premises or Azure | Description |
- |-|-|-|
- | **Gateways** | On-premises only | Select **Connect via on-premises data gateway**. |
- | **Connection name** | Both | The name to use for your connection |
- | **Server** | Both | Either of the following values: <p><p>- MQ server host name <br>- IP address followed by a colon and the port number |
- | **Queue Manager name** | Both | The Queue Manager that you want to use |
- | **Channel name** | Both | The channel for connecting to the Queue Manager |
- | **Default queue name** | Both | The default name for the queue |
- | **Connect As** | Both | The username for connecting to the MQ server |
- | **Username** | Both | Your username credential |
- | **Password** | Both | Your password credential |
- | **Enable SSL?** | On-premises only | Use Transport Layer Security (TLS) or Secure Sockets Layer (SSL) |
- | **Gateway - Subscription** | On-premises only | The Azure subscription associated with your gateway resource in Azure |
- | **Gateway - Connection Gateway** | On-premises only | The gateway resource to use |
- ||||
+* Consumption logic app workflows: [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) or [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md)
- For example:
+* Standard logic app workflows: [Visual Studio Code](../logic-apps/create-single-tenant-workflows-visual-studio-code.md)
- ![Screenshot showing the managed MQ connection details.](media/connectors-create-api-mq/managed-connection-properties.png)
+### [Consumption](#tab/consumption)
-1. When you're done, select **Create**.
+1. In the [Azure portal](https://portal.azure.com/), open your logic app workflow in the designer.
-#### [Built-in](#tab/built-in)
+1. In your workflow where you want to add an MQ action, follow one of these steps:
-1. Provide the connection information for your MQ server.
+ * To add an action under the last step, select **New step**.
- | Property | On-premises or Azure | Description |
- |-|-|-|
- | **Connection name** | Both | The name to use for your connection |
- | **Server name** | Both | The MQ server name or IP address |
- | **Port number** | Both | The TCP port number for connecting to the Queue Manager on the host |
- | **Channel** | Both | The channel for connecting to the Queue Manager |
- | **Queue Manager name** | Both | The Queue Manager that you want to use |
- | **Default queue name** | Both | The default name for the queue |
- | **Connect As** | Both | The username for connecting to the MQ server |
- | **Username** | Both | Your username credential |
- | **Password** | Both | Your password credential |
- | **Use TLS** | Both | Use Transport Layer Security (TLS) |
- ||||
+ * To add an action between steps, move your mouse over the connecting arrow so that the plus sign (**+**) appears. Select the plus sign, and then select **Add an action**.
- For example:
+1. Under the **Choose an operation** search box, select **Enterprise**. In the search box, enter **mq**.
- ![Screenshot showing the built-in MQ connection details.](media/connectors-create-api-mq/built-in-connection-properties.png)
+1. From the actions list, select the [MQ action](/connectors/mq/#actions) that you want to use.
-1. When you're done, select **Create**.
+1. Provide the [information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**.
-
+1. When the action information box appears, provide the required [information for your action](/connectors/mq/#actions).
-<a name="add-action"></a>
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-## Add an MQ action
+### [Standard](#tab/standard)
+
+The steps to add and use an MQ action differ based on whether your workflow uses the built-in connector or the managed, Azure-hosted connector.
+
+* [Built-in connector](#add-built-in-action): Describes the steps to add an action for the MQ built-in connector.
+
+* [Managed connector](#add-managed-action): Describes the steps to add an action for the MQ managed connector.
+
+<a name="add-built-in-action"></a>
+
+#### Add an MQ built-in connector action
+
+1. In the [Azure portal](https://portal.azure.com/), open your logic app workflow in the designer.
+
+1. In your workflow where you want to add an MQ action, follow one of these steps:
+
+ * To add an action under the last step, select the plus sign (**+**), and then select **Add an action**.
+
+ * To add an action between steps, select the plus sign (**+**) between those steps, and then select **Add an action**.
+
+1. On the **Add an action** pane, under the **Choose an operation** search box, select **Built-in**. In the search box, enter **mq**.
+
+1. From the actions list, select the [MQ action](/azure/logic-apps/connectors/built-in/reference/mq/#actions) that you want to use.
+
+1. Provide the [information to authenticate your connection](/azure/logic-apps/connectors/built-in/reference/mq/#authentication). When you're done, select **Create**.
-In Azure Logic Apps, an action follows the trigger or another action and performs some operation in your workflow. The following steps describe the general way to add an action, for example, **Browse a single message**.
+1. When the action information box appears, provide the required [information for your action](/azure/logic-apps/connectors/built-in/reference/mq/#actions).
-1. In the Logic Apps Designer, open your workflow, if not already open.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-1. Under the trigger or another action, add a new step.
+<a name="add-managed-action"></a>
- To add a step between existing steps, move your mouse over the arrow. Select the plus sign (+) that appears, and then select **Add an action**.
+#### Add an MQ managed connector action
-1. In the operation search box, enter `mq`. From the actions list, select the action named **Browse message**.
+1. In the [Azure portal](https://portal.azure.com/), open your logic app workflow in the designer.
-1. If you're prompted to create a connection to your MQ server, [provide the requested connection information](#create-connection).
+1. In your workflow where you want to add an MQ action, follow one of these steps:
-1. In the action, provide the property values that the action needs.
+ * To add an action under the last step, select **New step**.
- For more properties, open the **Add new parameter** list, and select the properties that you want to add.
+ * To add an action between steps, move your mouse over the connecting arrow between those steps, select the plus sign (**+**) that appears between those steps, and then select **Add an action**.
-1. When you're done, on the designer toolbar, select **Save**.
+1. Under the **Choose an operation** search box, select **Azure**. In the search box, enter **mq**.
-1. To test your workflow, on the designer toolbar, select **Run**.
+1. From the actions list, select the [MQ action](/connectors/mq/#actions) that you want to use.
- After the run finishes, the designer shows the workflow's run history along with the status for step.
+1. Provide the [information to authenticate your connection](/connectors/mq/#creating-a-connection). When you're done, select **Create**.
+
+1. When the action information box appears, provide the required [information for your action](/connectors/mq/#actions).
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+++
+## Test your workflow
+
+To check that your workflow returns the results that you expect, run your workflow and then review the outputs from your workflow's run history.
+
+1. Run your workflow.
+
+ * Consumption logic app: On the workflow designer toolbar, select **Run Trigger** > **Run**.
+
+ * Standard logic app: On the workflow resource menu, select **Overview**. On the **Overview** pane toolbar, select **Run Trigger** > **Run**.
+
+ After the run finishes, the designer shows the workflow's run history along with the status for each step.
1. To review the inputs and outputs for each step that ran (not skipped), expand or select the step. * To review more input details, select **Show raw inputs**.+ * To review more output details, select **Show raw outputs**. If you set **IncludeInfo** to **true**, more output is included. ## Troubleshoot problems
In Azure Logic Apps, an action follows the trigger or another action and perform
If you run a browse or receive action on an empty queue, the action fails with the following header outputs:
-![MQ "no message" error](media/connectors-create-api-mq/mq-no-message-error.png)
+![Screenshot showing the MQ "no message" error.](media/connectors-create-api-mq/mq-no-message-error.png)
<a name="connection-problems"></a> ### Connection and authentication problems
-When your workflow tries connecting to your on-premises MQ server, you might get this error:
+When your workflow tries connecting to your on-premises MQ server, you might get the following error:
`"MQ: Could not Connect the Queue Manager '<queue-manager-name>': The Server was expecting an SSL connection."`
When your workflow tries connecting to your on-premises MQ server, you might get
When you try to connect, the MQ server logs an event message that the connection attempt failed because the MQ server chose the incorrect cipher specification. The event message contains the cipher specification that the MQ server chose from the list. In the channel configuration, update the cipher specification to match the cipher specification in the event message.
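As a rough illustration (the queue manager name, channel name, and CipherSpec value are placeholders, not values from this article), updating the channel's cipher specification on the MQ server might look like the following `runmqsc` sketch:

```bash
# Hypothetical sketch: set the cipher specification on the server-connection channel
# to match the value reported in the MQ server's event message.
echo "ALTER CHANNEL(MY.SVRCONN) CHLTYPE(SVRCONN) SSLCIPH(TLS_RSA_WITH_AES_256_CBC_SHA256)" | runmqsc QM1
```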
-## Connector reference
-
-For all the operations in the managed connector and other technical information, such as properties, limits, and so on, review the [MQ connector's reference page](/connectors/mq/).
- ## Next steps
-* Learn about other [Logic Apps connectors](../connectors/apis-list.md)
+* [Managed connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
+* [Built-in connectors in Azure Logic Apps](built-in.md)
connectors Connectors Create Api Oracledatabase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-create-api-oracledatabase.md
Using the Oracle Database connector, you create organizational workflows that us
This connector doesn't support the following items:
-* Views 
* Any table with composite keys
* Nested object types in tables
* Database functions with non-scalar values
container-instances Container Instances Tutorial Prepare App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-prepare-app.md
In this tutorial, you created a container image that can be deployed in Azure Co
Advance to the next tutorial in the series to learn about storing your container image in Azure Container Registry:
-[Push image to Azure Container Registry](container-instances-tutorial-prepare-acr.md)
+> [!div class="nextstepaction"]
+> [Push image to Azure Container Registry](container-instances-tutorial-prepare-acr.md)
<! IMAGES > [aci-tutorial-app]:./media/container-instances-quickstart/aci-app-browser.png
container-instances Tutorial Docker Compose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/tutorial-docker-compose.md
It can take a few minutes to push to the registry.
To verify the image is stored in your registry, run the [az acr repository show](/cli/azure/acr/repository#az-acr-repository-show) command: ```azurecli
-az acr repository show --name <acrName> --repository azure-vote-front
+az acr repository show --name <acrName> --repository azuredocs/azure-vote-front
``` [!INCLUDE [container-instances-create-docker-context](../../includes/container-instances-create-docker-context.md)]
container-registry Container Registry Oci Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oci-artifacts.md
Title: Push and pull OCI artifact
-description: Push and pull Open Container Initiative (OCI) artifacts using a private container registry in Azure
+ Title: Push and pull OCI artifact references
+description: Push and pull Open Container Initiative (OCI) artifacts using a container registry in Azure
Previously updated : 10/11/2022 Last updated : 01/03/2023
-# Push and pull an OCI artifact using an Azure container registry
+# Push and pull OCI artifacts using an Azure container registry
-You can use an Azure container registry to store and manage [Open Container Initiative (OCI) artifacts](container-registry-image-formats.md#oci-artifacts) as well as Docker and Docker-compatible container images.
+You can use an [Azure container registry][acr-landing] to store and manage [Open Container Initiative (OCI) artifacts](container-registry-image-formats.md#oci-artifacts) as well as Docker and OCI container images.
-To demonstrate this capability, this article shows how to use the [OCI Registry as Storage (ORAS)](https://github.com/deislabs/oras) tool to push a sample artifact - a text file - to an Azure container registry. Then, pull the artifact from the registry. You can manage a variety of OCI artifacts in an Azure container registry using different command-line tools appropriate to each artifact.
+To demonstrate this capability, this article shows how to use the [OCI Registry as Storage (ORAS)][oras-cli] CLI to push a sample artifact - a text file - to an Azure container registry. Then, pull the artifact from the registry. You can manage various OCI artifacts in an Azure container registry using different command-line tools appropriate to each artifact.
## Prerequisites
-* **Azure container registry** - Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).
-* **ORAS tool** - Download and install ORAS CLI v0.16.0 for your operating system from the [ORAS installation guide](https://oras.land/cli/).
-* **Azure Active Directory service principal (optional)** - To authenticate directly with ORAS, create a [service principal](container-registry-auth-service-principal.md) to access your registry. Ensure that the service principal is assigned a role such as AcrPush so that it has permissions to push and pull artifacts.
-* **Azure CLI (optional)** - To use an individual identity, you need a local installation of the Azure CLI. Version 2.0.71 or later is recommended. Run `az --version `to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* **Docker (optional)** - To use an individual identity, you must also have Docker installed locally, to authenticate with the registry. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system.
+* **Azure container registry** - Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or [az acr create][az-acr-create].
+* **Azure CLI** - Version `2.29.1` or later is required. See [Install Azure CLI][azure-cli-install] for installation and/or upgrade.
+* **ORAS CLI** - Version `v0.16.0` is required. See: [ORAS installation][oras-install-docs].
+* **Docker (Optional)** - While Docker Desktop isn't required, the `oras` CLI utilizes the Docker Desktop credential store for storing credentials. If Docker Desktop is installed, it must be running for `oras login`.
+## Configure a registry
-## Sign in to a registry
-
-This section shows two suggested workflows to sign into the registry, depending on the identity used. Choose the one of the two methods below appropriate for your environment.
+Configure environment variables to easily copy/paste commands into your shell. The commands can be run locally or in the [Azure Cloud Shell](https://shell.azure.com/).
-### Sign in with Azure CLI
+```bash
+ACR_NAME=myregistry
+REGISTRY=$ACR_NAME.azurecr.io
+```
-[Sign in](/cli/azure/authenticate-azure-cli) to the Azure CLI with your identity to push and pull artifacts from the container registry.
+## Sign in to a registry
-Then, use the Azure CLI command [az acr login](/cli/azure/acr#az-acr-login) to access the registry. For example, to authenticate to a registry named *myregistry*:
+Authenticate with your [individual Azure AD identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) using an AD token. Always use "000..." for the `USER_NAME` as the token is parsed through the `PASSWORD` variable.
```azurecli
+# Login to Azure
az login
-az acr login --name myregistry
+
+# Login to ACR, using a token based on your Azure identity
+USER_NAME="00000000-0000-0000-0000-000000000000"
+PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken)
``` > [!NOTE]
-> `az acr login` uses the Docker client to set an Azure Active Directory token in the `docker.config` file. The Docker client must be installed and running to complete the individual authentication flow.
+> ACR and ORAS support multiple authentication options for users and system automation. This article uses individual identity, using an Azure token. For more authentication options, see [Authenticate with an Azure container registry][acr-authentication].
### Sign in with ORAS
-This section shows options to sign into the registry. Choose one method below appropriate for your environment.
-
-Run `oras login` to authenticate with the registry. You may pass [registry credentials](container-registry-authentication.md) appropriate for your scenario, such as service principal credentials, user identity, or a repository-scoped token (preview).
--- Authenticate with your [individual Azure AD identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) to use an AD token. Always use "000..." as the token is parsed through the `PASSWORD` variable.-
- ```azurecli
- USER_NAME="00000000-0000-0000-0000-000000000000"
- PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken)
- ```
--- Authenticate with a [repository scoped token](container-registry-repository-scoped-permissions.md) (Preview) to use non-AD based tokens.-
- ```azurecli
- USER_NAME="oras-token"
- PASSWORD=$(az acr token create -n $USER_NAME \
- -r $ACR_NAME \
- --repository $REPO content/write \
- --only-show-errors \
- --query "credentials.passwords[0].value" -o tsv)
- ```
--- Authenticate with an Azure Active Directory [service principal with pull and push permissions](container-registry-auth-service-principal.md#create-a-service-principal) (AcrPush role) to the registry.-
- ```azurecli
- SERVICE_PRINCIPAL_NAME="oras-sp"
- ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
- PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_NAME \
- --scopes $(az acr show --name $ACR_NAME --query id --output tsv) \
- --role acrpush \
- --query "password" --output tsv)
- USER_NAME=$(az ad sp list --display-name $SERVICE_PRINCIPAL_NAME --query "[].appId" --output tsv)
- ```
-
- Supply the credentials to `oras login` after authentication configured.
+Provide the credentials to `oras login`.
```bash oras login $REGISTRY \
Run `oras login` to authenticate with the registry. You may pass [registry cred
--password $PASSWORD ```
-To read the password from Stdin, use `--password-stdin`.
+## Push a root artifact
-## Push an artifact
+A root artifact is an artifact that has no `subject` parent. Root artifacts can be anything: a container image, a Helm chart, or a readme file for the repository. Reference artifacts, described in [Attach, push, and pull supply chain artifacts](container-registry-oras-artifacts.md), are artifacts that refer to another artifact. Reference artifacts can be a signature, a software bill of materials, a scan report, or other evolving types.
-Create a text file in a local working working directory with some sample text. For example, in a bash shell:
+For this example, create content that represents a markdown file:
```bash
-echo "Here is an artifact" > artifact.txt
+echo 'Readme Content' > readme.md
```
-Use the `oras push` command to push this text file to your registry. The following example pushes the sample text file to the `samples/artifact` repo. The registry is identified with the fully qualified registry name *myregistry.azurecr.io* (all lowercase). The artifact is tagged `1.0`. The artifact has an undefined type, by default, identified by the *media type* string following the filename `artifact.txt`. See [OCI Artifacts](https://github.com/opencontainers/artifacts) for additional types.
+The following step pushes the `readme.md` file to `<myregistry>.azurecr.io/samples/artifact:readme`.
+- The registry is identified with the fully qualified registry name `<myregistry>.azurecr.io` (all lowercase), followed by the namespace and repo: `/samples/artifact`.
+- The artifact is tagged `:readme`, to identify it uniquely from other artifacts listed in the repo (`:latest, :v1, :v1.0.1`).
+- Setting `--artifact-type readme/example` differentiates the artifact from a container image, which uses `application/vnd.oci.image.config.v1+json`.
+- The `./readme.md` identifies the file uploaded, and the `:application/markdown` represents the [IANA `mediaType`][iana-mediatypes] of the file.
+ For more information, see [OCI Artifact Authors Guidance](https://github.com/opencontainers/artifacts/blob/main/artifact-authors.md).
+
+Use the `oras push` command to push the file to your registry.
-**Linux or macOS**
+**Linux, WSL2 or macOS**
```bash
-oras push myregistry.azurecr.io/samples/artifact:1.0 \
- --config :application/vnd.unknown.v1\
- ./artifact.txt:application/vnd.unknown.layer.v1+txt
+oras push $REGISTRY/samples/artifact:readme \
+ --artifact-type readme/example \
+ ./readme.md:application/markdown
``` **Windows** ```cmd
-.\oras.exe push myregistry.azurecr.io/samples/artifact:1.0 ^
- --config NUL:application/vnd.unknown.v1 ^
- .\artifact.txt:application/vnd.unknown.layer.v1+txt
+.\oras.exe push $REGISTRY/samples/artifact:readme ^
+ --artifact-type readme/example ^
+ .\readme.md:application/markdown
```
-Output for a successful push is similar to the following:
+Output for a successful push is similar to the following output:
```console
-Uploading 33998889555f artifact.txt
-Pushed myregistry.azurecr.io/samples/artifact:1.0
-Digest: sha256:xxxxxxbc912ef63e69136f05f1078dbf8d00960a79ee73c210eb2a5f65xxxxxx
+Uploading 2fdeac43552b readme.md
+Uploaded 2fdeac43552b readme.md
+Pushed <myregistry>.azurecr.io/samples/artifact:readme
+Digest: sha256:e2d60d1b171f08bd10e2ed171d56092e39c7bac1aec5d9dcf7748dd702682d53
```
-To manage artifacts in your registry, if you are using the Azure CLI, run standard `az acr` commands for managing images. For example, get the attributes of the artifact using the [az acr repository show][az-acr-repository-show] command:
+## Push a multi-file root artifact
-```azurecli
-az acr repository show \
- --name myregistry \
- --image samples/artifact:1.0
-```
+When OCI artifacts are pushed to a registry with ORAS, each file reference is pushed as a blob. To push separate blobs, reference the files individually, or push a collection of files by referencing a directory.
+For more information about how to push a collection of files, see [Pushing artifacts with multiple files][oras-push-multifiles].
-Output is similar to the following:
+Create some documentation for the repository:
-```json
-{
- "changeableAttributes": {
- "deleteEnabled": true,
- "listEnabled": true,
- "readEnabled": true,
- "writeEnabled": true
- },
- "createdTime": "2019-08-28T20:43:31.0001687Z",
- "digest": "sha256:xxxxxxbc912ef63e69136f05f1078dbf8d00960a79ee73c210eb2a5f65xxxxxx",
- "lastUpdateTime": "2019-08-28T20:43:31.0001687Z",
- "name": "1.0",
- "signed": false
-}
+```bash
+echo 'Readme Content' > readme.md
+mkdir details/
+echo 'Detailed Content' > details/readme-details.md
+echo 'More detailed Content' > details/readme-more-details.md
```
-## Pull an artifact
+Push the multi-file artifact:
-Run the `oras pull` command to pull the artifact from your registry.
-
-First remove the text file from your local working directory:
+**Linux, WSL2 or macOS**
```bash
-rm artifact.txt
+oras push $REGISTRY/samples/artifact:readme \
+ --artifact-type readme/example\
+ ./readme.md:application/markdown\
+ ./details
```
-Run `oras pull` to pull the artifact, and specify the media type used to push the artifact:
+**Windows**
-```bash
-oras pull myregistry.azurecr.io/samples/artifact:1.0
+```cmd
+.\oras.exe push $REGISTRY/samples/artifact:readme ^
+ --artifact-type readme/example ^
+ .\readme.md:application/markdown ^
+ .\details
```
-Verify that the pull was successful:
+## Discover the manifest
+
+To view the manifest created as a result of `oras push`, use `oras manifest fetch`:
```bash
-$ cat artifact.txt
-Here is an artifact
+oras manifest fetch --pretty $REGISTRY/samples/artifact:readme
```
-## Remove the artifact (optional)
-
-To remove the artifact from your Azure container registry, use the [az acr repository delete][az-acr-repository-delete] command. The following example removes the artifact you stored there:
+The output will be similar to:
-```azurecli
-az acr repository delete \
- --name myregistry \
- --image samples/artifact:1.0
+```json
+{
+ "mediaType": "application/vnd.oci.artifact.manifest.v1+json",
+ "artifactType": "readme/example",
+ "blobs": [
+ {
+ "mediaType": "application/markdown",
+ "digest": "sha256:2fdeac43552b71eb9db534137714c7bad86b53a93c56ca96d4850c9b41b777fc",
+ "size": 15,
+ "annotations": {
+ "org.opencontainers.image.title": "readme.md"
+ }
+ },
+ {
+ "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
+ "digest": "sha256:0d6c7434a34f6854f971487621426332e6c0fda08040b9e6cc8a93f354cee0b1",
+ "size": 189,
+ "annotations": {
+ "io.deis.oras.content.digest": "sha256:11eceb2e7ac3183ec9109003a7389468ec73ad5ceaec0c4edad0c1b664c5593a",
+ "io.deis.oras.content.unpack": "true",
+ "org.opencontainers.image.title": "details"
+ }
+ }
+ ],
+ "annotations": {
+ "org.opencontainers.artifact.created": "2023-01-10T14:44:06Z"
+ }
+}
```
-## Example: Build Docker image from OCI artifact
+## Pull a root artifact
-Source code and binaries to build a container image can be stored as OCI artifacts in an Azure container registry. You can reference a source artifact as the build context for an [ACR task](container-registry-tasks-overview.md). This example shows how to store a Dockerfile as an OCI artifact and then reference the artifact to build a container image.
-
-For example, create a one-line Dockerfile:
+Create a clean directory for downloading:
```bash
-echo "FROM mcr.microsoft.com/hello-world" > hello-world.dockerfile
+mkdir ./download
```
-Log in to the destination container registry.
+Run the `oras pull` command to pull the artifact from your registry.
-```azurecli
-az login
-az acr login --name myregistry
+```bash
+oras pull -o ./download $REGISTRY/samples/artifact:readme
```
-Create and push a new OCI artifact to the destination registry by using the `oras push` command. This example sets the default media type for the artifact.
+### View the pulled files
```bash
-oras push myregistry.azurecr.io/dockerfile:1.0 hello-world.dockerfile
+tree ./download
```
-Run the [az acr build](/cli/azure/acr#az-acr-build) command to build the hello-world image using the new artifact as build context:
+## Remove the artifact (optional)
-```azurecli
-az acr build --registry myregistry --image builds/hello-world:v1 \
- --file hello-world.dockerfile \
- oci://myregistry.azurecr.io/dockerfile:1.0
+To remove the artifact from your registry, use the `oras manifest delete` command.
+
+```bash
+ oras manifest delete $REGISTRY/samples/artifact:readme
``` ## Next steps
-* Learn more about [the ORAS Library](https://github.com/deislabs/oras), including how to configure a manifest for an artifact
+* Learn about [Artifact References](container-registry-oras-artifacts.md), associating signatures, software bill of materials and other reference types
+* Learn more about [the ORAS Project](https://oras.land/), including how to configure a manifest for an artifact
* Visit the [OCI Artifacts](https://github.com/opencontainers/artifacts) repo for reference information about new artifact types -- <!-- LINKS - external -->
-[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms
-[docker-mac]: https://docs.docker.com/docker-for-mac/
-[docker-windows]: https://docs.docker.com/docker-for-windows/
+[iana-mediatypes]: https://www.rfc-editor.org/rfc/rfc6838
+[oras-install-docs]: https://oras.land/cli/
+[oras-cli]: https://oras.land/cli_reference/
+[oras-push-multifiles]: https://oras.land/cli/1_pushing/#pushing-artifacts-with-multiple-files
<!-- LINKS - internal -->
-[az-acr-repository-show]: /cli/azure/acr/repository?#az_acr_repository_show
+[acr-landing]: https://aka.ms/acr
+[acr-authentication]: /azure/container-registry/container-registry-authentication?tabs=azure-cli
+[az-acr-create]: /azure/container-registry/container-registry-get-started-azure-cli
[az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete
+[azure-cli-install]: /cli/azure/install-azure-cli
container-registry Container Registry Oras Artifacts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-oras-artifacts.md
description: Attach, push, and pull supply chain artifacts using Azure Registry
Previously updated : 10/11/2022 Last updated : 01/04/2023 # Push and pull supply chain artifacts using Azure Registry (Preview)
-Use an Azure container registry to store and manage a graph of supply chain artifacts along side container images, including signatures, software bill of materials (SBoM), security scan results or other types.
+Use an Azure container registry to store and manage a graph of supply chain artifacts, including signatures, software bill of materials (SBOM), security scan results and other types.
![Graph of artifacts, including a container image, signature and signed software bill of materials](./media/container-registry-artifacts/oras-artifact-graph.svg)
-To demonstrate this capability, this article shows how to use the [OCI Registry as Storage (ORAS)](https://oras.land) tool to push and pull a graph of supply chain artifacts to an Azure container registry.
+To demonstrate this capability, this article shows how to use the [OCI Registry as Storage (ORAS)](https://oras.land) CLI to `push`, `discover` and `pull` a graph of supply chain artifacts to an Azure container registry.
+Storing individual (root) OCI Artifacts is covered in [Push and pull OCI artifacts](container-registry-oci-artifacts.md).
-Supply chain artifact is a type of [OCI Artifact Manifest][oci-artifact-manifest]. OCI Artifact Manifest support is a preview feature and subject to [limitations](#preview-limitations).
+To store a graph of artifacts, a reference to a `subject` artifact is defined using the [OCI Artifact Manifest][oci-artifact-manifest], which is part of the [pre-release OCI 1.1 Distribution specification][oci-1_1-spec].
+OCI 1.1 Artifact Manifest support is an ACR preview feature and subject to [limitations](#preview-limitations).
## Prerequisites
-* **ORAS CLI** - The ORAS CLI enables attach, copy, push, discover, pull of artifacts to an OCI Artifact Manifest enabled registry.
-* **Azure CLI** - To create an identity, list and delete repositories, you need a local installation of the Azure CLI. Version 2.29.1 or later is recommended. Run `az --version `to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
-* **Docker (optional)** - To complete the walkthrough, a container image is referenced. You can use Docker installed locally to build and push a container image, or reference an existing container image. Docker provides packages that easily configure Docker on any [macOS][docker-mac], [Windows][docker-windows], or [Linux][docker-linux] system.
+* **Azure container registry** - Create a container registry in your Azure subscription. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI][az-acr-create].
+*See [Preview limitations](#preview-limitations) for Azure cloud support.*
+* **Azure CLI** - Version `2.29.1` or later is required. See [Install Azure CLI][azure-cli-install] for installation and/or upgrade.
+* **ORAS CLI** - Version `v0.16.0` is required. See: [ORAS installation][oras-install-docs].
+* **Docker (Optional)** - To complete the walkthrough, a container image is referenced.
+You can use [Docker installed locally][docker-install] to build and push a container image, or use [`acr build`][az-acr-build] to build remotely in Azure.
+While Docker Desktop isn't required, the `oras` CLI utilizes the Docker Desktop credential store for storing credentials. If Docker Desktop is installed, it must be running for `oras login`.
## Preview limitations
-OCI Artifact Manifest support is not available in the government or China clouds, but available in all other regions.
-
-## ORAS installation
-
-Download and install a preview ORAS release for your operating system. See [ORAS installation instructions][oras-install-docs] for how to extract and install ORAS for your operating system. This article uses ORAS CLI 0.16.0 to demonstrate how to manage supply chain artifacts in ACR.
+OCI Artifact Manifest support ([OCI 1.1 specification][oci-1_1-spec]) is available in all Azure public regions. Azure China and government clouds aren't yet supported.
## Configure a registry
-Configure environment variables to easily copy/paste commands into your shell. The commands can be run in the [Azure Cloud Shell](https://shell.azure.com/).
+Configure environment variables to easily copy/paste commands into your shell. The commands can be run locally or in the [Azure Cloud Shell](https://shell.azure.com/).
```console ACR_NAME=myregistry
TAG=v1
IMAGE=$REGISTRY/${REPO}:$TAG ```
-### Create a resource group
-
-If needed, run the [az group create](/cli/azure/group#az-group-create) command to create a resource group for the registry.
-
-```azurecli
-az group create --name $ACR_NAME --location southcentralus
-```
-### Create OCI Artifact Manifest enabled registry
-
-Preview support for OCI Artifact Manifest requires Zone Redundancy, which requires a Premium service tier, in the South Central US region. Run the [az acr create](/cli/azure/acr#az-acr-create) command to create an OCI Artifact Manifest enabled registry. See the `az acr create` command help for more registry options.
-
-```azurecli
-az acr create \
- --resource-group $ACR_NAME \
- --name $ACR_NAME \
- --zone-redundancy enabled \
- --sku Premium \
- --output jsonc
-```
-
-In the command output, note the `zoneRedundancy` property for the registry. When enabled, the registry is zone redundant, and OCI Artifact Manifest enabled.
-
-```output
-{
- [...]
- "zoneRedundancy": "Enabled",
-}
-```
-
-### Sign in with Azure CLI
-
-[Sign in](/cli/azure/authenticate-azure-cli) to the Azure CLI with your identity to push and pull artifacts from the container registry.
-
-Then, use the Azure CLI command [az acr login](/cli/azure/acr#az-acr-login) to access the registry.
+Authenticate with your [individual Azure AD identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) using an AD token. Always use "000..." for the `USER_NAME`, as the token is passed through the `PASSWORD` variable.
```azurecli
+# Login to Azure
az login
-az acr login --name $ACR_NAME
+
+# Login to ACR, using a token based on your Azure identity
+USER_NAME="00000000-0000-0000-0000-000000000000"
+PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken)
``` > [!NOTE]
-> `az acr login` uses the Docker client to set an Azure Active Directory token in the `docker.config` file. The Docker client must be installed and running to complete the individual authentication flow.
-
-## Sign in with ORAS
-
-This section shows options to sign into the registry. Choose the method appropriate for your environment.
-
-Run `oras login` to authenticate with the registry. You may pass [registry credentials](container-registry-authentication.md) appropriate for your scenario, such as service principal credentials, user identity, or a repository-scoped token (preview).
--- Authenticate with your [individual Azure AD identity](container-registry-authentication.md?tabs=azure-cli#individual-login-with-azure-ad) to use an AD token.-
- ```azurecli
- USER_NAME="00000000-0000-0000-0000-000000000000"
- PASSWORD=$(az acr login --name $ACR_NAME --expose-token --output tsv --query accessToken)
- ```
--- Authenticate with a [repository scoped token](container-registry-repository-scoped-permissions.md) (Preview) to use non-AD based tokens.-
- ```azurecli
- USER_NAME="oras-token"
- PASSWORD=$(az acr token create -n $USER_NAME \
- -r $ACR_NAME \
- --repository $REPO content/write \
- --only-show-errors \
- --query "credentials.passwords[0].value" -o tsv)
- ```
--- Authenticate with an Azure Active Directory [service principal with pull and push permissions](container-registry-auth-service-principal.md#create-a-service-principal) (AcrPush role) to the registry.-
- ```azurecli
- SERVICE_PRINCIPAL_NAME="oras-sp"
- ACR_REGISTRY_ID=$(az acr show --name $ACR_NAME --query id --output tsv)
- PASSWORD=$(az ad sp create-for-rbac --name $SERVICE_PRINCIPAL_NAME \
- --scopes $(az acr show --name $ACR_NAME --query id --output tsv) \
- --role acrpush \
- --query "password" --output tsv)
- USER_NAME=$(az ad sp list --display-name $SERVICE_PRINCIPAL_NAME --query "[].appId" --output tsv)
- ```
+> ACR and ORAS support multiple authentication options for users and system automation. This article uses individual identity, using an Azure token. For more authentication options see [Authenticate with an Azure container registry][acr-authentication]
### Sign in with ORAS
-Supply the credentials to `oras login`.
+Provide the credentials to `oras login`.
```bash oras login $REGISTRY \
Supply the credentials to `oras login`.
--password $PASSWORD ```
-To read the password from Stdin, use `--password-stdin`.
- ## Push a container image
-This example associates a graph of artifacts to a container image. Build and push a container image, or reference an existing image in the registry.
+This example associates a graph of artifacts to a container image.
+
+Build and push a container image, or skip this step if `$IMAGE` references an existing image in the registry.
```bash
-docker build -t $IMAGE https://github.com/wabbit-networks/net-monitor.git#main
-docker push $IMAGE
+az acr build -r $ACR_NAME -t $IMAGE https://github.com/wabbit-networks/net-monitor.git#main
``` ## Create a sample signature to the container image ```bash
-echo '{"artifact": "'${IMAGE}'", "signature": "pat hancock"}' > signature.json
+echo '{"artifact": "'${IMAGE}'", "signature": "jayden hancock"}' > signature.json
``` ### Attach a signature to the registry, as a reference to the container image
-The ORAS command attaches the signature to a repository, referencing another artifact. The `--artifact-type` provides for differentiating artifacts, similar to file extensions that enable different file types. One or more files can be attached by specifying `file:mediaType`.
+The `oras attach` command creates a reference from the file (`./signature.json`) to the `$IMAGE`. The `--artifact-type` provides for differentiating artifacts, similar to file extensions that enable different file types. One or more files can be attached by specifying `[file]:[mediaType]`.
```bash oras attach $IMAGE \
- ./signature.json:application/json \
- --artifact-type signature/example
+ --artifact-type signature/example \
+ ./signature.json:application/json
``` For more information on oras attach, see [ORAS documentation][oras-docs]. ## Attach a multi-file artifact as a reference
-Create some documentation around an artifact.
+When OCI artifacts are pushed to a registry with ORAS, each file reference is pushed as a blob. To push separate blobs, reference the files individually, or push a collection of files by referencing a directory.
+For more information on how to push a collection of files, see [Pushing artifacts with multiple files][oras-push-multifiles].
+
+Create some documentation around an artifact:
```bash echo 'Readme Content' > readme.md
-echo 'Detailed Content' > readme-details.md
+mkdir details/
+echo 'Detailed Content' > details/readme-details.md
+echo 'More detailed Content' > details/readme-more-details.md
```
-Attach the multi-file artifact as a reference.
+Attach the multi-file artifact as a reference to `$IMAGE`:
+
+**Linux, WSL2 or macOS**
```bash oras attach $IMAGE \
+ --artifact-type readme/example \
./readme.md:application/markdown \
- ./readme-details.md:application/markdown \
- --artifact-type readme/example
+ ./details
+```
+
+**Windows**
+
+```cmd
+.\oras.exe attach $IMAGE ^
+ --artifact-type readme/example ^
+ .\readme.md:application/markdown ^
+ .\details
``` ## Discovering artifact references
-The [OCI v1.1 Specification][oci-spec] defines a [referrers API][oci-artifacts-referrers] for discovering references to a `subject` artifact. The `oras discover` command can show the list of references to the container image.
+The [OCI v1.1 Specification][oci-spec] defines a [referrers API][oci-artifact-referrers] for discovering references to a `subject` artifact. The `oras discover` command can show the list of references to the container image.
Using `oras discover`, view the graph of artifacts now stored in the registry.
The output shows the beginning of a graph of artifacts, where the signature and
```output myregistry.azurecr.io/net-monitor:v1 ├── signature/example
-│   └── sha256:555ea91f39e7fb30c06f3b7aa483663f067f2950dcb...
+│   └── sha256:555ea91f39e7fb30c06f3b7aa483663f067f2950dcb...
└── readme/example └── sha256:1a118663d1085e229ff1b2d4d89b5f6d67911f22e55... ``` ## Creating deep graphs of artifacts
-The OCI v1.1 Specification enables deep graphs, enabling signed software bill of materials (SBoM) and other artifact types.
+The OCI v1.1 Specification enables deep graphs, enabling signed software bill of materials (SBOM) and other artifact types.
-### Create a sample SBoM
+### Create a sample SBOM
```bash echo '{"version": "0.0.0.0", "artifact": "'${IMAGE}'", "contents": "good"}' > sbom.json ```
-### Attach a sample SBoM to the image in the registry
+### Attach a sample SBOM to the image in the registry
+
+**Linux, WSL2 or macOS**
```bash oras attach $IMAGE \
- ./sbom.json:application/json \
- --artifact-type sbom/example
+ --artifact-type sbom/example \
+ ./sbom.json:application/json
```
-### Sign the SBoM
+**Windows**
-Artifacts that are pushed as references, typically do not have tags as they are considered part of the subject artifact. To push a signature to an artifact that is a child of another artifact, use the `oras discover` with `--artifact-type` filtering to find the digest.
+```cmd
+.\oras.exe attach $IMAGE ^
+ --artifact-type sbom/example ^
+ .\sbom.json:application/json
+```
+
+### Sign the SBOM
+
+Artifacts that are pushed as references typically don't have tags, as they're considered part of the `subject` artifact. To push a signature to an artifact that is a child of another artifact, use `oras discover` with `--artifact-type` filtering to find the digest.
```bash SBOM_DIGEST=$(oras discover -o json \
SBOM_DIGEST=$(oras discover -o json \
$IMAGE | jq -r ".manifests[0].digest") ```
-Create a signature of an SBoM
+Create a signature of an SBOM:
```bash
-echo '{"artifact": "'$IMAGE@$SBOM_DIGEST'", "signature": "pat hancock"}' > sbom-signature.json
+echo '{"artifact": "'$IMAGE@$SBOM_DIGEST'", "signature": "jayden hancock"}' > sbom-signature.json
```
-### Attach the SBoM signature
+### Attach the SBOM signature
```bash oras attach $IMAGE@$SBOM_DIGEST \
Generates the following output:
```output myregistry.azurecr.io/net-monitor:v1
-├── signature/example
-│   └── sha256:555ea91f39e7fb30c06f3b7aa483663f067f2950dcb...
+├── sbom/example
+│   └── sha256:4f1843833c029ecf0524bc214a0df9a5787409fd27bed2160d83f8cc39fedef5
+│   └── signature/example
+│   └── sha256:3c43b8cb0c941ec165c9f33f197d7f75980a292400d340f1a51c6b325764aa93
├── readme/example
-│   └── sha256:1a118663d1085e229ff1b2d4d89b5f6d67911f22e55...
-└── sbom/example
- └── sha256:4280eef9adb632b42cf200e7cd5a822a456a558e4f3142da6b...
- └── signature/example
- └── sha256:a31ab875d37eee1cca68dbb14b2009979d05594d44a075bdd7...
+│   └── sha256:5fafd40589e2c980e2864a78818bff51ee641119cf96ebb0d5be83f42aa215af
+└── signature/example
+ └── sha256:00da2c1c3ceea087b16e70c3f4e80dbce6f5b7625d6c8308ad095f7d3f6107b5
+```
+
+## Promote the graph
+
+A typical DevOps workflow will promote artifacts from dev, through staging, to the production environment.
+Secure supply chain workflows promote public content to privately secured environments.
+In either case, you'll want to promote the signatures, SBOMs, scan results, and other related artifacts along with the root artifact to have a complete graph of dependencies.
+
+Using the [`oras copy`][oras-cli] command, you can promote a filtered graph of artifacts across registries.
+
+Copy the `net-monitor:v1` image and its related artifacts to `sample-staging/net-monitor:v1`:
+
+```bash
+TARGET_REPO=$REGISTRY/sample-staging/$REPO
+oras copy -r $IMAGE $TARGET_REPO:$TAG
+```
+
+The output of `oras copy`:
+
+```console
+Copying 6bdea3cdc730 sbom-signature.json
+Copying 78e159e81c6b sbom.json
+Copied 6bdea3cdc730 sbom-signature.json
+Copied 78e159e81c6b sbom.json
+Copying 7cf1385c7f4d signature.json
+Copied 7cf1385c7f4d signature.json
+Copying 3e797ecd0697 details
+Copying 2fdeac43552b readme.md
+Copied 3e797ecd0697 details
+Copied 2fdeac43552b readme.md
+Copied demo42.myregistry.io/net-monitor:v1 => myregistry.azurecr.io/sample-staging/net-monitor:v1
+Digest: sha256:ff858b2ea3cdf4373cba65d2ca6bcede4da1d620503a547cab5916614080c763
+```
+## Discover the promoted artifact graph
+
+```bash
+oras discover -o tree $TARGET_REPO:$TAG
+```
+
+Output of `oras discover`:
+
+```console
+myregistry.azurecr.io/sample-staging/net-monitor:v1
+├── sbom/example
+│   └── sha256:4f1843833c029ecf0524bc214a0df9a5787409fd27bed2160d83f8cc39fedef5
+│   └── signature/example
+│   └── sha256:3c43b8cb0c941ec165c9f33f197d7f75980a292400d340f1a51c6b325764aa93
+├── readme/example
+│   └── sha256:5fafd40589e2c980e2864a78818bff51ee641119cf96ebb0d5be83f42aa215af
+└── signature/example
+ └── sha256:00da2c1c3ceea087b16e70c3f4e80dbce6f5b7625d6c8308ad095f7d3f6107b5
``` ## Pull a referenced artifact
-To pull a referenced type, the digest of reference is discovered with the `oras discover` command
+To pull a specific referenced artifact, discover the digest of the reference with the `oras discover` command:
```bash DOC_DIGEST=$(oras discover -o json \ --artifact-type 'readme/example' \
- $IMAGE | jq -r ".manifests[0].digest")
+ $TARGET_REPO:$TAG | jq -r ".manifests[0].digest")
``` ### Create a clean directory for downloading
mkdir ./download
``` ### Pull the docs into the download directory+ ```bash
-oras pull -o ./download $REGISTRY/$REPO@$DOC_DIGEST
+oras pull -o ./download $TARGET_REPO@$DOC_DIGEST
```+ ### View the docs ```bash
-ls ./download
+tree ./download
+```
+
+The output of `tree`:
+
+```output
+./download
+├── details
+│   ├── readme-details.md
+│   └── readme-more-details.md
+└── readme.md
``` ## View the repository and tag listing
-OCI Artifact Manifest enables artifact graphs to be pushed, discovered, pulled and copied without having to assign tags. This enables a tag listing to focus on the artifacts users think about, as opposed to the signatures and SBoMs that are associated with the container images, helm charts and other artifacts.
+The OCI Artifact Manifest enables artifact graphs to be pushed, discovered, pulled and copied without having to assign tags. Artifact manifests enable a tag listing to focus on the artifacts users think about, as opposed to the signatures and SBOMs that are associated with the container images, helm charts and other artifacts.
### View a list of tags
-```azurecli
-az acr repository show-tags \
- -n $ACR_NAME \
- --repository $REPO \
- -o jsonc
+```bash
+oras repo tags $REGISTRY/$REPO
``` ### View a list of manifests
-A repository can have a list of manifests that are both tagged and untagged
+A repository can have a list of manifests that are both tagged and untagged. Using the [`az acr manifest`][az-acr-manifest-metadata] CLI, view the full list of manifests:
```azurecli az acr manifest list-metadata \
az acr manifest list-metadata \
--output jsonc ```
-Note the container image manifests have `"tags":`
-
-```json
-{
- "architecture": "amd64",
- "changeableAttributes": {
- "deleteEnabled": true,
- "listEnabled": true,
- "readEnabled": true,
- "writeEnabled": true
- },
- "configMediaType": "application/vnd.docker.container.image.v1+json",
- "createdTime": "2021-11-12T00:18:54.5123449Z",
- "digest": "sha256:a0fc570a245b09ed752c42d600ee3bb5b4f77bbd70d8898780b7ab4...",
- "imageSize": 2814446,
- "lastUpdateTime": "2021-11-12T00:18:54.5123449Z",
- "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
- "os": "linux",
- "tags": [
- "v1"
- ]
-}
-```
+Note that the container image manifests have `"tags"`, while the reference types (`"mediaType": "application/vnd.oci.artifact.manifest.v1+json"`) don't.
-The signature is untagged, but tracked as a `oras.artifact.manifest` reference to the container image
+In the output, the signature is untagged, but tracked as an `oci.artifact.manifest` reference to the container image:
```json {
The signature is untagged, but tracked as a `oras.artifact.manifest` reference t
"readEnabled": true, "writeEnabled": true },
- "createdTime": "2021-11-12T00:19:10.987156Z",
- "digest": "sha256:555ea91f39e7fb30c06f3b7aa483663f067f2950dcbcc0b0d...",
- "imageSize": 85,
- "lastUpdateTime": "2021-11-12T00:19:10.987156Z",
- "mediaType": "application/vnd.cncf.oras.artifact.manifest.v1+json"
+ "createdTime": "2023-01-10T17:58:28.4403142Z",
+ "digest": "sha256:00da2c1c3ceea087b16e70c3f4e80dbce6f5b7625d6c8308ad095f7d3f6107b5",
+ "imageSize": 80,
+ "lastUpdateTime": "2023-01-10T17:58:28.4403142Z",
+ "mediaType": "application/vnd.oci.artifact.manifest.v1+json"
} ```+ ## Delete all artifacts in the graph
-Support for the OCI v1.1 Specification enables deleting the graph of artifacts associated with the root artifact. Use the [az acr repository delete][az-acr-repository-delete] command to delete the signature, SBoM and the signature of the SBoM.
+Support for the OCI v1.1 Specification enables deleting the graph of artifacts associated with the root artifact. Use the [`oras manifest delete`][oras-cli] command to delete the graph of artifacts (signature, SBOM, and the signature of the SBOM).
```azurecli
-az acr repository delete \
- -n $ACR_NAME \
- -t ${REPO}:$TAG -y
+oras manifest delete -f $REGISTRY/$REPO:$TAG
+
+oras manifest delete -f $REGISTRY/sample-staging/$REPO:$TAG
``` ### View the remaining manifests
+Deleting the root artifact also deletes all related artifacts, leaving a clean environment:
+ ```azurecli az acr manifest list-metadata \ --name $REPO \
- --registry $ACR_NAME \
- --detail -o jsonc
+ --registry $ACR_NAME -o jsonc
```
+Output:
+```output
+2023-01-10 18:38:45.366387 Error: repository "net-monitor" is not found.
+```
+## Summary
+
+In this article, a graph of supply chain artifacts is created, discovered, promoted, and pulled, providing lifecycle management of the artifacts you build and depend upon.
+ ## Next steps * Learn more about [the ORAS CLI](https://oras.land/cli/) * Learn more about [OCI Artifact Manifest][oci-artifact-manifest] for how to push, discover, pull, copy a graph of supply chain artifacts <!-- LINKS - external -->
-[docker-linux]: https://docs.docker.com/engine/installation/#supported-platforms
-[docker-mac]: https://docs.docker.com/docker-for-mac/
-[docker-windows]: https://docs.docker.com/docker-for-windows/
-[oras-install-docs]: https://oras.land/cli/
-[oras-docs]: https://oras.land/
-[oci-artifacts-referrers]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md#listing-referrers/
-[oci-artifact-manifest]: https://github.com/opencontainers/image-spec/blob/main/artifact.md/
-[oci-spec]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md/
+[docker-install]: https://www.docker.com/get-started/
+[oci-artifact-manifest]: https://github.com/opencontainers/image-spec/blob/main/artifact.md/
+[oci-artifact-referrers]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md#listing-referrers/
+[oci-spec]: https://github.com/opencontainers/distribution-spec/blob/main/spec.md/
+[oci-1_1-spec]: https://github.com/opencontainers/distribution-spec/releases/tag/v1.1.0-rc1
+[oras-docs]: https://oras.land/
+[oras-install-docs]: https://oras.land/cli/
+[oras-push-multifiles]: https://oras.land/cli/1_pushing/#pushing-artifacts-with-multiple-files
+[oras-cli]: https://oras.land/cli_reference/
<!-- LINKS - internal -->
-[az-acr-repository-show]: /cli/azure/acr/repository?#az_acr_repository_show
+[acr-authentication]: /azure/container-registry/container-registry-authentication?tabs=azure-cli
+[az-acr-create]: /azure/container-registry/container-registry-get-started-azure-cli
+[az-acr-build]: /cli/azure/acr#az_acr_build
+[az-acr-manifest-metadata]: /cli/azure/acr/manifest/metadata
[az-acr-repository-delete]: /cli/azure/acr/repository#az_acr_repository_delete
+[azure-cli-install]: /cli/azure/install-azure-cli
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
Currently the point in time restore functionality has the following limitations:
* While a restore is in progress, don't modify or delete the Identity and Access Management (IAM) policies. These policies grant the permissions for the account to change any VNET, firewall configuration.
-* Azure Cosmos DB for SQL or MongoDB accounts that create unique index after the container is created aren't supported for continuous backup. Only containers that create unique index as a part of the initial container creation are supported. For MongoDB accounts, you create unique index using [extension commands](mongodb/custom-commands.md).
+* Azure Cosmos DB for MongoDB accounts with continuous backup do not support creating a unique index for an existing collection. For such an account, unique indexes must be created along with their collection; this is done using the create collection [extension commands](mongodb/custom-commands.md).
* The point-in-time restore functionality always restores to a new Azure Cosmos DB account. Restoring to an existing account is currently not supported. If you're interested in providing feedback about in-place restore, contact the Azure Cosmos DB team via your account representative.
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gremlin/introduction.md
g.addV('person').
property('age', 44) ```
+> [!TIP]
+> If you are following along with these examples, you can use any of these properties (`age`, `firstName`, `lastName`) as a partition key when you create your graph. The `id` property is not supported as a partition key in a graph.
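As a hedged illustration of that tip, the following Azure CLI sketch creates a Gremlin graph partitioned on the `age` property; the resource group, account, database, and graph names are placeholders, not values from this article:

```azurecli
# Sketch: create a Gremlin graph partitioned on /age (all resource names are placeholders).
az cosmosdb gremlin graph create \
    --resource-group myResourceGroup \
    --account-name mygremlinaccount \
    --database-name sample-database \
    --name sample-graph \
    --partition-key-path "/age"
```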
+ Next, the following Gremlin statement inserts a *knows* edge between **Thomas** and **Robin**. ```console
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Platform metrics and the Activity logs are collected automatically, whereas you
| **MongoRequests** | Mongo | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` | | **CassandraRequests** | Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` | | **GremlinRequests** | Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
- | **QueryRuntimeStatistics** | SQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
+ | **QueryRuntimeStatistics** | NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
| **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the top three keys with largest storage size are captured by the PartitionKeyStatistics log. </li></ul> If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size may not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` | | **PartitionKeyRUConsumption** | API for NoSQL | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` | | **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include, creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
cosmos-db Object Array https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/object-array.md
The results are:
], [ {
- "familyName": "Merriam",
- "givenName": "Jesse",
- "gender": "female",
- "grade": 1
+ "familyName": "Merriam",
+ "givenName": "Jesse",
+ "gender": "female",
+ "grade": 1
}, {
- "familyName": "Miller",
- "givenName": "Lisa",
- "gender": "female",
- "grade": 8
+ "familyName": "Miller",
+ "givenName": "Lisa",
+ "gender": "female",
+ "grade": 8
} ] ]
cosmos-db Quickstart Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-dotnet.md
The `Microsoft.Azure.Cosmos` client libraries enable you to perform *data* opera
> - [Azure Resource Manager templates (ARM templates)](manage-with-templates.md) > - [Azure Resource Manager .NET client library](https://www.nuget.org/packages/Azure.ResourceManager.CosmosDB/)
-The Azure CLI approach is used in this example. Use the [`az cosmosdb sql database create`](/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [`az cosmosdb sql container create`](/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container.
+The Azure CLI approach is used in this example. Use the [`az cosmosdb sql database create`](/cli/azure/cosmosdb/sql/database#az-cosmosdb-sql-database-create) and [`az cosmosdb sql container create`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-create) commands to create a Cosmos DB NoSQL database and container.
```azurecli # Create a SQL API database
cosmos-db Time To Live https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/time-to-live.md
Last updated 09/16/2021
# Time to Live (TTL) in Azure Cosmos DB [!INCLUDE[NoSQL](../includes/appliesto-nosql.md)]
-With **Time to Live** or TTL, Azure Cosmos DB provides the ability to delete items automatically from a container after a certain time period. By default, you can set time to live at the container level and override the value on a per-item basis. After you set the TTL at a container or at an item level, Azure Cosmos DB will automatically remove these items after the time period, since the time they were last modified. Time to live value is configured in seconds. When you configure TTL, the system will automatically delete the expired items based on the TTL value, without needing a delete operation that is explicitly issued by the client application. The maximum value for TTL is 2147483647.
+With **Time to Live** or TTL, Azure Cosmos DB provides the ability to delete items automatically from a container after a certain time period. By default, you can set time to live at the container level and override the value on a per-item basis. After you set the TTL at a container or at an item level, Azure Cosmos DB will automatically remove these items after the time period, since the time they were last modified. Time to live value is configured in seconds. When you configure TTL, the system will automatically delete the expired items based on the TTL value, without needing a delete operation that is explicitly issued by the client application. The maximum value for TTL is 2147483647 seconds, the approximate equivalent of 24,855 days or 68 years.
Deletion of expired items is a background task that consumes left-over [Request Units](../request-units.md), that is, Request Units that haven't been consumed by user requests. Even after the TTL has expired, if the container is overloaded with requests and there aren't enough RUs available, the data deletion is delayed. Data is deleted once there are enough RUs available to perform the delete operation. Though the data deletion is delayed, data is not returned by any queries (by any API) after the TTL has expired.
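As a hedged illustration of the container-level default described above, the following Azure CLI sketch sets a default TTL of one day (86,400 seconds) on an existing container; the resource group, account, database, and container names are placeholders rather than values from this article:

```azurecli
# Sketch: set a container-level default TTL of 86400 seconds (1 day); all names are placeholders.
az cosmosdb sql container update \
    --resource-group myResourceGroup \
    --account-name mycosmosaccount \
    --database-name sample-database \
    --name sample-container \
    --ttl 86400
```

Individual items can still override this default with their own `ttl` property, as described above.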
data-factory Tutorial Pipeline Failure Error Handling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-pipeline-failure-error-handling.md
Previously updated : 09/22/2022 Last updated : 01/09/2023
-# Understanding pipeline failure
+# Conditional execution
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
-## Error handling
+## Conditional paths
Azure Data Factory and Synapse Pipeline orchestration allows conditional logic and enables users to take different paths based upon the outcome of a previous activity. Using different paths allows users to build robust pipelines and incorporate error handling in ETL/ELT logic. In total, we allow four conditional paths:
-* Upon Success (default pass)
-* Upon Failure
-* Upon Completion
-* Upon Skip
+| Name | Explanation |
+| | |
+| Upon Success | (Default Pass) Execute this path if the current activity succeeded |
+| Upon Failure | Execute this path if the current activity failed |
+| Upon Completion | Execute this path after the current activity completed, regardless of whether it succeeded or not |
+| Upon Skip | Execute this path if the activity itself didn't run |
:::image type="content" source="media/tutorial-pipeline-failure-error-handling/pipeline-error-1-four-branches.png" alt-text="Screenshot showing the four branches out of an activity.":::
+You may add multiple branches following an activity, with one exception: the _Upon Completion_ path can't coexist with either the _Upon Success_ or _Upon Failure_ path. For each pipeline run, at most one path will be activated, based on the execution outcome of the activity.
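For readers authoring pipeline JSON directly, a minimal sketch of how a conditional path is expressed is shown below; the activity names and the `Wait` activity are illustrative assumptions, while the `dependencyConditions` values (`Succeeded`, `Failed`, `Completed`, `Skipped`) correspond to the four paths in the table above:

```json
{
  "name": "HandleFailure",
  "type": "Wait",
  "dependsOn": [
    {
      "activity": "CopySalesData",
      "dependencyConditions": [ "Failed" ]
    }
  ],
  "typeProperties": {
    "waitTimeInSeconds": 1
  }
}
```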
+
+## Error Handling
+ ### Common error handling mechanism #### Try Catch block
databox Data Box Cable Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-cable-options.md
Previously updated : 10/24/2018 Last updated : 01/10/2023 # Cabling options for your Azure Data Box
-This article describes the various ways to cable your Azure Data Box for data transfer.
+This article describes the various ways to cable your Azure Data Box for data transfer. For a full list of supported cables, see the [list of supported cables and switches from Mellanox](https://network.nvidia.com/pdf/firmware/ConnectX3-FW-2_42_5000-release_notes.pdf).
## Transfer via MGMT port
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
description: This document helps you use adaptive application control in Microso
Previously updated : 11/09/2021 Last updated : 01/08/2023 # Use adaptive application controls to reduce your machines' attack surfaces - Learn about the benefits of Microsoft Defender for Cloud's adaptive application controls and how you can enhance your security with this data-driven, intelligent feature. ## What are adaptive application controls? Adaptive application controls are an intelligent and automated solution for defining allowlists of known-safe applications for your machines.
-Often, organizations have collections of machines that routinely run the same processes. Microsoft Defender for Cloud uses machine learning to analyze the applications running on your machines and create a list of the known-safe software. Allowlists are based on your specific Azure workloads, and you can further customize the recommendations using the instructions below.
+Often, organizations have collections of machines that routinely run the same processes. Microsoft Defender for Cloud uses machine learning to analyze the applications running on your machines and create a list of the known-safe software. Allowlists are based on your specific Azure workloads, and you can further customize the recommendations using the following instructions.
When you've enabled and configured adaptive application controls, you'll get security alerts if any application runs other than the ones you've defined as safe.
Select the recommendation, or open the adaptive application controls page to vie
1. Open the Workload protections dashboard and from the advanced protection area, select **Adaptive application controls**.
- :::image type="content" source="./media/adaptive-application/opening-adaptive-application-control.png" alt-text="Opening adaptive application controls from the Azure Dashboard." lightbox="./media/adaptive-application/opening-adaptive-application-control.png":::
+ :::image type="content" source="./media/adaptive-application/opening-adaptive-application-control-new.png" alt-text="Screenshot showing opening adaptive application controls from the Azure Dashboard." lightbox="./media/adaptive-application/opening-adaptive-application-control.png":::
The **Adaptive application controls** page opens with your VMs grouped into the following tabs:
Select the recommendation, or open the adaptive application controls page to vie
- **Recommended** - Groups of machines that consistently run the same applications, and don't have an allowlist configured. We recommend that you enable adaptive application controls for these groups. > [!TIP]
- > If you see a group name with the prefix "REVIEWGROUP", it contains machines with a partially consistent list of applications. Microsoft Defender for Cloud can't see a pattern but recommends reviewing this group to see whether _you_ can manually define some adaptive application controls rules as described in [Editing a group's adaptive application controls rule](#edit-a-groups-adaptive-application-controls-rule).
+ > If you see a group name with the prefix "REVIEWGROUP", it contains machines with a partially consistent list of applications. Microsoft Defender for Cloud can't see a pattern but recommends reviewing this group to see whether _you_ can manually define some adaptive application controls rules as described in [Edit a group's adaptive application controls rule](#edit-a-groups-adaptive-application-controls-rule).
> > You can also move machines from this group to other groups as described in [Move a machine from one group to another](#move-a-machine-from-one-group-to-another).
Select the recommendation, or open the adaptive application controls page to vie
- It's missing a Log Analytics agent - The Log Analytics agent isn't sending events - It's a Windows machine with a pre-existing [AppLocker](/windows/security/threat-protection/windows-defender-application-control/applocker/applocker-overview) policy enabled by either a GPO or a local security policy
- - AppLocker is not available (Windows Server Core installations)
+ - AppLocker isn't available (Windows Server Core installations)
> [!TIP] > Defender for Cloud needs at least two weeks of data to define the unique recommendations per group of machines. Machines that have recently been created, or which belong to subscriptions that were only recently protected by Microsoft Defender for Servers, will appear under the **No recommendation** tab.
-1. Open the **Recommended** tab. The groups of machines with recommended allowlists appears.
+1. Open the **Recommended** tab. The groups of machines with recommended allowlists appear.
![Recommended tab.](./media/adaptive-application/adaptive-application-recommended-tab.png)
Select the recommendation, or open the adaptive application controls page to vie
![Configure a new rule.](./media/adaptive-application/adaptive-application-create-rule.png)
- 1. **Select machines** - By default, all machines in the identified group are selected. Unselect any to removed them from this rule.
+ 1. **Select machines** - By default, all machines in the identified group are selected. Unselect any to remove them from this rule.
1. **Recommended applications** - Review this list of applications that are common to the machines within this group, and recommended to be allowed to run.
Select the recommendation, or open the adaptive application controls page to vie
> [!TIP] > Both application lists include the option to restrict a specific application to certain users. Adopt the principle of least privilege whenever possible. >
- > Applications are defined by their publishers, if an application doesn't have publisher information (it's unsigned), a path rule is created for the full path of the specific application.
+ > Applications are defined by their publishers; if an application doesn't have publisher information (it's unsigned), a path rule is created for the full path of the specific application.
1. To apply the rule, select **Audit**.
To edit the rules for a group of machines:
## Review and edit a group's settings
-1. To view the details and settings of your group, select **Group settings**
+1. To view the details and settings of your group, select **Group settings**.
This pane shows the name of the group (which can be modified), the OS type, the location, and other relevant details.
- :::image type="content" source="./media/adaptive-application/adaptive-application-group-settings.png" alt-text="The group settings page for adaptive application controls." lightbox="./media/adaptive-application/adaptive-application-group-settings.png":::
+ :::image type="content" source="./media/adaptive-application/adaptive-application-group-settings.png" alt-text="Screenshot showing the group settings page for adaptive application controls." lightbox="./media/adaptive-application/adaptive-application-group-settings.png":::
1. Optionally, modify the group's name or file type protection modes.
To remediate the issues:
1. To investigate further, select a group.
- ![Recent alerts.](./media/adaptive-application/recent-alerts.png)
+   :::image type="content" source="./media/adaptive-application/recent-alerts.png" alt-text="Screenshot showing selecting a group from the recent alerts for adaptive application controls." lightbox="./media/adaptive-application/recent-alerts.png":::
1. For further details, and the list of affected machines, select an alert. The security alerts page shows more details of the alerts and provides a **Take action** link with recommendations of how to mitigate the threat.
- :::image type="content" source="media/adaptive-application/adaptive-application-alerts-start-time.png" alt-text="The start time of adaptive application controls alerts is the time that adaptive application controls created the alert.":::
+ :::image type="content" source="media/adaptive-application/adaptive-application-alerts-start-time.png" alt-text="Screenshot showing the start time of adaptive application controls alerts is the time that adaptive application controls created the alert.":::
> [!NOTE] > Adaptive application controls calculates events once every twelve hours. The "activity start time" shown in the security alerts page is the time that adaptive application controls created the alert, **not** the time that the suspicious process was active.
To remediate the issues:
## Move a machine from one group to another
-When you move a machine from one group to another, the application control policy applied to it changes to the settings of the group that you moved it to. You can also move a machine from a configured group to a non-configured group, doing so removes any application control rules that were applied to the machine.
+When you move a machine from one group to another, the application control policy applied to it changes to the settings of the group that you moved it to. You can also move a machine from a configured group to a non-configured group; doing so removes any application control rules that were applied to the machine.
1. Open the **Workload protections dashboard** and from the advanced protection area, select **Adaptive application controls**.
-1. From the **Adaptive application controls** page, from the **Configured** tab, select the group containing the machine to be moved.
+1. From the **Adaptive application controls** page, from the **Configured** tab, select the group containing the machine to be moved.
1. Open the list of **Configured machines**.
When you move a machine from one group to another, the application control polic
To manage your adaptive application controls programmatically, use our REST API.
-The relevant API documentation is available in [the Adaptive application Controls section of Defender for Cloud's API docs](/rest/api/defenderforcloud/adaptive-application-controls).
+The relevant API documentation is available in [the Adaptive application Controls section of Defender for Cloud's API docs](https://learn.microsoft.com/rest/api/defenderforcloud/adaptive-application-controls).
-Some of the functions that are available from the REST API:
+Some of the functions available from the REST API include:
* **List** retrieves all your group recommendations and provides a JSON with an object for each group.
-* **Get** retrieves the JSON with the full recommendation data (that is, list of machines, publisher/path rules, and so on).
+* **Get** retrieves the JSON with the full recommendation data (list of machines, publisher/path rules, etc.).
* **Put** configures your rule (use the JSON you retrieved with **Get** as the body for this request). > [!IMPORTANT]
- > The **Put** function expects fewer parameters than the JSON returned by the Get command contains.
+ > The **Put** function expects fewer parameters than the JSON returned by the **Get** command contains.
>
- > Remove the following properties before using the JSON in the Put request: recommendationStatus, configurationStatus, issues, location, and sourceSystem.
+ > Remove the following properties before using the JSON in the **Put** request: recommendationStatus, configurationStatus, issues, location, and sourceSystem.
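As a rough sketch of the **List** operation described above, you might call the API with `az rest`; the provider path and API version shown here are assumptions to confirm against the linked API documentation:

```azurecli
# Sketch only: list adaptive application control groups for a subscription.
# The resource path and api-version are assumptions; verify them in the API docs linked above.
az rest --method get \
    --url "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.Security/applicationWhitelistings?api-version=2020-01-01"
```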
## FAQ - Adaptive application controls
Some of the functions that are available from the REST API:
- [Why do I see a Qualys app in my recommended applications?](#why-do-i-see-a-qualys-app-in-my-recommended-applications) ### Are there any options to enforce the application controls?
-No enforcement options are currently available. Adaptive application controls are intended to provide **security alerts** if any application runs other than the ones you've defined as safe. They have a range of benefits ([What are the benefits of adaptive application controls?](#what-are-the-benefits-of-adaptive-application-controls)) and are extremely customizable as shown on this page.
+No enforcement options are currently available. Adaptive application controls are intended to provide **security alerts** if any application runs other than the ones you've defined as safe. They have a range of benefits ([What are the benefits of adaptive application controls?](#what-are-the-benefits-of-adaptive-application-controls)) and are customizable as shown on this page.
### Why do I see a Qualys app in my recommended applications? [Microsoft Defender for Servers](defender-for-servers-introduction.md) includes vulnerability scanning for your machines at no extra cost. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. For details of this scanner and instructions for how to deploy it, see [Defender for Cloud's integrated Qualys vulnerability assessment solution](deploy-vulnerability-assessment-vm.md).
defender-for-cloud Concept Easm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-easm.md
description: Learn how to gain comprehensive visibility and insights over extern
Previously updated : 09/21/2022 Last updated : 01/10/2023 # What is an external attack surface?
-An external attack surface is the entire area of an organization or system that is susceptible to an attack from an external source. An organization's attack surface is made up of all the points of access that an unauthorized person could use to enter their system. The larger your attack surface is, the harder it is to protect.
+An external attack surface is the entire area of an organization or system that is susceptible to an attack from an external source. An organization's attack surface is made up of all the points of access that an unauthorized person could use to enter their system. The larger your attack surface is, the harder it is to protect.
You can use Defender for Cloud's new integration with Microsoft Defender External Attack Surface Management (Defender EASM), to improve your organization's security posture and reduce the potential risk of being attacked. Defender EASM continuously discovers and maps your digital attack surface to provide an external view of your online infrastructure. This visibility enables security and IT teams to identify unknowns, prioritize risk, eliminate threats, and extend vulnerability and exposure control beyond the firewall.
Defender EASM applies MicrosoftΓÇÖs crawling technology to discover assets that
- Pinpoint attacker-exposed weaknesses, anywhere and on-demand - Gain visibility into third-party attack surfaces
-EASM collects data for publicly exposed assets (“outside-in”) which can be used by MDC CSPM (“inside-out”) to assist with internet-exposure validation and discovery capabilities to provide better visibility to customers.
+EASM collects data for publicly exposed assets (“outside-in”). That data can be used by MDC CSPM (“inside-out”) to assist with internet-exposure validation and discovery capabilities to provide better visibility to customers.
## Learn more
defender-for-cloud Defender For App Service Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-app-service-introduction.md
Title: Microsoft Defender for App Service - the benefits and features description: Learn about the capabilities of Microsoft Defender for App Service and how to enable it on your subscription Previously updated : 11/09/2021 Last updated : 01/10/2023
Defender for Cloud monitors for many threats to your App Service resources. The
### Dangling DNS detection
-Defender for App Service also identifies any DNS entries remaining in your DNS registrar when an App Service website is decommissioned - these are known as dangling DNS entries. When you remove a website and don't remove its custom domain from your DNS registrar, the DNS entry is pointing at a non-existent resource and your subdomain is vulnerable to a takeover. Defender for Cloud doesn't scan your DNS registrar for *existing* dangling DNS entries; it alerts you when an App Service website is decommissioned and its custom domain (DNS entry) isn't deleted.
+Defender for App Service also identifies any DNS entries remaining in your DNS registrar when an App Service website is decommissioned - these are known as dangling DNS entries. When you remove a website and don't remove its custom domain from your DNS registrar, the DNS entry is pointing to a non-existent resource, and your subdomain is vulnerable to a takeover. Defender for Cloud doesn't scan your DNS registrar for *existing* dangling DNS entries; it alerts you when an App Service website is decommissioned and its custom domain (DNS entry) isn't deleted.
Subdomain takeovers are a common, high-severity threat for organizations. When a threat actor detects a dangling DNS entry, they create their own site at the destination address. The traffic intended for the organization's domain is then directed to the threat actor's site, and they can use that traffic for a wide range of malicious activity.
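As a rough illustration of this pattern (not Defender for Cloud's own detection logic), the following Python sketch uses the third-party `dnspython` package to flag subdomains whose CNAME target no longer resolves; the subdomain list is a placeholder.

```python
# Hypothetical sketch: flag CNAME records whose targets no longer resolve.
# Illustrates the dangling DNS concept; not Defender for Cloud's detection.
import dns.resolver  # third-party package: dnspython

def find_dangling(subdomains):
    dangling = []
    for name in subdomains:
        try:
            # Does the subdomain still have a CNAME record in your zone?
            cname = dns.resolver.resolve(name, "CNAME")[0].target.to_text()
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # no CNAME left, nothing to take over
        try:
            dns.resolver.resolve(cname, "A")  # does the target still exist?
        except dns.resolver.NXDOMAIN:
            dangling.append((name, cname))  # points at a decommissioned name
    return dangling

print(find_dangling(["shop.contoso.com", "blog.contoso.com"]))  # example names
```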
For a full list of the App Service alerts, see the [Reference table of alerts](a
## Next steps
-In this article, you learned about Microsoft Defender for App Service.
+In this article, you learned about Microsoft Defender for App Service.
> [!div class="nextstepaction"] > [Enable enhanced protections](enable-enhanced-security.md)
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
Defender for DevOps uses a central console to empower security teams with the ab
Defender for DevOps helps unify, strengthen and manage multi-pipeline DevOps security. ## Availability
+ > [!Note]
+ > During the preview, the maximum number of repositories that can be onboarded to Microsoft Defender for Cloud is 2,000. If you try to connect more than 2,000 repositories, only the first 2,000 repositories, sorted alphabetically, will be onboarded.
+ >
+ > If your organization is interested in onboarding more than 2,000 repositories, please complete [this survey](https://aka.ms/dfd-forms/onboarding).
| Aspect | Details | |--|--|
defender-for-cloud Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-introduction.md
Title: Microsoft Defender for DNS - the benefits and features description: Learn about the benefits and features of Microsoft Defender for DNS Previously updated : 11/09/2021 Last updated : 01/10/2023
A full list of the alerts provided by Microsoft Defender for DNS is on the [aler
Microsoft Defender for DNS doesn't use any agents.
-To protect your DNS layer, enable Microsoft Defender for DNS for each of your subscriptions as described in [Enable enhanced protections](enable-enhanced-security.md).
## Next steps In this article, you learned about Microsoft Defender for DNS.
+To protect your DNS layer, enable Microsoft Defender for DNS for each of your subscriptions as described in [Enable enhanced protections](enable-enhanced-security.md).
+ > [!div class="nextstepaction"] > [Enable enhanced protections](enable-enhanced-security.md)
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Title: Configure the Microsoft Security DevOps GitHub action description: Learn how to configure the Microsoft Security DevOps GitHub action. Previously updated : 09/11/2022 Last updated : 01/09/2023
Security DevOps uses the following Open Source tools:
## Prerequisites
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+ - [Connect your GitHub repositories](quickstart-onboard-github.md). - Follow the guidance to set up [GitHub Advanced Security](https://docs.github.com/en/organizations/keeping-your-organization-secure/managing-security-settings-for-your-organization/managing-security-and-analysis-settings-for-your-organization).
Security DevOps uses the following Open Source tools:
1. Sign in to [GitHub](https://www.github.com).
-1. Select a repository on which you want to configure the GitHub action.
+1. Select the repository where you want to configure the GitHub action.
1. Select **Actions**.
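If your workflow also uploads the action's SARIF output to GitHub code scanning (as in the documented setup), the findings surface as code scanning alerts. The following Python sketch lists those alerts through the GitHub REST API; the repository name and token environment variable are placeholders.

```python
# Sketch: list code scanning alerts for a repository after the scan has run.
# GITHUB_TOKEN and the owner/repo values are placeholders.
import os
import requests

owner, repo = "contoso", "sample-app"  # hypothetical repository
resp = requests.get(
    f"https://api.github.com/repos/{owner}/{repo}/code-scanning/alerts",
    headers={
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    params={"state": "open"},
    timeout=30,
)
resp.raise_for_status()
for alert in resp.json():
    print(alert["number"], alert["rule"]["id"], alert["rule"]["severity"])
```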
defender-for-cloud Plan Defender For Servers Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-agents.md
Title: Plan Defender for Servers agents and extensions deployment
-description: Plan for agent deployment to protect Azure, AWS, GCP, and on-premises servers with Defender for Servers
+description: Plan for agent deployment to protect Azure, AWS, GCP, and on-premises servers with Microsoft Defender for Servers.
Last updated 11/06/2022
-# Plan Defender for Servers agents/extensions and Azure Arc
+# Plan agents, extensions, and Azure Arc for Defender for Servers
-This article helps you to scale your Microsoft Defender for Servers deployment. Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
+This article helps you plan your agents, extensions, and Azure Arc resources for your Microsoft Defender for Servers deployment.
-## Before you start
+Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
-This article is the fifth part of the Defender for Servers planning guide. Before you begin, review:
+## Before you begin
+
+This article is the *fifth* article in the Defender for Servers planning guide. Before you begin, review the earlier articles:
1. [Start planning your deployment](plan-defender-for-servers.md) 1. [Understand where your data is stored and Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md) 1. [Review Defender for Servers access roles](plan-defender-for-servers-roles.md) 1. [Select a Defender for Servers plan](plan-defender-for-servers-select-plan.md)
-## Review agents and extensions
+## Review Azure Arc requirements
+
+Azure Arc helps you onboard Amazon Web Services (AWS), Google Cloud Platform (GCP), and on-premises machines to Azure. Defender for Cloud uses Azure Arc to protect non-Azure machines.
-Defender for Servers plans use a number of agents/extensions.
+### Foundational cloud security posture management
-## Review Azure Arc requirements
+For free foundational cloud security posture management (CSPM) features, Azure Arc running on AWS or GCP machines isn't required. For full functionality, we recommend that you *do* have Azure Arc running on AWS or GCP machines.
-Azure Arc is used to onboard AWS, GCP, and on-premises machines to Azure, and is used by Defender for Cloud to protect non-Azure machines.
+Azure Arc onboarding is required for on-premises machines.
-- **Foundational CSPM**:
- - For free foundational CSPM features, you don't need Azure Arc running on AWS/GCP machines, but it's recommended for full functionality.
- - You do need Azure Arc onboarding for on-premises machines.
-- **Defender for Servers plan**:
- - To use the Defender for Servers, all AWS/GCP and on-premises machines should be Azure Arc-enabled.
- - After setting up AWS/GCP connectors, Defender for Cloud can automatically deploy agents to AWS/GCP servers. This includes automatic deployment of the Azure Arc agent.
+### Defender for Servers plan
+
+To use Defender for Servers, all AWS, GCP, and on-premises machines should be Azure Arc-enabled.
+
+You can onboard the Azure Arc agent to your AWS or GCP servers automatically with the AWS or GCP multicloud connector.
### Plan for Azure Arc deployment
-1. Review [planning recommendations](../azure-arc/servers/plan-at-scale-deployment.md), and [deployment prerequisites](../azure-arc/servers/prerequisites.md).
-1. Azure Arc installs the Connected Machine agent to connect to and manage machines hosted outside Azure. Review the following:
+To plan for Azure Arc deployment:
+
+1. Review the Azure Arc [planning recommendations](../azure-arc/servers/plan-at-scale-deployment.md) and [deployment prerequisites](../azure-arc/servers/prerequisites.md).
+1. Azure Arc installs the Connected Machine agent to connect to and manage machines that are hosted outside of Azure. Review the following information:
+ - The [agent components and data collected from machines](../azure-arc/servers/agent-overview.md#agent-resources). - [Network and internet access](../azure-arc/servers/network-requirements.md) for the agent. - [Connection options](../azure-arc/servers/deployment-options.md) for the agent.
+## Log Analytics agent and Azure Monitor agent
-## Log Analytics agent/Azure Monitor agent
-
-Defender for Cloud uses the Log Analytics agent/Azure Monitor agent to collect information from compute resources, and then sends it to a Log Analytics workspace for further analysis. Review the [differences and recommendations regarding both agents](../azure-monitor/agents/agents-overview.md). Agents are used in Defender for Servers as follows.
+Defender for Cloud uses the Log Analytics agent and the Azure Monitor agent to collect information from compute resources. Then, it sends the data to a Log Analytics workspace for more analysis. Review the [differences and recommendations for both agents](../azure-monitor/agents/agents-overview.md).
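If you want to spot-check that agent data is actually arriving in a workspace, a quick query like the following Python sketch (using the `azure-identity` and `azure-monitor-query` packages) can help; the workspace GUID is a placeholder, and the `Heartbeat` table is just one example of agent-reported data.

```python
# Sketch: confirm agents are reporting to the Log Analytics workspace.
# The workspace GUID is a placeholder; Heartbeat is one example of agent data.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(
    workspace_id="00000000-0000-0000-0000-000000000000",
    query="Heartbeat | summarize LastSeen=max(TimeGenerated) by Computer",
    timespan=timedelta(days=1),
)
for table in result.tables:
    for computer, last_seen in table.rows:
        print(computer, last_seen)
```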
+The following table describes the agents that are used in Defender for Servers:
Feature | Log Analytics agent | Azure Monitor agent
- | |
-Foundational CSPM recommendations (free) that depend on agent: [OS baseline recommendation](apply-security-baseline.md) (Azure VMs) | :::image type="icon" source="./medi) is used.
+ | |
+Foundational CSPM recommendations (free) that depend on the agent: [OS baseline recommendation](apply-security-baseline.md) (Azure VMs) | :::image type="icon" source="./medi) is used.
Foundational CSPM: [System updates recommendations](recommendations-reference.md#compute-recommendations) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent"::: | Not yet available.
-Foundational CSPM: [Antimalware/Endpoint protection recommendations](endpoint-protection-recommendations-technical.md) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent.":::
-Attack detection at the OS level and network layer, including fileless attack detection).<br/><br/> Plan 1 relies on Defender for Endpoint capabilities for attack detection. | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent. Plan 1 relies on Defender for Endpoint.":::<br/><br/> Plan 2| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent. Plan 1 relies on Defender for Endpoint.":::<br/><br/> Plan 2
+Foundational CSPM: [Antimalware/endpoint protection recommendations](endpoint-protection-recommendations-technical.md) (Azure VMs) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent.":::
+Attack detection at the OS level and network layer, including fileless attack detection<br/><br/> Plan 1 relies on Defender for Endpoint capabilities for attack detection. | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent. Plan 1 relies on Defender for Endpoint.":::<br/><br/> Plan 2| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent. Plan 1 relies on Defender for Endpoint.":::<br/><br/> Plan 2
File integrity monitoring (Plan 2 only) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent."::: [Adaptive application controls](adaptive-application-controls.md) (Plan 2 only) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Log Analytics agent."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported by the Azure Monitor agent."::: - ## Qualys extension
-The Qualys extension is available in Defender for Servers Plan 2, and is deployed if you want to use Qualys for vulnerability assessment.
+The Qualys extension is available in Defender for Servers Plan 2. The extension is deployed if you want to use Qualys for vulnerability assessment.
+
+Here's more information:
- The Qualys extension sends metadata for analysis to one of two Qualys datacenter regions, depending on your Azure region.
- - If you're in a European Azure geography data is processed in Qualys' European data center.
- - For other regions data is processed in the US data center.
-- To use Qualys on a machine, the extension must be installed, and the machine must be able to communicate with the relevant network endpoint:
- - Europe datacenter: `https://qagpublic.qg2.apps.qualys.eu`
- - US datacenter: `https://qagpublic.qg3.apps.qualys.com`
+ - If you're in a European Azure geography, data is processed in the Qualys European datacenter.
+ - For other regions, data is processed in the US datacenter.
+
+- To use Qualys on a machine, the extension must be installed and the machine must be able to communicate with the relevant network endpoint (see the connectivity sketch after this list):
+ - Europe datacenter: `https://qagpublic.qg2.apps.qualys.eu`
+ - US datacenter: `https://qagpublic.qg3.apps.qualys.com`
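A minimal connectivity check, sketched below in Python, verifies that a machine can open a TLS connection to the relevant Qualys endpoint; substitute the endpoint for your region.

```python
# Sketch: verify outbound HTTPS connectivity to the Qualys endpoint for your region.
import socket
import ssl

host = "qagpublic.qg3.apps.qualys.com"  # US datacenter; use qagpublic.qg2.apps.qualys.eu for Europe
context = ssl.create_default_context()
try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(f"Connected to {host}, TLS version: {tls.version()}")
except OSError as err:
    print(f"Cannot reach {host}: {err}")
```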
## Guest configuration extension The extension performs audit and configuration operations inside VMs. -- If you're using the Azure Monitor Agent, Defender for Cloud leverages this extension to analyze operating system security baseline settings on Windows and Linux machines.-- While Azure Arc-enabled servers and the guest configuration extension are free, additional costs might apply when using guest configuration policies on Azure Arc servers outside Defender for Cloud scope.
+- If you're using the Azure Monitor Agent, Defender for Cloud uses this extension to analyze operating system security baseline settings on Windows and Linux machines.
+- Although Azure Arc-enabled servers and the guest configuration extension are free, more costs might apply if you use guest configuration policies on Azure Arc servers outside the scope of Defender for Cloud.
-Learn more about the Azure Policy [guest configuration extension](../virtual-machines/extensions/guest-configuration.md)
+Learn more about the Azure Policy [guest configuration extension](../virtual-machines/extensions/guest-configuration.md).
## Defender for Endpoint extensions When you enable Defender for Servers, Defender for Cloud automatically deploys a Defender for Endpoint extension. The extension is a management interface that runs a script inside the operating system to deploy and integrate the Defender for Endpoint sensor on the machine. -- Windows machines extension: MDE.Windows-- Linux machines extension: MDE.Linux
+- Windows machines extension: `MDE.Windows`
+- Linux machines extension: `MDE.Linux`
- Machines must meet [minimum requirements](/microsoft-365/security/defender-endpoint/minimum-requirements).-- There are some [specific requirements](/microsoft-365/security/defender-endpoint/configure-server-endpoints) for some Windows Server versions.
+- Some Windows Server versions have [specific requirements](/microsoft-365/security/defender-endpoint/configure-server-endpoints).
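To spot-check whether the Defender for Endpoint extension was provisioned on a specific VM, you can list the VM's extensions through Azure Resource Manager, as in the following Python sketch. The subscription, resource group, VM name, and API version shown are placeholders and assumptions; confirm current values for your environment.

```python
# Sketch: list extensions on a VM and look for the Defender for Endpoint extension.
# Subscription, resource group, VM name, and api-version are placeholders/assumptions.
import requests
from azure.identity import DefaultAzureCredential

sub, rg, vm = "<subscription-id>", "<resource-group>", "<vm-name>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.get(
    f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.Compute/virtualMachines/{vm}/extensions",
    params={"api-version": "2023-07-01"},  # assumed version; check the current one
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
names = [ext["name"] for ext in resp.json().get("value", [])]
print("Defender for Endpoint extension present:", any(n.startswith("MDE.") for n in names))
```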
## Verify operating system support
-Before deployment, verify operating system support for agents and extensions.
+Before you deploy Defender for Servers, verify operating system support for agents and extensions:
- Verify that your [operating systems are supported](/microsoft-365/security/defender-endpoint/minimum-requirements) by Defender for Endpoint.-- [Check requirements](../azure-arc/servers/prerequisites.md) for Azure Arc Connect Machine agent.-- Check operating system support for the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md#supported-operating-systems) and [Azure Monitor agent](../azure-monitor/agents/agents-overview.md)
+- [Check requirements](../azure-arc/servers/prerequisites.md) for the Azure Arc Connect Machine agent.
+- Check operating system support for the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md#supported-operating-systems) and [Azure Monitor agent](../azure-monitor/agents/agents-overview.md).
## Review agent provisioning
-When you enable Defender for Cloud plans, including Defender for Servers, you can select to automatically provision a number of agents. These are the agents that are relevant for Defender for Servers:
+When you enable Defender for Cloud plans, including Defender for Servers, you can choose to automatically provision some agents that are relevant for Defender for Servers:
+
+- Log Analytics agent and Azure Monitor agent for Azure VMs
+- Log Analytics agent and Azure Monitor agent for Azure Arc VMs
+- Qualys agent
+- Guest configuration agent
-- Log Analytics agent/Azure Monitor agent for Azure VMs-- Log Analytics agent/Azure Monitor agent for Azure Arc VMs-- Qualys agent -- Guest configuration agent
+When you enable Defender for Servers Plan 1 or Plan 2, the Defender for Endpoint extension is automatically provisioned on all supported machines in the subscription.
-In addition, when you enable Defender for Servers Plan 1 or Plan 2, the Defender for Endpoint extension is automatically provisioned on all supported machines in the subscription.
+## Provisioning considerations
-## Points to note
+The following table describes provisioning considerations to be aware of:
-**Provisioning** | **Details**
+Provisioning | Details
|
-Defender for Endpoint sensor | If machines are running Microsoft Antimalware, also known as System Center Endpoint Protection (SCEP), the Windows extension automatically removes it from the machine.<br/><br/> If you deploy on a machine that already has the legacy Microsoft Monitoring agent (MMA) Defender for Endpoint sensor running, after successfully installing the Defender for Cloud/Defender for Endpoint unified solution, the extension will stop and disable the legacy sensor. The change is transparent and the machine's protection history is preserved.
-AWS/GCP machines | For these machines, you configure automatic provisioning when you set up the AWS or GCP connector.
-Manual installation | If you don't want Defender for Cloud to provision the Log Analytics agent/Azure Monitor agent, you can install agents manually.<br/><br/> You can connect the agent to the default Defender for Cloud workspace, or to a custom workspace.<br/><br/> The workspace must have the *SecurityCenterFree* (providing free foundational CSPM) or *Security* solution enabled (Defender for Servers Plan 2).
-[Log Analytics agent running directly](faq-data-collection-agents.yml#what-if-a-log-analytics-agent-is-directly-installed-on-the-machine-but-not-as-an-extension--direct-agent--) | If a Windows VM has the Log Analytics agent running, but not as a VM extension, Defender for Cloud will install the extension. The agent will report to the Defender for Cloud workspace in addition to the existing agent workspace. <br/><br/> On Linux VMs, multi-homing isn't supported, and if an existing agent is detected then the agent won't be automatically provisioned.
-[Operations Manager agent](faq-data-collection-agents.yml#what-if-a-system-center-operations-manager-agent-is-already-installed-on-my-vm-) | The Log Analytics agent can work side-by-side with the Operations Manager agent. The agents share common runtime libraries which will be updated when the Log Analytics agent is deployed.
-Removing the Log Analytics extension | If you remove the Log Analytics extension, Defender for Cloud won't be able to collect security data and recommendations/alerts will be missing. Within 24 hours, Defender for Cloud will determine that the extension is missing and reinstalls it.
+Defender for Endpoint sensor | If machines are running Microsoft Antimalware, also known as System Center Endpoint Protection (SCEP), the Windows extension automatically removes it from the machine.<br/><br/> If you deploy on a machine that already has the legacy Microsoft Monitoring agent (MMA) Defender for Endpoint sensor running, after the Defender for Cloud and Defender for Endpoint unified solution is successfully installed, the extension stops and disables the legacy sensor. The change is transparent and the machine's protection history is preserved.
+AWS and GCP machines | Configure automatic provisioning when you set up the AWS or GCP connector.
+Manual installation | If you don't want Defender for Cloud to provision the Log Analytics agent and Azure Monitor agent, you can install agents manually.<br/><br/> You can connect the agent to the default Defender for Cloud workspace or to a custom workspace.<br/><br/> The workspace must have the *SecurityCenterFree* (for free foundational CSPM) or *Security* solution enabled (Defender for Servers Plan 2).
+[Log Analytics agent running directly](faq-data-collection-agents.yml#what-if-a-log-analytics-agent-is-directly-installed-on-the-machine-but-not-as-an-extension--direct-agent--) | If a Windows VM has the Log Analytics agent running but not as a VM extension, Defender for Cloud installs the extension. The agent reports to the Defender for Cloud workspace and to the existing agent workspace. <br/><br/> On Linux VMs, multi-homing isn't supported. If an existing agent exists, the Log Analytics agent isn't automatically provisioned.
+[Operations Manager agent](faq-data-collection-agents.yml#what-if-a-system-center-operations-manager-agent-is-already-installed-on-my-vm-) | The Log Analytics agent can work side by side with the Operations Manager agent. The agents share common runtime libraries that are updated when the Log Analytics agent is deployed.
+Removing the Log Analytics extension | If you remove the Log Analytics extension, Defender for Cloud can't collect security data and recommendations, and alerts will be missing. Within 24 hours, Defender for Cloud determines that the extension is missing and reinstalls it.
-## When shouldn't I use auto provisioning?
+## When to opt out of auto provisioning
-You might want to opt out of automatic provisioning in the following circumstances.
+You might want to opt out of automatic provisioning in the circumstances that are described in the following table:
Situation | Relevant agent | Details | |
-You have critical VMs that shouldn't have agents installed. | Log Analytics agent, Azure Monitor agent. | Automatic provisioning is for an entire subscription. You can't opt out for specific machines.
-If you're running the System Center Operations Manager agent version 2012 with Operations Manager 2012 | Log Analytics agent | With this configuration, don't turn on automatic provisioning, otherwise management capabilities might be lost.
-You want to configure a custom workspace | Log Analytics agent, Azure Monitor agent | You have two options with a custom workspace:<br/><br/> - Opt out of automatic provisioning when you first set up Defender for Cloud. Then, configure provisioning on your custom workspace.<br/><br/>- Let automatic provisioning run to install the Log Analytic agents on machines. Set a custom workspace, and then when asked, reconfigure existing VMs with the new workspace setting.
+You have critical VMs that shouldn't have agents installed | Log Analytics agent, Azure Monitor agent | Automatic provisioning is for an entire subscription. You can't opt out for specific machines.
+You're running the System Center Operations Manager agent version 2012 with Operations Manager 2012 | Log Analytics agent | With this configuration, don't turn on automatic provisioning. Management capabilities might be lost.
+You want to configure a custom workspace | Log Analytics agent, Azure Monitor agent | You have two options with a custom workspace:<br/><br/> - Opt out of automatic provisioning when you first set up Defender for Cloud. Then, configure provisioning on your custom workspace.<br/><br/>- Let automatic provisioning run to install the Log Analytics agents on machines. Set a custom workspace, and then reconfigure existing VMs with the new workspace setting.
## Next steps
-After working through these planning steps, you can start deployment:
--- [Enable Defender for Servers](enable-enhanced-security.md) plans-- [Connect on-premises machines](quickstart-onboard-machines.md) to Azure.-- [Connect AWS accounts](quickstart-onboard-aws.md) to Defender for Cloud.-- [Connect GCP projects](quickstart-onboard-gcp.md) to Defender for Cloud.-- Learn about [scaling your Defender for Server deployment](plan-defender-for-servers-scale.md).
+After you work through these planning steps, learn how to [scale your Defender for Servers deployment](plan-defender-for-servers-scale.md).
defender-for-cloud Plan Defender For Servers Data Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-data-workspace.md
Title: Plan data residency and workspace design for Defender for Servers
-description: Review data residency and workspace design for Defender for Servers
+ Title: Plan Defender for Servers data residency and workspaces
+description: Review data residency and workspace design for Microsoft Defender for Servers.
Last updated 11/06/2022
-# Review data residency and workspace design
+# Plan data residency and workspaces for Defender for Servers
-This article helps you to understand how your data is stored in Defender for Servers, and how Log Analytics workspaces are used. Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
+This article helps you understand how your data is stored in Microsoft Defender for Servers and how Log Analytics workspaces are used in Defender for Servers.
+Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
-## Before you start
+## Before you begin
-This article is the second in the Defender for Servers planning guide series. Before you begin, review [Start planning your deployment](plan-defender-for-servers.md).
+This article is the *second* article in the Defender for Servers planning guide series. Begin by [planning your deployment](plan-defender-for-servers.md).
## Understand data residency
-Data residency refers to the physical or geographic location of your organization's data.
+Data residency refers to the physical or geographic location of your organization's data.
-1. Before you deploy Defender for Servers, review [general Azure data residency considerations](https://azure.microsoft.com/blog/making-your-data-residency-choices-easier-with-azure/).
-1. Review the table below to understand where Defender for Cloud/Defender for Servers stores data.
+Before you deploy Defender for Servers, it's important for you to understand data residency for your organization:
+
+- Review [general Azure data residency considerations](https://azure.microsoft.com/blog/making-your-data-residency-choices-easier-with-azure/).
+- Review the table in the next section to understand where Defender for Cloud stores data.
### Storage locations
-**Data** | **Location**
+Understand where Defender for Cloud stores data and how you can work with your data:
+
+**Data** | **Location**
|
-**Security alerts and recommendations** | These are stored in the Defender for Cloud backend, and accessible via the Azure portal, Azure Resource Graph, and the REST APIs.<br/><br/> Export to a Log Analytics workspace using [continuous export](continuous-export.md).
-**Machine information** | Stored in a Log Analytics workspace.<br/><br/> Either in the Defender for Cloud default workspace, or a custom workspace that you specify. Data is stored in accordance with the workspace location.
+**Security alerts and recommendations** | - Stored in the Defender for Cloud back end and accessible via the Azure portal, Azure Resource Graph, and REST APIs.<br/><br/> - You can export the data to a Log Analytics workspace by using [continuous export](continuous-export.md).
+**Machine information** | - Stored in a Log Analytics workspace.<br/><br/> - You can use either the default Defender for Cloud workspace or a custom workspace. Data is stored in accordance with the workspace location.
-## Understand workspace considerations
+## Workspace considerations
-In Defender for Cloud, you can store server data in the default Defender for Cloud log analytics workspace, or in a custom workspace.
+In Defender for Cloud, you can store server data in the default Log Analytics workspace for your Defender for Cloud deployment or in a custom workspace.
-### Default workspace
+Here's more information:
-- By default, when you onboard for the first time Defender for Cloud creates a new resource group and default workspace in the region of each subscription with Defender for Cloud enabled.
+- By default, when you enable Defender for Cloud for the first time, a new resource group and a default workspace are created in the subscription region for each subscription that has Defender for Cloud enabled.
- When you use only free foundational cloud security posture management (CSPM), Defender for Cloud sets up the default workspace with the *SecurityCenterFree* solution enabled.-- When you turn on Defender for Cloud plans (including Defender for Servers), they're enabled on the default workspace, and the *Security* solution is installed.-- If you have VMs in multiple locations, Defender for Cloud creates multiple workspaces accordingly, to ensure data compliance.-- Default workspace naming is in the format: [subscription-id]-[geo].
+- When you turn on a Defender for Cloud plan (including Defender for Servers), the plan is enabled for the default workspace, and the *Security* solution is enabled.
+- If you have virtual machines in multiple locations, Defender for Cloud creates multiple workspaces accordingly to ensure data compliance.
+- Default workspace names are in the format `[subscription-id]-[geo]`.
-## Default workspace
+## Default workspaces
-Default workspaces are created in the following locations.
+Defender for Cloud default workspaces are created in the following locations:
**Server location** | **Workspace location** |
-United States, Canada, Europe, UK, Korea, India, Japan, China, Australia | The workspace is created in the matching location.
+United States, Canada, Europe, United Kingdom, Korea, India, Japan, China, Australia | The workspace is created in the matching location.
Brazil | United States East Asia, Southeast Asia | Asia ## Custom workspaces
-Your server information can be stored in the default workspace, or you can select to use a custom workspace.
--- You must enable the Defender for Servers plan on custom workspaces.-- The custom workspace must be associated with the Azure subscription on which Defender for Cloud is enabled.-- You need at minimum read permissions for the workspace.-- If the *Security & Audit* solution is installed on a workspace, Defender for Cloud uses the existing solution.-- Learn more about [Log Analytics workspace design strategy and criteria](../azure-monitor/logs/workspace-design.md).
+You can store your server information in the default workspace or you can use a custom workspace. A custom workspace must meet these requirements:
+- You must enable the Defender for Servers plan in the custom workspace.
+- The custom workspace must be associated with the Azure subscription in which Defender for Cloud is enabled.
+- You must have at least read permissions for the workspace.
+- If the *Security & Audit* solution is installed in a workspace, Defender for Cloud uses the existing solution.
+Learn more about [Log Analytics workspace design strategy and criteria](../azure-monitor/logs/workspace-design.md).
## Next steps
-After working through these planning steps, review [Defender for Server access roles](plan-defender-for-servers-roles.md).
+After you work through these planning steps, review [Defender for Server access roles](plan-defender-for-servers-roles.md).
defender-for-cloud Plan Defender For Servers Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-roles.md
Title: Plan roles and permissions for Defender for Servers
-description: Review roles and permissions for Defender for Servers
+ Title: Plan Defender for Servers roles and permissions
+description: Review roles and permissions for Microsoft Defender for Servers.
Last updated 11/06/2022
-# Review Defender for Servers roles and permissions
+# Plan roles and permissions for Defender for Servers
-This article helps you to understand how to control access to Defender for Servers. Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
+This article helps you understand how to control access to your Defender for Servers deployment.
+Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
-## Before you start
+## Before you begin
-This article is the third in the Defender for Servers planning guide series. Before you begin, review the earlier articles:
+This article is the *third* article in the Defender for Servers planning guide. Before you begin, review the earlier articles:
1. [Start planning your deployment](plan-defender-for-servers.md) 1. [Understand where your data is stored and Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md)
+## Determine ownership and access
-## Determine access and ownership
+In complex enterprises, different teams manage different [security functions](/azure/cloud-adoption-framework/organize/cloud-security) in the organization.
-In complex enterprises, different teams manage different [security functions](/azure/cloud-adoption-framework/organize/cloud-security).
+It's critical that you identify ownership for server and endpoint security in your organization. Ownership that's undefined or hidden in organizational silos increases risk for the organization. Security operations (SecOps) teams that need to identify and follow threats across the enterprise are hindered. Deployments might be delayed or they might not be secure.
-Figuring out ownership for server and endpoint security is critical. Ownership that's undefined, or hidden within organizational silos, causes friction that leads to delays, insecure deployments, and difficulties for security operations (SecOps) teams who need to identify and follow threats across the enterprise.
+Security leadership should identify the teams, roles, and individuals that are responsible for making and implementing decisions about server security.
-- Security leadership should identify the teams, roles, and individuals responsible for making and implementing decisions about server security.-- Typically, responsibility is shared between a [central IT team](/azure/cloud-adoption-framework/organize/central-it) and a [cloud infrastructure and endpoint security team](/azure/cloud-adoption-framework/organize/cloud-security-infrastructure-endpoint).-
-Individuals in these teams need Azure access rights to manage and use Defender for Cloud. As part of planning, figure out the right level of access for individuals, based on Defender for Cloud's role-base access control (RBAC) model.
+Responsibility usually is shared between a [central IT team](/azure/cloud-adoption-framework/organize/central-it) and a [cloud infrastructure and endpoint security team](/azure/cloud-adoption-framework/organize/cloud-security-infrastructure-endpoint). Individuals on these teams need Azure access rights to manage and use Defender for Cloud. As part of planning, determine the right level of access for individuals based on the Defender for Cloud role-based access control (RBAC) model.
## Defender for Cloud roles
-In addition to the built-in Owner, Contributor, Reader roles for an Azure subscription/resource group, there are a couple of built-in roles that control Defender for Cloud access.
--- **Security Reader**: Provides viewing rights to Defender for Cloud recommendations, alerts, security policy and states. This role can't make changes to Defender for Cloud settings.-- **Security Admin**: Provide Security Reader rights, and the ability to update security policy, dismiss alerts and recommendations, and apply recommendations.-
-[Get more details](permissions.md#roles-and-allowed-actions) about allowed role actions
+In addition to the built-in Owner, Contributor, and Reader roles for an Azure subscription and resource group, Defender for Cloud has built-in roles that control Defender for Cloud access:
+- **Security Reader**: Provides viewing rights to Defender for Cloud recommendations, alerts, security policy, and states. This role can't make changes to Defender for Cloud settings.
+- **Security Admin**: Provides Security Reader rights and the ability to update security policy, dismiss alerts and recommendations, and apply recommendations.
+Learn more about [allowed role actions](permissions.md#roles-and-allowed-actions).
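If you automate access reviews, you can resolve these built-in roles programmatically. The following Python sketch lists role definitions for a subscription through the Azure RBAC REST API and picks out the two Defender for Cloud roles; the subscription ID and API version are placeholders and assumptions.

```python
# Sketch: list built-in role definitions and pick out the Defender for Cloud security roles.
# Subscription ID and api-version are placeholders/assumptions; results may be paginated.
import requests
from azure.identity import DefaultAzureCredential

sub = "<subscription-id>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.get(
    f"https://management.azure.com/subscriptions/{sub}"
    "/providers/Microsoft.Authorization/roleDefinitions",
    params={"api-version": "2022-04-01"},  # assumed version
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
for role in resp.json()["value"]:
    props = role["properties"]
    if props["roleName"] in ("Security Reader", "Security Admin"):
        print(props["roleName"], role["id"])
```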
## Next steps
-After working through these planning steps, [decide which Defender for Servers plan](plan-defender-for-servers-select-plan.md) is right for your organization.
-
+After you work through these planning steps, [decide which Defender for Servers plan](plan-defender-for-servers-select-plan.md) is right for your organization.
defender-for-cloud Plan Defender For Servers Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-scale.md
Title: Scale a Defender for Servers deployment
-description: Scale protection of Azure, AWS, GCP, and on-premises servers with Defender for Servers
+description: Scale protection of Azure, AWS, GCP, and on-premises servers by using Microsoft Defender for Servers.
Last updated 11/06/2022
# Scale a Defender for Servers deployment
-This article helps you to scale your Microsoft Defender for Servers deployment. Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
+This article helps you scale your Microsoft Defender for Servers deployment.
-## Before you start
+Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
-This article is the fifth in the Defender for Servers planning guide series. Before you begin, review the earlier articles:
+## Before you begin
+
+This article is the *sixth* and final article in the Defender for Servers planning guide series. Before you begin, review the earlier articles:
1. [Start planning your deployment](plan-defender-for-servers.md)
-1. [Understand where your data is stored, and Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md)
+1. [Understand where your data is stored and Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md)
1. [Review access and role requirements](plan-defender-for-servers-roles.md) 1. [Select a Defender for Servers plan](plan-defender-for-servers-select-plan.md)
-1. [Review Azure Arc and agent/extension requirements](plan-defender-for-servers-agents.md)
-
+1. [Review requirements for agents, extensions, and Azure Arc resources](plan-defender-for-servers-agents.md)
-
## Scaling overview
-When you enable Defender for Cloud subscription, the following occurs:
+When you enable a Defender for Cloud subscription, this process occurs:
-- The *microsoft.security* resource provider is automatically registered on the subscription.-- At the same time, the Cloud Security Benchmark initiative that's responsible for creating security recommendations and calculating secure score is assigned to the subscription.-- After enabling Defender for Cloud on the subscription, you turn on Defender for Servers Plan 1 or 2, and enable auto-provisioning.
+1. The *microsoft.security* resource provider is automatically registered on the subscription.
+1. At the same time, the Cloud Security Benchmark initiative that's responsible for creating security recommendations and calculating the security score is assigned to the subscription.
+1. After you enable Defender for Cloud on the subscription, you turn on Defender for Servers Plan 1 or Defender for Servers Plan 2 (see the sketch after this list), and then you enable auto provisioning.
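As a sketch of what turning on a plan looks like outside the portal, the following Python example sets the Defender for Servers pricing tier on a subscription through the `Microsoft.Security/pricings` REST API. The subscription ID, `subPlan` value, and API version are assumptions for illustration; confirm current values before you use anything like this at scale.

```python
# Sketch: enable Defender for Servers (Plan 1) on a subscription via the pricings API.
# Subscription ID, subPlan value, and api-version are placeholders/assumptions.
import requests
from azure.identity import DefaultAzureCredential

sub = "<subscription-id>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.put(
    f"https://management.azure.com/subscriptions/{sub}"
    "/providers/Microsoft.Security/pricings/VirtualMachines",
    params={"api-version": "2023-01-01"},  # assumed version
    headers={"Authorization": f"Bearer {token}"},
    json={"properties": {"pricingTier": "Standard", "subPlan": "P1"}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["properties"]["pricingTier"])
```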
+In the next sections, review considerations for specific steps as you scale your deployment:
-There are some considerations around these steps as you scale your deployment.
+- Scale a Cloud Security Benchmark deployment
+- Scale a Defender for Servers plan
+- Scale auto provisioning
-## Scaling Cloud Security Benchmark deployment
+## Scale a Cloud Security Benchmark deployment
-- In a scaled deployment you might want the Cloud Security Benchmark (formerly the Azure Security Benchmark) to be automatically assigned.
- - You can do this manually assigning the policy initiative to your (root) management group, instead of each subscription individually.
- - You can find the **Azure Security Benchmark** policy definition in [git hub](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policySetDefinitions/Security%20Center/AzureSecurityCenter.json).
- - The assignment is inherited for every existing and future subscription underneath the management group.
- - [Learn more](onboard-management-group.md) about using a built-in policy definition to register a resource provider.
+In a scaled deployment, you might want the Cloud Security Benchmark (formerly the Azure Security Benchmark) to be automatically assigned.
+The assignment is inherited for every existing and future subscription in the management group. To set up your deployment to automatically apply the benchmark, assign the policy initiative to your management group (root) instead of to each subscription.
-## Scaling Defender for Server plans
+You can get the *Azure Security Benchmark* policy definition on [GitHub](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policySetDefinitions/Security%20Center/AzureSecurityCenter.json).
-You can use a policy definition to enable Defender for Servers at scale. Note that:
+[Learn more](onboard-management-group.md) about using a built-in policy definition to register a resource provider.
-- You can find the built-in **Configure Defender for Servers to be enabled** policy definition in the Azure Policy > Policy Definitions, in the Azure portal.-- Alternatively, there's a [custom policy in GitHub](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Policy/Enable%20Defender%20for%20Servers%20plans) that allows you to enable Defender for Servers and select the plan at the same time.-- You can only enable one Defender for Servers plan on each subscription, and not both at the same time.-- If you want to use both plans in your environment, divide your subscriptions into two management groups. On each management group you assign a policy to enable the respective plan on each underlying subscription.
+## Scale a Defender for Servers plan
+You can use a policy definition to enable Defender for Servers at scale:
-## Scaling auto-provisioning
+- To get the built-in *Configure Defender for Servers to be enabled* policy definition, in the Azure portal for your deployment, go to **Azure Policy** > **Policy Definitions**.
+- Alternatively, you can use a [custom policy](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Policy/Enable%20Defender%20for%20Servers%20plans) to enable Defender for Servers and select the plan at the same time.
+- You can enable only one Defender for Servers plan on each subscription. You can't enable both Defender for Servers Plan 1 and Plan 2 at the same time.
+- If you want to use both plans in your environment, divide your subscriptions into two management groups. On each management group, assign a policy to enable the respective plan on each underlying subscription.
-Auto-provisioning can be configured by assigning the built-in policy definitions to an Azure management group, so that underlying subscriptions are covered. The following table summarizes the definitions.
+## Scale auto provisioning
+You can set up auto provisioning by assigning the built-in policy definitions to an Azure management group to cover underlying subscriptions. The following table summarizes the definitions:
Agent | Policy |
-Log Analytics agent (default workspace) | **Enable Security Center's autoprovisioning of the Log Analytics agent on your subscriptions with default workspaces**
-Log Analytics agent (custom workspace) | **Enable Security Center's autoprovisioning of the Log Analytics agent on your subscriptions with custom workspaces**
-Azure Monitor agent (default data collection rule) | **[Preview]: Configure Arc machines to create the default Microsoft Defender for Cloud pipeline using Azure Monitor Agent**<br/><br/> **[Preview]: Configure virtual machines to create the default Microsoft Defender for Cloud pipeline using Azure Monitor Agent**
-Azure Monitor agent (custom data collection rule) | **[Preview]: Configure Arc machines to create the Microsoft Defender for Cloud user-defined pipeline using Azure Monitor Agent**<br/><br/> **[Preview]: Configure machines to create the Microsoft Defender for Cloud user-defined pipeline using Azure Monitor Agent**
-Qualys vulnerability assessment | **Configure machines to receive a vulnerability assessment provider**
+Log Analytics agent (default workspace) | *Enable Security Center's autoprovisioning of the Log Analytics agent on your subscriptions with default workspaces*
+Log Analytics agent (custom workspace) | *Enable Security Center's autoprovisioning of the Log Analytics agent on your subscriptions with custom workspaces*
+Azure Monitor agent (default data collection rule) | *[Preview]: Configure Arc machines to create the default Microsoft Defender for Cloud pipeline using Azure Monitor Agent*<br/><br/> *[Preview]: Configure virtual machines to create the default Microsoft Defender for Cloud pipeline using Azure Monitor Agent*
+Azure Monitor agent (custom data collection rule) | *[Preview]: Configure Arc machines to create the Microsoft Defender for Cloud user-defined pipeline using Azure Monitor Agent*<br/><br/> *[Preview]: Configure machines to create the Microsoft Defender for Cloud user-defined pipeline using Azure Monitor Agent*
+Qualys vulnerability assessment | *Configure machines to receive a vulnerability assessment provider*
Guest configuration extension | [Overview and prerequisites](../virtual-machines/extensions/guest-configuration.md)
-Policy definitions can be found in the Azure portal > **Policy** > **Definitions**.
--
+To review policy definitions, in the Azure portal, go to **Policy** > **Definitions**.
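Assigning one of these definitions at management group scope looks roughly like the following Python sketch, which calls the Azure Policy assignments REST API. The management group ID, assignment name, policy definition ID, and API version are placeholders and assumptions; look up the real definition ID under **Policy** > **Definitions** first.

```python
# Sketch: assign a built-in auto provisioning policy definition at management group scope.
# Management group, assignment name, definition ID, and api-version are placeholders/assumptions.
# Note: DeployIfNotExists policies also need an assignment identity and location; omitted here.
import requests
from azure.identity import DefaultAzureCredential

mg = "<management-group-id>"
scope = f"/providers/Microsoft.Management/managementGroups/{mg}"
definition_id = "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>"
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.put(
    f"https://management.azure.com{scope}"
    "/providers/Microsoft.Authorization/policyAssignments/enable-la-autoprovisioning",
    params={"api-version": "2022-06-01"},  # assumed version
    headers={"Authorization": f"Bearer {token}"},
    json={
        "properties": {
            "displayName": "Auto provision the Log Analytics agent (default workspace)",
            "policyDefinitionId": definition_id,
        }
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["id"])
```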
## Next steps
-After working through planning steps, you can start deployment:
+Begin a deployment for your scenario:
-- [Enable Defender for Servers](enable-enhanced-security.md) plans-- [Connect on-premises machines](quickstart-onboard-aws.md) to Azure.-- [Connect AWS accounts](quickstart-onboard-aws.md) to Defender for Cloud.-- [Connect GCP projects](quickstart-onboard-gcp.md) to Defender for Cloud.
+- [Enable a Defender for Servers plan](enable-enhanced-security.md)
+- [Connect an on-premises machine to Azure](quickstart-onboard-machines.md)
+- [Connect an AWS account to Defender for Cloud](quickstart-onboard-aws.md)
+- [Connect a GCP project to Defender for Cloud](quickstart-onboard-gcp.md)
defender-for-cloud Plan Defender For Servers Select Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-select-plan.md
Title: Select a Microsoft Defender for Servers plan with Microsoft Defender for Cloud
-description: Select a Defender for Servers plan to protect Azure, AWS, GCP servers, and on-premises machines.
+ Title: Select a Defender for Servers plan in Microsoft Defender for Cloud
+description: Select a Microsoft Defender for Servers plan in Microsoft Defender for Cloud to protect Azure, AWS, and GCP servers and on-premises machines.
Last updated 11/06/2022
# Select a Defender for Servers plan
-This article helps you select which Microsoft Defender for Servers plan is right for your organization. Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
+This article helps you select the Microsoft Defender for Servers plan that's right for your organization.
+Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
-## Before you start
+## Before you begin
-This article is the fourth in the Defender for Servers planning guide series. Before you begin, review the earlier articles:
+This article is the *fourth* article in the Defender for Servers planning guide. Before you begin, review the earlier articles:
1. [Start planning your deployment](plan-defender-for-servers.md)
-1. [Understand where your data is stored, and Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md)
+1. [Understand where your data is stored and Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md)
1. [Review access and role requirements](plan-defender-for-servers-roles.md) - ## Review plans
-Defender for Servers provides two plans you can choose from.
+You can choose from two Defender for Servers paid plans:
+
+- **Defender for Servers Plan 1** is entry-level and must be enabled at the subscription level. Features include:
+
+ - [Foundational cloud security posture management (CSPM)](concept-cloud-security-posture-management.md#defender-cspm-plan-options), which is provided free by Defender for Cloud.
+
+ - For Azure virtual machines and Amazon Web Services (AWS) and Google Cloud Platform (GCP) machines, you don't need a Defender for Cloud plan enabled to use foundational CSPM features.
+ - For on-premises servers, to receive configuration recommendations, machines must be onboarded to Azure with Azure Arc, and Defender for Servers must be enabled.
+
+ - Endpoint detection and response (EDR) features that are provided by [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/defender-endpoint-plan-1-2).
-- **Defender for Servers Plan 1** is entry-level, and must be enabled at the subscription level. Features include:
- - [Foundational cloud security posture management (CSPM)](concept-cloud-security-posture-management.md#defender-cspm-plan-options), which is provided for free by Defender for Cloud.
- - For Azure VMs, and AWS/GCP machines, you don't need a Defender for Cloud plan enabled to use foundational CSPM features.
- - For on-premises server, to receive configuration recommendations machines must be onboarded to Azure with Azure Arc, and Defender for Servers must be enabled.
- - Endpoint detection and response (EDR) features that are provided by [Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/defender-endpoint-plan-1-2).
-- **Defender for Servers Plan 2** provides all features. It must be enabled at the subscription level and at the workspace level to get full feature coverage. Features include:
- - All the functionality provided by Defender for Servers Plan 1.
- - Additional extended detection and response (XDR) capabilities.
+- **Defender for Servers Plan 2** provides all features. The plan must be enabled at the subscription level and at the workspace level to get full feature coverage. Features include:
+
+ - All the functionality that's provided by Defender for Servers Plan 1.
+ - More extended detection and response (XDR) capabilities.
## Plan features | Feature | Details | Plan 1 | Plan 2 | |:|:|::|::|
-| **Defender for Endpoint integration** | Defender for Servers integrates with Defender for Endpoint and protects servers with all the features, including:<br/><br/>- [Attack surface reduction](/microsoft-365/security/defender-endpoint/overview-attack-surface-reduction) to lower the risk of attack.<br/><br/> - [Next-generation protection](/microsoft-365/security/defender-endpoint/next-generation-protection), including real-time scanning/protection and [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/next-generation-protection).<br/><br/> - EDR including [threat analytics](/microsoft-365/security/defender-endpoint/threat-analytics), [automated investigation and response](/microsoft-365/security/defender-endpoint/automated-investigations), [advanced hunting](/microsoft-365/security/defender-endpoint/advanced-hunting-overview), and [Microsoft Defender Experts](/microsoft-365/security/defender-endpoint/endpoint-attack-notifications).<br/><br/> - Vulnerability assessment/mitigation, provided by Defender for Endpoint's integration with [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **Licensing** | Defender for Servers covers licensing for Defender for Endpoint, and is charged per hour instead of per seat, lowering costs by protecting virtual machines only when they're in use.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **Defender for Endpoint provisioning** | Defender for Servers automatically provisions the Defender for Endpoint sensor on every supported machine that's connected to Defender for Cloud.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **Unified view** | Defender for Endpoint alerts display in the Defender for Cloud portal. You can drill down into the Defender for Endpoint portal for more information.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **Threat detection for OS-level (Agent-based)** | Defender for Servers and Microsoft Defender for Endpoint (MDE) detect threats at the OS level, including VM behavioral detections and **Fileless attack detection**, which generates detailed security alerts that accelerate alert triage, correlation, and downstream response time.<br>[Learn more](alerts-reference.md#alerts-windows) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **Threat detection for network-level (Agentless)** | Defender for Servers detects threats directed at the control plane on the network, including network-based detections for Azure virtual machines. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Defender for Endpoint integration** | Defender for Servers integrates with Defender for Endpoint and protects servers with all the features, including:<br/><br/>- [Attack surface reduction](/microsoft-365/security/defender-endpoint/overview-attack-surface-reduction) to lower the risk of attack.<br/><br/> - [Next-generation protection](/microsoft-365/security/defender-endpoint/next-generation-protection), including real-time scanning and protection and [Microsoft Defender Antivirus](/microsoft-365/security/defender-endpoint/next-generation-protection).<br/><br/> - EDR, including [threat analytics](/microsoft-365/security/defender-endpoint/threat-analytics), [automated investigation and response](/microsoft-365/security/defender-endpoint/automated-investigations), [advanced hunting](/microsoft-365/security/defender-endpoint/advanced-hunting-overview), and [Microsoft Defender Experts](/microsoft-365/security/defender-endpoint/endpoint-attack-notifications).<br/><br/> - Vulnerability assessment and mitigation provided by Defender for Endpoint integration with [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities). | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Licensing** | Defender for Servers covers licensing for Defender for Endpoint. Licensing is charged per hour instead of per seat, lowering costs by protecting virtual machines only when they're in use.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Defender for Endpoint provisioning** | Defender for Servers automatically provisions the Defender for Endpoint sensor on every supported machine that's connected to Defender for Cloud.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Unified view** | Defender for Endpoint alerts appear in the Defender for Cloud portal. You can get detailed information in the Defender for Endpoint portal.| :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Threat detection for OS-level (agent-based)** | Defender for Servers and Defender for Endpoint detect threats at the OS level, including virtual machine behavioral detections and *fileless attack detection*, which generates detailed security alerts that accelerate alert triage, correlation, and downstream response time.<br>[Learn more](alerts-reference.md#alerts-windows) | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 1."::: | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Threat detection for network-level (agentless)** | Defender for Servers detects threats that are directed at the control plane on the network, including network-based detections for Azure virtual machines. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
| **Microsoft Defender Vulnerability Management Add-on** | See a deeper analysis of the security posture of your protected servers, including risks related to browser extensions, network shares, and digital certificates. [Learn more](deploy-vulnerability-assessment-defender-vulnerability-management.md). | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **[Qualys vulnerability assessment](deploy-vulnerability-assessment-vm.md)** | As an alternative to Microsoft Defender Vulnerability Management, Defender for Cloud integrates with the Qualys scanner to identify vulnerabilities. You don't need a Qualys license or account. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2.":::|
-**[Adaptive application controls](adaptive-application-controls.md)** | Adaptive application controls define allowlists of known safe applications for machines. Defender for Cloud must be enabled on a subscription to use this feature. | Not supported in Plan 1 |:::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **Free data ingestion (500 MB) in workspaces** | Free data ingestion is available for [specific data types](faq-defender-for-servers.yml#what-data-types-are-included-in-the-daily-allowance-), calculated per node, per reported workspace, per day, and available for every workspace that has a *Security* or *AntiMalware* solution installed. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **[Just-in-time VM access](just-in-time-access-overview.md)** | Just-in-time VM access locks down machine ports to reduce the attack surface. Defender for Cloud must be enabled on a subscription to use this feature. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **[Adaptive network hardening](adaptive-network-hardening.md)** | Network hardening filters traffic to and from resources with network security groups (NSG) to improve your network security posture. Further improve security by hardening the NSG rules based on actual traffic patterns. Defender for Cloud must be enabled on a subscription to use this feature. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **[File Integrity Monitoring](file-integrity-monitoring-overview.md)** | File integrity monitoring examines files and registries for changes that might indicate an attack. A comparison method is used to determine whether suspicious modifications have been made to files. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-| **[Docker host hardening](harden-docker-hosts.md)** | Assesses containers hosted on Linux machines running Docker containers, and compares them with the Center for Internet Security (CIS) Docker Benchmark. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **[Qualys vulnerability assessment](deploy-vulnerability-assessment-vm.md)** | As an alternative to Defender Vulnerability Management, Defender for Cloud integrates with the Qualys scanner to identify vulnerabilities. You don't need a Qualys license or account. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2.":::|
+| **[Adaptive application controls](adaptive-application-controls.md)** | Adaptive application controls define allowlists of known safe applications for machines. To use this feature, Defender for Cloud must be enabled on the subscription. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **Free data ingestion (500 MB) in workspaces** | Free data ingestion is available for [specific data types](faq-defender-for-servers.yml#what-data-types-are-included-in-the-daily-allowance-). Data ingestion is calculated per node, per reported workspace, and per day. It's available for every workspace that has a *Security* or *AntiMalware* solution installed. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **[Just-in-time virtual machine access](just-in-time-access-overview.md)** | Just-in-time virtual machine access locks down machine ports to reduce the attack surface. To use this feature, Defender for Cloud must be enabled on the subscription. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **[Adaptive network hardening](adaptive-network-hardening.md)** | Network hardening filters traffic to and from resources by using network security groups (NSGs) to improve your network security posture. Further improve security by hardening the NSG rules based on actual traffic patterns. To use this feature, Defender for Cloud must be enabled on the subscription. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **[File integrity monitoring](file-integrity-monitoring-overview.md)** | File integrity monitoring examines files and registries for changes that might indicate an attack. A comparison method is used to determine whether suspicious modifications have been made to files. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
+| **[Docker host hardening](harden-docker-hosts.md)** | Assesses containers hosted on Linux machines running Docker containers, and then compares them with the Center for Internet Security (CIS) Docker Benchmark. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
| **[Network map](protect-network-resources.md)** | Provides a geographical view of recommendations for hardening your network resources. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
-[Agentless scanning](concept-agentless-data-collection.md) | Scans Azure VMs, using cloud APIs to collect data | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2.":::
+| **[Agentless scanning](concept-agentless-data-collection.md)** | Scans Azure virtual machines by using cloud APIs to collect data. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png" alt-text="Icon that shows it's supported in Plan 2."::: |
## Select a vulnerability assessment solution
-There are a couple of vulnerability assessment options available in Defender for Servers.
+A couple of vulnerability assessment options are available in Defender for Servers:
- [Microsoft Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/defender-vulnerability-management-capabilities): Integrated with Defender for Endpoint.
- - Available in both Defender for Servers Plan 1 and Plan 2.
- - It's enabled by default on machines onboarded to Defender for Endpoint, if [Defender for Endpoint has Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/get-defender-vulnerability-management) enabled.
- - Has the same [Windows](/microsoft-365/security/defender-endpoint/configure-server-endpoints#prerequisites), [Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#prerequisites), and [network](/microsoft-365/security/defender-endpoint/configure-proxy-internet#enable-access-to-microsoft-defender-for-endpoint-service-urls-in-the-proxy-server) prerequisites as Defender for Endpoint.
- - No additional software installation is needed.
-- [Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md): Provided by Defender for Cloud's Qualys integration.
- - Available only in Defender for Servers Plan 2.
- - A great fit if you're using a third-party EDR solution, or a Fanotify-based solution, which might mean you can't deploy Defender for Endpoint for vulnerability assessment.
- - The integrated Defender for Cloud Qualys solution doesn't support a proxy configuration, and can't connect to an existing Qualys deployment. Vulnerability findings are only available in Defender for Cloud.
+ - Available in Defender for Servers Plan 1 and Defender for Servers Plan 2.
+ - Defender Vulnerability Management is enabled by default on machines that are onboarded to Defender for Endpoint if [Defender for Endpoint has Defender Vulnerability Management](/microsoft-365/security/defender-vulnerability-management/get-defender-vulnerability-management) enabled.
+ - Has the same [Windows](/microsoft-365/security/defender-endpoint/configure-server-endpoints#prerequisites), [Linux](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint-linux#prerequisites), and [network](/microsoft-365/security/defender-endpoint/configure-proxy-internet#enable-access-to-microsoft-defender-for-endpoint-service-urls-in-the-proxy-server) prerequisites as Defender for Endpoint.
+ - No extra software is required.
-## Next steps
+- [Qualys vulnerability scanner](deploy-vulnerability-assessment-vm.md): Provided by the Defender for Cloud Qualys integration.
-After working through these planning steps, [review Azure Arc and agent/extension requirements](plan-defender-for-servers-agents.md).
+ - Available only in Defender for Servers Plan 2.
+ - A great fit if you're using a third-party EDR solution or a Fanotify-based solution. In these scenarios, you might not be able to deploy Defender for Endpoint for vulnerability assessment.
+ - The integrated Defender for Cloud Qualys solution doesn't support a proxy configuration, and it can't connect to an existing Qualys deployment. Vulnerability findings are available only in Defender for Cloud.
+
+## Next steps
+After you work through these planning steps, [review Azure Arc and agent and extension requirements](plan-defender-for-servers-agents.md).
defender-for-cloud Plan Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers.md
Title: Plan a Defender for Servers deployment to protect on-premises and multicloud servers
-description: Design a solution to protect on-premises and multicloud servers with Defender for Servers
+description: Design a solution to protect on-premises and multicloud servers with Microsoft Defender for Servers.
Last updated 11/06/2022
+# Plan your Defender for Servers deployment
-# Plan Defender for Servers deployment
+Microsoft Defender for Servers extends protection to your Windows and Linux machines that run in Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), and on-premises. Defender for Servers integrates with Microsoft Defender for Endpoint to provide endpoint detection and response (EDR) and other threat protection features.
-Defender for Servers extends protection to your Windows and Linux machines running in Azure, AWS, GCP, and on-premises. Defender for Servers integrates with Microsoft Defender for Endpoint to provide endpoint detection and response (EDR), and also provides a host of additional threat protection features.
-
-This guide helps you to design and plan an effective Microsoft Defender for Servers deployment. Defender for Servers is one of the paid plans provided by [Microsoft Defender for Cloud](defender-for-cloud-introduction.md).
+This guide helps you design and plan an effective Defender for Servers deployment. [Microsoft Defender for Cloud](defender-for-cloud-introduction.md) offers two paid plans for Defender for Servers.
## About this guide
-This planning guide is aimed at cloud solution and infrastructure architects, security architects and analysts, and anyone else involved in protecting cloud/hybrid servers and workloads. The guide aims to answer these questions:
+The intended audience of this guide is cloud solution and infrastructure architects, security architects and analysts, and anyone who's involved in protecting cloud and hybrid servers and workloads.
+
+The guide answers these questions:
-- What does Defender for Servers do, and how is it deployed?-- Where will my data be stored, and what Log Analytics workspaces do I need?-- Who needs access?-- Which Defender for Servers plan should I choose, and which vulnerability assessment solution should I use?-- When do I need Azure Arc, and which agents/extensions must be deployed?
+- What does Defender for Servers do and how is it deployed?
+- Where will my data be stored and what Log Analytics workspaces do I need?
+- Who needs access to my Defender for Servers resources?
+- Which Defender for Servers plan should I choose and which vulnerability assessment solution should I use?
+- When do I need to use Azure Arc and which agents and extensions are required?
- How do I scale a deployment? ## Before you begin -- Review pricing details for [Defender for Servers](https://azure.microsoft.com/pricing/details/defender-for-cloud/).-- If you're deploying for AWS/GCP machines, we suggest reviewing the [multicloud planning guide](plan-multicloud-security-get-started.md) before you start.
+Before you review the series of articles in the Defender for Servers planning guide:
+
+- Review Defender for Servers [pricing details](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+- If you're deploying for AWS machines or GCP projects, review the [multicloud planning guide](plan-multicloud-security-get-started.md).
## Deployment overview
-Here's a quick overview of the deployment process.
+The following diagram shows an overview of the Defender for Servers deployment process:
- Learn more about [foundational cloud security posture management (CSPM)](concept-cloud-security-posture-management.md#defender-cspm-plan-options). - Learn more about [Azure Arc](../azure-arc/index.yml) onboarding. - ## Next steps
-After kicking off the planning process, review the [second article in this planning series](plan-defender-for-servers-data-workspace.md) to understand how your data is stored, and Log Analytics workspace requirements.
+You've begun the Defender for Servers planning process. Review the next article in the planning guide to [understand how your data is stored and the Log Analytics workspace requirements](plan-defender-for-servers-data-workspace.md).
defender-for-cloud Quickstart Automation Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-automation-alert.md
Title: Create a security automation for specific security alerts by using an Azu
description: Learn how to create a Microsoft Defender for Cloud automation to trigger a logic app, which will be triggered by specific Defender for Cloud alerts by using an Azure Resource Manager template (ARM template) or Bicep. Previously updated : 08/31/2022 Last updated : 01/09/2023 + # Quickstart: Create an automatic response to a specific security alert using an ARM template or Bicep
-This quickstart describes how to use an Azure Resource Manager template (ARM template) or a Bicep file to create a workflow automation that triggers a logic app when specific security alerts are received by Microsoft Defender for Cloud.
+In this quickstart, you'll learn how to use an Azure Resource Manager template (ARM template) or a Bicep file to create a workflow automation. The workflow automation will trigger a logic app when specific security alerts are received by Microsoft Defender for Cloud.
## Prerequisites
For other Defender for Cloud quickstart templates, see these [community contribu
Use the Azure portal to check that the workflow automation has been deployed.
-1. From the [Azure portal](https://portal.azure.com), open **Microsoft Defender for Cloud**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Microsoft Defender for Cloud**.
+
+1. Select **filter**.
-1. From the top menu bar, select the filter icon, and select the specific subscription on which you deployed the new workflow automation.
+1. Select the specific subscription on which you deployed the new workflow automation.
1. From Microsoft Defender for Cloud's menu, open **workflow automation** and check for your new automation. :::image type="content" source="./media/quickstart-automation-alert/validating-template-run.png" alt-text="List of configured automations." lightbox="./media/quickstart-automation-alert/validating-template-run.png":::
Use the Azure portal to check that the workflow automation has been deployed.
When no longer needed, delete the workflow automation using the Azure portal.
-1. From the [Azure portal](https://portal.azure.com), open **Microsoft Defender for Cloud**.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Search for and select **Microsoft Defender for Cloud**.
+
+1. Select **filter**.
-1. From the top menu bar, select the filter icon, and select the specific subscription on which you deployed the new workflow automation.
+1. Select the specific subscription on which you deployed the new workflow automation.
1. From Microsoft Defender for Cloud's menu, open **workflow automation** and find the automation to be deleted. :::image type="content" source="./media/quickstart-automation-alert/deleting-workflow-automation.png" alt-text="Steps for removing a workflow automation." lightbox="./media/quickstart-automation-alert/deleting-workflow-automation.png":::
For other Defender for Cloud quickstart templates, see these [community contribu
You're required to enter the following parameters:
- - **automationName**: Replace **\<automation-name\>** with the name of the automation. It has a minimum length of 3 characters and a maximum length of 24 characters.
- - **logicAppName**: Replace **\<logic-name\>** with the name of the logic app. It has a minimum length of 3 characters.
- - **logicAppResourceGroupName**: Replace **\<group-name\>** with the name of the resource group in which the resources are located. It has a minimum length of 3 characters.
+ - **automationName**: Replace **\<automation-name\>** with the name of the automation. It has a minimum length of three characters and a maximum length of 24 characters.
+ - **logicAppName**: Replace **\<logic-name\>** with the name of the logic app. It has a minimum length of three characters.
+ - **logicAppResourceGroupName**: Replace **\<group-name\>** with the name of the resource group in which the resources are located. It has a minimum length of three characters.
- **alertSettings**: Replace **\{alert-settings\}** with the alert settings object used for deploying the automation. > [!NOTE]
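
If you prefer to deploy from the command line, the following is a minimal, hypothetical sketch of passing these parameters to the template with Azure PowerShell. The resource group, template file name, and values shown are placeholders, and `$alertSettings` stands in for the alert settings object described above.

```powershell
# Hypothetical sketch: deploy the quickstart ARM template with the required parameters.
# Resource group, template file name, and parameter values are placeholders.
$alertSettings = @{ }   # supply the alert settings object used for the automation

New-AzResourceGroupDeployment `
  -ResourceGroupName "myResourceGroup" `
  -TemplateFile ".\azuredeploy.json" `
  -TemplateParameterObject @{
      automationName            = "myWorkflowAutomation"     # 3-24 characters
      logicAppName              = "myLogicApp"                # at least 3 characters
      logicAppResourceGroupName = "myLogicAppResourceGroup"   # at least 3 characters
      alertSettings             = $alertSettings
  }
```
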
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud- Previously updated : 01/04/2023+ Last updated : 01/10/2023 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
> [!TIP] > If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+## January 2023
+
+Updates in January include:
+
+- [New version of the recommendation to find missing system updates (Preview)](#new-version-of-the-recommendation-to-find-missing-system-updates-preview)
+
+### New version of the recommendation to find missing system updates (Preview)
+
+You no longer need an agent on your Azure VMs and Azure Arc machines to make sure the machines have all of the latest security or critical system updates.
+
+The new system updates recommendation, "System updates should be installed on your machines (powered by Update management center)" in the "Apply system updates" control, is based on the [Update management center (preview)](../update-center/overview.md) and relies on a native agent embedded in every Azure VM and Azure Arc machine instead of an installed agent. The Quick Fix in the new recommendation leads you to a one-time installation of the missing updates in the Update management center portal.
+
+To use the new recommendation, you need to:
+
+- Connect your non-Azure machines to Azure Arc.
+- Turn on the [periodic assessment property](../update-center/assessment-options.md#periodic-assessment). To do this, you can use the Quick Fix in a new recommendation, "Machines should be configured to periodically check for missing system updates". A scripted example follows at the end of this section.
+
+The existing "System updates should be installed on your machines" recommendation, which relies on the Log Analytics agent, is still available under the same control.
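
As a rough sketch of the periodic assessment prerequisite, the following command sets the `assessmentMode` property on a single Azure Windows VM by using the Azure CLI from PowerShell. The resource group and VM names are placeholders, and Linux VMs and Azure Arc machines use a different property path, so treat this as an illustration rather than the documented procedure.

```powershell
# Hypothetical sketch: turn on periodic assessment for one Azure Windows VM.
# Resource group and VM names are placeholders.
az vm update `
  --resource-group "myResourceGroup" `
  --name "myWindowsVM" `
  --set osProfile.windowsConfiguration.patchSettings.assessmentMode=AutomaticByPlatform
```
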
+ ## December 2022 Updates in December include:
defender-for-cloud Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Microsoft Defender for Cloud description: Sample Azure Resource Graph queries for Microsoft Defender for Cloud showing use of resource types and tables to access Microsoft Defender for Cloud related resources and properties. Previously updated : 07/07/2022 Last updated : 01/09/2023 + # Azure Resource Graph sample queries for Microsoft Defender for Cloud This page is a collection of [Azure Resource Graph](../governance/resource-graph/overview.md) sample
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Improving your security posture with recommendations in Microsoft Defender for Cloud description: This document walks you through how to identify security recommendations that will help you improve your security posture. Previously updated : 06/29/2022 Last updated : 01/10/2023 + # Find recommendations that can improve your security posture To improve your [secure score](secure-score-security-controls.md), you have to implement the security recommendations for your environment. From the list of recommendations, you can use filters to find the recommendations that have the most impact on your score, or the ones that you were assigned to implement.
To improve your [secure score](secure-score-security-controls.md), you have to i
To get to the list of recommendations: 1. Sign in to the [Azure portal](https://portal.azure.com).+ 1. Either: - In the Defender for Cloud overview, select **Security posture** and then select **View recommendations** for the environment that you want to improve. - Go to **Recommendations** in the Defender for Cloud menu.
You can learn more by watching this video from the Defender for Cloud in the Fie
## Finding recommendations with high impact on your secure score<a name="monitor-recommendations"></a>
-Your [secure score is calculated](secure-score-security-controls.md?branch=main#how-your-secure-score-is-calculated) based on the security recommendations that you have implemented. In order to increase your score and improve your security posture, you have to find recommendations with unhealthy resources and [remediate those recommendations](implement-security-recommendations.md).
+Your [secure score is calculated](secure-score-security-controls.md?branch=main#how-your-secure-score-is-calculated) based on the security recommendations that you've implemented. In order to increase your score and improve your security posture, you have to find recommendations with unhealthy resources and [remediate those recommendations](implement-security-recommendations.md).
The list of recommendations shows the **Potential score increase** that you can achieve when you remediate all of the recommendations in the security control.
To find recommendations that can improve your secure score:
- You can also use the search box and filters above the list of recommendations to find specific recommendations. 1. Open a security control to see the recommendations that have unhealthy resources.
-When you [remediate](implement-security-recommendations.md) all of the recommendations in the security control, your secure score increases by the percentage points listed for the control.
+When you [remediate](implement-security-recommendations.md) all of the recommendations in the security control, your secure score increases by the percentage point listed for the control.
## Manage the owner and ETA of recommendations that are assigned to you
-[Security teams can assign a recommendation](governance-rules.md) to a specific person and assign a due date to drive your organization towards increased security. If you have recommendations assigned to you, you are accountable to remediate the resources affected by the recommendations to help your organization be compliant with the security policy.
+[Security teams can assign a recommendation](governance-rules.md) to a specific person and assign a due date to drive your organization towards increased security. If you have recommendations assigned to you, you're accountable to remediate the resources affected by the recommendations to help your organization be compliant with the security policy.
-Recommendations are listed as **On time** until their due date is passed, when they are changed to **Overdue**. Before the recommendation is overdue, the recommendation does not impact the secure score. The security team can also apply a grace period during which overdue recommendations continue to not impact the secure score.
+Recommendations are listed as **On time** until their due date is passed, when they're changed to **Overdue**. Before the recommendation is overdue, the recommendation doesn't impact the secure score. The security team can also apply a grace period during which overdue recommendations continue to not impact the secure score.
To help you plan your work and report on progress, you can set an ETA for the specific resources to show when you plan to have the recommendation resolved by for those resources. You can also change the owner of the recommendation for specific resources so that the person responsible for remediation is assigned to the resource.
To change the owner of resources and set the ETA for remediation of recommendati
1. In the filters for list of recommendations, select **Show my items only**. - The status column indicates the recommendations that are on time, overdue, or completed.
- - The insights column indicates the recommendations that are in a grace period, so they currently do not impact your secure score until they become overdue.
+ - The insights column indicates the recommendations that are in a grace period, so they currently don't impact your secure score until they become overdue.
1. Select an on time or overdue recommendation. 1. For the resources that are assigned to you, set the owner of the resource: 1. Select the resources that are owned by another person, and select **Change owner and set ETA**. 1. Select **Change owner**, enter the email address of the owner of the resource, and select **Save**.
- The owner of the resource gets a weekly email listing the recommendations that they are assigned to.
+ The owner of the resource gets a weekly email listing the recommendations that they're assigned to.
1. For resources that you own, set an ETA for remediation: 1. Select resources that you plan to remediate by the same date, and select **Change owner and set ETA**. 1. Select **Change ETA** and set the date by which you plan to remediate the recommendation for those resources. 1. Enter a justification for the remediation by that date, and select **Save**.
-The due date for the recommendation does not change, but the security team can see that you plan to update the resources by the specified ETA date.
+The due date for the recommendation doesn't change, but the security team can see that you plan to update the resources by the specified ETA date.
## Review recommendation data in Azure Resource Graph Explorer (ARG)
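
As a hedged illustration of the kind of query you can run against recommendation data, the following sketch uses the Az.ResourceGraph PowerShell module to list security assessments; the table and column names reflect common Resource Graph usage and may need adjusting for your environment.

```powershell
# Hypothetical sketch: list security recommendation (assessment) records from
# Azure Resource Graph by using the Az.ResourceGraph module.
Search-AzGraph -Query @"
securityresources
| where type == 'microsoft.security/assessments'
| project name,
          displayName = tostring(properties.displayName),
          status      = tostring(properties.status.code)
| limit 20
"@
```
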
defender-for-iot Agent Based Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/agent-based-security-alerts.md
Title: Legacy agent based security alerts description: Learn about the legacy version of Defender for IoT's security alerts, and recommended remediation using Defender for IoT device's features, and service. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Legacy Defender for IoT devices security alerts
defender-for-iot Concept Agent Based Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-agent-based-security-alerts.md
Title: Micro agent security alerts description: Learn about security alerts and recommended remediation using Defender for IoT device's features, and services. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Micro agent security alerts
defender-for-iot Concept Customizable Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-customizable-security-alerts.md
Title: Custom security alerts for IoT Hub description: Learn about customizable security alerts and recommended remediation using Defender for IoT Hub's features and service. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Defender for IoT Hub custom security alerts
defender-for-iot Concept Micro Agent Linux Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-micro-agent-linux-dependencies.md
Title: Micro agent Linux dependencies description: This article describes the different Linux OS dependencies for the Defender for IoT micro agent. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Micro agent Linux dependencies
defender-for-iot Concept Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-recommendations.md
Title: Security recommendations for IoT Hub description: Learn about the concept of security recommendations and how they're used in the Defender for IoT Hub. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Security recommendations for IoT Hub
defender-for-iot Concept Rtos Security Alerts Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-rtos-security-alerts-recommendations.md
Title: Defender-IoT-micro-agent for Azure RTOS built-in & customizable alerts and recommendations description: Learn about security alerts and recommended remediation using the Azure IoT Defender-IoT-micro-agent -RTOS. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Defender-IoT-micro-agent for Azure RTOS security alerts and recommendations (preview)
defender-for-iot Concept Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-rtos-security-module.md
Title: Conceptual explanation of the basics of the Defender-IoT-micro-agent for Azure RTOS description: Learn the basics about the Defender-IoT-micro-agent for Azure RTOS concepts and workflow. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Defender-IoT-micro-agent for Azure RTOS
defender-for-iot Concept Security Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/concept-security-alerts.md
Title: Built-in & custom alerts list description: Learn about security alerts and recommended remediation using Defender for IoT Hub's features and service. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Defender for IoT Hub security alerts
defender-for-iot Edge Security Module Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/edge-security-module-deprecation.md
Title: Feature support and retirement description: Defender for IoT will continue to support C, C#, and Edge until March 1, 2022. Previously updated : 11/09/2021 Last updated : 01/01/2023
defender-for-iot How To Azure Rtos Security Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-azure-rtos-security-module.md
Title: Configure and customize Defender-IoT-micro-agent for Azure RTOS description: Learn about how to configure and customize your Defender-IoT-micro-agent for Azure RTOS. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Configure and customize Defender-IoT-micro-agent for Azure RTOS
defender-for-iot How To Manage Device Inventory On The Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/how-to-manage-device-inventory-on-the-cloud.md
Title: Manage your IoT devices with the cloud device inventory description: Learn how to manage your IoT devices with the device inventory. Previously updated : 11/10/2021 Last updated : 01/01/2023
defender-for-iot Iot Security Azure Rtos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/iot-security-azure-rtos.md
Title: Defender-IoT-micro-agent for Azure RTOS overview description: Learn more about the Defender-IoT-micro-agent for Azure RTOS support and implementation as part of Microsoft Defender for IoT. Previously updated : 11/14/2021 Last updated : 01/01/2023 # Overview: Defender for IoT Defender-IoT-micro-agent for Azure RTOS
defender-for-iot Quickstart Create Custom Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/quickstart-create-custom-alerts.md
Title: Create custom alerts description: Understand, create, and assign custom device alerts for the Microsoft Defender for IoT security service. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Create custom alerts
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/references-defender-for-iot-glossary.md
Title: Defender for IoT glossary for device builder description: The glossary provides a brief description of important Defender for IoT platform terms and concepts. Previously updated : 11/09/2021 Last updated : 01/01/2023
defender-for-iot Resources Agent Frequently Asked Questions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/resources-agent-frequently-asked-questions.md
Title: Microsoft Defender for IoT for device builders frequently asked questions description: Find answers to the most frequently asked questions about Microsoft Defender for IoT agent. Previously updated : 11/09/2021 Last updated : 01/01/2023 # Microsoft Defender for IoT for device builders frequently asked questions
defender-for-iot Virtual Sensor Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/virtual-sensor-hyper-v.md
This article describes an OT sensor deployment on a virtual appliance using Micr
|**Status** | Supported | > [!IMPORTANT]
-> Versions 22.2.x of the sensor are incompatible with Hyper-V. Until the issue has been resolved, we recommend using version 22.1.7.
+> Versions 22.2.x of the sensor are incompatible with Hyper-V. Until the issue has been resolved, we recommend using either version 22.3.x or 22.1.7.
## Prerequisites
This procedure describes how to create a virtual machine by using Hyper-V.
1. Enter a name for the virtual machine.
-1. Select **Specify Generation** > **Generation 1**.
+1. Select **Specify Generation** > **Generation 1** or **Generation 2**.
1. Specify the memory allocation [according to your organization's needs](../ot-appliance-sizing.md), in standard RAM denominations (for example, 8192, 16384, or 32768). Do not enable **Dynamic Memory**.
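
If you prefer to script the virtual machine creation, here's a minimal sketch that uses the Hyper-V PowerShell module; the VM name, switch name, and memory size are placeholders, so size them according to your organization's needs.

```powershell
# Hypothetical sketch: create the sensor VM with the Hyper-V PowerShell module.
# VM name, switch name, and memory size are placeholders.
New-VM -Name "DefenderIoTSensor" `
       -Generation 1 `
       -MemoryStartupBytes 32GB `
       -SwitchName "ExternalSwitch"

# Keep the memory allocation static; Dynamic Memory must not be enabled.
Set-VMMemory -VMName "DefenderIoTSensor" -DynamicMemoryEnabled $false
```
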
defender-for-iot How To Forward Alert Information To Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-forward-alert-information-to-partners.md
Title: Forward on-premises OT alert information to partners - Microsoft Defender for IoT description: Learn how to forward OT alert details from an OT sensor or on-premises management console to partner services. Previously updated : 12/08/2022 Last updated : 01/01/2023
defender-for-iot How To Gain Insight Into Global Regional And Local Threats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-gain-insight-into-global-regional-and-local-threats.md
Title: Gain insight into global, regional, and local threats description: Gain insight into global, regional, and local threats by using the site map in the on-premises management console. Previously updated : 11/09/2021 Last updated : 01/01/2023
defender-for-iot How To Work With Alerts On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-work-with-alerts-on-premises-management-console.md
Title: View and manage OT alerts on the on-premises management console - Microsoft Defender for IoT description: Learn how to view and manage OT alerts collected from all connected OT network sensors on a Microsoft Defender for IoT on-premises management console. Previously updated : 12/12/2022 Last updated : 01/01/2023
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-defender-for-iot-glossary.md
Title: Defender for IoT glossary for organizations description: This glossary provides a brief description of important Defender for IoT platform terms and concepts. Previously updated : 11/09/2021 Last updated : 01/01/2023
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
Title: CLI command users and access for OT monitoring - Microsoft Defender for IoT description: Learn about the users supported for the Microsoft Defender for IoT CLI commands and how to access the CLI. Previously updated : 12/29/2022 Last updated : 01/01/2023
defender-for-iot Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/release-notes.md
This version includes the following new updates and fixes:
This version includes the following new updates and fixes:
+- [Update your sensors from the Azure portal](update-ot-software.md#update-your-sensors)
- [New naming convention for hardware profiles](ot-appliance-sizing.md) - [PCAP access from the Azure portal](how-to-manage-cloud-alerts.md) - [Bi-directional alert synch between OT sensors and the Azure portal](alerts.md#managing-ot-alerts-in-a-hybrid-environment)
defender-for-iot Resources Manage Proprietary Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/resources-manage-proprietary-protocols.md
Title: Manage proprietary protocols (Horizon) - Microsoft Defender for IoT description: Defender for IoT Horizon delivers an Open Development Environment (ODE) used to secure IoT and ICS devices running proprietary protocols. Previously updated : 11/09/2021 Last updated : 01/01/2023
defender-for-iot Tutorial Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-fortinet.md
Title: Integrate Fortinet with Microsoft Defender for IoT description: In this article, you'll learn how to integrate Microsoft Defender for IoT with Fortinet. Previously updated : 11/09/2021 Last updated : 01/01/2023
defender-for-iot Tutorial Palo Alto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-palo-alto.md
Title: Integrate Palo Alto with Microsoft Defender for IoT description: Defender for IoT has integrated its continuous ICS threat monitoring platform with Palo AltoΓÇÖs next-generation firewalls to enable blocking of critical threats, faster and more efficiently. Previously updated : 11/09/2021 Last updated : 01/01/2023
defender-for-iot Update Ot Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/update-ot-software.md
Title: Update Defender for IoT OT monitoring software versions description: Learn how to update (upgrade) Defender for IoT software on OT sensors and on-premises management servers. Previously updated : 06/02/2022 Last updated : 01/10/2023
In such cases, make sure to update your on-premises management consoles *before*
You can update software on your sensors individually, directly from each sensor console, or in bulk from the on-premises management console. Select one of the following tabs for the steps required in each method. > [!NOTE]
-> If you are updating from software versions earlier than [22.1.x](whats-new.md#update-to-version-221x), note that this version has a large update with more complicated background processes. Expect this update to take more time than earlier updates have required.
+> If you are updating from software versions earlier than [22.1.x](whats-new.md#update-to-version-221x), note that [version 22.1.x](release-notes.md#2223) has a large update with more complicated background processes. Expect this update to take more time than earlier updates have required.
>
-> [!IMPORTANT]
-> If you're using an on-premises management console to manage your sensors, make sure to update your on-premises management console software *before* you update your sensor software.
->
-> On-premises management software is backwards compatible, and can connect to sensors with earlier versions installed, but not later versions. If you update your sensor software before updating your on-premises management console, the updated sensor will be disconnected from the on-premises management console.
->
-> For more information, see [Update an on-premises management console](#update-an-on-premises-management-console).
+### Prerequisites
+
+If you're using an on-premises management console to manage your sensors, make sure to update your on-premises management console software *before* you update your sensor software.
+
+On-premises management software is backwards compatible, and can connect to sensors with earlier versions installed, but not later versions. If you update your sensor software before updating your on-premises management console, the updated sensor will be disconnected from the on-premises management console.
+
+For more information, see [Update an on-premises management console](#update-an-on-premises-management-console).
++
+# [From the Azure portal (Public preview)](#tab/portal)
+
+This procedure describes how to send a software version update to one or more OT sensors, and then run the updates remotely from the Azure portal. Bulk updates are supported for up to 10 sensors at a time.
+
+> [!TIP]
+> Sending your version update and running the update process are two separate steps, which can be done one right after the other or at different times.
>
+> For example, you might want to first send the update to your sensor, and then have an administrator run the installation during a planned maintenance window.
+
+**Prerequisites**: A cloud-connected sensor with a software version equal to or higher than [22.2.3](release-notes.md#2223), but not yet the latest version available.
+
+**To send the software update to your OT sensor**:
+
+1. In the Azure portal, go to **Defender for IoT** > **Sites and sensors** and identify the sensors that have legacy versions installed.
+
+ If you know your site and sensor name, you can browse or search for it directly. Alternatively, filter the listed sensors to show only cloud-connected OT sensors that have *Remote updates supported* and a legacy software version installed. For example:
+
+ :::image type="content" source="media/update-ot-software/filter-remote-update.png" alt-text="Screenshot of how to filter for OT sensors that are ready for remote update." lightbox="media/update-ot-software/filter-remote-update.png":::
+
+1. Select one or more sensors to update, and then select **Update (Preview)** > **Send package**. For a specific sensor, you can also access the **Send package** option from the **...** options menu to the right of the sensor row. For example:
+
+ :::image type="content" source="media/update-ot-software/send-package.png" alt-text="Screenshot of the Send package option." lightbox="media/update-ot-software/send-package.png":::
+
+1. In the **Send package** pane that appears on the right, check to make sure that you're sending the correct software to the sensor you want to update. For more information, see [Legacy version updates vs. recent version updates](#legacy-version-updates-vs-recent-version-updates).
+
+ To jump to the release notes for the new version, select **Learn more** at the top of the pane.
+
+1. When you're ready, select **Send package**. The software transfer to your sensor machine starts, and you can see the progress in the **Sensor version** column.
+
+ When the transfer is complete, the **Sensor version** column changes to :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false"::: **Ready to update**.
+
+ Hover over the **Sensor version** value to see the source and target version for your update.
+
+**To run your sensor update from the Azure portal**:
+
+When the **Sensor version** column for your sensors reads :::image type="icon" source="media/update-ot-software/ready-to-update.png" border="false"::: **Ready to update**, you're ready to run your update.
+
+1. As in the previous step, either select multiple sensors that are ready to update, or select one sensor at a time.
+
+1. Select either **Update (Preview)** > **Update sensor** from the toolbar, or for an individual sensor, select the **...** options menu > **Update sensor**. For example:
+
+ :::image type="content" source="media/update-ot-software/update-sensor.png" alt-text="Screenshot of the Update sensor option." lightbox="media/update-ot-software/update-sensor.png":::
+
+1. In the **Update sensor (Preview)** pane that appears on the right, verify your update details.
+
+ When you're ready, select **Update now** > **Confirm update**. In the grid, the **Sensor version** value changes to :::image type="icon" source="media/update-ot-software/installing.png" border="false"::: **Installing** until the update is complete, when the value switches to the new sensor version number instead.
+
+If a sensor fails to update for any reason, the software reverts to the previous version installed, and a sensor health alert is triggered. For more information, see [Understand sensor health (Public preview)](how-to-manage-sensors-on-the-cloud.md#understand-sensor-health-public-preview) and [Sensor health message reference](sensor-health-messages.md).
+ # [From an OT sensor UI](#tab/sensor)
This procedure describes how to update OT sensor software via the CLI, directly
> [!NOTE]
-> After upgrading to version 22.1.x, the new upgrade log is accessible by the *cyberx_host* user on the sensor at the following path: `/opt/sensor/logs/legacy-upgrade.log`. To access the update log, sign into the sensor via SSH with the *cyberx_host* user.
+> After upgrading to version 22.1.x or higher, the new upgrade log is accessible by the *cyberx_host* user on the sensor at the following path: `/opt/sensor/logs/legacy-upgrade.log`. To access the update log, sign into the sensor via SSH with the *cyberx_host* user.
> > For more information, see [Default privileged on-premises users](roles-on-premises.md#default-privileged-on-premises-users).
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates | |||
-|**OT networks** | **Version 22.3.4**: [Azure connectivity status shown on OT sensors](#azure-connectivity-status-shown-on-ot-sensors) |
+|**OT networks** | - **Sensor version 22.3.4**: [Azure connectivity status shown on OT sensors](#azure-connectivity-status-shown-on-ot-sensors)<br>- **Sensor version 22.2.3**: [Update sensor software from the Azure portal](#update-sensor-software-from-the-azure-portal-public-preview) |
+
+### Update sensor software from the Azure portal (Public preview)
+
+For cloud-connected sensor versions [22.2.3](release-notes.md#2223) and higher, you can now update your sensor software directly from the new **Sites and sensors** page on the Azure portal.
++
+For more information, see [Update your sensors from the Azure portal](update-ot-software.md#update-your-sensors).
### Azure connectivity status shown on OT sensors
For more information, see [Manage individual sensors](how-to-manage-individual-s
|Service area |Updates | |||
-| **OT networks** | [New purchase experience for OT plans](#new-purchase-experience-for-ot-plans) |
+|**OT networks** | [New purchase experience for OT plans](#new-purchase-experience-for-ot-plans) |
|**Enterprise IoT networks** | [Enterprise IoT sensor alerts and recommendations (Public Preview)](#enterprise-iot-sensor-alerts-and-recommendations-public-preview) | ### Enterprise IoT sensor alerts and recommendations (Public Preview)
devtest-labs Devtest Lab Integrate Ci Cd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-integrate-ci-cd.md
description: Learn how to integrate Azure DevTest Labs into Azure Pipelines cont
Previously updated : 11/16/2021 Last updated : 12/28/2021 # Integrate DevTest Labs into Azure Pipelines
Write-Host "##vso[task.setvariable variable=labVMFqdn;]$labVMFqdn"
Next, create the release pipeline in Azure Pipelines. The pipeline tasks use the values you assigned to the VM when you created the ARM template. 1. From your Azure DevOps Services project page, select **Pipelines** > **Releases** from the left navigation.
-1. Select **Create release**.
+1. Select **New pipeline**.
+1. In the **Select a template** pane, select **Empty job**.
+1. Close the **Stage** pane.
1. On the **New release pipeline** page, select the **Variables** tab. 1. Select **Add**, and enter the following **Name** and **Value** pairs, selecting **Add** after adding each one. - *vmName*: The VM name you assigned in the ARM template. - *userName*: The username to access the VM. - *password*: Password for the username. Select the lock icon to hide and secure the password.
+### Add an artifact
+
+1. On the new release pipeline page, on the **Pipeline** tab, select **Add an artifact**.
+1. On the **Add an artifact** pane, select **Azure Repo**.
+1. In the **Project** list, select your DevOps project.
+1. In the **Source (repository)** list, select your source repo.
+1. In the **Default branch** list, select the branch to check out.
+1. Select **Add**.
+ ### Create a DevTest Labs VM The next step creates a golden image VM to use for future deployments. This step uses the **Azure DevTest Labs Create VM** task.
The next step creates a golden image VM to use for future deployments. This step
> [!NOTE] > For information about creating a more restricted permissions connection to your Azure subscription, see [Azure Resource Manager service endpoint](/azure/devops/pipelines/library/service-endpoints#sep-azure-resource-manager). - **Lab**: Select your DevTest Labs lab name.
+ - **Virtual Machine Name**: the variable you specified for your virtual machine name: *$vmName*.
- **Template**: Browse to and select the template file you checked in to your project repository.
- - **Parameters File**: Browse to and select the parameters file you checked in to your repository.
+ - **Parameters File**: If you checked a parameters file into your repository, browse to and select it.
- **Parameter Overrides**: Enter `-newVMName '$(vmName)' -userName '$(userName)' -password (ConvertTo-SecureString -String '$(password)' -AsPlainText -Force)`. - Drop down **Output Variables**, and under **Reference name**, enter the variable for the created lab VM ID. If you use the default *labVmId*, you can refer to the variable in subsequent tasks as **$(labVmId)**.
The next step creates a golden image VM to use for future deployments. This step
Next, the pipeline runs the script you created to collect the details of the DevTest Labs VM.
-1. On the release pipeline **Pipeline** tab, select the hyperlinked text in **Stage 1**, and then select the plus sign **+** next to **Agent job**.
+1. On the release pipeline **Tasks** tab, select the plus sign **+** next to **Agent job**.
1. Under **Add tasks** in the right pane, search for and select **Azure PowerShell**, and select **Add**. 1. In the left pane, select the **Azure PowerShell script: FilePath** task. 1. In the right pane, fill out the form as follows:
The script collects the required values and stores them in environment variables
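
As a rough sketch of what such a script can look like (the resource lookups and variable names here are assumptions, not the exact script from the repository), it reads the lab VM ID produced by the create task, walks the compute resource to its public IP address, and publishes the FQDN as a pipeline variable:

```powershell
# Hypothetical sketch: collect details of the DevTest Labs VM and publish them
# as pipeline variables. The property paths and names are assumptions.
param(
    [string] $labVmId   # passed in as $(labVmId) from the Create VM task
)

# The lab VM resource exposes the ID of the underlying compute VM.
$labVm          = Get-AzResource -Id $labVmId -ExpandProperties
$labVmComputeId = $labVm.Properties.computeId

# Walk from the compute VM to its network interface and public IP address.
$computeVm  = Get-AzResource -Id $labVmComputeId -ExpandProperties
$nicId      = $computeVm.Properties.networkProfile.networkInterfaces[0].id
$nic        = Get-AzResource -Id $nicId -ExpandProperties
$publicIpId = $nic.Properties.ipConfigurations[0].properties.publicIPAddress.id
$labVMFqdn  = (Get-AzResource -Id $publicIpId -ExpandProperties).Properties.dnsSettings.fqdn

# Publish the value so later pipeline tasks can reference $(labVMFqdn).
Write-Host "##vso[task.setvariable variable=labVMFqdn;]$labVMFqdn"
```
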
The next task creates an image of the newly deployed VM in your lab. You can use the image to create copies of the VM on demand to do developer tasks or run tests.
-1. On the release pipeline **Pipeline** tab, select the hyperlinked text in **Stage 1**, and then select the plus sign **+** next to **Agent job**.
+1. On the release pipeline **Tasks** tab, select the plus sign **+** next to **Agent job**.
1. Under **Add tasks**, select **Azure DevTest Labs Create Custom Image**, and select **Add**. 1. In the left pane, select the **Azure DevTest Labs Create Custom Image** task. 1. In the right pane, fill out the form as follows:
The tasks you usually use to deploy apps are **Azure File Copy** and **PowerShel
The final task is to delete the VM that you deployed in your lab. You'd ordinarily delete the VM after you do the developer tasks or run the tests that you need on the deployed VM.
-1. On the release pipeline **Pipeline** tab, select the hyperlinked text in **Stage 1**, and then select the plus sign **+** next to **Agent job**.
+1. On the release pipeline **Tasks** tab, select the plus sign **+** next to **Agent job**.
1. Under **Add tasks**, select **Azure DevTest Labs Delete VM**, and select **Add**. 1. Configure the task as follows: - **Azure RM Subscription**: Select your service connection or subscription.
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
Known issues and limitations associated with the Azure SQL Migration extension f
- **Message**: `Migration for Database <Database Name> failed with error 'Non retriable error occurred while restoring backup with index 1 - 3234 Logical file <Name> isn't part of database <Database GUID>. Use RESTORE FILELISTONLY to list the logical file names. RESTORE DATABASE is terminating abnormally.'.` -- **Cause**: You've specified a logical file name that isn't in the database backup.
+- **Cause**: You've specified a logical file name that isn't in the database backup. Another potential cause of this error is an incorrect storage account container name.
- **Recommendation**: Run RESTORE FILELISTONLY to check the logical file names in your backup. For more information about RESTORE FILELISTONLY, see [RESTORE Statements - FILELISTONLY (Transact-SQL)](/sql/t-sql/statements/restore-statements-filelistonly-transact-sql).
event-hubs Apache Kafka Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/apache-kafka-developer-guide.md
This article provides links to articles that describe how to integrate your Apache Kafka applications with Azure Event Hubs. ## Overview
-Event Hubs provides a Kafka endpoint that can be used by your existing Kafka based applications as an alternative to running your own Kafka cluster. Event Hubs works with many of your existing Kafka applications. For more information, see [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md)
+Event Hubs provides a Kafka endpoint that can be used by your existing Kafka based applications as an alternative to running your own Kafka cluster. Event Hubs works with many of your existing Kafka applications. For more information, see [Event Hubs for Apache Kafka](azure-event-hubs-kafka-overview.md)
## Quickstarts You can find quickstarts in GitHub and in this content set that helps you quickly ramp up on Event Hubs for Kafka.
event-hubs Apache Kafka Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/apache-kafka-migration-guide.md
Last updated 09/20/2021
# Migrate to Azure Event Hubs for Apache Kafka Ecosystems
-Azure Event Hubs exposes an Apache Kafka endpoint, which enables you to connect to Event Hubs using the Kafka protocol. By making minimal changes to your existing Kafka application, you can connect to Azure Event Hubs and reap the benefits of the Azure ecosystem. Event Hubs works with many of your existing Kafka applications, including MirrorMaker. For more information, see [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md)
+Azure Event Hubs exposes an Apache Kafka endpoint, which enables you to connect to Event Hubs using the Kafka protocol. By making minimal changes to your existing Kafka application, you can connect to Azure Event Hubs and reap the benefits of the Azure ecosystem. Event Hubs works with many of your existing Kafka applications, including MirrorMaker. For more information, see [Event Hubs for Apache Kafka](azure-event-hubs-kafka-overview.md)
## Pre-migration
event-hubs Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authorize-access-azure-active-directory.md
The following list describes the levels at which you can scope access to Event H
> [!NOTE] > - Keep in mind that Azure role assignments may take up to five minutes to propagate.
-> - This content applies to both Event Hubs and Event Hubs for Apache Kafka. For more information on Event Hubs for Kafka support, see [Event Hubs for Kafka - security and authentication](event-hubs-for-kafka-ecosystem-overview.md#security-and-authentication).
+> - This content applies to both Event Hubs and Event Hubs for Apache Kafka. For more information on Event Hubs for Kafka support, see [Event Hubs for Kafka - security and authentication](azure-event-hubs-kafka-overview.md#security-and-authentication).
For more information about how built-in roles are defined, see [Understand role definitions](../role-based-access-control/role-definitions.md#control-and-data-actions). For information about creating Azure custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md).
event-hubs Authorize Access Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authorize-access-event-hubs.md
Azure Event Hubs offers the following options for authorizing access to secure r
- Shared access signature > [!NOTE]
-> This article applies to both Event Hubs and [Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md) scenarios.
+> This article applies to both Event Hubs and [Apache Kafka](azure-event-hubs-kafka-overview.md) scenarios.
## Azure Active Directory Azure Active Directory (Azure AD) integration for Event Hubs resources provides Azure role-based access control (Azure RBAC) for fine-grained control over a client's access to resources. You can use Azure RBAC to grant permissions to security principal, which may be a user, a group, or an application service principal. The security principal is authenticated by Azure AD to return an OAuth 2.0 token. The token can be used to authorize a request to access an Event Hubs resource.
event-hubs Azure Event Hubs Kafka Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/azure-event-hubs-kafka-overview.md
+
+ Title: Use Azure Event Hubs from an Apache Kafka app
+description: This article provides information about using Azure Event Hubs to stream data from Apache Kafka applications without setting up a Kafka cluster.
+ Last updated : 01/10/2023++
+# Use Azure Event Hubs from Apache Kafka applications
+
+This article provides information about using Azure Event Hubs to stream data from [Apache Kafka](https://kafka.apache.org) applications without setting up a Kafka cluster on your own.
+
+> [!NOTE]
+> Event Hubs supports Apache Kafka's producer and consumer API clients at version 1.0 and above.
++
+## Azure Event Hubs for Apache Kafka overview
+
+The Event Hubs for Apache Kafka feature provides a protocol head on top of Azure Event Hubs that is protocol compatible with Apache Kafka clients built for Apache Kafka server versions 1.0 and later. It supports both reading from and writing to Event Hubs, which are equivalent to Apache Kafka topics.
+
+You can often use the Event Hubs Kafka endpoint from your applications without code changes, modifying only the configuration: update the connection string in your configuration to point to the Kafka endpoint exposed by your event hub instead of to your Kafka cluster. Then, you can start streaming events from your applications that use the Kafka protocol into Event Hubs.
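+
+For illustration, the following minimal Java producer sketch shows what that configuration-only change looks like in practice. The namespace name, event hub (topic) name, and connection string placeholder are assumptions to replace with your own values; the SAS-based `sasl.jaas.config` setting is described in the security section later in this article.
+
+```java
+import java.util.Properties;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.common.serialization.StringSerializer;
+
+public class EventHubsKafkaProducerSketch {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        // Point the Kafka client at the Event Hubs Kafka endpoint instead of a Kafka cluster.
+        props.put("bootstrap.servers", "NAMESPACENAME.servicebus.windows.net:9093");
+        props.put("security.protocol", "SASL_SSL");
+        props.put("sasl.mechanism", "PLAIN");
+        props.put("sasl.jaas.config",
+            "org.apache.kafka.common.security.plain.PlainLoginModule required "
+            + "username=\"$ConnectionString\" password=\"{YOUR.EVENTHUBS.CONNECTION.STRING}\";");
+        props.put("key.serializer", StringSerializer.class.getName());
+        props.put("value.serializer", StringSerializer.class.getName());
+
+        // A Kafka topic name maps to an event hub name within the namespace.
+        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
+            producer.send(new ProducerRecord<>("my-event-hub", "key", "hello from a Kafka client"));
+            producer.flush();
+        }
+    }
+}
+```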
+
+Conceptually, Kafka and Event Hubs are very similar: they're both partitioned logs built for streaming data, whereby the client controls which part of the retained log it wants to read. The following table maps concepts between Kafka and Event Hubs.
+
+### Kafka and Event Hubs conceptual mapping
+
+| Kafka Concept | Event Hubs Concept|
+| | |
+| Cluster | Namespace |
+| Topic | An event hub |
+| Partition | Partition|
+| Consumer Group | Consumer Group |
+| Offset | Offset|
+
+### Key differences between Apache Kafka and Event Hubs
+
+While [Apache Kafka](https://kafka.apache.org/) is software you typically need to install and operate, Event Hubs is a fully managed, cloud-native service. There are no servers, disks, or networks to manage and monitor and no brokers to consider or configure, ever. You create a namespace, which is an endpoint with a fully qualified domain name, and then you create Event Hubs (topics) within that namespace.
+
+For more information about Event Hubs and namespaces, see [Event Hubs features](event-hubs-features.md#namespace). As a cloud service, Event Hubs uses a single stable virtual IP address as the endpoint, so clients don't need to know about the brokers or machines within a cluster. Even though Event Hubs implements the same protocol, this difference means that all Kafka traffic for all partitions is predictably routed through this one endpoint rather than requiring firewall access for all brokers of a cluster.
+
+Scale in Event Hubs is controlled by how many [throughput units (TUs)](event-hubs-scalability.md#throughput-units) or [processing units](event-hubs-scalability.md#processing-units) you purchase. If you enable the [Auto-Inflate](event-hubs-auto-inflate.md) feature for a standard tier namespace, Event Hubs automatically scales up TUs when you reach the throughput limit. This feature also works with the Apache Kafka protocol support. For a premium tier namespace, you can increase the number of processing units assigned to the namespace.
+
+### Is Apache Kafka the right solution for your workload?
+
+If you're coming from building applications with Apache Kafka, it's also useful to understand that Azure Event Hubs is part of a fleet of services, which also includes [Azure Service Bus](../service-bus-messaging/service-bus-messaging-overview.md) and [Azure Event Grid](../event-grid/overview.md).
+
+While some providers of commercial distributions of Apache Kafka might suggest that Apache Kafka is a one-stop shop for all your messaging platform needs, Apache Kafka doesn't implement, for instance, the [competing-consumer](/azure/architecture/patterns/competing-consumers) queue pattern. It doesn't support [publish-subscribe](/azure/architecture/patterns/publisher-subscriber) at a level that lets subscribers access incoming messages based on server-evaluated rules other than plain offsets, and it has no facilities to track the lifecycle of a job initiated by a message or to sideline faulty messages into a dead-letter queue, all of which are foundational for many enterprise messaging scenarios.
+
+To understand the differences between patterns and which pattern is best covered by which service, see the [Asynchronous messaging options in Azure](/azure/architecture/guide/technology-choices/messaging) guidance. As an Apache Kafka user, you may find that communication paths you've so far realized with Kafka can be realized with far less complexity, and with more powerful capabilities, by using either Event Grid or Service Bus.
+
+If you need specific features of Apache Kafka that aren't available through the Event Hubs for Apache Kafka interface or if your implementation pattern exceeds the [Event Hubs quotas](event-hubs-quotas.md), you can also run a [native Apache Kafka cluster in Azure HDInsight](../hdinsight/kafk).
+
+## Security and authentication
+Every time you publish or consume events from an Event Hubs for Kafka endpoint, your client is trying to access the Event Hubs resources. You want to ensure that the resources are accessed using an authorized entity. When you use the Apache Kafka protocol with your clients, you can set your configuration for authentication and encryption using SASL mechanisms. Event Hubs for Kafka requires TLS encryption (all data in transit with Event Hubs is TLS encrypted), which you can specify by setting the SASL_SSL option in your configuration file.
+
+Azure Event Hubs provides multiple options to authorize access to your secure resources.
+
+- OAuth 2.0
+- Shared access signature (SAS)
+
+### OAuth 2.0
+Event Hubs integrates with Azure Active Directory (Azure AD), which provides an **OAuth 2.0**-compliant centralized authorization server. With Azure AD, you can use Azure role-based access control (Azure RBAC) to grant fine-grained permissions to your client identities. You can use this feature with your Kafka clients by specifying **SASL_SSL** for the protocol and **OAUTHBEARER** for the mechanism. For details about Azure roles and levels for scoping access, see [Authorize access with Azure AD](authorize-access-azure-active-directory.md).
+
+```properties
+bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+security.protocol=SASL_SSL
+sasl.mechanism=OAUTHBEARER
+sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;
+sasl.login.callback.handler.class=CustomAuthenticateCallbackHandler
+```
+
+> [!NOTE]
+> The above configuration properties are for the Java programming language. For **samples** that show how to use OAuth with Event Hubs for Kafka using different programming languages, see [samples on GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth).
++
+### Shared Access Signature (SAS)
+Event Hubs also provides **shared access signatures (SAS)** for delegated access to Event Hubs for Kafka resources. Authorizing access by using the OAuth 2.0 token-based mechanism provides superior security and ease of use over SAS. The built-in roles can also eliminate the need for ACL-based authorization, which has to be maintained and managed by the user. You can use this feature with your Kafka clients by specifying **SASL_SSL** for the protocol and **PLAIN** for the mechanism.
+
+```properties
+bootstrap.servers=NAMESPACENAME.servicebus.windows.net:9093
+security.protocol=SASL_SSL
+sasl.mechanism=PLAIN
+sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="{YOUR.EVENTHUBS.CONNECTION.STRING}";
+```
+
+> [!IMPORTANT]
+> Replace `{YOUR.EVENTHUBS.CONNECTION.STRING}` with the connection string for your Event Hubs namespace. For instructions on getting the connection string, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md). Here's an example configuration: `sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=XXXXXXXXXXXXXXXX";`
+
+> [!NOTE]
+> When using SAS authentication with Kafka clients, established connections aren't disconnected when the SAS key is regenerated.
+
+> [!NOTE]
+> [Generated shared access signature tokens](authenticate-shared-access-signature.md#generate-a-shared-access-signature-token) are not supported when using the Event Hubs for Apache Kafka endpoint.
+
+## Samples
+For a **tutorial** with step-by-step instructions to create an event hub and access it using SAS or OAuth, see [Quickstart: Data streaming with Event Hubs using the Kafka protocol](event-hubs-quickstart-kafka-enabled-event-hubs.md).
+
+## Other Event Hubs features
+
+The Event Hubs for Apache Kafka feature is one of three protocols concurrently available on Azure Event Hubs, complementing HTTP and AMQP. You can write with any of these protocols and read with any other, so your current Apache Kafka producers can continue publishing via Apache Kafka, while your readers can benefit from the native integration with Event Hubs' AMQP interface, such as Azure Stream Analytics or Azure Functions. Conversely, you can readily integrate Azure Event Hubs into AMQP routing networks as a target endpoint, and yet read data through Apache Kafka integrations.
+
+Additionally, Event Hubs features such as [Capture](event-hubs-capture-overview.md), which enables extremely cost-efficient long-term archival via Azure Blob Storage and Azure Data Lake Storage, and [Geo Disaster-Recovery](event-hubs-geo-dr.md) also work with the Event Hubs for Kafka feature.
+
+## Idempotency
+
+Azure Event Hubs for Apache Kafka supports both idempotent producers and idempotent consumers.
+
+One of the core tenets of Azure Event Hubs is the concept of **at-least once** delivery. This approach ensures that events will always be delivered. It also means that events can be received more than once, even repeatedly, by consumers such as a function. For this reason, it's important that the consumer supports the [idempotent consumer](https://microservices.io/patterns/communication-style/idempotent-consumer.html) pattern.
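+
+As a minimal sketch of that pattern (not an Event Hubs or Kafka library feature), the following Java handler tracks event identifiers it has already processed and skips duplicates. The `eventId` header, the `processEvent` method, and the in-memory set are assumptions; a production implementation would typically persist processed identifiers in a durable store.
+
+```java
+import java.nio.charset.StandardCharsets;
+import java.util.Set;
+import java.util.concurrent.ConcurrentHashMap;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.common.header.Header;
+
+// Sketch of the idempotent consumer pattern. The "eventId" header and processEvent are
+// hypothetical; a real implementation would usually store processed IDs durably
+// (for example, a database table keyed by event ID) rather than in memory.
+public class IdempotentHandlerSketch {
+    private final Set<String> processedEventIds = ConcurrentHashMap.newKeySet();
+
+    public void handle(ConsumerRecord<String, String> record) {
+        // Derive a stable identity for the event: prefer a producer-supplied header,
+        // falling back to the topic/partition/offset coordinates.
+        Header header = record.headers().lastHeader("eventId");
+        String eventId = (header != null)
+            ? new String(header.value(), StandardCharsets.UTF_8)
+            : record.topic() + "-" + record.partition() + "-" + record.offset();
+
+        // Set.add returns false if the ID was already seen, so duplicates are skipped.
+        if (processedEventIds.add(eventId)) {
+            processEvent(record.value());
+        }
+    }
+
+    private void processEvent(String payload) {
+        System.out.println("Processing: " + payload);
+    }
+}
+```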
+
+## Apache Kafka feature differences
+
+The goal of Event Hubs for Apache Kafka is to provide access to Azure Event Hubs capabilities to applications that are locked into the Apache Kafka API and would otherwise have to be backed by an Apache Kafka cluster.
+
+As explained [above](#is-apache-kafka-the-right-solution-for-your-workload), the Azure Messaging fleet provides rich and robust coverage for a multitude of messaging scenarios, and although the following features aren't currently supported through Event Hubs' support for the Apache Kafka API, we point out where and how the desired capability is available.
+
+### Transactions
+
+[Azure Service Bus](../service-bus-messaging/service-bus-transactions.md) has robust transaction support that allows receiving and settling messages and sessions while sending outbound messages resulting from message processing to multiple target entities under the consistency protection of a transaction. The feature set not only allows for exactly-once processing of each message in a sequence, but also avoids the risk of another consumer inadvertently reprocessing the same messages, as would be the case with Apache Kafka. Service Bus is the recommended service for transactional message workloads.
+
+### Compression
+
+The client-side [compression](https://cwiki.apache.org/confluence/display/KAFKA/Compression) feature of Apache Kafka compresses a batch of multiple messages into a single message on the producer side and decompresses the batch on the consumer side. The Apache Kafka broker treats the batch as a special message.
+
+This feature is fundamentally at odds with Azure Event Hubs' multi-protocol model, which allows for messages, even those sent in batches, to be individually retrievable from the broker and through any protocol.
+
+The payload of any Event Hubs event is a byte stream and the content can be compressed with an algorithm of your choosing. The Apache Avro encoding format supports compression natively.
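+
+As a rough sketch, assuming GZIP as the chosen algorithm, an application could compress the payload bytes before sending and decompress them after receiving. The helper below is illustrative; the compressed bytes would become the event body (for example, the value of a Kafka `ProducerRecord` sent with a `ByteArraySerializer`).
+
+```java
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.zip.GZIPInputStream;
+import java.util.zip.GZIPOutputStream;
+
+// Illustrative application-level payload compression using GZIP from the JDK.
+public final class PayloadCompression {
+
+    // Compress a string payload into bytes suitable for use as an event body.
+    static byte[] compress(String payload) throws IOException {
+        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
+        try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
+            gzip.write(payload.getBytes(StandardCharsets.UTF_8));
+        }
+        return buffer.toByteArray();
+    }
+
+    // Reverse the operation on the consuming side.
+    static String decompress(byte[] compressed) throws IOException {
+        try (GZIPInputStream gzip = new GZIPInputStream(new ByteArrayInputStream(compressed))) {
+            return new String(gzip.readAllBytes(), StandardCharsets.UTF_8);
+        }
+    }
+}
+```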
+
+### Kafka Streams
+
+Kafka Streams is a client library for stream analytics that is part of the Apache Kafka open-source project, but is separate from the Apache Kafka event stream broker.
+
+The most common reason Azure Event Hubs customers ask for Kafka Streams support is because they're interested in Confluent's "ksqlDB" product. "ksqlDB" is a proprietary shared source project that is [licensed such](https://github.com/confluentinc/ksql/blob/master/LICENSE) that no vendor "offering software-as-a-service, platform-as-a-service, infrastructure-as-a-service, or other similar online services that compete with Confluent products or services" is permitted to use or offer "ksqlDB" support. Practically, if you use ksqlDB, you must either operate Kafka yourself or you must use Confluent's cloud offerings. The licensing terms might also affect Azure customers who offer services for a purpose excluded by the license.
+
+Standalone and without ksqlDB, Kafka Streams has fewer capabilities than many alternative frameworks and services, most of which have built-in streaming SQL interfaces, and all of which integrate with Azure Event Hubs today:
+
+- [Azure Stream Analytics](../stream-analytics/stream-analytics-introduction.md)
+- [Azure Synapse Analytics (via Event Hubs Capture)](../event-grid/event-grid-event-hubs-integration.md)
+- [Azure Databricks](/azure/databricks/scenarios/databricks-stream-from-eventhubs)
+- [Apache Samza](https://samza.apache.org/learn/documentation/latest/connectors/eventhubs)
+- [Apache Storm](event-hubs-storm-getstarted-receive.md)
+- [Apache Spark](event-hubs-kafka-spark-tutorial.md)
+- [Apache Flink](event-hubs-kafka-flink-tutorial.md)
+- [Akka Streams](event-hubs-kafka-akka-streams-tutorial.md)
+
+The listed services and frameworks can generally acquire event streams and reference data directly from a diverse set of sources through adapters. Kafka Streams can only acquire data from Apache Kafka and your analytics projects are therefore locked into Apache Kafka. To use data from other sources, you're required to first import data into Apache Kafka with the Kafka Connect framework.
+
+If you must use the Kafka Streams framework on Azure, [Apache Kafka on HDInsight](../hdinsight/kafk) will provide you with that option. Apache Kafka on HDInsight provides full control over all configuration aspects of Apache Kafka, while being fully integrated with various aspects of the Azure platform, from fault/update domain placement to network isolation to monitoring integration.
+
+## Next steps
+This article provided an introduction to Event Hubs for Kafka. To learn more, see [Apache Kafka developer guide for Azure Event Hubs](apache-kafka-developer-guide.md).
+
+For a **tutorial** with step-by-step instructions to create an event hub and access it using SAS or OAuth, see [Quickstart: Data streaming with Event Hubs using the Kafka protocol](event-hubs-quickstart-kafka-enabled-event-hubs.md).
+
+Also, see the [OAuth samples on GitHub](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth).
event-hubs Dynamically Add Partitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/dynamically-add-partitions.md
You can specify the number of partitions at the time of creating an event hub. I
> Dynamic additions of partitions is available only in **premium** and **dedicated** tiers of Event Hubs. > [!NOTE]
-> For Apache Kafka clients, an **event hub** maps to a **Kafka topic**. For more mappings between Azure Event Hubs and Apache Kafka, see [Kafka and Event Hubs conceptual mapping](event-hubs-for-kafka-ecosystem-overview.md#kafka-and-event-hubs-conceptual-mapping)
+> For Apache Kafka clients, an **event hub** maps to a **Kafka topic**. For more mappings between Azure Event Hubs and Apache Kafka, see [Kafka and Event Hubs conceptual mapping](azure-event-hubs-kafka-overview.md#kafka-and-event-hubs-conceptual-mapping)
## Update the partition count
event-hubs Event Hubs About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-about.md
Event Hubs represents the "front door" for an event pipeline, often called an *e
The following sections describe key features of the Azure Event Hubs service: ## Fully managed PaaS
-Event Hubs is a fully managed Platform-as-a-Service (PaaS) with little configuration or management overhead, so you focus on your business solutions. [Event Hubs for Apache Kafka ecosystems](event-hubs-for-kafka-ecosystem-overview.md) gives you the PaaS Kafka experience without having to manage, configure, or run your clusters.
+Event Hubs is a fully managed Platform-as-a-Service (PaaS) with little configuration or management overhead, so you focus on your business solutions. [Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md) gives you the PaaS Kafka experience without having to manage, configure, or run your clusters.
## Support for real-time and batch processing Ingest, buffer, store, and process your stream in real time to get actionable insights. Event Hubs uses a [partitioned consumer model](event-hubs-scalability.md#partitions), enabling multiple applications to process the stream concurrently and letting you control the speed of processing. Azure Event Hubs also integrates with [Azure Functions](../azure-functions/index.yml) for a serverless architecture.
With Event Hubs, you can start with data streams in megabytes, and grow to gigab
With a broad ecosystem based on the industry-standard AMQP 1.0 protocol and available in various languages [.NET](https://github.com/Azure/azure-sdk-for-net/), [Java](https://github.com/Azure/azure-sdk-for-java/), [Python](https://github.com/Azure/azure-sdk-for-python/), [JavaScript](https://github.com/Azure/azure-sdk-for-js/), you can easily start processing your streams from Event Hubs. All supported client languages provide low-level integration. The ecosystem also provides you with seamless integration with Azure services like Azure Stream Analytics and Azure Functions and thus enables you to build serverless architectures. ## Event Hubs for Apache Kafka
-[Event Hubs for Apache Kafka ecosystems](event-hubs-for-kafka-ecosystem-overview.md) furthermore enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You don't need to set up, configure, and manage your own Kafka and Zookeeper clusters or use some Kafka-as-a-Service offering not native to Azure.
+[Event Hubs for Apache Kafka ecosystems](azure-event-hubs-kafka-overview.md) furthermore enables [Apache Kafka (1.0 and later)](https://kafka.apache.org/) clients and applications to talk to Event Hubs. You don't need to set up, configure, and manage your own Kafka and Zookeeper clusters or use some Kafka-as-a-Service offering not native to Azure.
## Event Hubs premium and dedicated Event Hubs **premium** caters to high-end streaming needs that require superior performance, better isolation with predictable latency and minimal interference in a managed multitenant PaaS environment. On top of all the features of the standard offering, the premium tier offers several extra features such as [dynamic partition scale up](dynamically-add-partitions.md), extended retention, and [customer-managed-keys](configure-customer-managed-key.md). For more information, see [Event Hubs Premium](event-hubs-premium-overview.md).
event-hubs Event Hubs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-create.md
An Event Hubs namespace provides a unique scoping container, in which you create
:::image type="content" source="./media/event-hubs-quickstart-portal/namespace-home-page.png" lightbox="./media/event-hubs-quickstart-portal/namespace-home-page.png" alt-text="Screenshot of the home page for your Event Hubs namespace in the Azure portal."::: > [!NOTE]
- > Azure Event Hubs provides you with a Kafka endpoint. This endpoint enables your Event Hubs namespace to natively understand [Apache Kafka](https://kafka.apache.org/intro) message protocol and APIs. With this capability, you can communicate with your event hubs as you would with Kafka topics without changing your protocol clients or running your own clusters. Event Hubs supports [Apache Kafka versions 1.0](https://kafka.apache.org/10/documentation.html) and later. For more information, see [Use Event Hubs from Apache Kafka applications](event-hubs-for-kafka-ecosystem-overview.md).
+ > Azure Event Hubs provides you with a Kafka endpoint. This endpoint enables your Event Hubs namespace to natively understand [Apache Kafka](https://kafka.apache.org/intro) message protocol and APIs. With this capability, you can communicate with your event hubs as you would with Kafka topics without changing your protocol clients or running your own clusters. Event Hubs supports [Apache Kafka versions 1.0](https://kafka.apache.org/10/documentation.html) and later. For more information, see [Use Event Hubs from Apache Kafka applications](azure-event-hubs-kafka-overview.md).
## Create an event hub
event-hubs Event Hubs Dedicated Cluster Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-cluster-create-portal.md
In this article, you created an Event Hubs cluster. For step-by-step instruction
- [Python](event-hubs-python-get-started-send.md) - [JavaScript](event-hubs-node-get-started-send.md) - [Use Azure portal to enable Event Hubs Capture](event-hubs-capture-enable-through-portal.md)-- [Use Azure Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md)
+- [Use Azure Event Hubs for Apache Kafka](azure-event-hubs-kafka-overview.md)
event-hubs Event Hubs Exchange Events Different Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-exchange-events-different-protocols.md
String myStringProperty = new String(rawbytes, StandardCharsets.UTF_8);
In this article, you learned how to stream into Event Hubs without changing your protocol clients or running your own clusters. To learn more about Event Hubs and Event Hubs for Kafka, see the following articles: * [Learn about Event Hubs](./event-hubs-about.md)
-* [Learn about Event Hubs for Kafka](event-hubs-for-kafka-ecosystem-overview.md)
+* [Learn about Event Hubs for Kafka](azure-event-hubs-kafka-overview.md)
* [Explore more samples on the Event Hubs for Kafka GitHub](https://github.com/Azure/azure-event-hubs-for-kafka) * Use [MirrorMaker](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330) to [stream events from Kafka on premises to Event Hubs on cloud.](event-hubs-kafka-mirror-maker-tutorial.md) * Learn how to stream into Event Hubs using [native Kafka applications](event-hubs-quickstart-kafka-enabled-event-hubs.md), [Apache Flink](event-hubs-kafka-flink-tutorial.md), or [Akka Streams](event-hubs-kafka-akka-streams-tutorial.md)
event-hubs Event Hubs Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-features.md
Azure Event Hubs is a scalable event processing service that ingests and process
This article builds on the information in the [overview article](./event-hubs-about.md), and provides technical and implementation details about Event Hubs components and features. > [!TIP]
-> [The protocol support for **Apache Kafka** clients](event-hubs-for-kafka-ecosystem-overview.md) (versions >=1.0) provides network endpoints that enable applications built to use Apache Kafka with any client to use Event Hubs. Most existing Kafka applications can simply be reconfigured to point to an Event Hub namespace instead of a Kafka cluster bootstrap server.
+> [The protocol support for **Apache Kafka** clients](azure-event-hubs-kafka-overview.md) (versions >=1.0) provides network endpoints that enable applications built to use Apache Kafka with any client to use Event Hubs. Most existing Kafka applications can simply be reconfigured to point to an Event Hub namespace instead of a Kafka cluster bootstrap server.
> >From the perspective of cost, operational effort, and reliability, Azure Event Hubs is a great alternative to deploying and operating your own Kafka and Zookeeper clusters and to Kafka-as-a-Service offerings not native to Azure. >
event-hubs Event Hubs Kafka Akka Streams Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-akka-streams-tutorial.md
In this tutorial, you learn how to:
To complete this tutorial, make sure you have the following prerequisites:
-* Read through the [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md) article.
+* Read through the [Event Hubs for Apache Kafka](azure-event-hubs-kafka-overview.md) article.
* An Azure subscription. If you do not have one, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. * [Java Development Kit (JDK) 1.8+](/azure/developer/java/fundamentals/java-support-on-azure) * On Ubuntu, run `apt-get install default-jdk` to install the JDK.
event-hubs Event Hubs Kafka Connect Debezium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-debezium.md
To complete this walk through, you'll require:
- Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/). - Linux/MacOS - Kafka release (version 1.1.1, Scala version 2.11), available from [kafka.apache.org](https://kafka.apache.org/downloads#1.1.1)-- Read through the [Event Hubs for Apache Kafka](./event-hubs-for-kafka-ecosystem-overview.md) introduction article
+- Read through the [Event Hubs for Apache Kafka](./azure-event-hubs-kafka-overview.md) introduction article
## Create an Event Hubs namespace An Event Hubs namespace is required to send and receive from any Event Hubs service. See [Creating an event hub](event-hubs-create.md) for instructions to create a namespace and an event hub. Get the Event Hubs connection string and fully qualified domain name (FQDN) for later use. For instructions, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md).
Follow the latest instructions in the [Debezium documentation](https://debezium.
Minimal reconfiguration is necessary when redirecting Kafka Connect throughput from Kafka to Event Hubs. The following `connect-distributed.properties` sample illustrates how to configure Connect to authenticate and communicate with the Kafka endpoint on Event Hubs: > [!IMPORTANT]
-> - Debezium will auto-create a topic per table and a bunch of metadata topics. Kafka **topic** corresponds to an Event Hubs instance (event hub). For Apache Kafka to Azure Event Hubs mappings, see [Kafka and Event Hubs conceptual mapping](event-hubs-for-kafka-ecosystem-overview.md#kafka-and-event-hubs-conceptual-mapping).
+> - Debezium will auto-create a topic per table and a bunch of metadata topics. Kafka **topic** corresponds to an Event Hubs instance (event hub). For Apache Kafka to Azure Event Hubs mappings, see [Kafka and Event Hubs conceptual mapping](azure-event-hubs-kafka-overview.md#kafka-and-event-hubs-conceptual-mapping).
> - There are different **limits** on number of event hubs in an Event Hubs namespace depending on the tier (Basic, Standard, Premium, or Dedicated). For these limits, See [Quotas](compare-tiers.md#quotas). ```properties
event-hubs Event Hubs Kafka Connect Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-connect-tutorial.md
To complete this walkthrough, make sure you have the following prerequisites:
- [Git](https://www.git-scm.com/downloads) - Linux/MacOS - Kafka release (version 1.1.1, Scala version 2.11), available from [kafka.apache.org](https://kafka.apache.org/downloads#1.1.1)-- Read through the [Event Hubs for Apache Kafka](./event-hubs-for-kafka-ecosystem-overview.md) introduction article
+- Read through the [Event Hubs for Apache Kafka](./azure-event-hubs-kafka-overview.md) introduction article
## Create an Event Hubs namespace An Event Hubs namespace is required to send and receive from any Event Hubs service. See [Creating an event hub](event-hubs-create.md) for instructions to create a namespace and an event hub. Get the Event Hubs connection string and fully qualified domain name (FQDN) for later use. For instructions, see [Get an Event Hubs connection string](event-hubs-get-connection-string.md).
event-hubs Event Hubs Kafka Flink Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-flink-tutorial.md
ms.devlang: java
# Use Apache Flink with Azure Event Hubs for Apache Kafka
-This tutorial shows you how to connect Apache Flink to an event hub without changing your protocol clients or running your own clusters. For more information on Event Hubs' support for the Apache Kafka consumer protocol, see [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md).
+This tutorial shows you how to connect Apache Flink to an event hub without changing your protocol clients or running your own clusters. For more information on Event Hubs' support for the Apache Kafka consumer protocol, see [Event Hubs for Apache Kafka](azure-event-hubs-kafka-overview.md).
In this tutorial, you learn how to:
In this tutorial, you learn how to:
To complete this tutorial, make sure you have the following prerequisites:
-* Read through the [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md) article.
+* Read through the [Event Hubs for Apache Kafka](azure-event-hubs-kafka-overview.md) article.
* An Azure subscription. If you do not have one, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. * [Java Development Kit (JDK) 1.7+](/azure/developer/java/fundamentals/java-support-on-azure) * On Ubuntu, run `apt-get install default-jdk` to install the JDK.
event-hubs Event Hubs Kafka Mirror Maker Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-mirror-maker-tutorial.md
You can use Apache Kafka's MirrorMaker 1 unidirectionally from Apache Kafka to E
To complete this tutorial, make sure you have:
-* Read through the [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md) article.
+* Read through the [Event Hubs for Apache Kafka](azure-event-hubs-kafka-overview.md) article.
* An Azure subscription. If you do not have one, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. * [Java Development Kit (JDK) 1.7+](/azure/developer/java/fundamentals/java-support-on-azure) * On Ubuntu, run `apt-get install default-jdk` to install the JDK.
event-hubs Event Hubs Kafka Mirrormaker 2 Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-kafka-mirrormaker-2-tutorial.md
Mirror Maker 2 dynamically detects changes to topics and ensures source and targ
To complete this tutorial, make sure you have:
-* Read through the [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md) article.
+* Read through the [Event Hubs for Apache Kafka](azure-event-hubs-kafka-overview.md) article.
* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. * [Java Development Kit (JDK) 1.7+](/azure/developer/java/fundamentals/java-support-on-azure) * On Ubuntu, run `apt-get install default-jdk` to install the JDK.
event-hubs Event Hubs Quickstart Kafka Enabled Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-kafka-enabled-event-hubs.md
This quickstart shows how to stream into Event Hubs without changing your protoc
To complete this quickstart, make sure you have the following prerequisites:
-* Read through the [Event Hubs for Apache Kafka](event-hubs-for-kafka-ecosystem-overview.md) article.
+* Read through the [Event Hubs for Apache Kafka](azure-event-hubs-kafka-overview.md) article.
* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin. * Create a Windows virtual machine and install the following components: * [Java Development Kit (JDK) 1.7+](/azure/developer/java/fundamentals/java-support-on-azure).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
Title: 'Locations and connectivity providers: Azure ExpressRoute | Microsoft Docs'
-description: This article provides a detailed overview of locations where services are offered and how to connect to Azure regions. Sorted by location.
+ Title: 'Locations and connectivity providers for Azure ExpressRoute'
+description: This article provides a detailed overview of available providers and services per each ExpressRoute location to connect to Azure regions.
- Previously updated : 11/10/2022 Last updated : 01/09/2023 --+
-# ExpressRoute partners and peering locations
+
+# ExpressRoute peering locations and connectivity partners
> [!div class="op_single_selector"] > * [Locations By Provider](expressroute-locations.md)
The tables in this article provide information on ExpressRoute geographical cove
> [!Note] > Azure regions and ExpressRoute locations are two distinct and different concepts, understanding the difference between the two is critical to exploring Azure hybrid networking connectivity. >
->
## Azure regions Azure regions are global datacenters where Azure compute, networking, and storage resources are located. When creating an Azure resource, a customer needs to select a resource location. The resource location determines which Azure datacenter (or availability zone) the resource is created in. ## ExpressRoute locations
-ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are co-location facilities where Microsoft Enterprise edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsoft's network ΓÇô and are globally distributed, providing customers the opportunity to connect to Microsoft's network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsoft's network. In general, the ExpressRoute location does not need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.
+ExpressRoute locations (sometimes referred to as peering locations or meet-me-locations) are co-location facilities where Microsoft Enterprise edge (MSEE) devices are located. ExpressRoute locations are the entry point to Microsoft's network, and are globally distributed, providing customers the opportunity to connect to Microsoft's network around the world. These locations are where ExpressRoute partners and ExpressRoute Direct customers issue cross connections to Microsoft's network. In general, the ExpressRoute location doesn't need to match the Azure region. For example, a customer can create an ExpressRoute circuit with the resource location *East US*, in the *Seattle* Peering location.
-You will have access to Azure services across all regions within a geopolitical region if you connected to at least one ExpressRoute location within the geopolitical region.
+You'll have access to Azure services across all regions within a geopolitical region if you're connected to at least one ExpressRoute location within the geopolitical region.
[!INCLUDE [expressroute-azure-regions-geopolitical-region](../../includes/expressroute-azure-regions-geopolitical-region.md)]
You will have access to Azure services across all regions within a geopolitical
The following table shows connectivity locations and the service providers for each location. If you want to view service providers and the locations for which they can provide service, see [Locations by service provider](expressroute-locations.md).
-* **Local Azure Regions** are the ones that [ExpressRoute Local](expressroute-faqs.md) at each peering location can access. **n/a** indicates that ExpressRoute Local is not available at that peering location.
+* **Local Azure Regions** refers to the regions that can be accessed by [ExpressRoute Local](expressroute-faqs.md#expressroute-local) at each peering location. **n/a** indicates that ExpressRoute Local isn't available at that peering location.
* **Zone** refers to [pricing](https://azure.microsoft.com/pricing/details/expressroute/).
-* **ER Direct** refers to [ExpressRoute Direct](expressroute-erdirect-about.md) support at each peering location. If you want to view the available bandwidth see [Determine available bandwidth](expressroute-howto-erdirect.md#resources)
+* **ER Direct** refers to [ExpressRoute Direct](expressroute-erdirect-about.md) support at each peering location. If you want to view the available bandwidth at a location, see [Determine available bandwidth](expressroute-howto-erdirect.md#resources)
### Global commercial Azure | **Location** | **Address** | **Zone** | **Local Azure regions** | **ER Direct** | **Service providers** |
The following table shows connectivity locations and the service providers for e
| **Las Vegas** | [Switch LV](https://www.switch.com/las-vegas) | 1 | n/a | Supported | CenturyLink Cloud Connect, Megaport, PacketFabric | | **London** | [Equinix LD5](https://www.equinix.com/locations/europe-colocation/united-kingdom-colocation/london-data-centers/ld5/) | 1 | UK South | Supported | AT&T NetBond, British Telecom, CenturyLink, Colt, Equinix, euNetworks, Intelsat, InterCloud, Internet Solutions - Cloud Connect, Interxion, Jisc, Level 3 Communications, Megaport, MTN, NTT Communications, Orange, PCCW Global Limited, Tata Communications, Telehouse - KDDI, Telenor, Telia Carrier, Verizon, Vodafone, Zayo | | **London2** | [Telehouse North Two](https://www.telehouse.net/data-centres/emea/uk-data-centres/london-data-centres/north-two) | 1 | UK South | Supported | BICS, British Telecom, CenturyLink Cloud Connect, Colt, Equinix, GTT, Interxion, IX Reach, JISC, Megaport, NTT Global DataCenters EMEA, Orange, SES, Sohonet, Telehouse - KDDI, Zayo |
-| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | CoreSite, Equinix*, Megaport, Neutrona Networks, NTT, Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Please create new circuits in Los Angeles2.* |
+| **Los Angeles** | [CoreSite LA1](https://www.coresite.com/data-centers/locations/los-angeles/one-wilshire) | 1 | n/a | Supported | CoreSite, Equinix*, Megaport, Neutrona Networks, NTT, Zayo</br></br> **New ExpressRoute circuits are no longer supported with Equinix in Los Angeles. Create new circuits in Los Angeles2.* |
| **Los Angeles2** | [Equinix LA1](https://www.equinix.com/locations/americas-colocation/united-states-colocation/los-angeles-data-centers/la1/) | 1 | n/a | Supported | Equinix | | **Madrid** | [Interxion MAD1](https://www.interxion.com/es/donde-estamos/europa/madrid) | 1 | West Europe | Supported | DE-CIX, Interxion, Megaport, Telefonica | | **Marseille** |[Interxion MRS1](https://www.interxion.com/Locations/marseille/) | 1 | France South | n/a | Colt, DE-CIX, GEANT, Interxion, Jaguar Network, Ooredoo Cloud Connect |
The following table shows connectivity locations and the service providers for e
| **Washington DC2** | [Coresite VA2](https://www.coresite.com/data-center/va2-reston-va) | 1 | East US, East US 2 | n/a | CenturyLink Cloud Connect, Coresite, Intelsat, Megaport, Viasat, Zayo | | **Zurich** | [Interxion ZUR2](https://www.interxion.com/Locations/zurich/) | 1 | Switzerland North | Supported | Colt, Equinix, Intercloud, Interxion, Megaport, Swisscom, Zayo |
- **+** denotes coming soon
### National cloud environments
-Azure national clouds are isolated from each other and from global commercial Azure. ExpressRoute for one Azure cloud cannot connect to the Azure regions in the others.
+Azure national clouds are isolated from each other and from global commercial Azure. ExpressRoute for one Azure cloud can't connect to the Azure regions in the others.
### US Government cloud | **Location** | **Address** | **Local Azure regions**| **ER Direct** | **Service providers** |
Azure national clouds are isolated from each other and from global commercial Az
To learn more, see [ExpressRoute in China](http://www.windowsazure.cn/home/features/expressroute/).
-** ExpressRoute Local is not available in this location.
- ## <a name="c1partners"></a>Connectivity through Exchange providers
-If your connectivity provider is not listed in previous sections, you can still create a connection.
+If your connectivity provider isn't listed in previous sections, you can still create a connection.
-* Check with your connectivity provider to see if they are connected to any of the exchanges in the table above. You can check the following links to gather more information about services offered by exchange providers. Several connectivity providers are already connected to Ethernet exchanges.
+* Check with your connectivity provider to see if they're connected to any of the exchanges in the table above. You can check the following links to gather more information about services offered by exchange providers. Several connectivity providers are already connected to Ethernet exchanges.
* [Cologix](https://www.cologix.com/) * [CoreSite](https://www.coresite.com/) * [DE-CIX](https://www.de-cix.net/en/de-cix-service-world/cloud-exchange)
If your connectivity provider is not listed in previous sections, you can still
* Follow steps in [Create an ExpressRoute circuit](expressroute-howto-circuit-classic.md) to set up connectivity. ## Connectivity through satellite operators
-If you are remote and do not have fiber connectivity or want to explore other connectivity options, you can check the following satellite operators.
+If you're remote and don't have fiber connectivity or want to explore other connectivity options, you can check the following satellite operators.
* Intelsat * [SES](https://www.ses.com/networks/signature-solutions/signature-cloud/ses-and-azure-expressroute)
Enabling private connectivity to fit your needs can be challenging, based on the
| **Europe** |Avanade Inc., Altogee, Bright Skies GmbH, Inframon, MSG Services, New Signature, Nelite, Orange Networks, sol-tec | | **North America** |Avanade Inc., Equinix Professional Services, FlexManage, Lightstream, Perficient, Presidio | | **South America** |Avanade Inc., Venha Pra Nuvem |+ ## Next steps * For more information about ExpressRoute, see the [ExpressRoute FAQ](expressroute-faqs.md).
-* Ensure that all prerequisites are met. See [ExpressRoute prerequisites](expressroute-prerequisites.md).
+* Ensure that all prerequisites are met. For more information, see [ExpressRoute prerequisites & checklist](expressroute-prerequisites.md).
<!--Image References--> [0]: ./media/expressroute-locations/expressroute-locations-map.png "Location map"
external-attack-surface-management Understanding Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-dashboards.md
Microsoft identifies organizations' attack surfaces through proprietary technolo
At the top of this dashboard, Defender EASM provides a list of security priorities organized by severity (high, medium, low). Large organizations' attack surfaces can be incredibly broad, so prioritizing the key findings derived from our expansive data helps users quickly and efficiently address the most important exposed elements of their attack surface. These priorities can include critical CVEs, known associations to compromised infrastructure, use of deprecated technology, infrastructure best practice violations, or compliance issues.
-Insight Priorities are determined by MicrosoftΓÇÖs assessment of the potential impact of each insight. For instance, high severity insights may include vulnerabilities that are new, exploited frequently, particularly damaging, or easily exploited by hackers with a lower skill level. Low Severity Insights may include use of deprecated technology that is no longer supported, infrastructure that will soon expire, or compliance issues that do not align with security best practices. Each Insight contains suggested remediation actions to protect against potential exploits.
+Insight Priorities are determined by Microsoft's assessment of the potential impact of each insight. For instance, high severity insights may include vulnerabilities that are new, exploited frequently, particularly damaging, or easily exploited by hackers with a lower skill level. Low severity insights may include use of deprecated technology that is no longer supported, infrastructure that will soon expire, or compliance issues that do not align with security best practices. Each insight contains suggested remediation actions to protect against potential exploits.
+
+Some insights will be flagged with "Potential" in the title. A "Potential" insight occurs when Defender EASM is unable to confirm that an asset is impacted by a vulnerability. This is common when our scanning system detects the presence of a specific service but cannot detect the version number; for example, some services enable administrators to hide version information. Vulnerabilities are often associated with specific versions of the software, so manual investigation is required to determine whether the asset is impacted. Other vulnerabilities can be remediated by steps that Defender EASM is unable to detect. For instance, users can make recommended changes to service configurations or run backported patches. If an insight is prefaced with "Potential", the system has reason to believe that the asset is impacted by the vulnerability but is unable to confirm it for one of the above listed reasons. To manually investigate, please click the insight name to review remediation guidance that can help you determine whether your assets are impacted.
+ ![Screenshot of attack surface priorities with clickable options highlighted](media/Dashboards-2.png)
-Based on the Attack Surface Priorities chart displayed above, a user would want to first investigate the two Medium Severity Observations. You can click the top-listed observation (ΓÇ£Hosts with Expired SSL CertificatesΓÇ¥) to be directly routed to a list of applicable assets, or instead select ΓÇ£View All 91 InsightsΓÇ¥ to see a comprehensive, expandable list of all potential observations that Defender EASM categorizes as ΓÇ£medium severityΓÇ¥.
+A user will usually decide to first investigate any High Severity Observations. You can click the top-listed observation to be directly routed to a list of impacted assets, or instead select "View All __ Insights" to see a comprehensive, expandable list of all potential observations within that severity group.
-The Medium Severity Observations page features a list of all potential insights in the left-hand column. This list is sorted by the number of assets that are impacted by each security risk, displaying the issues that impact the greatest number of assets first. To view the details of any security risk, simply click on it from this list.
+The Observations page features a list of all potential insights in the left-hand column. This list is sorted by the number of assets that are impacted by each security risk, displaying the issues that impact the greatest number of assets first. To view the details of any security risk, simply click on it from this list.
![Screenshot of attack surface drilldown for medium severity priorities](media/Dashboards-3.png)
For instance, the "clientUpdateProhibited" status code prevents unauthorized
### Open Ports
-This section helps users understand how their IP space is managed, detecting services that are exposed on the open internet. Attackers commonly scan ports across the internet to look for known exploits related to service vulnerabilities or misconfigurations. Microsoft identifies these open ports to compliment vulnerability assessment tools, flagging observations for review to ensure they are properly managed by your information technology team.
+This section helps users understand how their IP space is managed, detecting services that are exposed on the open internet. Attackers commonly scan ports across the internet to look for known exploits related to service vulnerabilities or misconfigurations. Microsoft identifies these open ports to complement vulnerability assessment tools, flagging observations for review to ensure they are properly managed by your information technology team.
![Screenshot of open ports chart](media/Dashboards-15.png)
external-attack-surface-management Using And Managing Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/using-and-managing-discovery.md
Custom discoveries are organized into Discovery Groups. They are independent see
![Screenshot of pre-baked attack surface selection page.](media/Discovery_7.png)
- Alternatively, users can manually input their seeds. Defender EASM accepts domains, IP blocks, hosts, email contacts, ASNs, certificate common names, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but do not belong to your organization.
+ Alternatively, users can manually input their seeds. Defender EASM accepts organization names, domains, IP blocks, hosts, email contacts, ASNs, and WhoIs organizations as seed values. You can also specify entities to exclude from asset discovery to ensure they are not added to your inventory if detected. For example, this is useful for organizations that have subsidiaries that will likely be connected to their central infrastructure, but do not belong to your organization.
Once your seeds have been selected, select **Review + Create**.
Run history is organized by the seed assets scanned during the discovery run. To
### Viewing seeds and exclusions
-The Discovery page defaults to a list view of Discovery Groups, but users can also view lists of all seeds and excluded entities from this page. Simply click the either tab to view a list of all the seeds or exclusions that power your discovery groups.
+The Discovery page defaults to a list view of Discovery Groups, but users can also view lists of all seeds and excluded entities from this page. Simply click either tab to view a list of all the seeds or exclusions that power your discovery groups.
### Seeds
Similarly, you can click the "Exclusions" tab to see a list of entities that
- [Discovering your attack surface](discovering-your-attack-surface.md) - [Understanding asset details](understanding-asset-details.md)-- [Understanding dashboards](understanding-dashboards.md)
+- [Understanding dashboards](understanding-dashboards.md)
external-attack-surface-management What Is Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/what-is-discovery.md
Through this process, Microsoft enables organizations to proactively monitor the
To create a comprehensive mapping of your organization's attack surface, the system first ingests known assets (that is, "seeds") that are recursively scanned to discover additional entities through their connections to a seed. An initial seed may be any of the following kinds of web infrastructure indexed by Microsoft: -- Pages-- Host Name-- Domain-- Contact Email Address-- IP Block-- IP Address-- ASN-
-![Screenshot of Seed list view on discovery screen](media/Discovery-2.png)
+- Organization Names
+- Domains
+- IP Blocks
+- Hosts
+- Email Contacts
+- ASNs
+- Whois organizations
Starting with a seed, the system discovers associations to other online infrastructure to find additional assets owned by your organization; this process ultimately creates your attack surface inventory. The discovery process uses the seeds as central nodes and spiders outward towards the periphery of your attack surface: it identifies all the infrastructure directly connected to a seed, then identifies everything related to each item in that first set of connections, and so on. This process continues until it reaches the edge of what your organization is responsible for managing.
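To make the spidering described above concrete, here is a minimal, illustrative sketch of a breadth-first expansion from seed assets. The toy connection graph, the `is_owned` check, and the asset names are simplified assumptions for illustration only, not the actual Defender EASM implementation:

```python
from collections import deque

# Toy connection graph standing in for observed infrastructure relationships (illustrative only).
connections = {
    "contoso.com": ["www.contoso.com", "mail.contoso.com", "203.0.113.0/24"],
    "www.contoso.com": ["contoso-cdn.example.net"],
    "203.0.113.0/24": ["203.0.113.10"],
}

def discover(seeds, is_owned):
    """Breadth-first expansion from seeds; is_owned models the 'edge of what you manage' check."""
    inventory, queue = set(seeds), deque(seeds)
    while queue:
        asset = queue.popleft()
        for related in connections.get(asset, []):
            if related not in inventory and is_owned(related):
                inventory.add(related)
                queue.append(related)
    return inventory

# Example: treat anything containing "contoso" or inside the sample IP block as owned.
print(discover(["contoso.com"], lambda a: "contoso" in a or a.startswith("203.0.113.")))
```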
firewall-manager Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall-manager/ip-groups.md
Previously updated : 07/30/2020 Last updated : 01/10/2023
You can now select **IP Group** as a **Source type** or **Destination type** for
## IP address limits
-You can have a maximum of 100 IP Groups per firewall with a maximum 5000 individual IP addresses or IP prefixes per each IP Group.
+For IP Group limits, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-firewall-limits).
## Related Azure PowerShell cmdlets
The following Azure PowerShell cmdlets can be used to create and manage IP Group
## Next steps -- [Tutorial: Secure your virtual WAN using Azure Firewall Manager](secure-cloud-network.md)
+- [Tutorial: Secure your virtual WAN using Azure Firewall Manager](secure-cloud-network.md)
firewall Ip Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/ip-groups.md
Previously updated : 10/13/2022 Last updated : 01/10/2023
IP Groups are available in all public cloud regions.
## IP address limits
-You can have a maximum of 200 IP Groups per firewall with a maximum of 5,000 individual IP addresses or IP prefixes per each IP Group.
+For IP Group limits, see [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-firewall-limits).
## Related Azure PowerShell cmdlets
industrial-iot Industrial Iot Platform Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/industrial-iot-platform-versions.md
We're pleased to announce the declaration of Long-Term Support (LTS) for version
|Version |Type |Date |Highlights | |-|--|-|| |2.5.4 |Stable |March 2020 |IoT Hub Direct Method Interface, control from cloud without any microservices (standalone mode), OPC UA Server interface, uses OPC Foundation's OPC stack - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.5.4)|
-|[2.7.206](https://github.com/Azure/Industrial-IoT/tree/release/2.7.206) |Stable |January 2021 |Configuration through REST API (orchestrated mode), supports Samples telemetry format and PubSub format - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.7.206)|
+| 2.7.206 |Stable |January 2021 |Configuration through REST API (orchestrated mode), supports Samples telemetry format and PubSub format - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.7.206)|
|[2.8](https://github.com/Azure/Industrial-IoT/tree/2.8.0) |Long-term support (LTS)|July 2021 |IoT Edge update to 1.1 LTS, OPC stack logging and tracing for better OPC Publisher diagnostics, Security fixes - [Release notes](https://github.com/Azure/Industrial-IoT/releases/tag/2.8.0)| |[2.8.1](https://github.com/Azure/Industrial-IoT/tree/2.8.1) |Patch release for LTS 2.8|November 2021 |Critical bug fixes, security updates, performance optimizations for LTS v2.8| |[2.8.2](https://github.com/Azure/Industrial-IoT/tree/2.8.2) |Patch release for LTS 2.8|March 2022 |Backwards compatibility with 2.5.x, bug fixes, security updates, performance optimizations for LTS v2.8|
iot-central How To Connect Devices X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-devices-x509.md
Title: Connect devices with X.509 certificates to your application-+ description: This article describes how devices can use X.509 certificates to authenticate to your application.
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
Title: Connect an IoT Edge transparent gateway to an Azure IoT Central application
-description: How to connect devices through an IoT Edge transparent gateway to an IoT Central application. The article shows how to use both the IoT Edge 1.1 and 1.2 runtimes.
+ Title: Connect an IoT Edge transparent gateway to an application
+description: How to connect devices through an IoT Edge transparent gateway to an IoT Central application. The article shows how to use the IoT Edge 1.4 runtime.
+ Previously updated : 10/11/2022 Last updated : 01/10/2023
An IoT Edge device can act as a gateway that provides a connection between other
IoT Edge supports the [*transparent* and *translation* gateway patterns](../../iot-edge/iot-edge-as-gateway.md). This article summarizes how to implement the transparent gateway pattern. In this pattern, the gateway passes messages from the downstream device through to the IoT Hub endpoint in your IoT Central application. The gateway doesn't manipulate the messages as they pass through. In IoT Central, each downstream device appears as child to the gateway device: For simplicity, this article uses virtual machines to host the downstream and gateway devices. In a real scenario, the downstream device and gateway would run on physical devices on your local network.
-This article shows how to implement the scenario by using either the IoT Edge 1.1 runtime or the IoT Edge 1.2 runtime.
+This article shows how to implement the scenario by using the IoT Edge 1.4 runtime.
## Prerequisites
-# [IoT Edge 1.1](#tab/edge1-1)
-
-To complete the steps in this article, you need:
--- An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.--- An [IoT Central application created](howto-create-iot-central-application.md) from the **Custom application** template. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md).-
-To follow the steps in this article, download the following files to your computer:
--- [Thermostat device model (thermostat-1.json)](https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main/dtmi/com/example/thermostat-1.json) - this file is the device model for the downstream devices.-- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-1/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.-
-# [IoT Edge 1.2](#tab/edge1-2)
To complete the steps in this article, you need: - An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
To complete the steps in this article, you need:
To follow the steps in this article, download the following files to your computer: - [Thermostat device model (thermostat-1.json)](https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main/dtmi/com/example/thermostat-1.json) - this file is the device model for the downstream devices.-- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-2/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.--
+- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-4/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.
## Import deployment manifest
To find these values, navigate to each device in the device list and select **Co
To let you try out this scenario, the following steps show you how to deploy the gateway and downstream devices to Azure virtual machines. > [!TIP]
-> To learn how to deploy the IoT Edge 1.1 or 1.2 runtime to a physical device, see [Create an IoT Edge device](../../iot-edge/how-to-create-iot-edge-device.md) in the IoT Edge documentation.
-
-# [IoT Edge 1.1](#tab/edge1-1)
-
-To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge 1.1 runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you run code to send simulated thermostat telemetry:
-
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-1%2FDeployGatewayVMs.json)
-
-When the two virtual machines are deployed and running, verify the IoT Edge gateway device is running on the `edgegateway` virtual machine:
-
-1. Go to the **Devices** page in your IoT Central application. If the IoT Edge gateway device is connected to IoT Central, its status is **Provisioned**.
+> To learn how to deploy the IoT Edge runtime to a physical device, see [Create an IoT Edge device](../../iot-edge/how-to-create-iot-edge-device.md) in the IoT Edge documentation.
-1. Open the IoT Edge gateway device and verify the status of the modules on the **Modules** page. If the IoT Edge runtime started successfully, the status of the **$edgeAgent** and **$edgeHub** modules is **Running**:
-
- :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-1.png" alt-text="Screenshot showing the $edgeAgent and $edgeHub version 1.1 modules running on the IoT Edge gateway." lightbox="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-1.png":::
-
- > [!TIP]
- > You may have to wait for several minutes while the virtual machine starts up and the device is provisioned in your IoT Central application.
+To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge 1.4 runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you run code to send simulated thermostat telemetry:
-# [IoT Edge 1.2](#tab/edge1-2)
-
-To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge 1.2 runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you run code to send simulated thermostat telemetry:
-
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-2%2FDeployGatewayVMs.json)
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-4%2FDeployGatewayVMs.json)
When the two virtual machines are deployed and running, verify the IoT Edge gateway device is running on the `edgegateway` virtual machine:
When the two virtual machines are deployed and running, verify the IoT Edge gate
1. Open the IoT Edge gateway device and verify the status of the modules on the **Modules** page. If the IoT Edge runtime started successfully, the status of the **$edgeAgent** and **$edgeHub** modules is **Running**:
- :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-2.png" alt-text="Screenshot showing the $edgeAgent and $edgeHub version 1.2 modules running on the IoT Edge gateway." lightbox="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-2.png":::
+ :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-4.png" alt-text="Screenshot showing the $edgeAgent and $edgeHub version 1.4 modules running on the IoT Edge gateway." lightbox="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-4.png":::
> [!TIP] > You may have to wait for several minutes while the virtual machine starts up and the device is provisioned in your IoT Central application. -- ## Configure the gateway For your IoT Edge device to function as a transparent gateway, it needs some certificates to prove its identity to any downstream devices. This article uses demo certificates. In a production environment, use certificates from your certificate authority. To generate the demo certificates and install them on your gateway device:
-# [IoT Edge 1.1](#tab/edge1-1)
-
-1. Use SSH to connect to and sign in on your gateway device virtual machine.
-
-1. Run the following commands to clone the IoT Edge repository and generate your demo certificates:
-
- ```bash
- # Clone the repo
- cd ~
- git clone https://github.com/Azure/iotedge.git
-
- # Generate the demo certificates
- mkdir certs
- cd certs
- cp ~/iotedge/tools/CACertificates/*.cnf .
- cp ~/iotedge/tools/CACertificates/certGen.sh .
- ./certGen.sh create_root_and_intermediate
- ./certGen.sh create_edge_device_ca_certificate "mycacert"
- ```
-
- After you run the previous commands, the following files are ready to use in the next steps:
-
- - *~/certs/certs/azure-iot-test-only.root.ca.cert.pem* - The root CA certificate used to make all the other demo certificates for testing an IoT Edge scenario.
- - *~/certs/certs/iot-edge-device-mycacert-full-chain.cert.pem* - A device CA certificate that's referenced from the IoT Edge configuration file. In a gateway scenario, this CA certificate is how the IoT Edge device verifies its identity to downstream devices.
- - *~/certs/private/iot-edge-device-mycacert.key.pem* - The private key associated with the device CA certificate.
-
- To learn more about these demo certificates, see [Create demo certificates to test IoT Edge device features](../../iot-edge/how-to-create-test-certificates.md).
-
-1. Open the *config.yaml* file in a text editor. For example:
-
- ```bash
- sudo nano /etc/iotedge/config.yaml
- ```
-
-1. Locate the `Certificate settings` settings. Uncomment and modify the certificate settings as follows:
-
- ```text
- certificates:
- device_ca_cert: "file:///home/AzureUser/certs/certs/iot-edge-device-ca-mycacert-full-chain.cert.pem"
- device_ca_pk: "file:///home/AzureUser/certs/private/iot-edge-device-ca-mycacert.key.pem"
- trusted_ca_certs: "file:///home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem"
- ```
-
- The example shown above assumes you're signed in as **AzureUser** and created a device CA certificate called "mycacert".
-
-1. Save the changes and restart the IoT Edge runtime:
-
- ```bash
- sudo systemctl restart iotedge
- ```
-
-If the IoT Edge runtime starts successfully after your changes, the status of the **$edgeAgent** and **$edgeHub** modules changes to **Running** on the **Modules** page for your gateway device in IoT Central.
-
-If the runtime doesn't start, check the changes you made in the IoT Edge configuration file and see [Troubleshoot your IoT Edge device](../../iot-edge/troubleshoot.md).
-
-Your transparent gateway is now configured and ready to start forwarding telemetry from downstream devices.
-
-# [IoT Edge 1.2](#tab/edge1-2)
- 1. Use SSH to connect to and sign in on your gateway device virtual machine. 1. Run the following commands to clone the IoT Edge repository and generate your demo certificates:
If the runtime doesn't start, check the changes you made in the IoT Edge configu
Your transparent gateway is now configured and ready to start forwarding telemetry from downstream devices. -- ## Provision a downstream device IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python 3.6 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.7 pre-installed:
IoT Central relies on the Device Provisioning Service (DPS) to provision devices
1. Run the following command to download the Python script that does the device provisioning: ```bash
- wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-1/provision_device.py
+ wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-4/provision_device.py
``` 1. To provision the `thermostat1` downstream device in your IoT Central application, run the following commands, replacing `{your application id scope}` and `{your device primary key}`. You made a note of these values when you added the devices to your IoT Central application:
To run the thermostat simulator on the `leafdevice` virtual machine:
```bash cd ~
- wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-1/simple_thermostat.py
+ wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-4/simple_thermostat.py
``` 1. Install the Azure IoT device Python module:
iot-central Howto Connect Eflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md
Title: Connect Azure IoT Edge for Linux on Windows (EFLOW)-+ description: Learn how to connect an Azure IoT Edge for Linux on Windows (EFLOW) device to an IoT Central application
iot-central Howto Migrate To Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-migrate-to-iot-hub.md
Title: Migrate devices from Azure IoT Central to Azure IoT Hub-+ description: Describes how to use the migration tool to migrate devices that currently connect to an Azure IoT Central application to an Azure IoT hub.
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
Title: Transform data for Azure IoT Central | Microsoft Docs
-description: IoT devices send data in various formats that you may need to transform. This article describes how to transform data both on the way into IoT Central and on the way out. The scenarios described use IoT Edge and Azure Functions.
+ Title: Transform data for an IoT Central application
+
+description: IoT devices send data in various formats that you may need to transform. This article describes how to transform data both on the way in and out of IoT Central.
Previously updated : 10/11/2022 Last updated : 01/10/2023
In this example, the downstream device doesn't need a device template. The downs
To create a device template for the IoT Edge gateway device:
-1. Save a copy of the deployment manifest to your local development machine: [moduledeployment.json](https://raw.githubusercontent.com/iot-for-all/iot-central-transform-with-iot-edge/main/edgemodule/moduledeployment.json).
+1. Save a copy of the deployment manifest to your local development machine: [moduledeployment.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/moduledeployment.json).
1. Open your local copy of the *moduledeployment.json* manifest file in a text editor.
-1. Find the `registryCredentials` section and replace the placeholders with the values you made a note of when you created the Azure container registry. The `address` value looks like `<username>.azurecr.io`.
+1. Find the `registryCredentials` section and replace the placeholders with the values you made a note of when you created the Azure container registry. The `address` value looks like `{your username}.azurecr.io`.
-1. Find the `settings` section for the `transformmodule`. Replace `<acr or docker repo>` with the same `address` value you used in the previous step. Save the changes.
+1. Find the `settings` section for the `transformmodule`. Replace `{your username}` with the same value you used in the previous step. Save the changes.
1. In your IoT Central application, navigate to the **Edge manifests** page.
To create a device template for the IoT Edge gateway device:
1. Select **+ New**, select **Azure IoT Edge**, and then select **Next: Customize**.
-1. Enter *IoT Edge gateway device* as the device template name. Don't select **This is a gateway device**.
+1. Enter *IoT Edge gateway device* as the device template name. Select **This is a gateway device**.
1. Select **Next: Review**, then select **Create**.
The deployment manifest doesn't specify the telemetry the module sends. To add t
Save your changes.
+1. In the model, select **Relationships**. Don't select **Relationships** in the **transformmodule** module.
+
+1. Select **Add relationship**.
+
+1. Enter *Downstream Sensor* as the display name, *sensor* as the name, and select **Any** as the target. Select **Save**.
+ 1. Select **Publish** to publish the device template. To register a gateway device in IoT Central: 1. In your IoT Central application, navigate to the **Devices** page.
-1. Select **IoT Edge gateway device** and select **Create a device**. Enter *IoT Edge gateway device* as the device name, enter *gateway-01* as the device ID, make sure **IoT Edge gateway device** is selected as the device template and **No** is selected as **Simulate this device?**. Select **Transformer** as the edge manifest. Select **Create**.
+1. Select **IoT Edge gateway device** and select **+ New**. Enter *IoT Edge gateway device* as the device name, enter *gateway-01* as the device ID, make sure **IoT Edge gateway device** is selected as the device template and **No** is selected as **Simulate this device?**. Select **Transformer** as the edge manifest. Select **Create**.
1. In the list of devices, click on the **IoT Edge gateway device**, and then select **Connect**.
To register a downstream device in IoT Central:
1. Don't select a device template. Select **+ New**. Enter *Downstream 01* as the device name, enter *downstream-01* as the device ID, make sure that the device template is **Unassigned** and **No** is selected as **Simulate this device?**. Select **Create**.
-1. In the list of devices, click on the **Downstream 01** device, and then select **Connect**.
+1. In the list of devices, click on the **Downstream 01** device, then select **Manage device > Attach to gateway**.
+
+1. In the **Attach to a gateway** dialog, select the **IoT Edge gateway device** device template, and the **IoT Edge gateway device** device instance. Select **Attach**.
+
+1. On the **Downstream 01** device, select **Connect**.
1. Make a note of the **ID scope**, **Device ID**, and **Primary key** values for the **Downstream 01** device. You use them later. ### Deploy the gateway and downstream devices
-For convenience, this article uses Azure virtual machines to run the gateway and downstream devices. To create the two Azure virtual machines, select the **Deploy to Azure** button below and use the information in the following table to complete the **Custom deployment** form:
+For convenience, this article uses Azure virtual machines to run the gateway and downstream devices. To create the two Azure virtual machines, select the **Deploy to Azure** button shown after the following table. Use the information in the table to complete the **Custom deployment** form:
| Field | Value | | -- | -- |
For convenience, this article uses Azure virtual machines to run the gateway and
| Authentication Type | Password | | Admin Password Or Key | Your choice of password for the **AzureUser** account on both virtual machines. |
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-1%2FDeployGatewayVMs.json)
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Ftransparent-gateway-1-4%2FDeployGatewayVMs.json)
Select **Review + Create**, and then **Create**. It takes a couple of minutes to create the virtual machines in the **ingress-scenario** resource group.
To generate the demo certificates and install them on your gateway device:
To learn more about these demo certificates, see [Create demo certificates to test IoT Edge device features](../../iot-edge/how-to-create-test-certificates.md).
-1. Open the *config.yaml* file in a text editor. For example:
+1. Open the *config.toml* file in a text editor. For example:
```bash
- sudo nano /etc/iotedge/config.yaml
+ sudo nano /etc/aziot/config.toml
```
-1. Locate the `Certificate settings` settings. Uncomment and modify the certificate settings as follows:
+1. Uncomment and modify the certificate settings as follows:
```text
- certificates:
- device_ca_cert: "file:///home/AzureUser/certs/certs/iot-edge-device-ca-mycacert-full-chain.cert.pem"
- device_ca_pk: "file:///home/AzureUser/certs/private/iot-edge-device-ca-mycacert.key.pem"
- trusted_ca_certs: "file:///home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem"
+ trust_bundle_cert = "file:///home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem"
+
+ ...
+
+ [edge_ca]
+ cert = "file:///home/AzureUser/certs/certs/iot-edge-device-ca-mycacert-full-chain.cert.pem"
+ pk = "file:///home/AzureUser/certs/private/iot-edge-device-ca-mycacert.key.pem"
```
- The example shown above assumes you're signed in as **AzureUser** and created a device CA certificated called "mycacert".
+ The previous example assumes you're signed in as **AzureUser** and created a device CA certificate called "mycacert".
-1. Save the changes and run the following command to verify that the *config.yaml* file is correct:
+1. Save the changes and run the following command to verify that the *config.toml* file is correct:
```bash sudo iotedge check
To generate the demo certificates and install them on your gateway device:
1. Restart the IoT Edge runtime: ```bash
- sudo systemctl restart iotedge
+ sudo iotedge config apply
``` If the IoT Edge runtime starts successfully after your changes, the status of the **$edgeAgent** and **$edgeHub** modules changes to **Running**. You can see these status values on the **Modules** page for your gateway device in IoT Central.
-If the runtime doesn't start, check the changes you made in *config.yaml* and see [Troubleshoot your IoT Edge device](../../iot-edge/troubleshoot.md).
+If the runtime doesn't start, check the changes you made in *config.toml* and see [Troubleshoot your IoT Edge device](../../iot-edge/troubleshoot.md).
### Connect downstream device to IoT Edge device
Before you set up this scenario, you need to get some connection settings from y
### Set up a compute engine
-This scenario uses the same Azure Functions deployment as the IoT Central device bridge. To deploy the device bridge, select the **Deploy to Azure** button below and use the information in the following table to complete the **Custom deployment** form:
+This scenario uses the same Azure Functions deployment as the IoT Central device bridge. To deploy the device bridge, select the **Deploy to Azure** button shown after the following table. Use the information in the table to complete the **Custom deployment** form:
| Field | Value | | -- | -- |
iot-central Tutorial Connect Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-connect-iot-edge-device.md
Title: Tutorial - Connect an IoT Edge device to your application-+ description: This tutorial shows you how to register, provision, and connect an IoT Edge device to your IoT Central application.
iot-dps Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/libraries-sdks.md
The DPS management SDKs help you build backend applications that manage the DPS
| Platform | Package | Code repository | Reference | | --|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.DeviceProvisioningServices) |[GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/deviceprovisioningservices/Microsoft.Azure.Management.DeviceProvisioningServices)| [Reference](/dotnet/api/overview/azure/deviceprovisioningservice/management) |
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.DeviceProvisioningServices) |[GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/deviceprovisioningservices/Microsoft.Azure.Management.DeviceProvisioningServices)| [Reference](/dotnet/api/overview/azure/resourcemanager.deviceprovisioningservices-readme) |
| Java|[Maven](https://mvnrepository.com/artifact/com.azure.resourcemanager/azure-resourcemanager-deviceprovisioningservices) |[GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/deviceprovisioningservices/azure-resourcemanager-deviceprovisioningservices)| [Reference](/java/api/com.azure.resourcemanager.deviceprovisioningservices) | | Node.js|[npm](https://www.npmjs.com/package/@azure/arm-deviceprovisioningservices)|[GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/deviceprovisioningservices/arm-deviceprovisioningservices)|[Reference](/javascript/api/overview/azure/arm-deviceprovisioningservices-readme) | | Python|[pip](https://pypi.org/project/azure-mgmt-iothubprovisioningservices/) |[GitHub](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/iothub/azure-mgmt-iothubprovisioningservices)|[Reference](/python/api/azure-mgmt-iothubprovisioningservices) |
iot-edge Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/gpu-acceleration.md
description: Learn about how to configure your Azure IoT Edge for Linux on Windo
Previously updated : 06/22/2021 Last updated : 6/7/2022
iot-edge How To Access Dtpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-access-dtpm.md
description: Learn about how to configure access the dTPM on your Azure IoT Edg
Previously updated : 07/12/2022 Last updated : 8/1/2022
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-collect-and-transport-metrics.md
description: Use Azure Monitor to remotely monitor IoT Edge's built-in metrics
Previously updated : 08/11/2021 Last updated : 03/18/2022
iot-edge How To Configure Iot Edge For Linux On Windows Iiot Dmz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-iiot-dmz.md
Previously updated : 07/13/2022 Last updated : 07/22/2022
iot-edge How To Configure Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-iot-edge-for-linux-on-windows-networking.md
description: Learn about how to configure custom networking for Azure IoT Edge f
Previously updated : 03/21/2022 Last updated : 10/21/2022
iot-edge How To Configure Multiple Nics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-multiple-nics.md
Previously updated : 07/12/2022 Last updated : 7/22/2022
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
description: How to configure downstream devices to connect to Azure IoT Edge ga
Previously updated : 10/15/2020 Last updated : 06/02/2022
iot-edge How To Create Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-iot-edge-device.md
Previously updated : 11/11/2021 Last updated : 10/01/2022
iot-edge How To Provision Devices At Scale Linux On Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-symmetric.md
Title: Create and provision IoT Edge devices using symmetric keys on Linux on Wi
description: Use symmetric key attestation to test provisioning Linux on Windows devices at scale for Azure IoT Edge with device provisioning service Previously updated : 02/09/2022 Last updated : 11/15/2022
iot-edge How To Provision Devices At Scale Linux On Windows X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-on-windows-x509.md
Title: Create and provision IoT Edge devices using X.509 certificates on Linux o
description: Use X.509 certificate attestation to test provisioning devices at scale for Azure IoT Edge with device provisioning service Previously updated : 02/09/2022 Last updated : 11/15/2022
iot-edge How To Provision Devices At Scale Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-linux-x509.md
Title: Create and provision IoT Edge devices at scale using X.509 certificates o
description: Use X.509 certificates to test provisioning devices at scale for Azure IoT Edge with device provisioning service Previously updated : 05/13/2022 Last updated : 08/17/2022
iot-edge How To Provision Devices At Scale Windows Tpm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-devices-at-scale-windows-tpm.md
Title: Create and provision devices with a virtual TPM on Windows - Azure IoT Ed
description: Use a simulated TPM on a Windows device to test the Azure device provisioning service for Azure IoT Edge Previously updated : 10/28/2021 Last updated : 9/19/2022
iot-edge How To Provision Single Device Linux On Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-on-windows-symmetric.md
Previously updated : 07/05/2022 Last updated : 11/15/2022
iot-edge How To Provision Single Device Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-symmetric.md
Previously updated : 07/11/2022 Last updated : 9/12/2022
iot-edge How To Provision Single Device Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-x509.md
Previously updated : 07/11/2022 Last updated : 11/21/2022
iot-edge Iot Edge For Linux On Windows Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-networking.md
Previously updated : 03/17/2022 Last updated : 11/15/2022
iot-edge Iot Edge For Linux On Windows Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows-support.md
Title: Supported operating systems, container engines - Azure IoT Edge for Linux
description: Learn which operating systems can run Azure IoT Edge for Linux on Windows Previously updated : 06/23/2022 Last updated : 11/15/2022
iot-edge Iot Edge For Linux On Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-for-linux-on-windows.md
Previously updated : 07/05/2022 Last updated : 11/15/2022
iot-edge Iot Edge Limits And Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-limits-and-restrictions.md
Title: Limits and restrictions - Azure IoT Edge | Microsoft Docs
description: Description of the limits and restrictions when using IoT Edge. Previously updated : 09/01/2022 Last updated : 11/17/2022
iot-edge Module Composition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/module-composition.md
description: Learn how a deployment manifest declares which modules to deploy, h
Previously updated : 01/05/2023 Last updated : 07/06/2022
iot-edge Nested Virtualization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/nested-virtualization.md
description: Learn about how to navigate nested virtualization in Azure IoT Edge
Previously updated : 2/24/2021 Last updated : 11/15/2022
iot-edge Offline Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/offline-capabilities.md
Title: Operate devices offline - Azure IoT Edge | Microsoft Docs
description: Understand how IoT Edge devices and modules can operate without internet connection for extended periods of time, and how IoT Edge can enable regular IoT devices to operate offline too. Previously updated : 07/05/2022 Last updated : 07/26/2022
iot-edge Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot.md
description: Use this article to learn standard diagnostic skills for Azure IoT
Previously updated : 05/04/2021 Last updated : 08/25/2022
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-function.md
description: In this tutorial, you develop an Azure Function as an IoT Edge modu
Previously updated : 07/29/2020 Last updated : 05/11/2022
iot-edge Tutorial Deploy Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-stream-analytics.md
Title: 'Tutorial - Stream Analytics at the edge using Azure IoT Edge'
description: 'In this tutorial, you deploy Azure Stream Analytics as a module to an IoT Edge device' Previously updated : 05/03/2021 Last updated : 9/22/2022
iot-edge Tutorial Machine Learning Edge 06 Custom Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-06-custom-modules.md
description: 'This tutorial shows how to create and deploy IoT Edge modules that
Previously updated : 6/30/2020 Last updated : 9/12/2022
iot-edge Tutorial Machine Learning Edge 07 Send Data To Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-machine-learning-edge-07-send-data-to-hub.md
description: 'This tutorial shows how you can use your development machine as a
Previously updated : 6/30/2020 Last updated : 9/12/2022
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
The Azure IoT service SDKs contain code to facilitate building applications that
|||||| | .NET | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples) | [Reference](/dotnet/api/microsoft.azure.devices) | | Java | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) |
-| Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | [Reference](/javascript/api/azure-iothub/) |
+| Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | | [Reference](/javascript/api/azure-iothub/) |
| Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-hub-python) | [Samples](https://github.com/Azure/azure-iot-hub-python/tree/main/samples) | [Reference](/python/api/azure-iot-hub) | ## Azure IoT Hub management SDKs
iot-hub Iot Hub Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-upgrade.md
Last updated 11/08/2022 + # How to upgrade your IoT hub As your IoT solution grows, Azure IoT Hub is ready to help you scale. Azure IoT Hub offers two tiers, basic (B) and standard (S), to accommodate customers that want to use different features. Within each tier are three sizes (1, 2, and 3) that determine the number of messages that can be sent each day.
When you have more devices and need more capabilities, there are three ways to a
* Add units within the IoT hub to increase the daily message limit for that hub. For example, each extra unit in a B1 IoT hub allows for an extra 400,000 messages per day.
-* Change the size of the IoT hub. For example, migrate a hub from the B1 tier to the B2 tier to increase the number of messages that each unit can support per day from 400,000 to 6 million.
+- Change the size of the IoT hub. For example, migrate a hub from the B1 tier to the B2 tier to increase the number of messages that each unit can support per day from 400,000 to 6 million.
+Both of these changes can occur without interrupting existing operations.
-* Upgrade to a higher tier. For example, upgrade a hub from the B1 tier to the S1 tier for access to advanced features with the same messaging capacity.
+- Upgrade to a higher tier. For example, upgrade a hub from the B1 tier to the S1 tier for access to advanced features with the same messaging capacity.
-These changes can all occur without interrupting existing operations.
+> When you are upgrading your IoT Hub to a higher tier, some messages may be received out of order for a short period of time. If your business logic relies on the order of messages, we recommend upgrading during non-business hours.
If you want to downgrade your IoT hub, you can remove units and reduce the size of the IoT hub but you can't downgrade to a lower tier. For example, you can move from the S2 tier to the S1 tier, but not from the S2 tier to the B1 tier. Only one type of [Iot Hub edition](https://azure.microsoft.com/pricing/details/iot-hub/) within a tier can be chosen per IoT hub. For example, you can create an IoT hub with multiple units of S1. However, you can't create an IoT hub with a mix of units from different editions, such as S1 and B3 or S1 and S2.
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The hosts in the following tables are owned by Microsoft, and provide services r
**Azure Machine Learning hosts** > [!IMPORTANT]
-> In the following table, replace `<storage>` with the name of the default storage account for your Azure Machine Learning workspace.
+> In the following table, replace `<storage>` with the name of the default storage account for your Azure Machine Learning workspace. Replace `<region>` with the region of your workspace.
# [Azure public](#tab/public)
The hosts in the following tables are owned by Microsoft, and provide services r
| API |\*.azureml.ms | TCP | 443 | | API | \*.azureml.net | TCP | 443 | | Model management | \*.modelmanagement.azureml.net | TCP | 443 |
-| Integrated notebook | \*.notebooks.azure.net | TCP | 443 |
+| Integrated notebook | \*.\<region\>.notebooks.azure.net | TCP | 443 |
| Integrated notebook | \<storage\>.file.core.windows.net | TCP | 443, 445 | | Integrated notebook | \<storage\>.dfs.core.windows.net | TCP | 443 | | Integrated notebook | \<storage\>.blob.core.windows.net | TCP | 443 |
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
path_on_datastore = '<path>'
uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'
```
-These Datastore URIs are a known implementation of [Filesystem spec](https://filesystem-spec.readthedocs.io/latest/index.html#) (`fsspec`): A unified pythonic interface to local, remote and embedded file systems and bytes storage.
+These Datastore URIs are a known implementation of [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/index.html) (`fsspec`): A unified pythonic interface to local, remote and embedded file systems and bytes storage.
The Azure ML Datastore implementation of `fsspec` automatically handles credential/identity passthrough used by the Azure ML datastore. This means you don't need to expose account keys in your scripts or do additional sign-in procedures on a compute instance.
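As a quick illustration of that passthrough, the following sketch reads a CSV directly from a Datastore URI with pandas. It assumes the `azureml-fsspec` package is installed (so pandas can resolve the `azureml://` scheme) and that you're running where your Azure ML identity is available, such as a compute instance; the subscription, workspace, datastore, and path values are placeholders:

```python
import pandas as pd  # azureml-fsspec must be installed so fsspec can resolve azureml:// URIs

# Placeholder values; substitute your own subscription, resource group, workspace, datastore, and path.
uri = (
    "azureml://subscriptions/<subscription>/resourcegroups/<resource_group>"
    "/workspaces/<workspace>/datastores/workspaceblobstore/paths/data/sample.csv"
)

# Authentication is handled through your Azure ML identity; no account keys appear in the script.
df = pd.read_csv(uri)
print(df.head())
```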
azcopy cp $SOURCE $DEST
## Next steps - [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](interactive-data-wrangling-with-apache-spark-azure-ml.md)-- [Access data in a job](how-to-read-write-data-v2.md)
+- [Access data in a job](how-to-read-write-data-v2.md)
machine-learning How To Configure Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-environment.md
To use this file from your code, use the [`MLClient.from_config`](/python/api/az
Create a workspace configuration file in one of the following methods:
-* Azure portal
+* Azure Machine Learning studio
- **Download the file**: In the [Azure portal](https://portal.azure.com), select **Download config.json** from the **Overview** section of your workspace.
+ **Download the file**:
+ 1. Sign in to [Azure Machine Learning studio](https://ml.azure.com)
+ 1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
+ 1. Select the **Download config file** link.
- ![Azure portal](./media/how-to-configure-environment/configure.png)
+ :::image type="content" source="media/how-to-configure-environment/configure.png" alt-text="Screenshot shows how to download your config file." lightbox="media/how-to-configure-environment/configure.png":::
* Azure Machine Learning Python SDK
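Whichever method you use to create the *config.json* file, a minimal sketch of loading it afterwards with `MLClient.from_config` could look like the following. It assumes the `azure-ai-ml` and `azure-identity` packages are installed, and the choice of `DefaultAzureCredential` is just one option:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# from_config reads config.json from the current directory (or a parent directory)
# and returns a client scoped to that workspace.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())
print(ml_client.workspace_name)
```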
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
Azure Machine Learning supports MLflow for model management. This represents a c
[!INCLUDE [mlflow-prereqs](../../includes/machine-learning-mlflow-prereqs.md)]
+* Some operations may be executed directly using the MLflow fluent API (`mlflow.<method>`). However, others require an MLflow client, which allows you to communicate with Azure Machine Learning over the MLflow protocol. You can create an `MlflowClient` object as follows. This tutorial uses the object `client` to refer to this MLflow client.
+
+ ```python
+ import mlflow
+
+ client = mlflow.tracking.MlflowClient()
+ ```
+ ## Registering new models in the registry ### Creating models from an existing run
mlflow.register_model(f"runs:/{run_id}/{artifact_path}", model_name)
> [!NOTE] > Models can only be registered to the registry in the same workspace where the run was tracked. Cross-workspace operations are not supported by the moment in Azure Machine Learning.
+> [!TIP]
+> We recommend registering models from runs, or using the method `mlflow.<flavor>.log_model` from inside the run, because it keeps lineage from the job that generated the asset.
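For illustration only, here's a minimal sketch of logging and registering a scikit-learn model from inside a run with `mlflow.sklearn.log_model`; the training data, artifact path, and registered model name are placeholders:

```python
import mlflow
import mlflow.sklearn
from sklearn.linear_model import LinearRegression

with mlflow.start_run():
    model = LinearRegression().fit([[0.0], [1.0], [2.0]], [0.0, 1.0, 2.0])
    # Logging from inside the run keeps the lineage between the job and the registered model.
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="sample-regressor",  # placeholder model name
    )
```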
+ ### Creating models from assets If you have a folder with an MLModel MLflow model, then you can register it directly. There's no need for the model to be always in the context of a run. To do that you can use the URI schema `file://path/to/model` to register MLflow models stored in the local file system. Let's create a simple model using `Scikit-Learn` and save it in MLflow format in the local storage:
model_local_path = os.path.abspath("./regressor")
mlflow.register_model(f"file://{model_local_path}", "local-model-test") ```
-> [!NOTE]
-> Notice how the model URI schema `file:/` requires absolute paths.
- ## Querying model registries ### Querying all the models in the registry
-You can query all the registered models in the registry using the MLflow client with the method `list_registered_models`. The MLflow client is required to do all these operations.
-
-```python
-using mlflow
-
-client = mlflow.tracking.MlflowClient()
-```
-
-The following sample prints all the model's names:
+You can query all the registered models in the registry using the MLflow client. The following sample prints the names of all registered models:
```python for model in client.search_registered_models():
machine-learning How To Prevent Data Loss Exfiltration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prevent-data-loss-exfiltration.md
Azure Machine Learning has several inbound and outbound dependencies. Some of th
* __Storage Outbound__: This requirement comes from compute instance and compute cluster. A malicious agent can use this outbound rule to exfiltrate data by provisioning and saving data in their own storage account. You can remove data exfiltration risk by using an Azure Service Endpoint Policy and Azure Batch's simplified node communication architecture.
- * __AzureFrontDoor.frontend outbound__: Azure Front Door is required by the Azure Machine Learning studio UI and AutoML. To narrow down the list of possible outbound destinations to just the ones required by Azure ML, allowlist the following fully qualified domain names (FQDN) on your firewall.
+ * __AzureFrontDoor.frontend outbound__: Azure Front Door is used by the Azure Machine Learning studio UI and AutoML. Instead of allowing outbound to the service tag (AzureFrontDoor.frontend), switch to the following fully qualified domain names (FQDNs). Switching to these FQDNs removes unnecessary outbound traffic included in the service tag and allows only what is needed for the Azure Machine Learning studio UI and AutoML.
- `ml.azure.com` - `automlresources-prod.azureedge.net`
When using Azure ML curated environments, make sure to use the latest environmen
# [Firewall](#tab/firewall)
- __Allow__ outbound traffic over __TCP port 443__ to the following FQDNs. Replace instances of `<region>` with the Azure region that contains your compute cluster or instance:
+ __Allow__ outbound traffic over __TCP port 443__ to the following FQDNs:
* `mcr.microsoft.com` * `*.data.mcr.microsoft.com`
machine-learning How To Secure Online Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md
Previously updated : 12/08/2022 Last updated : 01/06/2023
When `public_network_access` is `Disabled`, inbound scoring requests are receive
## Outbound (resource access)
-To restrict communication between a deployment and external resources, including the Azure resources it uses, set the deployment's `egress_public_network_access` flag to `disabled`. Use this flag to ensure that the download of the model, code, and images needed by your deployment are secured with a private endpoint. Note that disabling the flag alone is not enough; your workspace must also have a private link that allows access to Azure resources via a private endpoint. See the [Prerequisites](#prerequisites) for more details.
+To restrict communication between a deployment and external resources, including the Azure resources it uses, set the deployment's `egress_public_network_access` flag to `disabled`. Use this flag to ensure that the download of the model, code, and images needed by your deployment are secured with a private endpoint. Note that disabling the flag alone is not enough; your workspace must also have a private link that allows access to Azure resources via a private endpoint. See the [Prerequisites](#prerequisites) for more details.
> [!WARNING] > You cannot update (enable or disable) the `egress_public_network_access` flag after creating the deployment. Attempting to change the flag while updating the deployment will fail with an error.
+> [!NOTE]
+> For online deployments with the `egress_public_network_access` flag set to `disabled`, access from the deployments to Microsoft Container Registry (MCR) is restricted. If you want to use container images from MCR (such as when using a curated environment or MLflow no-code deployment), we recommend pushing the images into the Azure Container Registry (ACR) that's attached to the workspace. The images in this ACR are accessible to secured deployments via the private endpoints that are automatically created on your behalf when you set the `egress_public_network_access` flag to `disabled`. For a quick example, see this [custom container example](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container/minimal/single-model).
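As a rough sketch of how the flag is set with the Python SDK v2 (`azure-ai-ml`), the following creates a deployment with `egress_public_network_access` disabled; the endpoint, model, environment, and instance values are placeholders, and the exact parameters may vary by SDK version:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import ManagedOnlineDeployment
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Placeholder names and references; adjust to your own endpoint, model, and environment.
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-secure-endpoint",
    model="azureml:my-model:1",
    environment="azureml:my-environment:1",
    instance_type="Standard_DS3_v2",
    instance_count=1,
    egress_public_network_access="disabled",  # outbound traffic uses the private endpoints created for you
)
ml_client.online_deployments.begin_create_or_update(deployment)
```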
+ # [Azure CLI](#tab/cli) ```azurecli
machine-learning How To Submit Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-submit-spark-jobs.md
Previously updated : 12/01/2022 Last updated : 01/10/2023
[!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)]
-Azure Machine Learning provides the ability to submit standalone machine learning jobs or creating a [machine learning pipeline](./concept-ml-pipelines.md) comprising multiple steps in a machine learning workflow. Azure Machine Learning supports creation of a standalone Spark job, and creation of a reusable Spark component that can be used in Azure Machine Learning pipelines. In this article you will learn how to submit Spark jobs using:
-- Azure Machine Learning studio UI
+Azure Machine Learning supports submission of standalone machine learning jobs, and creation of [machine learning pipelines](./concept-ml-pipelines.md) that involve multiple machine learning workflow steps. Azure Machine Learning handles both standalone Spark job creation and creation of reusable Spark components that Azure Machine Learning pipelines can use. In this article, you'll learn how to submit Spark jobs using:
+- Azure Machine Learning Studio UI
- Azure Machine Learning CLI - Azure Machine Learning SDK ## Prerequisites
-### Studio UI
-Prerequisites for submitting a Spark job from Azure Machine Learning studio UI are as follows:
-- An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.-- An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md).-- To enable this feature:
- 1. Navigate to Azure Machine Learning studio UI.
- 2. Select **Manage preview features** (megaphone icon) among the icons on the top right side of the screen.
- 3. In **Managed preview feature** panel, toggle on **Run notebooks and jobs on managed Spark** feature.
- :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/how_to_enable_managed_spark_preview.png" alt-text="Screenshot showing option for enabling Managed Spark preview.":::
-- [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).- # [CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] - An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.
Prerequisites for submitting a Spark job from Azure Machine Learning studio UI a
- [Install the Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/installv2). - [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
+# [Studio UI](#tab/ui)
+These prerequisites cover the submission of a Spark job from Azure Machine Learning Studio UI:
+- An Azure subscription; if you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free) before you begin.
+- An Azure Machine Learning workspace. See [Create workspace resources](./quickstart-create-resources.md).
+- To enable this feature:
+ 1. Navigate to Azure Machine Learning Studio UI.
+ 2. Select **Manage preview features** (megaphone icon) from the icons on the top right side of the screen.
+ 3. In **Managed preview feature** panel, toggle on **Run notebooks and jobs on managed Spark** feature.
+ :::image type="content" source="media/interactive-data-wrangling-with-apache-spark-azure-ml/how_to_enable_managed_spark_preview.png" alt-text="Screenshot showing option for enabling Managed Spark preview.":::
+- [(Optional): An attached Synapse Spark pool in the Azure Machine Learning workspace](./how-to-manage-synapse-spark-pool.md).
+ ## Ensuring resource access for Spark jobs
-Spark jobs can use either user identity passthrough or a managed identity to access data and other resource. Different mechanisms for accessing resources while using Azure Machine Learning Managed (Automatic) Spark compute and attached Synapse Spark pool are summarized in the following table.
+Spark jobs can use either user identity passthrough, or a managed identity, to access data and other resources. The following table summarizes the different mechanisms for resource access while using Azure Machine Learning Managed (Automatic) Spark compute and attached Synapse Spark pool.
|Spark pool|Supported identities|Default identity| | - | -- | - | |Managed (Automatic) Spark compute|User identity and managed identity|User identity| |Attached Synapse Spark pool|User identity and managed identity|Managed identity - compute identity of the attached Synapse Spark pool|
-Azure Machine Learning Managed (Automatic) Spark compute uses user assigned managed identity attached to the workspace, if an option to use managed identity is defined in the CLI or SDK code. You can attach a user assigned managed identity to an existing Azure Machine Learning workspace using Azure Machine Learning CLI v2 or using `ARMClient`.
+If the CLI or SDK code defines an option to use managed identity, Azure Machine Learning Managed (Automatic) Spark compute uses the user-assigned managed identity attached to the workspace. You can attach a user-assigned managed identity to an existing Azure Machine Learning workspace using Azure Machine Learning CLI v2, or with `ARMClient`.
### Attach user assigned managed identity using CLI v2
-1. Create YAML file defining the user assigned managed identity that should be attached to the workspace:
+1. Create a YAML file that defines the user-assigned managed identity that should be attached to the workspace:
```yaml identity: type: system_assigned,user_assigned
Azure Machine Learning Managed (Automatic) Spark compute uses user assigned mana
'/subscriptions/<SUBSCRIPTION_ID/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<AML_USER_MANAGED_ID>': {} ```
-1. Use the YAML file in `az ml workspace update` command, with the `--file` parameter, to attach the user assigned managed identity:
+1. With the `--file` parameter, use the YAML file in the `az ml workspace update` command to attach the user assigned managed identity:
```azurecli az ml workspace update --subscription <SUBSCRIPTION_ID> --resource-group <RESOURCE_GROUP> --name <AML_WORKSPACE_NAME> --file <YAML_FILE_NAME>.yaml ``` ### Attach user assigned managed identity using `ARMClient`
-1. Install [ARMClient](https://github.com/projectkudu/ARMClient), a simple command line tool to invoke the Azure Resource Manager API.
-1. Create a JSON file defining the user assigned managed identity that should be attached to the workspace:
+1. Install [ARMClient](https://github.com/projectkudu/ARMClient), a simple command line tool that invokes the Azure Resource Manager API.
+1. Create a JSON file that defines the user-assigned managed identity that should be attached to the workspace:
```json { "properties":{
Azure Machine Learning Managed (Automatic) Spark compute uses user assigned mana
} } ```
-1. Execute following command in the PowerShell or command prompt to attach the user assigned managed identity to the workspace.
+1. Execute the following command in the PowerShell prompt or the command prompt, to attach the user-assigned managed identity to the workspace.
```cmd armclient PATCH https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.MachineLearningServices/workspaces/<AML_WORKSPACE_NAME>?api-version=2022-05-01 '@<JSON_FILE_NAME>.json' ``` > [!NOTE]
-> - To ensure successful execution of spark job, the identity being used for the Spark job should be assigned **Contributor** and **Storage Blob Data Contributor** roles on the Azure storage account used for data input and output.
-> - If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace that has a managed virtual network associated with it, [a managed private endpoint to storage account should be configured](../synapse-analytics/security/connect-to-a-secure-storage-account.md) to ensure data access.
+> - To ensure successful execution of the Spark job, assign the **Contributor** and **Storage Blob Data Contributor** roles, on the Azure storage account used for data input and output, to the identity that the Spark job uses.
+> - If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool, in an Azure Synapse workspace that has a managed virtual network associated with it, [a managed private endpoint to storage account should be configured](../synapse-analytics/security/connect-to-a-secure-storage-account.md) to ensure data access.
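
As a sketch of granting the roles called out in the note above with the Azure CLI, where `<PRINCIPAL_ID>` is the object ID of the user or managed identity that runs the Spark job and the scope values are placeholders:

```azurecli
# Sketch only: grant the Spark job identity access to the storage account used for input and output.
STORAGE_SCOPE="/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>"

az role assignment create --assignee "<PRINCIPAL_ID>" --role "Contributor" --scope "$STORAGE_SCOPE"
az role assignment create --assignee "<PRINCIPAL_ID>" --role "Storage Blob Data Contributor" --scope "$STORAGE_SCOPE"
```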
## Submit a standalone Spark job
-Once a Python script is developed by [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md), it can be used for submitting a batch job to process a larger volume of data after making necessary changes for parameterization of the Python script. A simple data wrangling batch job can be submitted as a standalone Spark job.
+A Python script developed by [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md) can be used to submit a batch job to process a larger volume of data, after making necessary changes for Python script parameterization. A simple data wrangling batch job can be submitted as a standalone Spark job.
-A Spark job requires a Python script that takes arguments, which can be developed by modifying the Python code developed from [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md). A sample Python script is shown here.
+A Spark job requires a Python script that takes arguments, which you can develop by modifying the Python code from [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md). A sample Python script is shown here.
```python # titanic.py
df.to_csv(args.wrangled_data, index_col="PassengerId")
The above script takes two arguments `--titanic_data` and `--wrangled_data`, which pass the path of input data and output folder respectively. # [Azure CLI](#tab/cli)- [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-A standalone Spark job can be defined as a YAML specification file, which can be used in the `az ml job create` command, with the `--file` parameter, to create a job. Define these properties in the YAML file as follows:
+To create a job, a standalone Spark job can be defined as a YAML specification file, which can be used in the `az ml job create` command, with the `--file` parameter. Define these properties in the YAML file as follows:
### YAML properties in the Spark job specification - `type` - set to `spark`.
A standalone Spark job can be defined as a YAML specification file, which can be
- If dynamic allocation of executors is disabled, define this property: - `spark.executor.instances` - the number of Spark executor instances. - `environment` - an [Azure Machine Learning environment](./reference-yaml-environment.md) to run the job.-- `args` - the command line arguments that should be passed to the job entry point Python script or class. See the YAML specification file provided below for an example.
+- `args` - the command line arguments that should be passed to the job entry point Python script or class. See the YAML specification file provided here for an example.
- `resources` - this property defines the resources to be used by an Azure Machine Learning Managed (Automatic) Spark compute. It uses the following properties: - `instance_type` - the compute instance type to be used for Spark pool. The following instance types are currently supported: - `standard_e4s_v3`
resources:
``` > [!NOTE]
-> To use an attached Synapse Spark pool, define `compute` property in the sample YAML specification file shown above instead of `resources` property.
+> To use an attached Synapse Spark pool, define the `compute` property in the sample YAML specification file shown above, instead of the `resources` property.
The YAML files shown above can be used in the `az ml job create` command, with the `--file` parameter, to create a standalone Spark job as shown:
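
A minimal sketch of that command follows; the placeholder values are illustrative:

```azurecli
# Sketch only: create the standalone Spark job from the YAML specification file.
az ml job create --file <YAML_SPECIFICATION_FILE_NAME>.yaml \
    --subscription <SUBSCRIPTION_ID> \
    --resource-group <RESOURCE_GROUP> \
    --workspace-name <AML_WORKSPACE_NAME>
```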
You can execute the above command from:
- your local computer that has [Azure Machine Learning CLI](./how-to-configure-cli.md?tabs=public) installed. # [Python SDK](#tab/sdk)- [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] ### Standalone Spark job using Python SDK
To create a standalone Spark job, use the `azure.ai.ml.spark` function, with the
- `archives` - a list of archives that is automatically extracted and placed in the working directory of each executor, for successful execution of the job. This parameter is optional. - `conf` - a dictionary with pre-defined Spark configuration key-value pairs. - `driver_cores`: the number of cores allocated for the Spark driver.-- `driver_memory`: the allocated memory for the Spark driver, with a size unit suffix `k`, `m`, `g` or `t` (e.g. `512m`, `2g`).
+- `driver_memory`: the allocated memory for the Spark driver, with a size unit suffix `k`, `m`, `g` or `t` (for example, `512m`, `2g`).
- `executor_cores`: the number of cores allocated for the Spark executor.-- `executor_memory`: the allocated memory for the Spark executor, with a size unit suffix `k`, `m`, `g` or `t` (e.g. `512m`, `2g`).
+- `executor_memory`: the allocated memory for the Spark executor, with a size unit suffix `k`, `m`, `g` or `t` (for example, `512m`, `2g`).
- `dynamic_allocation_enabled` - a boolean parameter that defines whether or not executors should be allocated dynamically. - If dynamic allocation of executors is enabled, then define these parameters: - `dynamic_allocation_min_executors` - the minimum number of Spark executors instances for dynamic allocation.
To create a standalone Spark job, use the `azure.ai.ml.spark` function, with the
- `executor_instances` - the number of Spark executor instances. - `environment` - the Azure Machine Learning environment that will run the job. This parameter should pass: - an object of `azure.ai.ml.entities.Environment`, or an Azure Machine Learning environment name (string).-- `args` - the command line arguments that should be passed to the job entry point Python script or class. See the sample code provided below for an example.
+- `args` - the command line arguments that should be passed to the job entry point Python script or class. See the sample code provided here for an example.
- `resources` - the resources to be used by an Azure Machine Learning Managed (Automatic) Spark compute. This parameter should pass a dictionary with: - `instance_type` - a key that defines the compute instance type to be used for the Managed (Automatic) Spark compute. The following instance types are currently supported: - `Standard_E4S_V3`
ml_client.jobs.stream(returned_spark_job.name)
``` > [!NOTE]
-> To use an attached Synapse Spark pool, define `compute` parameter in the `azure.ai.ml.spark` function instead of `resources`.
+> To use an attached Synapse Spark pool, define the `compute` parameter in the `azure.ai.ml.spark` function, instead of `resources`.
+
+# [Studio UI](#tab/ui)
+This functionality isn't available in the Studio UI.
-### Submit a standalone Spark job from Azure Machine Learning studio UI
-To submit a standalone Spark job using the Azure Machine Learning studio UI:
+### Submit a standalone Spark job from Azure Machine Learning Studio UI
+To submit a standalone Spark job using the Azure Machine Learning Studio UI:
- In the left pane, select **+ New**. - Select **Spark job (preview)**. - On the **Compute** screen: 1. Under **Select compute type**, select **Spark automatic compute (Preview)** for Managed (Automatic) Spark compute, or **Attached compute** for an attached Synapse Spark pool. 1. If you selected **Spark automatic compute (Preview)**:
To submit a standalone Spark job using the Azure Machine Learning studio UI:
1. In the pop-up screen titled **Path selection**, select the path of code files on the workspace default blob storage. 1. Select **Save**. 1. Input the name of **Entry file** for the standalone job. This file should contain the Python code that takes arguments.
- 1. To add any additional Python file(s) required by the standalone job at runtime, select **+ Add file** under **Py files** and input the name of the `.zip`, `.egg`, or `.py` file to be placed in the `PYTHONPATH` for successful execution of the job. Multiple files can be added.
+ 1. To add any other Python file(s) required by the standalone job at runtime, select **+ Add file** under **Py files** and input the name of the `.zip`, `.egg`, or `.py` file to be placed in the `PYTHONPATH` for successful execution of the job. Multiple files can be added.
1. To add any Jar file(s) required by the standalone job at runtime, select **+ Add file** under **Jars** and input the name of the `.jar` file to be included in the Spark driver and the executor `CLASSPATH` for successful execution of the job. Multiple files can be added. 1. To add archive(s) that should be extracted into the working directory of each executor for successful execution of the job, select **+ Add file** under **Archives** and input the name of the archive. Multiple archives can be added. 1. Adding **Py files**, **Jars**, and **Archives** is optional. 1. To add an input, select **+ Add input** under **Inputs** and
- 1. Enter an **Input name**. This is the name by which the input should be referred later in the **Arguments**.
+ 1. Enter an **Input name**. Use this name to refer to the input later in the **Arguments**.
1. Select an **Input type**. 1. For type **Data**: 1. Select **Data type** as **File** or **Folder**. 1. Select **Data source** as **Upload from local**, **URI**, or **Datastore**. - For **Upload from local**, select **Browse** under **Path to upload**, to choose the input file or folder.
- - For **URI**, enter a storage data URI (e.g. `abfss://` or `wasbs://` URI), or enter a data asset `azureml://`.
+ - For **URI**, enter a storage data URI (for example, `abfss://` or `wasbs://` URI), or enter a data asset `azureml://`.
- For **Datastore**: 1. **Select a datastore** from the dropdown menu. 1. Under **Path to data**, select **Browse**.
To submit a standalone Spark job using the Azure Machine Learning studio UI:
1. For type **Boolean**, select **True** or **False** as **Input value**. 1. For type **String**, enter a string as **Input value**. 1. To add an input, select **+ Add output** under **Outputs** and
- 1. Enter an **Output name**. This is the name by which the output should be referred later to in the **Arguments**.
+ 1. Enter an **Output name**. Use this name to refer to the output later in the **Arguments**.
1. Select **Output type** as **File** or **Folder**.
- 1. For **Output URI destination**, enter a storage data URI (e.g. `abfss://` or `wasbs://` URI) or enter a data asset `azureml://`.
+ 1. For **Output URI destination**, enter a storage data URI (for example, `abfss://` or `wasbs://` URI) or enter a data asset `azureml://`.
1. Enter **Arguments** by using the names defined in the **Input name** and **Output name** fields in the earlier steps, and the names of input and output arguments used in the Python script **Entry file**. For example, if the **Input name** and **Output name** are defined as `job_input` and `job_output`, and the arguments are added in the **Entry file** as shown here ``` python
To submit a standalone Spark job using the Azure Machine Learning studio UI:
1. Select **Create** to submit the standalone Spark job. ## Spark component in a pipeline job
-A Spark component allows the flexibility to use the same component in multiple [Azure Machine Learning pipelines](./concept-ml-pipelines.md) as a pipeline step.
+A Spark component offers the flexibility to use the same component in multiple [Azure Machine Learning pipelines](./concept-ml-pipelines.md), as a pipeline step.
# [Azure CLI](#tab/cli)- [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)] YAML syntax for a Spark component resembles the [YAML syntax for Spark job specification](#yaml-properties-in-the-spark-job-specification) in most ways. These properties are defined differently in the Spark component YAML specification:
YAML syntax for a Spark component resembles the [YAML syntax for Spark job speci
- `version` - the version of the Spark component. - `display_name` - the name of the Spark component to display in the UI and elsewhere. - `description` - the description of the Spark component.-- `inputs` - this property is similar to `inputs` property described in [YAML syntax for Spark job specification](#yaml-properties-in-the-spark-job-specification), except that it does not define the `path` property. This code snippet shows an example of the Spark component `inputs` property:
+- `inputs` - this property is similar to `inputs` property described in [YAML syntax for Spark job specification](#yaml-properties-in-the-spark-job-specification), except that it doesn't define the `path` property. This code snippet shows an example of the Spark component `inputs` property:
```yaml inputs:
YAML syntax for a Spark component resembles the [YAML syntax for Spark job speci
mode: direct ``` -- `outputs` - this property is similar to the `outputs` property described in [YAML syntax for Spark job specification](#yaml-properties-in-the-spark-job-specification), except that it does not define the `path` property. This code snippet shows an example of the Spark component `outputs` property:
+- `outputs` - this property is similar to the `outputs` property described in [YAML syntax for Spark job specification](#yaml-properties-in-the-spark-job-specification), except that it doesn't define the `path` property. This code snippet shows an example of the Spark component `outputs` property:
```yaml outputs:
YAML syntax for a Spark component resembles the [YAML syntax for Spark job speci
``` > [!NOTE]
-> A Spark component does not define `identity`, `compute` or `resources` properties. These properties are defined in the pipeline YAML specification file.
+> A Spark component does not define `identity`, `compute` or `resources` properties. The pipeline YAML specification file defines these properties.
This YAML specification file provides an example of a Spark component:
conf:
spark.dynamicAllocation.maxExecutors: 4 ```
-The Spark component defined in the above YAML specification file can be used in an Azure Machine Learning pipeline job. See [pipeline job YAML schema](./reference-yaml-job-pipeline.md) to learn more about the YAML syntax that defines a pipeline job. This is an example YAML specification file for a pipeline job, with a Spark component, and an Azure Machine Learning Managed (Automatic) Spark compute:
+The Spark component defined in the above YAML specification file can be used in an Azure Machine Learning pipeline job. See [pipeline job YAML schema](./reference-yaml-job-pipeline.md) to learn more about the YAML syntax that defines a pipeline job. This example shows a YAML specification file for a pipeline job, with a Spark component, and an Azure Machine Learning Managed (Automatic) Spark compute:
```yaml $schema: http://azureml/sdk-2-0/PipelineJob.json
jobs:
runtime_version: "3.2" ``` > [!NOTE]
-> To use an attached Synapse Spark pool, define `compute` property in the sample YAML specification file shown above instead of `resources` property.
+> To use an attached Synapse Spark pool, define the `compute` property in the sample YAML specification file shown above, instead of the `resources` property.
The above YAML specification file can be used in `az ml job create` command, using the `--file` parameter, to create a pipeline job as shown:
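
As with the standalone job, a minimal sketch of that command (placeholder values are illustrative):

```azurecli
# Sketch only: create the pipeline job from the YAML specification file.
az ml job create --file <YAML_SPECIFICATION_FILE_NAME>.yaml \
    --subscription <SUBSCRIPTION_ID> \
    --resource-group <RESOURCE_GROUP> \
    --workspace-name <AML_WORKSPACE_NAME>
```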
You can execute the above command from:
- your local computer that has [Azure Machine Learning CLI](./how-to-configure-cli.md?tabs=public) installed. # [Python SDK](#tab/sdk)- [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-To create an Azure Machine Learning pipeline with a Spark component, you should be familiar with creating [Azure Machine Learning pipelines from components using Python SDK](./tutorial-pipeline-python-sdk.md#create-the-pipeline-from-components). A Spark component is created using `azure.ai.ml.spark` function. The function parameters are defined almost the same way as for the [standalone Spark job](#standalone-spark-job-using-python-sdk). These parameters are defined differently for the Spark component:
+To create an Azure Machine Learning pipeline with a Spark component, you should know how to create [Azure Machine Learning pipelines from components, using Python SDK](./tutorial-pipeline-python-sdk.md#create-the-pipeline-from-components). A Spark component is created using the `azure.ai.ml.spark` function. The function parameters are defined almost the same way as for the [standalone Spark job](#standalone-spark-job-using-python-sdk). These parameters are defined differently for the Spark component:
- `name` - the name of the Spark component. - `display_name` - the name of the Spark component that will display in the UI and elsewhere.
To create an Azure Machine Learning pipeline with a Spark component, you should
- `outputs` - this parameter is similar to `outputs` parameter described for the [standalone Spark job](#standalone-spark-job-using-python-sdk), except that the `azure.ai.ml.Output` class is instantiated without the `path` parameter. > [!NOTE]
-> A Spark component created using `azure.ai.ml.spark` function does not define `identity`, `compute` or `resources` parameters. These parameters are defined in the Azure Machine Learning pipeline.
+> A Spark component created using `azure.ai.ml.spark` function does not define `identity`, `compute` or `resources` parameters. The Azure Machine Learning pipeline defines these parameters.
You can submit a pipeline job with a Spark component from: - an Azure Machine Learning Notebook connected to an Azure Machine Learning compute instance. - [Visual Studio Code connected to an Azure Machine Learning compute instance](./how-to-set-up-vs-code-remote.md?tabs=studio). - your local computer that has [the Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/installv2) installed.
-This Python code snippet shows use of a managed identity, together with the creation of an Azure Machine Learning pipeline job, with a Spark component, and an Azure Machine Learning Managed (Automatic) Synapse compute:
+This Python code snippet shows use of a managed identity, together with the creation of an Azure Machine Learning pipeline job. Additionally, it shows use of a Spark component and an Azure Machine Learning Managed (Automatic) Synapse compute:
```python from azure.ai.ml import MLClient, dsl, spark, Input, Output
ml_client.jobs.stream(pipeline_job.name)
``` > [!NOTE]
-> To use an attached Synapse Spark pool, define `compute` parameter in the `azure.ai.ml.spark` function instead of `resources` parameter. For example, in the code sample shown above, define `spark_step.compute = "<ATTACHED_SPARK_POOL_NAME>"` instead of defining `spark_step.resources`.
+> To use an attached Synapse Spark pool, define the `compute` parameter in the `azure.ai.ml.spark` function, instead of the `resources` parameter. For example, in the code sample shown above, define `spark_step.compute = "<ATTACHED_SPARK_POOL_NAME>"` instead of defining `spark_step.resources`.
+
+# [Studio UI](#tab/ui)
+This functionality isn't available in the Studio UI.
machine-learning How To Use Batch Azure Data Factory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-azure-data-factory.md
Azure Data Factory can invoke the REST APIs of batch endpoints by using the [Web
You can use a service principal or a [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to authenticate against Batch Endpoints. We recommend using a managed identity as it simplifies the use of secrets. > [!IMPORTANT]
-> When your data is stored in cloud locations instead of Azure Machine Learning Data Stores, the identity of the compute is used to read the data instead of the identity used to invoke the endpoint.
+> Batch Endpoints can consume data stored in storage accounts instead of Azure Machine Learning Data Stores or Data Assets. However, you may need to configure additional permissions for the identity of the compute that the batch endpoint runs on. See [Security considerations when reading data](how-to-access-data-batch-endpoints-jobs.md#security-considerations-when-reading-data).
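
For orientation, the following shell sketch shows the kind of REST call the Web Activity ends up making: acquire a token for the Azure Machine Learning audience and POST to the batch endpoint's invocation URI. The URI shape, input name, and storage path are illustrative placeholders; use the scoring URI and input schema of your own endpoint:

```azurecli
# Sketch only: the call pattern a Web Activity performs against a batch endpoint.
TOKEN=$(az account get-access-token --resource https://ml.azure.com --query accessToken -o tsv)

curl --request POST "https://<ENDPOINT_NAME>.<REGION>.inference.ml.azure.com/jobs" \
     --header "Authorization: Bearer $TOKEN" \
     --header "Content-Type: application/json" \
     --data '{
       "properties": {
         "InputData": {
           "my_input": {
             "JobInputType": "UriFolder",
             "Uri": "https://<STORAGE_ACCOUNT>.blob.core.windows.net/<CONTAINER>/<PATH>"
           }
         }
       }
     }'
```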
# [Using a Managed Identity](#tab/mi)
The pipeline requires the following parameters to be configured:
To create this pipeline in your existing Azure Data Factory, follow these steps: 1. Open Azure Data Factory Studio and under __Factory Resources__ click the plus sign.
-2. Select __Pipeline__ > __Import from pipeline template__
-3. You will be prompted to select a `zip` file. Uses [the following template if using managed identities](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-MI.zip) or [the following one if using a service principal](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-SP.zip).
-4. A preview of the pipeline will show up in the portal. Click __Use this template__.
-5. The pipeline will be created for you with the name __Run-BatchEndpoint__.
-6. Configure the parameters of the batch deployment you are using:
+
+1. Select __Pipeline__ > __Import from pipeline template__
+
+1. You'll be prompted to select a `zip` file. Use [the following template if using managed identities](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-MI.zip) or [the following one if using a service principal](https://azuremlexampledata.blob.core.windows.net/data/templates/batch-inference/Run-BatchEndpoint-SP.zip).
+
+1. A preview of the pipeline will show up in the portal. Click __Use this template__.
+
+1. The pipeline will be created for you with the name __Run-BatchEndpoint__.
+
+1. Configure the parameters of the batch deployment you are using:
# [Using a Managed Identity](#tab/mi)
To create this pipeline in your existing Azure Data Factory, follow these steps:
> Ensure that your batch endpoint has a default deployment configured before submitting a job to it. The created pipeline will invoke the endpoint and hence a default deployment needs to be created and configured. > [!TIP]
- > For best reusability, use the created pipeline as a template and call it from within other Azure Data Factory pipelines by leveraging the [Execute pipeline activity](../data-factory/control-flow-execute-pipeline-activity.md). In that case, do not configure the parameters in the created pipeline but pass them when you are executing the pipeline.
+ > For best reusability, use the created pipeline as a template and call it from within other Azure Data Factory pipelines by leveraging the [Execute pipeline activity](../data-factory/control-flow-execute-pipeline-activity.md). In that case, do not configure the parameters in the inner pipeline but pass them as parameters from the outer pipeline as shown in the following image:
> > :::image type="content" source="./media/how-to-use-batch-adf/pipeline-run.png" alt-text="Screenshot of the pipeline parameters expected for the resulting pipeline when invoked from another pipeline.":::
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Create a *compute instance* to use this development environment for the rest of
1. If you didn't just create a workspace in the previous section, sign in to [Azure Machine Learning studio](https://ml.azure.com) now, and select your workspace. 1. On the left side, select **Compute**.
- :::image type="content" source="media/quickstart-create-resources/compute-section.png" alt-text="Screenshot: shows Compute section on left hand side of screen.":::
+ :::image type="content" source="media/quickstart-create-resources/compute-section.png" alt-text="Screenshot: shows Compute section on left hand side of screen." lightbox="media/quickstart-create-resources/compute-section.png":::
1. Select **+New** to create a new compute instance. 1. Supply a name. Keep all the defaults on the first page.
machine-learning Quickstart Spark Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-spark-jobs.md
Title: "Quickstart: Apache Spark jobs in Azure Machine Learning (preview)"
+ Title: "Quickstart: Submit Apache Spark jobs in Azure Machine Learning (preview)"
description: Learn how to submit Apache Spark jobs with Azure Machine Learning
Previously updated : 12/13/2022 Last updated : 01/09/2023 #Customer intent: As a Full Stack ML Pro, I want to submit a Spark job in Azure Machine Learning.
managed-grafana How To Create Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-create-api-keys.md
description: Learn how to generate and manage Grafana API keys, and start making
+ Previously updated : 08/31/2022 Last updated : 11/17/2022 # Generate and manage Grafana API keys in Azure Managed Grafana
+> [!NOTE]
+> This document is deprecated as the API keys feature has been replaced by a new feature in Grafana 9.1. Go to [Service accounts](./how-to-service-accounts.md) to access the current recommended method to create and manage API keys.
+
+> [!TIP]
+> To switch to using service accounts in Grafana instances created before the release of Grafana 9.1, go to **Configuration > API keys** and select **Migrate to service accounts now**. Select **Yes, migrate now**. Each existing API key is automatically migrated into a service account with a token. The service account is created with the same permissions as the API key, and current API keys continue to work as before.
+ In this guide, learn how to generate and manage API keys, and start making API calls to the Grafana server. Grafana API keys will enable you to create integrations between Azure Managed Grafana and other services. ## Prerequisites
In this guide, learn how to generate and manage API keys, and start making API c
## Enable API keys
-API keys are disabled by default in Azure Managed Grafana. You can enable this feature during the creation of the instance on the Azure portal, or you can activate it on an existing instance, using the Azure portal or the CLI.
+API keys are disabled by default in Azure Managed Grafana. You can enable this feature during the creation of the instance in the Azure portal, or you can activate it on an existing instance, using the Azure portal or the CLI.
### Create an Azure Managed Grafana workspace with API key creation enabled
az grafana update --name <azure-managed-grafana-name> --api-keys Enabled
| **Managed Grafana role** | Choose a Managed Grafana role: Viewer, Editor or Admin. | *Editor* | | **Time to live** | Enter a time before your API key expires. Use *s* for seconds, *m* for minutes, *h* for hours, *d* for days, *w* for weeks, *M* for months, *y* for years. | 7d |
- :::image type="content" source="media/create-api-keys/form.png" alt-text="Screenshot of the Grafana dashboard. API creation form filled out.":::
+ :::image type="content" source="media/create-api-keys/form.png" alt-text="Screenshot of the Grafana dashboard. API creation form is filled out.":::
1. Once the key has been generated, a message pops up with the new key and a curl command including your key. Copy this information and save it in your records now, as it will be hidden once you leave this page. If you close this page without saving the new API key, you'll need to generate a new one.
az grafana api-key delete --name <azure-managed-grafana-name> --key <key>
## Next steps
-In this how-to guide, you learned how to create an API key for Azure Managed Grafana. To learn how to call Grafana APIs, see:
+In this how-to guide, you learned how to create an API key for Azure Managed Grafana. When you're ready, start using service accounts as the new way to authenticate applications that interact with Grafana:
> [!div class="nextstepaction"]
-> [Call Grafana APIs](how-to-api-calls.md)
> [Use service accounts](how-to-service-accounts.md)
managed-grafana How To Service Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-service-accounts.md
+
+ Title: How to use service accounts in Azure Managed Grafana
+description: In this guide, learn how to use service accounts in Azure Managed Grafana.
++++ Last updated : 11/30/2022++
+# How to use service accounts in Azure Managed Grafana
+
+In this guide, learn how to use service accounts. Service accounts are used to run automated operations and authenticate applications in Grafana with the Grafana API.
+
+Common use cases include:
+
+- Provisioning or configuring dashboards
+- Scheduling reports
+- Defining alerts
+- Setting up an external SAML authentication provider
+- Interacting with Grafana without signing in as a user
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- An Azure Managed Grafana instance. If you don't have one yet, [create an Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md).
+
+## Enable service accounts
+
+Service accounts are disabled by default in Azure Managed Grafana. If your existing Grafana workspace doesn't have service accounts enabled, you can enable them by updating the preference settings of your Grafana instance.
+
+### [Portal](#tab/azure-portal)
+
+ 1. In the Azure portal, under **Settings**, select **Configuration**, and then under **API keys and service accounts**, select **Enable**.
+
+ :::image type="content" source="media/service-accounts/enable.png" alt-text="Screenshot of the Azure platform. Enable service accounts.":::
+ 1. Select **Save** to confirm that you want to enable API keys and service accounts in Azure Managed Grafana.
+
+### [Azure CLI](#tab/azure-cli)
+
+1. Azure Managed Grafana CLI extension 0.3.0 or above is required. To update your extension, run `az extension update --name amg`.
+1. Run the [az grafana update](/cli/azure/grafana#az-grafana-update) command to enable the creation of API keys and service accounts in an existing Azure Managed Grafana instance. In the command below, replace `<azure-managed-grafana-name>` with the name of the Azure Managed Grafana instance to update.
+
+```azurecli-interactive
+az grafana update --name <azure-managed-grafana-name> --service-account Enabled
+```
+++
+## Create a service account
+
+Follow the steps below to create a new Grafana service account and list existing service accounts:
+
+### [Portal](#tab/azure-portal)
+
+1. Go to your Grafana instance endpoint, and under **Configuration**, select **Service accounts**.
+1. Select **Add service account**, enter a **Display name** and a **Role** (*Viewer*, *Editor*, or *Admin*) for your new Grafana service account, and then select **Create**.
+
+ :::image type="content" source="media/service-accounts/service-accounts.png" alt-text="Screenshot of Grafana. Add service account page.":::
+1. The page displays the notification *Service account successfully created* and some information about your new service account.
+1. Select the back arrow sign to view a list of all the service accounts of your Grafana instance.
+
+### [Azure CLI](#tab/azure-cli)
+
+Run the `az grafana service-account create` command to create a service account. Replace the placeholders `<azure-managed-grafana-name>`, `<service-account-name>` and `<role>` with your own information.
+
+Available roles: `Admin`, `Editor`, `Viewer`.
+
+```azurecli-interactive
+az grafana service-account create --name <azure-managed-grafana-name> --service-account <service-account-name> --role <role>
+```
+
+#### List service accounts
+
+Run the `az grafana service-account list` command to get a list of all service accounts that belong to a given Azure Managed Grafana instance. Replace `<azure-managed-grafana-name>` with the name of your Azure Managed Grafana workspace.
+
+```azurecli-interactive
+az grafana service-account list --name <azure-managed-grafana-name> --output table
+```
+
+Example of output:
+
+```output
+AvatarUrl            IsDisabled    Login        Name      OrgId    Role    Tokens
+-------------------  ------------  -----------  --------  -------  ------  --------
+/avatar/abc12345678  False         sa-account1  account1  1        Viewer  0
+```
+
+#### Display service account details
+
+Run the `az grafana service-account show` command to get the details of a service account. Replace `<azure-managed-grafana-name>` and `<service-account-name>` with your own information.
+
+```azurecli-interactive
+az grafana service-account show --name <azure-managed-grafana-name> --service-account <service-account-name>
+```
+++
+## Add a service account token and review tokens
+
+Once you've created a service account, add one or more access tokens. Access tokens are generated strings used to authenticate to the Grafana API.
+
+### [Portal](#tab/azure-portal)
+
+1. To create a service account token, select **Add token**.
+1. Use the automatically generated **Display name** or enter a name of your choice, and optionally select an **Expiration date** or keep the default option to set no expiry date.
+
+ :::image type="content" source="media/service-accounts/add-service-account-token.png" alt-text="Screenshot of the Azure platform. Add service account token page.":::
+
+1. Select **Generate token**, and take note of the token generated. This token will only be shown once, so make sure you save it, as losing a token requires creating a new one.
+1. Select the service account to access information about your service account, including a list of all associated tokens.
+
+### [Azure CLI](#tab/azure-cli)
+
+#### Create a new token
+
+1. Create a Grafana service account token with `az grafana service-account token create`. Replace the placeholders `<azure-managed-grafana-name>`, `<service-account-name>` and `<token-name>` with your own information.
+
+ Optionally set an expiry time:
+
+ | Parameter | Description | Example |
+ ||-|-|
+ | `--time-to-live` | Tokens have an unlimited expiry date by default. Set an expiry time to disable the token after a given time. Use `s` for seconds, `m` for minutes, `h` for hours, `d` for days, `w` for weeks, `M` for months or `y` for years. | `15d` |
+
+ ```azurecli-interactive
+ az grafana service-account token create --name <azure-managed-grafana-name> --service-account <service-account-name> --token <token-name> --time-to-live 15d
+ ```
+
+1. Take note of the generated token. This token will only be shown once, so make sure you save it, as losing a token requires creating a new one.
+
+#### List service account tokens
+
+Run the `az grafana service-account token list` command to get a list of all tokens that belong to a given service account. Replace the placeholders `<azure-managed-grafana-name>` and `<service-account-name>` with your own information.
+
+```azurecli-interactive
+az grafana service-account token list --name <azure-managed-grafana-name> --service-account <service-account-name> --output table
+```
+
+Example of output:
+
+```output
+Created               Expiration            HasExpired    Name    SecondsUntilExpiration
+--------------------  --------------------  ------------  ------  ------------------------
+2022-12-07T11:40:45Z  2022-12-08T11:40:45Z  False         token1  85890.870731556
+2022-12-07T11:42:35Z  2022-12-22T11:42:35Z  False         token2  0
+```
+++
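+
+Once you have a token, you can use it to call the Grafana HTTP API. A minimal sketch is shown here; the endpoint and `<TOKEN>` are placeholders for your workspace endpoint and the generated token:
+
+```bash
+# Sketch only: call the Grafana HTTP API with a service account token.
+curl --header "Authorization: Bearer <TOKEN>" \
+     "https://<your-grafana-endpoint>/api/dashboards/home"
+```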
+## Edit a service account
+
+In this section, learn how to update a Grafana service account in the following ways:
+
+- Edit the name of a service account
+- Edit the role of a service account
+- Disable a service account
+- Enable a service account
+
+### [Portal](#tab/azure-portal)
+
+Actions:
+
+- To edit the name, select the service account and under **Information** select **Edit**.
+- To edit the role, select the service account and under **Information**, select the role and choose another role name.
+- To disable a service account, select a service account and at the top of the page select **Disable service account**, then select **Disable service account** to confirm. Disabled service accounts can be re-enabled by selecting **Enable service account**.
++
+The notification *Service account updated* is instantly displayed.
+
+### [Azure CLI](#tab/azure-cli)
+
+Edit a service account with `az grafana service-account update`. Replace the placeholders `<azure-managed-grafana-name>`, and `<service-account-name>` with your own information and use one or several of the following parameters:
+
+| Parameter | Description |
+|--|-|
+| `--is-disabled` | Enter `--is-disabled true` to disable a service account, or `--is-disabled false` to enable a service account. |
+| `--name` | Enter another name for your service account. |
+| `--role` | Enter another role for your service account. Available roles: `Admin`, `Editor`, `Viewer`. |
+
+```azurecli-interactive
+az grafana service-account update --name <azure-managed-grafana-name> --service-account <service-account-name> --role <role> --is-disabled false
+```
+
+To disable a service account, run the `az grafana service-account update` command with the option `--is-disabled true`. To enable the service account again, use `--is-disabled false`.
+
+```azurecli-interactive
+az grafana service-account update --name <azure-managed-grafana-name> --service-account <service-account-name> --is-disabled true
+```
+++
+## Delete a service account
+
+### [Portal](#tab/azure-portal)
+
+To delete a Grafana service account, select a service account and at the top of the page select **Delete service account**, then select **Delete service account** to confirm. Deleting a service account is final and a service account can't be recovered once deleted.
++
+### [Azure CLI](#tab/azure-cli)
+
+To delete a service account, run the `az grafana service-account delete` command. Replace the placeholders `<azure-managed-grafana-name>` and `<service-account-name>` with your own information.
+
+```azurecli-interactive
+az grafana service-account delete --name <azure-managed-grafana-name> --service-account <service-account-name>
+```
+++
+## Delete a service account token
+
+### [Portal](#tab/azure-portal)
+
+To delete a service account token, select a service account and under **Tokens**, select **Delete (x)**. Select **Delete** to confirm.
++
+### [Azure CLI](#tab/azure-cli)
+
+To delete a service account token, run the `az grafana service-account token delete` command. Replace the placeholders `<azure-managed-grafana-name>`, `<service-account-name>` and `<token-name>` with your own information.
+
+```azurecli-interactive
+az grafana service-account token delete --name <azure-managed-grafana-name> --service-account <service-account-name> --token <token-name>
+```
+++
+## Next steps
+
+In this how-to guide, you learned how to create and manage service accounts and tokens to run automated operations in Azure Managed Grafana. When you're ready, explore more articles:
+
+> [!div class="nextstepaction"]
+> [Enable zone redundancy](how-to-enable-zone-redundancy.md)
marketplace Isv Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-customer.md
Use this page to define private offer terms, notification contacts, and pricing
- **Customer Information** - Specify the billing account for the customer receiving this private offer. This will only be available to the configured customer billing account and the customer will need to be an owner or contributor or signatory on the billing account to accept the offer.
- > [!NOTE]
- > Customers can find their billing account ID in 2 ways. 1) In the [Azure portal](https://aka.ms/PrivateOfferAzurePortal) under **Cost Management + Billing** **Properties** **ID**. A user in the customer organization should have access to the billing account to see the ID in Azure Portal. 2) If customer knows the subscription they plan to use for the purchase, click on **Subscriptions**, click on the relevant subscription **Properties** (or Billing Properties) **Billing Account ID**. See [Billing account scopes in the Azure portal](/azure/cost-management-billing/manage/view-all-accounts).
+ > [!NOTE]
+ > Customers can find their billing account ID in two ways. 1) In the [Azure portal](https://aka.ms/PrivateOfferAzurePortal), under **Cost Management + Billing** > **Properties** > **ID**. A user in the customer organization should have access to the billing account to see the ID in the Azure portal. 2) If the customer knows the subscription they plan to use for the purchase, select **Subscriptions**, select the relevant subscription, then **Properties** (or Billing Properties) > **Billing Account ID**. See [Billing account scopes in the Azure portal](/azure/cost-management-billing/manage/view-all-accounts).
:::image type="content" source="media/isv-customer/customer-properties.png" alt-text="Shows the offer Properties tab in Partner Center."::: -- **Private offer terms** - Specify the duration, accept-by date, and terms:
+![Screenshot showing subscription name properties.](media/isv-customer/subscription-name-properties.png)
+
- - **Start date** ΓÇô Choose **Accepted date** if you want the private offer to start as soon as the customer accepts it. If a private offer is extended to an existing customer of a Pay-as-you-go product, this will make the private price applicable for the entire month. To have your private offer start in an upcoming month, select **Specific month** and choose one. The start date for this option will always be the first day of the selected month.
+- **Private offer terms** - Specify the duration, accept-by date, and terms:
+ - **Start date** - Choose **Accepted date** if you want the private offer to start as soon as the customer accepts it. If a private offer is extended to an existing customer of a Pay-as-you-go product, this will make the private price applicable for the entire month. To have your private offer start in an upcoming month, select **Specific month** and choose one. The start date for this option will always be the first day of the selected month.
- **End date** - Choose the month for your private offer's **End date**. This will always be the last day of the selected month. - **Accept by** - Choose the expiration date for your private offer. Your customer must accept the private offer prior to this date. - **Terms and conditions** - Optionally, upload a PDF with terms and conditions your customer must accept as part of the private offer.
The payout amount and agency fee that Microsoft charges is based on the private
- [ISV to Customer Private Offer Acceptance](https://www.youtube.com/watch?v=HWpLOOtfWZs) - [ISV to Customer Private Offer Purchase Experience](https://www.youtube.com/watch?v=mPX7gqdHqBk) ++
marketplace Price Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/price-changes.md
For a price decrease to a Software as a service offer to take effect on the firs
For a price increase to a Software as a service offer to take effect on the first of a future month, 90 days out, publish the price change at least four days before the end of the current month. > [!Note]
-> Offers will be billed to customers in the customersΓÇÖ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies).
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies#how-we-convert-currency).
## Changing the flat fee of a SaaS or Azure app offer To update the monthly or yearly price of a SaaS or Azure app offer:
Customers are billed the new price for consumption of the resource that happens
## Canceling or modifying a price change
-If the price change was configured within the last 2 days, it can be cancelled using the cancel button next to the price change expected on date and then publishing the changes. For a price change configured more than 2 days ago that has not yet taken affect, [submit a support request](https://partner.microsoft.com/support/?stage=1), that includes the Plan ID, price, and the market (if the change was market specific) in the request.
+If the price change was configured within the last 2 days, it can be canceled using the cancel button next to the price change expected on date and then publishing the changes. For a price change configured more than 2 days ago that has not yet taken effect, [submit a support request](https://partner.microsoft.com/support/?stage=1) that includes the Plan ID, price, and the market (if the change was market specific) in the request.
If the price change was an increase and the cancelation was after the 2-day period, we will email the customers a second time to inform them of the cancelation. After the price change is canceled, follow the steps in the appropriate part of this article to schedule a new price change with the needed modifications. ## Next steps - Sign in to [Partner Center](https://go.microsoft.com/fwlink/?linkid=2166002).
mysql Tutorial Power Automate With Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-power-automate-with-mysql.md
+
+ Title: Create a Power Automate flow with Azure Database for MySQL Flexible Server
+description: Create a Power Automate flow with Azure Database for MySQL Flexible Server
+++++ Last updated : 1/15/2023++
+# Tutorial: Create a Power Automate flow app with Azure Database for MySQL Flexible Server
+
+Power Automate is a service that helps you create automated workflows between your favorite apps and services to synchronize files, get notifications, collect data, and more. Here are a few examples of what you can do with Power Automate.
+
+- Automate business processes
+- Move business data between systems on a schedule
+- Connect to more than 500 data sources or any publicly available API
+- Perform CRUD (create, read, update, delete) operations on data
+
+This quickstart shows how to create an automated workflow using a Power Automate flow with the [Azure Database for MySQL connector](/connectors/azuremysql/).
+
+## Prerequisites
+
+* An account on [flow.microsoft.com](https://flow.microsoft.com).
+
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free).
+
+- Create an Azure Database for MySQL Flexible server using [Azure portal](./quickstart-create-server-portal.md) <br/> or [Azure CLI](./quickstart-create-server-cli.md) if you don't have one.
+- Populate the database server with this [sample data](https://raw.githubusercontent.com/Azure-Samples/mysql-database-samples/main/mysqltutorial.org/mysql-classicmodesl.sql).
+
+[Having issues? Let us know](https://github.com/MicrosoftDocs/azure-docs/issues)
+
+## Overview of cloud flows
+
+Create a cloud flow when you want your automation to be triggered either automatically, instantly, or via a schedule. Here are the types of flows you can create and then use with the Azure Database for MySQL connector.
+
+| **Flow type** | **Use case** | **Automation target** |
+|-|--|-|
+| Automated cloud flows | Create an automation that is triggered by an event such as arrival of an email from a specific person, or a mention of your company in social media.| Connectors for cloud or on-premises services connect your accounts and enable them to talk to each other. |
+| Instant cloud flows | Start an automation with a click of a button. You can automate for repetitive tasks from your Desktop or Mobile devices. For example, instantly send a reminder to the team with a push of a button from your mobile device. | Wide range of tasks such as requesting an approval, an action in Teams or SharePoint. |
+| Scheduled flows | Schedule an automation such as daily data upload to SharePoint or a database. |Tasks that need to be automated on a schedule.
+
+For this tutorial, we'll use an **instant cloud flow** that can be triggered manually from any device. Easy-to-share instant flows automate tasks so you don't have to repeat yourself.
+
+## Specify an event to start the flow
+Follow the steps to create an instant cloud flow with a manual trigger.
+
+1. In [Power Automate](https://flow.microsoft.com), select **Create** from the navigation bar on the left.
+2. Under **Start from blank**, select **Instant cloud flow**.
+3. Give your flow a name in the **Flow name** field and select **Manually trigger a flow**.
+
+ :::image type="content" source="./media/tutorial-power-automate-with-mysql/create-instant-cloud-flow.png" alt-text="Screenshot that shows how to create instant cloud flow app.":::
+
+4. Select the **Create** button at the bottom of the screen.
+
+## Create a MySQL operation
+An operation is an action. A Power Automate flow lets you add one or more advanced options and multiple actions for the same trigger. For example, add an advanced option that sends an email message as high priority. In addition to sending mail when an item is added to a list created in Microsoft Lists, create a file in Dropbox that contains the same information.
+
+1. Once the flow app is created, select **Next Step** to create an operation.
+2. In the **Search connectors and actions** box, enter **Azure Database for MySQL**.
+3. Select the **Azure Database for MySQL** connector, and then select the **Get Rows** operation. The **Get Rows** operation allows you to get all the rows from a table or query.
+
+ :::image type="content" source="./media/tutorial-power-automate-with-mysql/azure-mysql-connector-add-action.png" alt-text="Screenshot that shows how to view all the actions for Azure database for MySQL connector.":::
+
+4. Add a new MySQL connection and enter the **authentication type**, **server name**, **database name**, **username**, and **password**. Select **encrypt connection** if SSL is enabled on your MySQL server.
+
+ :::image type="content" source="./media/tutorial-power-automate-with-mysql/add-mysql-connection-information.png" alt-text="Screenshot that shows adding a new MySQL connection for the Azure Database for MySQL server.":::
+
+ > [!NOTE]
+ > If you get an error **Test connection failed. Details: Authentication to host `'servername'` for user `'username'` using method 'mysql_native_password' failed with message: Access denied for user `'username'@'IP address'`(using password: YES)**, update the firewall rules on the MySQL server in the [Azure portal](https://portal.azure.com) with this IP address. A CLI sketch for adding the rule follows these steps.
+
+5. After the connection is successfully added, provide the **server name**, **database name**, and **table name** parameters for the **Get Rows** operation using the newly added connection. Select **advanced options** to add more filters or limit the number of rows returned.
+
+ :::image type="content" source="./media/tutorial-power-automate-with-mysql/get-rows-from-table.png" alt-text="Screenshot that shows configuring Get Rows operation.":::
+
+6. Select **Save**.
+
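+If the connection test keeps failing, it can help to confirm the server name, credentials, firewall rules, and SSL settings from a machine you control before retrying in Power Automate. The following is a minimal sketch only, not part of the connector itself; it assumes the third-party `pymysql` package and placeholder server, user, database, and certificate values that you'd replace with your own:
+
+```python
+import pymysql
+
+# Placeholder values - replace with your Azure Database for MySQL details.
+connection = pymysql.connect(
+    host="<your-server-name>.mysql.database.azure.com",
+    user="<your-admin-user>",
+    password="<your-password>",
+    database="<your-database>",
+    ssl={"ca": "<path-to-ca-certificate>.pem"},  # omit if SSL isn't enforced on the server
+)
+
+with connection.cursor() as cursor:
+    cursor.execute("SELECT VERSION()")
+    print(cursor.fetchone())  # prints the MySQL server version if the connection works
+
+connection.close()
+```
+
+If this check fails with an access-denied error, add your client IP address to the server's firewall rules as described in the note above.
+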
+## Test and run your flow
+After saving the flow, test it and run the flow app.
+
+1. Select **Flow checker** to see if there are any errors that need to be resolved.
+2. Select **Test** and then select **Manually** to test the trigger.
+3. Select **Run flow**.
+4. When the flow is successfully executed, you can select **click to download** in the output section to see the JSON response received.
+
+ :::image type="content" source="./media/tutorial-power-automate-with-mysql/run-flow-to-get-rows-from-table.png" alt-text="Screenshot that shows output of the run.":::
+
+## Next steps
+[Azure database for MySQL connector](/connectors/azuremysql/) reference
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/create.md
Title: Create Datadog description: This article describes how to use the Azure portal to create an instance of Datadog.- Previously updated : 06/08/2022 ++ Last updated : 01/06/2023 +
-# QuickStart: Get started with Datadog by creating new instance
+# QuickStart: Get started with Datadog - An Azure Native ISV Service by creating new instance
-In this quickstart, you'll create a new instance of Datadog. You can either create a new Datadog organization or [link to an existing Datadog organization](link-to-existing-organization.md).
+In this quickstart, you'll create a new instance of Datadog - An Azure Native ISV Service. You can either create a new Datadog organization or [link to an existing Datadog organization](link-to-existing-organization.md).
## Prerequisites
Use Azure resource tags to configure which metrics and logs are sent to Datadog.
Tag rules for sending **metrics** are: -- By default, metrics are collected for all resources, except virtual machines, virtual machine scale sets, and app service plans.-- Virtual machines, virtual machine scale sets, and app service plans with _Include_ tags send metrics to Datadog.-- Virtual machines, virtual machine scale sets, and app service plans with _Exclude_ tags don't send metrics to Datadog.
+- By default, metrics are collected for all resources, except virtual machines, Virtual Machine Scale Sets, and App Service plans.
+- Virtual machines, Virtual Machine Scale Sets, and App Service plans with _Include_ tags send metrics to Datadog.
+- Virtual machines, Virtual Machine Scale Sets, and App Service plans with _Exclude_ tags don't send metrics to Datadog.
- If there's a conflict between inclusion and exclusion rules, exclusion takes priority. Tag rules for sending **logs** are:
Tag rules for sending **logs** are:
- Azure resources with _Exclude_ tags don't send logs to Datadog. - If there's a conflict between inclusion and exclusion rules, exclusion takes priority.
-For example, the following screenshot shows a tag rule where only those virtual machines, virtual machine scale sets, and app service plans tagged as _Datadog = True_ send metrics to Datadog.
+For example, the following screenshot shows a tag rule where only those virtual machines, Virtual Machine Scale Sets, and App Service plans tagged as _Datadog = True_ send metrics to Datadog.
:::image type="content" source="media/create/config-metrics-logs.png" alt-text="Screenshot of how to configure metrics and logs in Azure for Datadog.":::
When the process completes, select **Go to Resource** to see the Datadog resourc
## Next steps
-> [!div class="nextstepaction"]
-> [Manage the Datadog resource](manage.md)
+- [Manage the Datadog resource](manage.md)
partner-solutions Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/get-support.md
Title: Get support for Datadog resource description: This article describes how to contact support for a Datadog resource.- Previously updated : 05/28/2021 + Last updated : 01/06/2023+
-# Get support for Datadog resource
+# Get support for Datadog - An Azure Native ISV Service
-This article describes how to contact support when working with a Datadog resource. Before contacting support, see [Fix common errors](troubleshoot.md).
+This article describes how to contact support when working with Datadog - An Azure Native ISV Service. Before contacting support, see [Fix common errors](troubleshoot.md).
## Contact support
-To contact support about the Azure Datadog integration, select **New Support request** in the left pane. Select the link to the Datadog portal.
+To contact support about the Datadog - An Azure Native ISV Service, select **New Support request** in the left pane. Select the link to the Datadog portal.
:::image type="content" source="media/get-support/support-request.png" alt-text="Create a new Datadog support request" border="true"::: ## Next steps
-For potential solutions, see [Fix common errors](troubleshoot.md).
-
-To learn about making changes to your existing Datadog resource, see [Manage the Datadog resource](manage.md).
+- For potential solutions, see [Fix common errors](troubleshoot.md).
+- To learn about making changes to your existing Datadog resource, see [Manage the Datadog resource](manage.md).
partner-solutions Link To Existing Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/link-to-existing-organization.md
Title: Link to existing Datadog description: This article describes how to use the Azure portal to link to an existing instance of Datadog. Previously updated : 05/28/2021 Last updated : 01/06/2023
In this quickstart, you'll link to an existing organization of Datadog. You can
## Prerequisites
-Before creating your first instance of Datadog in Azure, [configure your environment](prerequisites.md). These steps must be completed before continuing with the next steps in this quickstart.
+Before creating your first instance of Datadog - An Azure Native ISV Service, [configure your environment](prerequisites.md). These steps must be completed before continuing with the next steps in this quickstart.
## Find offer
-Use the Azure portal to find Datadog.
+Use the Azure portal to find Datadog - An Azure Native ISV Service.
1. Go to the [Azure portal](https://portal.azure.com/) and sign in.
Use the Azure portal to find Datadog.
:::image type="content" source="media/link-to-existing-organization/marketplace.png" alt-text="Marketplace icon.":::
-1. In the Marketplace, search for **Datadog**.
+1. In the Marketplace, search for **Datadog - An Azure Native ISV Service**.
1. In the plan overview screen, select **Set up + subscribe**.
Use the Azure portal to find Datadog.
The portal displays a selection asking whether you would like to create a Datadog organization or link Azure subscription to an existing Datadog organization.
-If you are linking to an existing Datadog organization, select **Create** under the **Link Azure subscription to an existing Datadog organization**
+If you're linking to an existing Datadog organization, select **Create** under **Link Azure subscription to an existing Datadog organization**.
:::image type="content" source="media/link-to-existing-organization/datadog-create-link-selection.png" alt-text="Create or link a Datadog organization" border="true":::
Provide the following values.
Select **Link to Datadog organization**. The link opens a Datadog authentication window. Sign in to Datadog.
-By default, Azure links your current Datadog organization to your Datadog resource. If you would like to link to a different organization, select the appropriate organization in the authentication window, as shown below.
+By default, Azure links your current Datadog organization to your Datadog resource. If you would like to link to a different organization, select the appropriate organization in the authentication window.
:::image type="content" source="media/link-to-existing-organization/select-datadog-organization.png" alt-text="Select appropriate Datadog organization to link" border="true":::
Use Azure resource tags to configure which metrics and logs are sent to Datadog.
Tag rules for sending **metrics** are: -- By default, metrics are collected for all resources, except virtual machines, virtual machine scale sets, and app service plans.-- Virtual machines, virtual machine scale sets, and app service plans with *Include* tags send metrics to Datadog.-- Virtual machines, virtual machine scale sets, and app service plans with *Exclude* tags don't send metrics to Datadog.
+- By default, metrics are collected for all resources, except **Virtual Machines, Virtual Machine Scale Sets, and App Service Plans**.
+- **Virtual Machines, Virtual Machine Scale Sets, and App Service Plans** with *Include* tags send metrics to Datadog.
+- **Virtual Machines, Virtual Machine Scale Sets, and App Service Plans** with *Exclude* tags don't send metrics to Datadog.
- If there's a conflict between inclusion and exclusion rules, exclusion takes priority Tag rules for sending **logs** are:
Tag rules for sending **logs** are:
- Azure resources with *Exclude* tags don't send logs to Datadog. - If there's a conflict between inclusion and exclusion rules, exclusion takes priority.
-For example, the screenshot below shows a tag rule where only those virtual machines, virtual machine scale sets, and app service plans tagged as *Datadog = True* send metrics to Datadog.
+For example, the screenshot shows a tag rule where only those **Virtual Machines, Virtual Machine Scale Sets, and App Service Plans** tagged as *Datadog = True* send metrics to Datadog.
:::image type="content" source="media/link-to-existing-organization/config-metrics-logs.png" alt-text="Configure Logs and Metrics." border="true":::
When the process completes, select **Go to Resource** to see the Datadog resourc
## Next steps
-> [!div class="nextstepaction"]
-> [Manage the Datadog resource](manage.md)
+- [Manage the Datadog resource](manage.md)
+
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/manage.md
Title: Manage a Datadog resource description: This article describes management of a Datadog resource in the Azure portal. How to set up single sign-on, delete a Confluent organization, and get support.- Previously updated : 05/28/2021 + + Last updated : 01/06/2023
-# Manage the Datadog resource
+# Manage the Datadog - An Azure Native ISV Service resource
-This article shows how to manage the settings for your Azure integration with Datadog.
+This article shows how to manage the settings for your Datadog - An Azure Native ISV Service.
## Resource overview
If you would like to reconfigure single sign-on, select **Single sign-on** in th
To establish single sign-on through Azure Active directory, select **Enable single sign-on through Azure Active Directory**.
-The portal retrieves the appropriate Datadog application from Azure Active Directory. The app comes from the enterprise app name you selected when setting up integration. Select the Datadog app name as shown below:
+The portal retrieves the appropriate Datadog application from Azure Active Directory. The app comes from the enterprise app name you selected when setting up integration. Select the Datadog app name:
:::image type="content" source="media/manage/reconfigure-single-sign-on.png" alt-text="Reconfigure single sign-on application." border="true"::: ΓÇâ
To change the Datadog billing plan, go to **Overview** and select **Change Plan*
:::image type="content" source="media/manage/datadog-select-change-plan.png" alt-text="Select change Datadog billing plan." border="true":::
-The portal retrieves all the available Datadog plans for your tenant. Select the appropriate plan and click on **Change Plan**.
+The portal retrieves all the available Datadog plans for your tenant. Select the appropriate plan and then select **Change Plan**.
:::image type="content" source="media/manage/datadog-change-plan.png" alt-text="Select the Datadog billing plan to change." border="true"::: ΓÇâ
To disable the Azure integration with Datadog, go to **Overview**. Select **Disa
:::image type="content" source="media/manage/disable.png" alt-text="Disable Datadog resource." border="true":::
-To enable the Azure integration with Datadog, go to **Overview**. Select **Enable** and **OK**. Selecting **Enable** retrieves any previous configuration for metrics and logs. The configuration determines which Azure resources emit metrics and logs to Datadog. After completing the step, metrics and logs are sent to Datadog.
+To enable the Azure integration with Datadog, go to **Overview**. Select **Enable** and **OK**. Selecting **Enable** retrieves any previous configuration for metrics and logs. The configuration determines which Azure resources emit metrics and logs to Datadog. After you complete this step, metrics and logs are sent to Datadog.
:::image type="content" source="media/manage/enable.png" alt-text="Enable Datadog resource." border="true":::
If more than one Datadog resource is mapped to the Datadog organization, deletin
## Next steps
-For help with troubleshooting, see [Troubleshooting Datadog solutions](troubleshoot.md).
+- For help with troubleshooting, see [Troubleshooting Datadog solutions](troubleshoot.md).
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/overview.md
Title: Datadog overview description: Learn about using Datadog in the Azure Marketplace.- Previously updated : 05/28/2021 + + Last updated : 01/06/2023+
-# What is Datadog?
+# What is Datadog - An Azure Native ISV Service?
## Overview Datadog is a monitoring and analytics platform for large-scale applications. It encompasses infrastructure monitoring, application performance monitoring, log management, and user-experience monitoring. Datadog aggregates data across your entire stack with 400+ integrations for troubleshooting, alerting, and graphing. You can use it as a single source for troubleshooting, optimizing performance, and cross-team collaboration.
-Datadog's offering in the Azure Marketplace enables you to manage Datadog in the Azure console as an integrated service. This availability means you can implement Datadog as a monitoring solution for your cloud workloads through a streamlined workflow. The workflow covers everything from procurement to configuration. The onboarding experience simplifies how you start monitoring the health and performance of your applications, whether they're based entirely in Azure or spread across hybrid or multi-cloud environments.
+Datadog's offering in the Azure Marketplace enables you to manage Datadog in the Azure console as an integrated service. This availability means you can implement Datadog as a monitoring solution for your cloud workloads through a streamlined workflow. The workflow covers everything from procurement to configuration. The onboarding experience simplifies how you start monitoring the health and performance of your applications, whether they're based entirely in Azure or spread across hybrid or multicloud environments.
-You provision the Datadog resources through a resource provider named `Microsoft.Datadog`. You can create, provision, and manage Datadog organization resources through the [Azure portal](https://portal.azure.com/). Datadog owns and runs the software as a service (SaaS) application including the organization and API keys.
+You create the Datadog resources through a resource provider named `Microsoft.Datadog`. You can create and manage Datadog organization resources through the [Azure portal](https://portal.azure.com/). Datadog owns and runs the software as a service (SaaS) application including the organization and API keys.
## Capabilities
-Integrating Datadog with Azure provides the following capabilities:
+Datadog - An Azure Native ISV Service provides the following capabilities:
-- **Integrated onboarding** - Datadog is an integrated service on Azure. You can provision Datadog and manage the integration through the Azure portal.
+- **Integrated onboarding** - Datadog is an integrated service on Azure. You can create a Datadog resource and manage the integration through the Azure portal.
- **Unified billing** - Datadog costs are reported through Azure monthly bill. - **Single sign-on to Datadog** - You don't need a separate authentication for the Datadog portal. - **Log forwarder** - Enables automated forwarding of subscription activity and resource logs to Datadog.
Integrating Datadog with Azure provides the following capabilities:
## Datadog links
-For more help using the Datadog service, see the following links to the [Datadog website](https://www.datadoghq.com/):
+For more help using the Datadog - An Azure Native ISV service, see the following links to the [Datadog website](https://www.datadoghq.com/):
- [Azure solution guide](https://www.datadoghq.com/solutions/azure/) - [Blog announcing the Datadog <> Azure Partnership](https://www.datadoghq.com/blog/azure-datadog-partnership/)
For more help using the Datadog service, see the following links to the [Datadog
## Next steps
-To create an instance of Datadog, see [QuickStart: Get started with Datadog](create.md).
+- To create an instance of Datadog, see [QuickStart: Get started with Datadog - An Azure Native ISV Service](create.md).
partner-solutions Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/prerequisites.md
Title: Prerequisites for Datadog on Azure description: This article describes how to configure your Azure environment to create an instance of Datadog.- Previously updated : 05/28/2021 + + Last updated : 01/06/2023
-# Configure environment before Datadog deployment
+# Configure environment before Datadog - An Azure Native ISV Service deployment
-This article describes how to set up your environment before deploying your first instance of Datadog. These conditions are prerequisites for completing the quickstarts.
+This article describes how to set up your environment before deploying your first instance of Datadog - An Azure Native ISV Service. These conditions are prerequisites for completing the quickstarts.
## Access control
-To set up the Azure Datadog integration, you must have **Owner** access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md) before starting the setup.
+To set up the Datadog - An Azure Native ISV Service, you must have **Owner** access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md) before starting the setup.
## Add enterprise application
-To use the Security Assertion Markup Language (SAML) Single Sign-On (SSO) feature within the Datadog resource, you must set up an enterprise application. To add an enterprise application, you need one of these roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+To use the Security Assertion Markup Language (SAML) single sign-on (SSO) feature within the Datadog resource, you must set up an enterprise application. To add an enterprise application, you need one of these roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
Use the following steps to set up the enterprise application:
Use the following steps to set up the enterprise application:
## Next steps
-To create an instance of Datadog, see [QuickStart: Get started with Datadog](create.md).
+To create an instance of Datadog, see [QuickStart: Get started with Datadog - An Azure Native ISV Service](create.md).
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/troubleshoot.md
Title: Troubleshooting for Datadog description: This article provides information about troubleshooting for Datadog on Azure.- Previously updated : 05/28/2021 + + Last updated : 01/06/2023
-# Fix common errors for Datadog on Azure
+# Fix common errors for Datadog - An Azure Native ISV Service
-This document contains information about troubleshooting your solutions that use Datadog.
+This document contains information about troubleshooting your solutions that use Datadog - An Azure Native ISV Service.
## Purchase errors * Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription.
- Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md).
+ Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](/azure/cost-management-billing/manage/change-credit-card).
* The EA subscription doesn't allow Marketplace purchases.
- Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support).
+ Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support).
-## Unable to create Datadog resource
+## Unable to create Datadog - An Azure Native ISV Service resource
To set up the Azure Datadog integration, you must have **Owner** access on the Azure subscription. Ensure you have the appropriate access before starting the setup. ## Single sign-on errors
-**Unable to save Single sign-on settings** - This error happens where there's another Enterprise app that is using the Datadog SAML identifier. To find which app is using it, select **Edit** on the Basic SAML Configuration section.
+- **Unable to save Single sign-on settings**
+ - This error happens where there's another Enterprise app that is using the Datadog SAML identifier. To find which app is using it, select **Edit** on the Basic SAML Configuration section.
-To resolve this issue, either disable the other app or use the other app as the Enterprise app to set up SAML SSO with Datadog. If you decide to use the other app, ensure the app has the [required settings](create.md#configure-single-sign-on).
+ To resolve this issue, either disable the other app or use the other app as the Enterprise app to set up SAML SSO with Datadog. If you decide to use the other app, ensure the app has the [required settings](create.md#configure-single-sign-on).
-**App not showing in Single sign-on setting page** - First, search for the application ID. If no result is shown, check the SAML settings of the app. The grid only shows apps with correct SAML settings.
+- **App not showing in Single sign-on setting page**
+ - First, search for the application ID. If no result is shown, check the SAML settings of the app. The grid only shows apps with correct SAML settings.
-The Identifier URL must be `https://us3.datadoghq.com/account/saml/metadata.xml`.
-
-The reply URL must be `https://us3.datadoghq.com/account/saml/assertion`.
-
-The following image shows the correct values.
+ The Identifier URL must be `https://us3.datadoghq.com/account/saml/metadata.xml`.
+
+ The reply URL must be `https://us3.datadoghq.com/account/saml/assertion`.
+
+ The following image shows the correct values.
+ :::image type="content" source="media/troubleshoot/troubleshooting.png" alt-text="Check SAML settings for the Datadog application in Azure A D." border="true":::
-**Guest users invited to the tenant are unable to access Single sign-on** - Some users have two email addresses in Azure portal. Typically, one email is the user principal name (UPN) and the other email is an alternative email.
+- **Guest users invited to the tenant are unable to access Single sign-on**
+ - Some users have two email addresses in Azure portal. Typically, one email is the user principal name (UPN) and the other email is an alternative email.
-When inviting guest user, use the home tenant UPN. By using the UPN, you keep the email address in-sync during the Single sign-on process. You can find the UPN by looking for the email address in the top-right corner of the user's Azure portal.
+ When inviting a guest user, use the home tenant UPN. By using the UPN, you keep the email address in sync during the single sign-on process. You can find the UPN by looking for the email address in the top-right corner of the user's Azure portal.
## Logs not being emitted
-Only resources listed in the Azure Monitor resource log categories emit logs to Datadog. To verify whether the resource is emitting logs to Datadog, navigate to Azure diagnostic setting for the specific resource. Verify that there's a Datadog diagnostic setting.
+- Only resources listed in the Azure Monitor resource log categories emit logs to Datadog.
+
+ To verify whether the resource is emitting logs to Datadog:
+
+ 1. Navigate to Azure diagnostic setting for the specific resource.
+
+ 1. Verify that there's a Datadog diagnostic setting.
+
+ :::image type="content" source="media/troubleshoot/diagnostic-setting.png" alt-text="Datadog diagnostic setting on the Azure resource" border="true":::
+
+- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+
+- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal).
+- Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings.
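+
+If you want to script the first check above (confirming that a resource has a Datadog diagnostic setting), the following is a minimal sketch using the Azure Monitor REST API. It assumes the `azure-identity` and `requests` packages, a placeholder resource ID, and the `2021-05-01-preview` API version; adjust these to your environment.
+
+```python
+import requests
+from azure.identity import DefaultAzureCredential
+
+# Placeholder resource ID - replace with the resource you expect to send logs to Datadog.
+resource_id = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/<type>/<name>"
+
+token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
+url = (
+    f"https://management.azure.com{resource_id}"
+    "/providers/Microsoft.Insights/diagnosticSettings?api-version=2021-05-01-preview"
+)
+response = requests.get(url, headers={"Authorization": f"Bearer {token}"})
+response.raise_for_status()
+
+# Print the names of all diagnostic settings; look for the one created for Datadog.
+for setting in response.json().get("value", []):
+    print(setting["name"])
+```
+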
## Metrics not being emitted The Datadog resource is assigned a **Monitoring Reader** role in the appropriate Azure subscription. This role enables the Datadog resource to collect metrics and send those metrics to Datadog.
-To verify the resource has the correct role assignment, open the Azure portal and select the subscription. In the left pane, select **Access Control (IAM)**. Search for the Datadog resource name. Confirm that the Datadog resource has the **Monitoring Reader** role assignment, as shown below.
+To verify the resource has the correct role assignment, open the Azure portal and select the subscription. In the left pane, select **Access Control (IAM)**. Search for the Datadog resource name. Confirm that the Datadog resource has the **Monitoring Reader** role assignment.
:::image type="content" source="media/troubleshoot/datadog-role-assignment.png" alt-text="Datadog role assignment in the Azure subscription" border="true"::: ## Datadog agent installation fails
-The Azure Datadog integration provides you the ability to install Datadog agent on a virtual machine or app service. For configuring the Datadog agent, the API key selected as **Default Key** in the API Keys screen is used. If a default key isn't selected, the Datadog agent installation will fail.
+The Azure Datadog integration provides you the ability to install Datadog agent on a virtual machine or app service. The API key selected as **Default Key** in the API Keys screen is used to configure the Datadog agent. If a default key isn't selected, the Datadog agent installation fails.
If the Datadog agent has been configured with an incorrect key, navigate to the API keys screen and change the **Default Key**. You'll have to uninstall the Datadog agent and reinstall it to configure the virtual machine with the new API keys. ## Next steps
-Learn about [managing your instance](manage.md) of Datadog.
+- Learn about [managing your instance](manage.md) of Datadog.
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
Title: Troubleshooting Azure Native Dynatrace Service description: This article provides information about troubleshooting Dynatrace for Azure -- + Previously updated : 10/12/2022++ Last updated : 01/06/2023
This document contains information about troubleshooting your solutions that use
- To set up the Azure Native Dynatrace Service, you must have **Owner** or **Contributor** access on the Azure subscription. Ensure you have the appropriate access before starting the setup. -- Create fails because Last Name is empty. This happens when the user info in Azure AD is incomplete and doesn't contain Last Name. Contact your Azure tenant's global administrator to rectify this and try again.
+- Create fails because Last Name is empty. The issue happens when the user info in Azure AD is incomplete and doesn't contain Last Name. Contact your Azure tenant's global administrator to rectify the issue and try again.
+
+### Logs not being emitted
+
+- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+
+- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal)
+
+- Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings.
+ ### Single sign-on errors -- **Single sign-on configuration indicates lack of permissions** - This occurs when the user that is trying to configure single sign-on doesn't have Manage users permissions for the Dynatrace account. For a description of how to configure this permission, see [here](https://www.dynatrace.com/support/help/shortlink/azure-native-integration#setup).-- **Unable to save single sign-on settings** - This error happens when there's another Enterprise app that is using the Dynatrace SAML identifier. To find which app is using it, select **Edit** on the Basic **SAML** configuration section.
- To resolve this issue, either disable the other app or use the other app as the Enterprise app to set up SAML SSO.
+- **Single sign-on configuration indicates lack of permissions**
+ - Occurs when the user that is trying to configure single sign-on doesn't have Manage users permissions for the Dynatrace account. For a description of how to configure this permission, see [here](https://www.dynatrace.com/support/help/shortlink/azure-native-integration#setup).
+- **Unable to save single sign-on settings**
+ - Error happens when there's another Enterprise app that is using the Dynatrace SAML identifier. To find which app is using it, select **Edit** on the Basic **SAML** configuration section.
+ To resolve this issue, either disable the other app or use the other app as the Enterprise app to set up SAML SSO.
- **App not showing in Single sign-on settings page** - First, search for application ID. If no result is shown, check the SAML settings of the app. The grid only shows apps with correct SAML settings.
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/troubleshoot.md
Title: Troubleshooting Elastic description: This article provides information about troubleshooting Elastic integration with Azure Previously updated : 09/02/2021 Last updated : 01/06/2023
This document contains information about troubleshooting your solutions that use
## Unable to create an Elastic resource
-Elastic integration with Azure can only be set up by users who have *Owner* or *Contributor* access on the Azure subscription. [Confirm that you have the appropriate access](../../role-based-access-control/check-access.md).
+Elastic integration with Azure can only be set up by users who have *Owner* or *Contributor* access on the Azure subscription. [Confirm that you have the appropriate access](/azure/role-based-access-control/check-access).
## Logs not being emitted to Elastic
-Only resources listed in [Azure Monitor resource log categories](../../azure-monitor/essentials/resource-logs-categories.md) emit logs to Elastic. To verify whether the resource is emitting logs to Elastic, navigate to [Azure diagnostic setting](../../azure-monitor/essentials/diagnostic-settings.md) for the resource. Verify that there's a diagnostic setting option available.
+- Only resources listed in [Azure Monitor resource log categories](../../azure-monitor/essentials/resource-logs-categories.md) emit logs to Elastic. To verify whether the resource is emitting logs to Elastic:
+ 1. Navigate to [Azure diagnostic setting](../../azure-monitor/essentials/diagnostic-settings.md) for the resource.
+ 1. Verify that there's a diagnostic setting option available.
+
+ :::image type="content" source="media/troubleshoot/check-diagnostic-setting.png" alt-text="Verify diagnostic setting":::
+
+- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](/azure/azure-monitor/essentials/resource-logs-categories).
+
+- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal)
+
+- Export of Metrics data is not supported currently by the partner solutions under Azure Monitor diagnostic settings.
## Purchase errors * Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription.
- Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md).
+ Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](/azure/cost-management-billing/manage/change-credit-card).
* The EA subscription doesn't allow Marketplace purchases.
- Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases).
+ Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases).
+ ## Get support
partner-solutions Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/logzio/troubleshoot.md
Title: Troubleshooting Logz.io description: This article describes how to troubleshoot Logz.io integration with Azure.- Previously updated : 05/24/2022 + + Last updated : 01/06/2023+ # Troubleshooting Logz.io integration with Azure
This article describes how to troubleshoot the Logz.io integration with Azure.
## Owner role needed to create resource
-To set up Logz.io, you must be assigned the [Owner role](../../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) in the Azure subscription. Before you begin this integration, [check your access](../../role-based-access-control/check-access.md).
+To set up Logz.io, you must be assigned the [Owner role](/azure/role-based-access-control/rbac-and-directory-admin-roles) in the Azure subscription. Before you begin this integration, [check your access](/azure/role-based-access-control/check-access).
## Single sign-on errors
Use the following patterns to add new values:
- **Identifier**: `urn:auth0:logzio:<Application ID>` - **Reply URL**: `https://logzio.auth0.com/login/callback?connection=<Application ID>` +
+### Logs not being sent to Logz.io
-## Logs not being sent to Logz.io
+- Only resources listed in [Azure Monitor resource log categories](/azure/azure-monitor/essentials/resource-logs-categories) send logs to Logz.io. To verify whether a resource is sending logs to Logz.io:
-Only resources listed in [Azure Monitor resource log categories](../../azure-monitor/essentials/resource-logs-categories.md), will send logs to Logz.io.
+ 1. Go to [Azure diagnostic setting](/azure/azure-monitor/essentials/diagnostic-settings) for the specific resource.
+ 1. Verify that there's a Logz.io diagnostic setting.
-To verify whether a resource is sending logs to Logz.io:
+ :::image type="content" source="media/troubleshoot/diagnostics.png" alt-text="Screenshot of the Azure monitoring diagnostic settings for Logz.io.":::
-1. Go to [Azure diagnostic setting](../../azure-monitor/essentials/diagnostic-settings.md) for the specific resource.
-1. Verify that there's a Logz.io diagnostic setting.
+- Limit of five diagnostic settings reached. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](/azure/azure-monitor/essentials/diagnostic-settings?tabs=portal).
+- Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings.
## Register resource provider
-You must register `Microsoft.Logz` in the Azure subscription that contains the Logz.io resource, and any subscriptions with resources that send data to Logz.io. For more information about troubleshooting resource provider registration, see [Resolve errors for resource provider registration](../../azure-resource-manager/troubleshooting/error-register-resource-provider.md).
+You must register `Microsoft.Logz` in the Azure subscription that contains the Logz.io resource, and any subscriptions with resources that send data to Logz.io. For more information about troubleshooting resource provider registration, see [Resolve errors for resource provider registration](/azure/azure-resource-manager/troubleshooting/error-register-resource-provider).
## Limit reached in monitored resources Azure Monitor Diagnostics supports a maximum of five diagnostic settings on single resource or subscription. When you reach that limit, the resource will show **Limit reached** in **Monitored resources**. You can't add monitoring with Logz.io. ## VM extension installation failed A virtual machine (VM) can only be monitored by a single Logz.io account (main or sub). If you try to install the agent on a VM that is already monitored by another account, you see the following error: ## Purchase errors
Purchase fails because a valid credit card isn't connected to the Azure subscrip
To resolve a purchase error: - Use a different Azure subscription.-- Add or update the subscription's credit card or payment method. For more information, see [Add or update a credit card for Azure](../../cost-management-billing/manage/change-credit-card.md).
+- Add or update the subscription's credit card or payment method. For more information, see [Add or update a credit card for Azure](/azure/cost-management-billing/manage/change-credit-card).
You can view the error's output from the resource's deployment page, by selecting **Operation Details**.
purview Catalog Lineage User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/catalog-lineage-user-guide.md
Previously updated : 09/20/2022 Last updated : 01/09/2023 # Microsoft Purview Data Catalog lineage user guide
Databases & storage solutions such as Oracle, Teradata, and SAP have query engin
|**Category**| **Data source** | |||
+|Azure| [Azure Databricks](register-scan-azure-databricks.md)|
|Database| [Cassandra](register-scan-cassandra-source.md)| || [Db2](register-scan-db2.md) | || [Google BigQuery](register-scan-google-bigquery-source.md)|
purview Create Sensitivity Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-sensitivity-label.md
Title: Labeling in the Microsoft Purview Data Map description: Start utilizing sensitivity labels and classifications to enhance your Microsoft Purview assets--++
purview Microsoft Purview Connector Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/microsoft-purview-connector-overview.md
Previously updated : 10/10/2022 Last updated : 01/09/2022
The table below shows the supported capabilities for each data source. Select th
|| [Azure Data Share](how-to-link-azure-data-share.md) | [Yes](how-to-link-azure-data-share.md) | No | [Yes](how-to-link-azure-data-share.md) | No | No| || [Azure Database for MySQL](register-scan-azure-mysql-database.md) | [Yes](register-scan-azure-mysql-database.md#register) | [Yes](register-scan-azure-mysql-database.md#scan) | No* | No | No | || [Azure Database for PostgreSQL](register-scan-azure-postgresql.md) | [Yes](register-scan-azure-postgresql.md#register) | [Yes](register-scan-azure-postgresql.md#scan) | No* | No | No |
+|| [Azure Databricks](register-scan-azure-databricks.md) | [Yes](register-scan-azure-databricks.md#register) | [Yes](register-scan-azure-databricks.md#scan) | [Yes](register-scan-azure-databricks.md#lineage) | No | No |
|| [Azure Dedicated SQL pool (formerly SQL DW)](register-scan-azure-synapse-analytics.md)| [Yes](register-scan-azure-synapse-analytics.md#register) | [Yes](register-scan-azure-synapse-analytics.md#scan)| No* | No | No | || [Azure Files](register-scan-azure-files-storage-source.md)|[Yes](register-scan-azure-files-storage-source.md#register) | [Yes](register-scan-azure-files-storage-source.md#scan) | Limited* | No | No | || [Azure SQL Database](register-scan-azure-sql-database.md)| [Yes](register-scan-azure-sql-database.md#register-the-data-source) |[Yes](register-scan-azure-sql-database.md#scope-and-run-the-scan)| [Yes (Preview)](register-scan-azure-sql-database.md#extract-lineage-preview) | [Yes](register-scan-azure-sql-database.md#set-up-access-policies) (Preview) | No |
purview Register Scan Amazon S3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-amazon-s3.md
Title: Amazon S3 multi-cloud scanning connector for Microsoft Purview description: This how-to guide describes details of how to scan Amazon S3 buckets in Microsoft Purview.--++
purview Register Scan Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-databricks.md
+
+ Title: Connect to and manage Azure Databricks
+description: This guide describes how to connect to Azure Databricks in Microsoft Purview, and how to use Microsoft Purview to scan and manage your Azure Databricks source.
+++++ Last updated : 01/09/2023+++
+# Connect to and manage Azure Databricks in Microsoft Purview (Preview)
+
+This article outlines how to register Azure Databricks, and how to authenticate and interact with Azure Databricks in Microsoft Purview. For more information about Microsoft Purview, read the [introductory article](overview.md).
++
+## Supported capabilities
+
+|**Metadata Extraction**| **Full Scan** |**Incremental Scan**|**Scoped Scan**|**Classification**|**Access Policy**|**Lineage**|**Data Sharing**|
+|||||||||
+| [Yes](#register)| [Yes](#scan)| No | No | No | No| [Yes](#lineage) | No |
+
+When scanning Azure Databricks source, Microsoft Purview supports:
+
+- Extracting technical metadata including:
+
+ - Azure Databricks workspace
+ - Hive server
+ - Databases
+ - Tables including the columns, foreign keys, unique constraints, and storage description
+ - Views including the columns and storage description
+
+- Fetching the relationship between external tables and Azure Data Lake Storage Gen2/Azure Blob assets.
+- Fetching static lineage on asset relationships among tables and views.
+
+This connector brings metadata from the Databricks metastore. Compared to scanning via the [Hive Metastore connector](register-scan-hive-metastore-source.md), which you may have used to scan Azure Databricks before:
+
+- You can directly set up a scan for Azure Databricks workspaces without direct HMS access. The connector uses a Databricks personal access token for authentication and connects to a cluster to perform the scan.
+- The Databricks workspace info is captured.
+- The relationship between tables and storage assets is captured.
+
+## Prerequisites
+
+* You must have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* You must have an active [Microsoft Purview account](create-catalog-portal.md).
+
+* You need Data Source Administrator and Data Reader permissions to register a source and manage it in the Microsoft Purview governance portal. For more information about permissions, see [Access control in Microsoft Purview](catalog-permissions.md).
+
+* Set up the latest [self-hosted integration runtime](https://www.microsoft.com/download/details.aspx?id=39717). For more information, see [Create and configure a self-hosted integration runtime](manage-integration-runtimes.md). The minimum supported self-hosted integration runtime version is 5.20.8227.2.
+
+ * Ensure [JDK 11](https://www.oracle.com/java/technologies/downloads/#java11) is installed on the machine where the self-hosted integration runtime is installed. Restart the machine after you newly install the JDK for it to take effect.
+
+ * Ensure that Visual C++ Redistributable for Visual Studio 2012 Update 4 is installed on the machine where the self-hosted integration runtime is running. If you don't have this update installed, [download it now](https://www.microsoft.com/download/details.aspx?id=30679).
+
+* In your Azure Databricks workspace:
+
+ * [Generate a personal access token](/azure/databricks/dev-tools/auth#--azure-databricks-personal-access-tokens), and store it as a secret in Azure Key Vault.
+ * [Create a cluster](/azure/databricks/clusters/create-cluster). Note down the cluster ID - you can find it in Azure Databricks workspace -> Compute -> your cluster -> Tags -> Automatically added tags -> `ClusterId`.
+
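+If you want to script the prerequisite of storing the Databricks personal access token in Azure Key Vault, the following is a minimal sketch. It assumes the `azure-identity` and `azure-keyvault-secrets` packages, a placeholder vault name, and an illustrative secret name; the secret name is what you'll reference later when you create the scan credential.
+
+```python
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+# Placeholder values - replace with your own Key Vault and token.
+vault_url = "https://<your-key-vault-name>.vault.azure.net"
+client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
+
+# Store the Databricks personal access token as a secret.
+client.set_secret("databricks-pat", "<your-databricks-personal-access-token>")
+```
+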
+## Register
+
+This section describes how to register an Azure Databricks workspace in Microsoft Purview by using [the Microsoft Purview governance portal](https://web.purview.azure.com/).
+
+1. Go to your Microsoft Purview account.
+
+1. Select **Data Map** on the left pane.
+
+1. Select **Register**.
+
+1. In **Register sources**, select **Azure Databricks** > **Continue**.
+
+1. On the **Register sources (Azure Databricks)** screen, do the following:
+
+ 1. For **Name**, enter a name that Microsoft Purview will list as the data source.
+
+ 1. For **Azure subscription** and **Databricks workspace name**, select the subscription and workspace that you want to scan from the dropdown. The Databricks workspace URL will be automatically populated.
+
+ 1. For **Select a collection**, choose a collection from the list or create a new one. This step is optional.
+
+ :::image type="content" source="media/register-scan-azure-databricks/configure-sources.png" alt-text="Screenshot of registering Azure Databricks source." border="true":::
+
+1. Select **Finish**.
+
+## Scan
+
+> [!TIP]
+> To troubleshoot any issues with scanning:
+> 1. Confirm you have followed all [**prerequisites**](#prerequisites).
+> 1. Review our [**scan troubleshooting documentation**](troubleshoot-connections.md).
+
+Use the following steps to scan Azure Databricks to automatically identify assets. For more information about scanning in general, see [Scans and ingestion in Microsoft Purview](concept-scans-and-ingestion.md).
+
+1. In the Management Center, select integration runtimes. Make sure that a self-hosted integration runtime is set up. If it isn't set up, use the steps in [Create and manage a self-hosted integration runtime](./manage-integration-runtimes.md).
+
+1. Go to **Sources**.
+
+1. Select the registered Azure Databricks.
+
+1. Select **+ New scan**.
+
+1. Provide the following details:
+
+ 1. **Name**: Enter a name for the scan.
+
+ 1. **Connect via integration runtime**: Select the configured self-hosted integration runtime.
+
+ 1. **Credential**: Select the credential to connect to your data source. Make sure to:
+
+ * Select **Access Token Authentication** while creating a credential.
+ * Provide secret name of the personal access token that you created in [Prerequisites](#prerequisites) in the appropriate box.
+
+ For more information, see [Credentials for source authentication in Microsoft Purview](manage-credentials.md).
+
+ 1. **Cluster ID**: Specify the cluster ID that Microsoft Purview will connect to and perform the scan. You can find it in Azure Databricks workspace -> Compute -> your cluster -> Tags -> Automatically added tags -> `ClusterId`.
+
+    1. **Mount points**: Provide the mount point and Azure Storage source location string when you have external storage manually mounted to Databricks. Use the format `/mnt/<path>=abfss://<container>@<adls_gen2_storage_account>.dfs.core.windows.net/;/mnt/<path>=wasbs://<container>@<blob_storage_account>.blob.core.windows.net`. This value is used to capture the relationship between tables and the corresponding storage assets in Microsoft Purview. The setting is optional; if it's not specified, the relationship isn't retrieved. A sketch that builds this string from your workspace's mount list follows these scan steps.
+
+ You can get the list of mount points in your Databricks workspace by running the following Python command in a notebook:
+
+ ```
+ dbutils.fs.mounts()
+ ```
+
+ It will print all the mount points like below:
+
+ ```
+ [MountInfo(mountPoint='/databricks-datasets', source='databricks-datasets', encryptionType=''),
+ MountInfo(mountPoint='/mnt/ADLS2', source='abfss://samplelocation1@azurestorage1.dfs.core.windows.net/', encryptionType=''),
+ MountInfo(mountPoint='/databricks/mlflow-tracking', source='databricks/mlflow-tracking', encryptionType=''),
+ MountInfo(mountPoint='/mnt/Blob', source='wasbs://samplelocation2@azurestorage2.blob.core.windows.net', encryptionType=''),
+ MountInfo(mountPoint='/databricks-results', source='databricks-results', encryptionType=''),
+     MountInfo(mountPoint='/databricks/mlflow-registry', source='databricks/mlflow-registry', encryptionType=''), MountInfo(mountPoint='/', source='DatabricksRoot', encryptionType='')]
+ ```
+
+ In this example, specify the following as mount points:
+
+ `/mnt/ADLS2=abfss://samplelocation1@azurestorage1.dfs.core.windows.net/;/mnt/Blob=wasbs://samplelocation2@azurestorage2.blob.core.windows.net`
+
+ 1. **Maximum memory available**: Maximum memory (in gigabytes) available on the customer's machine for the scanning processes to use. This value is dependent on the size of Hive Metastore database to be scanned.
+
+ :::image type="content" source="media/register-scan-azure-databricks/scan.png" alt-text="Screenshot of setting up Azure Databricks scan." border="true":::
+
+1. Select **Continue**.
+
+1. For **Scan trigger**, choose whether to set up a schedule or run the scan once.
+
+1. Review your scan and select **Save and Run**.
+
+Once the scan successfully completes, see how to [browse and search Azure Databricks assets](#browse-and-search-assets).
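+
+As a companion to the **Mount points** setting described in the scan steps above, the following is a minimal sketch of a notebook cell that builds the required `;`-separated string from the `dbutils.fs.mounts()` output. It's only an illustration; it assumes every `abfss://` and `wasbs://` mount should be included, so review the result before pasting it into the scan configuration.
+
+```python
+# Build the Mount points string expected by the Microsoft Purview scan configuration.
+# dbutils is available by default in Databricks notebooks.
+parts = []
+for mount in dbutils.fs.mounts():
+    # Keep only mounts that point to Azure Data Lake Storage Gen2 or Blob storage.
+    if mount.source.startswith(("abfss://", "wasbs://")):
+        parts.append(f"{mount.mountPoint}={mount.source}")
+
+print(";".join(parts))
+```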
++
+## Browse and search assets
+
+After scanning your Azure Databricks, you can [browse data catalog](how-to-browse-catalog.md) or [search data catalog](how-to-search-catalog.md) to view the asset details.
+
+From the Databricks workspace asset, you can find the associated Hive Metastore and the tables/views; the reverse also applies.
++++
+## Lineage
+
+Refer to the [supported capabilities](#supported-capabilities) section for the supported Azure Databricks scenarios. For more information about lineage in general, see [data lineage](concept-data-lineage.md) and the [lineage user guide](catalog-lineage-user-guide.md).
+
+Go to the Hive table/view asset's lineage tab to see the asset relationship when applicable. For the relationship between a table and external storage assets, you'll see the Hive table asset and the storage asset directly connected bi-directionally, as they mutually impact each other.
++
+## Next steps
+
+Now that you've registered your source, use the following guides to learn more about Microsoft Purview and your data:
+
+- [Data Estate Insights in Microsoft Purview](concept-insights.md)
+- [Lineage in Microsoft Purview](catalog-lineage-user-guide.md)
+- [Search the data catalog](how-to-search-catalog.md)
purview Register Scan Hive Metastore Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-hive-metastore-source.md
When scanning Hive metastore source, Microsoft Purview supports:
- Databases - Tables including the columns, foreign keys, unique constraints, and storage description - Views including the columns and storage description
- - Processes
- Fetching static lineage on assets relationships among tables and views.
reliability Reliability Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-functions.md
Zone-redundant Premium plans are available in the following regions:
| Americas | Europe | Middle East | Africa | Asia Pacific | ||-||--|-| | Brazil South | France Central | Qatar Central | South Africa North | Australia East |
-| Canada Central | Germany West Central | | | Central India |
+| Canada Central | Germany West Central | UAE North | | Central India |
| Central US | North Europe | | | China North 3 |
-| East US | Sweden Central | | | East Asia |
-| East US 2 | UK South | | | Japan East |
-| South Central US | West Europe | | | Southeast Asia |
-| West US 2 | | | | |
-| West US 3 | | | | |
+| East US | Norway East | | | East Asia |
+| East US 2 | Sweden Central | | | Japan East |
+| South Central US | Switzerland North | | | Southeast Asia |
+| West US 2 | UK South | | | |
+| West US 3 | West Europe | | | |
### Prerequisites
search Search Get Started Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-javascript.md
This article demonstrates how to create the application step by step. Alternativ
Before you begin, have the following tools and
-+ An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
-
-+ An Azure Cognitive Search service. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). You can use a free service for this quickstart.
++ [Visual Studio Code](https://code.visualstudio.com) or another IDE + [Node.js](https://nodejs.org) and [npm](https://www.npmjs.com)
-+ [Visual Studio Code](https://code.visualstudio.com) or another IDE
++ Azure Cognitive Search. [Create a service](search-create-service-portal.md) or [find an existing service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices). +
+You can use a free service for this quickstart.
## Set up your project
search Search Security Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-api-keys.md
Title: API key authentication
+ Title: Connect with API keys
-description: An API key controls inbound access to the service endpoint. Admin keys grant write access. Query keys can be created for read-only access.
+description: Learn how to use an admin or query API key for inbound access to an Azure Cognitive Search service endpoint.
- Previously updated : 08/15/2022+ Last updated : 01/10/2023
-# Use API keys for Azure Cognitive Search authentication
+# Connect to Cognitive Search using key authentication
-Cognitive Search offers key-based authentication as its primary authentication methodology. For inbound requests to a search service endpoint, such as requests that create or query an index, API keys are the only generally available authentication option you have. A few outbound request scenarios, particularly those involving indexers, can use Azure Active Directory identities and roles.
+Cognitive Search offers key-based authentication that you can use on connections to your search service. An API key is a unique string composed of 52 randomly generated numbers and letters. A request made to a search service endpoint will be accepted if both the request and the API key are valid.
+
+API keys are frequently used when making REST API calls to a search service. You can also use them in search solutions if Azure Active Directory isn't an option.
> [!NOTE]
-> [Azure role-based access control (RBAC)](search-security-rbac.md) for inbound requests to a search endpoint is now in preview. You can use this preview capability to supplement or replace API keys on search index requests.
+> A quick note about "key" terminology in Cognitive Search. An "API key", which is described in this article, refers to a generated string used for authenticating a request. A "document key" refers to a unique string in your indexed content that's used to uniquely identify documents in a search index. API keys and document keys are unrelated.
-## Using API keys in search
+## What's an API key?
-API keys are generated when the service created. Passing a valid API key on the request is considered proof that the request is from an authorized client. There are two kinds of keys. *Admin keys* convey write permissions on the service and also grant rights to query system information. *Query keys* convey read permissions and can be used by apps to query a specific index.
+There are two kinds of keys used for authenticating a request:
-When connecting to a search service, all requests must include an API key that was generated specifically for your service.
+| Type | Permission level | Maximum | How created|
+|---|---|---|---|
+| Admin | Full access (read-write) for all content operations | 2 <sup>1</sup>| Two admin keys, referred to as *primary* and *secondary* keys in the portal, are generated when the service is created and can be individually regenerated on demand. |
+| Query | Read-only access, scoped to the documents collection of a search index | 50 | One query key is generated with the service. More can be created on demand by a search service administrator. |
-+ In [REST solutions](search-get-started-rest.md), the API key is typically specified in a request header
+<sup>1</sup> Having two allows you to roll over one key while using the second key for continued access to the service.
-+ In [.NET solutions](search-howto-dotnet-sdk.md), a key is often specified as a configuration setting and then passed as an [AzureKeyCredential](/dotnet/api/azure.azurekeycredential)
+Visually, there's no distinction between an admin key or query key. Both keys are strings composed of 52 randomly generated alpha-numeric characters. If you lose track of what type of key is specified in your application, you can [check the key values in the portal](#find-existing-keys).
-You can view and manage API keys in the [Azure portal](https://portal.azure.com), or through [PowerShell](/powershell/module/az.search), [Azure CLI](/cli/azure/search), or [REST API](/rest/api/searchmanagement/).
+## Use API keys on connections
+
+API keys are specified on client requests to a search service. Passing a valid API key on the request is considered proof that the request is from an authorized client. If you're creating, modifying, or deleting objects, you'll need an admin API key. Otherwise, query keys are typically distributed to client applications that issue queries.
+
+You can specify API keys in a request header for REST API calls, or in code that calls the azure.search.documents client libraries in the Azure SDKs. If you're using the Azure portal to perform tasks, your role assignment determines the level of access.
+
+Best practices for using hard-coded keys in source files include:
+
++ During early development and proof-of-concept testing when security is looser, use sample or public data.
+
++ After advancing into deeper development or production scenarios, switch to [Azure Active Directory and role-based access](search-security-rbac.md) to eliminate the need for having hard-coded keys. Or, if you want to continue using API keys, be sure to always monitor who has access to your API keys and regenerate API keys on a regular cadence.
+
+### [**REST**](#tab/rest-use)
+
++ Admin keys are only specified in HTTP request headers. You can't place an admin API key in a URL. See [Connect to Azure Cognitive Search using REST APIs](search-get-started-rest.md#connect-to-azure-cognitive-search) for an example that specifies an admin API key on a REST call.
+
++ Query keys are also specified in an HTTP request header for search, suggestion, or lookup operations that use POST.
+ Alternatively, you can pass a query key as a parameter on a URL if you're using GET: `GET /indexes/hotels/docs?search=*&$orderby=lastRenovationDate desc&api-version=2020-06-30&api-key=[query key]`
-## What is an API key?
+### [**Azure PowerShell**](#tab/azure-ps-use)
-An API key is a unique string composed of randomly generated numbers and letters that are passed on every request to the search service. The service will accept the request, if both the request itself and the key are valid.
+A script example showing API key usage can be found at [Quickstart: Create an Azure Cognitive Search index in PowerShell using REST APIs](search-get-started-powershell.md).
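To illustrate the pattern without leaving this page, here's a minimal sketch (not taken from the linked quickstart) that calls the Search REST API from PowerShell with an API key in the request header. The service name, index name, and key value are placeholders.

```powershell
# Placeholders: replace the service name, index name, and key with your own values.
# A query key is sufficient for search requests; use an admin key for create/update/delete operations.
$headers = @{
    'api-key'      = '<your-query-or-admin-api-key>'
    'Content-Type' = 'application/json'
}
$uri = 'https://<search-service-name>.search.windows.net/indexes/hotels/docs?search=*&api-version=2020-06-30'

# Send the search request; the api-key header authenticates the call.
Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
```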
-Two types of keys are used to access your search service: admin (read-write) and query (read-only).
+### [**.NET**](#tab/dotnet-use)
-|Key|Description|Limits|
-||--||
-|Admin|Grants full rights to all operations, including the ability to manage the service, create and delete indexes, indexers, and data sources.<br /><br /> Two admin keys, referred to as *primary* and *secondary* keys in the portal, are generated when the service is created and can be individually regenerated on demand. Having two keys allows you to roll over one key while using the second key for continued access to the service.<br /><br /> Admin keys are only specified in HTTP request headers. You cannot place an admin API key in a URL.|Maximum of 2 per service|
-|Query|Grants read-only access to indexes and documents, and are typically distributed to client applications that issue search requests.<br /><br /> Query keys are created on demand.<br /><br /> Query keys can be specified in an HTTP request header for search, suggestion, or lookup operation. Alternatively, you can pass a query key as a parameter on a URL. Depending on how your client application formulates the request, it might be easier to pass the key as a query parameter:<br /><br /> `GET /indexes/hotels/docs?search=*&$orderby=lastRenovationDate desc&api-version=2020-06-30&api-key=[query key]`|50 per service|
+In search solutions, a key is often specified as a configuration setting and then passed as an [AzureKeyCredential](/dotnet/api/azure.azurekeycredential). See [How to use Azure.Search.Documents in a C# .NET Application](search-howto-dotnet-sdk.md) for an example.
- Visually, there is no distinction between an admin key or query key. Both keys are strings composed of 32 randomly generated alpha-numeric characters. If you lose track of what type of key is specified in your application, you can [check the key values in the portal](https://portal.azure.com).
+ > [!NOTE]
-> It's considered a poor security practice to pass sensitive data such as an `api-key` in the request URI. For this reason, Azure Cognitive Search only accepts a query key as an `api-key` in the query string, and you should avoid doing so unless the contents of your index should be publicly available. As a general rule, we recommend passing your `api-key` as a request header.
+> It's considered a poor security practice to pass sensitive data such as an `api-key` in the request URI. For this reason, Azure Cognitive Search only accepts a query key as an `api-key` in the query string. As a general rule, we recommend passing your `api-key` as a request header.
## Find existing keys
-You can obtain access keys in the portal or through [PowerShell](/powershell/module/az.search), [Azure CLI](/cli/azure/search), or [REST API](/rest/api/searchmanagement/).
+You can view and manage API keys in the [Azure portal](https://portal.azure.com), or through [PowerShell](/powershell/module/az.search), [Azure CLI](/cli/azure/search), or [REST API](/rest/api/searchmanagement/).
+
+### [**Azure portal**](#tab/portal-find)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
+
+1. Under **Settings**, select **Keys** to view admin and query keys.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. List the [search services](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) for your subscription.
-1. Select the service and on the Overview page, click **Settings** >**Keys** to view admin and query keys.
- :::image type="content" source="media/search-security-overview/settings-keys.png" alt-text="Portal page, view settings, keys section" border="false":::
+### [**REST**](#tab/rest-find)
+
+Use [ListAdminKeys](/rest/api/searchmanagement/2020-08-01/admin-keys) or [ListQueryKeys](/rest/api/searchmanagement/2020-08-01/query-keys/list-by-search-service) in the Management REST API to return API keys.
+
+You must have a [valid role assignment](#permissions-to-view-or-manage-api-keys) to return or update API keys. See [Manage your Azure Cognitive Search service with REST APIs](search-manage-rest.md) for guidance on meeting role requirements using the REST APIs.
+
+```rest
+POST https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{resource-group}}/providers/Microsoft.Search/searchServices/{{search-service-name}}/listAdminKeys?api-version=2021-04-01-preview
+```
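If you manage the service with Azure PowerShell instead of the Management REST API, a rough equivalent looks like the following sketch. It assumes the Az.Search module is installed and that your account holds one of the roles listed under permissions to view or manage API keys; resource names are placeholders.

```powershell
# List the primary and secondary admin keys for the service.
Get-AzSearchAdminKeyPair -ResourceGroupName '<resource-group>' -ServiceName '<search-service-name>'

# List the query keys for the service.
Get-AzSearchQueryKey -ResourceGroupName '<resource-group>' -ServiceName '<search-service-name>'
```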
++ ## Create query keys
Query keys are used for read-only access to documents within an index for operat
Restricting access and operations in client apps is essential to safeguarding the search assets on your service. Always use a query key rather than an admin key for any query originating from a client app.
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. List the [search services](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices) for your subscription.
-3. Select the service and on the Overview page, click **Settings** >**Keys**.
-4. Click **Manage query keys**.
-5. Use the query key already generated for your service, or create up to 50 new query keys. The default query key is not named, but additional query keys can be named for manageability.
+### [**Azure portal**](#tab/portal-query)
- :::image type="content" source="media/search-security-overview/create-query-key.png" alt-text="Create or use a query key" border="false":::
+1. Sign in to the [Azure portal](https://portal.azure.com) and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
-> [!Note]
-> A code example showing query key usage can be found in [DotNetHowTo](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo).
+1. Under **Settings**, select **Keys** to view API keys.
+
+1. Under **Manage query keys**, use the query key already generated for your service, or create new query keys. The default query key isn't named, but other generated query keys can be named for manageability.
+
+ :::image type="content" source="media/search-security-overview/create-query-key.png" alt-text="Screenshot of the query key management options." border="true":::
+
+### [**Azure CLI**](#tab/azure-cli-query)
+
+A script example showing query key usage can be found at [Create or delete query keys](search-manage-azure-cli.md#create-or-delete-query-keys).
+
+### [**.NET**](#tab/dotnet-query)
+
+A code example showing query key usage can be found in [DotNetHowTo](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowTo).
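If you script service management with Azure PowerShell, a minimal sketch for creating a named query key (assuming the Az.Search module; resource and key names are placeholders) is:

```powershell
# Create a named query key; up to 50 query keys can exist per service.
New-AzSearchQueryKey -ResourceGroupName '<resource-group>' -ServiceName '<search-service-name>' -Name 'client-app-read-only'
```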
++ <a name="regenerate-admin-keys"></a> ## Regenerate admin keys
-Two admin keys are created for each service so that you can rotate a primary key, using the secondary key for business continuity.
+Two admin keys are created for each service so that you can rotate a primary key while using the secondary key for business continuity.
+
+1. In the **Settings** > **Keys** page, copy the secondary key.
+
+1. For all applications, update the API key settings to use the secondary key.
-1. In the **Settings** >**Keys** page, copy the secondary key.
-2. For all applications, update the API key settings to use the secondary key.
-3. Regenerate the primary key.
-4. Update all applications to use the new primary key.
+1. Regenerate the primary key.
-If you inadvertently regenerate both keys at the same time, all client requests using those keys will fail with HTTP 403 Forbidden. However, content is not deleted and you are not locked out permanently.
+1. Update all applications to use the new primary key.
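The rollover can also be scripted. A sketch using Azure PowerShell (assuming the Az.Search module; names are placeholders):

```powershell
# Regenerate the primary admin key; pass -KeyKind Secondary to roll the secondary key instead.
New-AzSearchAdminKey -ResourceGroupName '<resource-group>' -ServiceName '<search-service-name>' -KeyKind Primary
```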
-You can still access the service through the portal or programmatically. Management functions are operative through a subscription ID not a service API key, and thus still available even if your API keys are not.
+If you inadvertently regenerate both keys at the same time, all client requests using those keys will fail with HTTP 403 Forbidden. However, content isn't deleted and you aren't locked out permanently.
-After you create new keys via portal or management layer, access is restored to your content (indexes, indexers, data sources, synonym maps) once you have the new keys and provide those keys on requests.
+You can still access the service through the portal or programmatically. Management functions are operative through a subscription ID not a service API key, and are thus still available even if your API keys aren't.
-## Secure API keys
+After you create new keys via portal or management layer, access is restored to your content (indexes, indexers, data sources, synonym maps) once you provide those keys on requests.
-[Role assignments](search-security-rbac.md) determine who can read and manage keys. Members of the following roles can view and regenerate keys: Owner, Contributor, [Search Service Contributors](../role-based-access-control/built-in-roles.md#search-service-contributor). The Reader role does not have access to API keys.
+## Permissions to view or manage API keys
-Subscription administrators can view and regenerate all API keys. As a precaution, review role assignments to understand who has access to the admin keys.
+Permissions for viewing and managing API keys are conveyed through [role assignments](search-security-rbac.md). Members of the following roles can view and regenerate keys:
+
++ Administrator and co-administrator (classic)
++ Owner
++ Contributor
++ [Search Service Contributor](../role-based-access-control/built-in-roles.md#search-service-contributor)
+
+The following roles don't have access to API keys:
+
++ Reader
++ Search Index Data Contributor
++ Search Index Data Reader
+
+## Secure API key access
+
+Use role assignments to restrict access to API keys.
+
+Note that it's not possible to use [customer-managed key encryption](search-security-manage-encryption-keys.md) to encrypt API keys. Only sensitive data within the search service itself (for example, index content or connection strings in data source object definitions) can be CMK-encrypted.
1. Navigate to your search service page in Azure portal.
+
1. On the left navigation pane, select **Access control (IAM)**, and then select the **Role assignments** tab.
-1. Set **Scope** to **This resource** to view role assignments for your service.
+
+1. In the **Role** filter, select the roles that have permission to view or manage keys (Owner, Contributor, Search Service Contributor). The resulting security principals assigned to those roles have key permissions on your search service.
+
+1. As a precaution, also check the **Classic administrators** tab for administrators and co-administrators.
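You can also list key-capable role holders with Azure PowerShell. A minimal sketch assuming the Az.Resources module; the scope values are placeholders:

```powershell
# Build the resource ID (scope) of the search service.
$scope = '/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service-name>'

# List principals holding roles that can view or regenerate API keys.
Get-AzRoleAssignment -Scope $scope |
    Where-Object { $_.RoleDefinitionName -in 'Owner', 'Contributor', 'Search Service Contributor' } |
    Select-Object DisplayName, SignInName, RoleDefinitionName
```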
## See also
service-fabric Service Fabric Managed Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-managed-disk.md
To use managed data disks on a node type, configure the underlying virtual machi
* Add a managed disk in the data disks section of the template for the virtual machine scale set.
* Update the Service Fabric extension for the virtual machine scale set with the following settings:
  * For Windows: **useManagedDataDisk: true** and **dataPath: 'K:\\\\SvcFab'**. Note that drive K is just a representation. You can use any drive letter lexicographically greater than all the drive letters present in the virtual machine scale set SKU.
- * For Linux: **useManagedDataDisk:true** and **dataPath: '\mnt\sfdataroot'**.
+ * For Linux: **useManagedDataDisk:true** and **dataPath: '/mnt/sfroot'**.
Here's an Azure Resource Manager template for a Service Fabric extension:
static-web-apps Front End Frameworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/front-end-frameworks.md
The following table lists the settings for a series of frameworks and libraries<
The intent of the table columns is explained by the following items: 

-- **Output location**: Lists the value for `output_location`, which is the [folder for built versions of application files](build-configuration.md).
+- **App artifact location (output location)**: Lists the value for `output_location`, which is the [folder for built versions of application files](build-configuration.md).
- **Custom build command**: When the framework requires a command different from `npm run build` or `npm run azure:build`, you can define a [custom build command](build-configuration.md#custom-build-commands).
The intent of the table columns is explained by the following items:
| [Framework7](https://framework7.io/) | `www` | `npm run build-prod` |
| [Glimmer](https://glimmerjs.com/) | `dist` | n/a |
| [HTML](https://developer.mozilla.org/docs/Web/HTML) | `/` | n/a |
+| [Hugo](https://gohugo.io/) | `public` | n/a |
| [Hyperapp](https://hyperapp.dev/) | `/` | n/a |
| [JavaScript](https://developer.mozilla.org/docs/Web/javascript) | `/` | n/a |
| [jQuery](https://jquery.com/) | `/` | n/a |
storage-mover Performance Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/performance-targets.md
Previously updated : 09/07/2022 Last updated : 01/10/2023 <!--
Different agent resource configurations are tested:
### [4 CPU / 8-GiB RAM](#tab/minspec)
-4 virtual CPU cores at 2.7 GHz each and 8 GiB of memory (RAM) is the minimum specification for an Azure Storage Mover agent.
+4 virtual CPU cores at 2.7 GHz each and 8 GiB of memory (RAM) is the minimum specification for an Azure Storage Mover agent.
|Test | Single file, 1 TiB |&tilde;3.3M files, &tilde;200 K folders, &tilde;45 GiB |&tilde;50M files, &tilde;3M folders, &tilde;1 TiB |
|--|--|--|--|
Different agent resource configurations are tested:
### [8 CPU / 16 GiB RAM](#tab/boostspec)
-8 virtual CPU cores at 2.7 GHz each and 8 GiB of memory (RAM) is the minimum specification for an Azure Storage Mover agent.
+8 virtual CPU cores at 2.7 GHz each and 16 GiB of memory (RAM) is a boosted specification for an Azure Storage Mover agent, above the 4-CPU / 8-GiB minimum.
|Test | Single file, 1 TiB| &tilde;3.3M files, &tilde;200 K folders, &tilde;45 GiB |
|--|--|--|
storage Security Restrict Copy Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-restrict-copy-operations.md
Title: Limit the source accounts for Azure Storage Account copy operations to accounts within the same tenant or on the same virtual network
+ Title: Permitted scope for copy operations (preview)
description: Learn how to use the "Permitted scope for copy operations (preview)" Azure storage account setting to limit the source accounts of copy operations to the same tenant or with private links to the same virtual network. Previously updated : 12/14/2022 Last updated : 01/10/2023 -+ # Restrict the source of copy operations to a storage account
This article shows you how to limit the source accounts of copy operations to ac
> **Permitted scope for copy operations** is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-## About the permitted scope for copy operations
+## About Permitted scope for copy operations (preview)
-The **AllowedCopyScope** property of a storage account is used to specify the environments from which data can be copied to the destination account. It is displayed in the Azure portal as configuration setting **Permitted scope for copy operations**. The property is not set by default and does not return a value until you explicitly set it. It has three possible values:
+The **AllowedCopyScope** property of a storage account is used to specify the environments from which data can be copied to the destination account. It is displayed in the Azure portal as configuration setting **Permitted scope for copy operations (preview)**. The property is not set by default and does not return a value until you explicitly set it. It has three possible values:
- ***(null)*** (default): Allow copying from any storage account to the destination account. - **AAD**: Permits copying only from accounts within the same Azure AD tenant as the destination account.
The URI is the full path to the source object being copied, which includes the s
You can also configure an alert rule based on this query to notify you about Copy Blob requests for the account. For more information, see [Create, view, and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md).
-## Restrict the permitted scope for copy operations
+## Restrict the Permitted scope for copy operations (preview)
When you are confident that you can safely restrict the sources of copy requests to a specific scope, you can set the **AllowedCopyScope** property for the storage account to that scope.
-### Permissions for changing the permitted scope for copy operations
+### Permissions for changing the Permitted scope for copy operations (preview)
To set the **AllowedCopyScope** property for the storage account, a user must have permissions to create and manage storage accounts. Azure role-based access control (Azure RBAC) roles that provide these permissions include the **Microsoft.Storage/storageAccounts/write** or **Microsoft.Storage/storageAccounts/\*** action. Built-in roles with this action include:
Be careful to restrict assignment of these roles only to those who require the a
> [!NOTE] > The classic subscription administrator roles Service Administrator and Co-Administrator include the equivalent of the Azure Resource Manager [Owner](../../role-based-access-control/built-in-roles.md#owner) role. The **Owner** role includes all actions, so a user with one of these administrative roles can also create and manage storage accounts. For more information, see [Classic subscription administrator roles, Azure roles, and Azure AD administrator roles](../../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles).
-### Configure the permitted scope for copy operations
+### Configure the Permitted scope for copy operations (preview)
Using an account that has the necessary permissions, configure the permitted scope for copy operations in the Azure portal, with PowerShell or using the Azure CLI.
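For example, a minimal PowerShell sketch, assuming a recent Az.Storage module version that exposes the preview `AllowedCopyScope` parameter; resource names are placeholders:

```powershell
# Restrict copy sources to storage accounts in the same Azure AD tenant.
# Use -AllowedCopyScope PrivateLink to restrict to accounts with private endpoints on the same virtual network.
Set-AzStorageAccount `
    -ResourceGroupName '<resource-group>' `
    -Name '<storage-account-name>' `
    -AllowedCopyScope AAD
```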
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Previously updated : 01/03/2023 Last updated : 01/10/2023
The following table describes the fields on the **Basics** tab.
| Project details | Resource group | Required | Create a new resource group for this storage account, or select an existing one. For more information, see [Resource groups](../../azure-resource-manager/management/overview.md#resource-groups). | | Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. | | Instance details | Region | Required | Select the appropriate region for your storage account. For more information, see [Regions and Availability Zones in Azure](../../availability-zones/az-overview.md).<br /><br />Not all regions are supported for all types of storage accounts or redundancy configurations. For more information, see [Azure Storage redundancy](storage-redundancy.md).<br /><br />The choice of region can have a billing impact. For more information, see [Storage account billing](storage-account-overview.md#storage-account-billing). |
-| Instance details | Performance | Required | Select **Standard** performance for general-purpose v2 storage accounts (default). This type of account is recommended by Microsoft for most scenarios. For more information, see [Types of storage accounts](storage-account-overview.md#types-of-storage-accounts).<br /><br />Select **Premium** for scenarios requiring low latency. After selecting **Premium**, select the type of premium storage account to create. The following types of premium storage accounts are available: <ul><li>[Block blobs](./storage-account-overview.md)</li><li>[File shares](../files/storage-files-planning.md#management-concepts)</li><li>[Page blobs](../blobs/storage-blob-pageblob-overview.md)</li></ul><br /><br />Microsoft recommends creating a general-purpose v2, premium block blob, or premium file share account for most scenarios. To select a legacy account type, use the link provided beneath **Instance details**. For more information about legacy account types, see [Legacy storage account types](storage-account-overview.md#legacy-storage-account-types). |
+| Instance details | Performance | Required | Select **Standard** performance for general-purpose v2 storage accounts (default). This type of account is recommended by Microsoft for most scenarios. For more information, see [Types of storage accounts](storage-account-overview.md#types-of-storage-accounts).<br /><br />Select **Premium** for scenarios requiring low latency. After selecting **Premium**, select the type of premium storage account to create. The following types of premium storage accounts are available: <ul><li>[Block blobs](./storage-account-overview.md)</li><li>[File shares](../files/storage-files-planning.md#management-concepts)</li><li>[Page blobs](../blobs/storage-blob-pageblob-overview.md)</li></ul><br />Microsoft recommends creating a general-purpose v2, premium block blob, or premium file share account for most scenarios. To select a legacy account type, use the link provided beneath **Instance details**. For more information about legacy account types, see [Legacy storage account types](storage-account-overview.md#legacy-storage-account-types). |
| Instance details | Redundancy | Required | Select your desired redundancy configuration. Not all redundancy options are available for all types of storage accounts in all regions. For more information about redundancy configurations, see [Azure Storage redundancy](storage-redundancy.md).<br /><br />If you select a geo-redundant configuration (GRS or GZRS), your data is replicated to a data center in a different region. For read access to data in the secondary region, select **Make read access to data available in the event of regional unavailability**. | The following image shows a standard configuration of the basic properties for a new storage account.
The following table describes the fields on the **Advanced** tab.
| Security | Enable storage account key access | Optional | When enabled, this setting allows clients to authorize requests to the storage account using either the account access keys or an Azure Active Directory (Azure AD) account (default). Disabling this setting prevents authorization with the account access keys. For more information, see [Prevent Shared Key authorization for an Azure Storage account](shared-key-authorization-prevent.md). | | Security | Default to Azure Active Directory authorization in the Azure portal | Optional | When enabled, the Azure portal authorizes data operations with the user's Azure AD credentials by default. If the user does not have the appropriate permissions assigned via Azure role-based access control (Azure RBAC) to perform data operations, then the portal will use the account access keys for data access instead. The user can also choose to switch to using the account access keys. For more information, see [Default to Azure AD authorization in the Azure portal](../blobs/authorize-data-operations-portal.md#default-to-azure-ad-authorization-in-the-azure-portal). | | Security | Minimum TLS version | Required | Select the minimum version of Transport Layer Security (TLS) for incoming requests to the storage account. The default value is TLS version 1.2. When set to the default value, incoming requests made using TLS 1.0 or TLS 1.1 are rejected. For more information, see [Enforce a minimum required version of Transport Layer Security (TLS) for requests to a storage account](transport-layer-security-configure-minimum-version.md). |
+| Security | Permitted scope for copy operations (preview) | Required | Select the scope of storage accounts from which data can be copied to the new account. The default value is `From any storage account`. When set to the default value, users with the appropriate permissions can copy data from any storage account to the new account.<br /><br />Select `From storage accounts in the same Azure AD tenant` to only allow copy operations from storage accounts within the same Azure AD tenant.<br />Select `From storage accounts that have a private endpoint to the same virtual network` to only allow copy operations from storage accounts with private endpoints on the same virtual network.<br /><br /> For more information, see [Restrict the source of copy operations to a storage account](security-restrict-copy-operations.md). |
| Data Lake Storage Gen2 | Enable hierarchical namespace | Optional | To use this storage account for Azure Data Lake Storage Gen2 workloads, configure a hierarchical namespace. For more information, see [Introduction to Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md). |
| Blob storage | Enable SFTP | Optional | Enable the use of Secure File Transfer Protocol (SFTP) to securely transfer data over the internet. For more information, see [Secure File Transfer (SFTP) protocol support in Azure Blob Storage](../blobs/secure-file-transfer-protocol-support.md). |
| Blob storage | Enable network file share (NFS) v3 | Optional | NFS v3 provides Linux file system compatibility at object storage scale and enables Linux clients to mount a container in Blob storage from an Azure Virtual Machine (VM) or a computer on-premises. For more information, see [Network File System (NFS) 3.0 protocol support in Azure Blob Storage](../blobs/network-file-system-protocol-support.md). |
storage Storage Files Identity Ad Ds Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-enable.md
Previously updated : 11/29/2022 Last updated : 01/10/2023 recommendations: false
The AzFilesHybrid PowerShell module provides cmdlets for deploying and configuri
### Download AzFilesHybrid module - If you don't have [.NET Framework 4.7.2 or higher](https://dotnet.microsoft.com/download/dotnet-framework/) installed, install it now. It's required for the module to import successfully.-- [Download and unzip the latest version of the AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). Note that AES-256 Kerberos encryption is supported on v0.2.2 or above. If you've enabled the feature with an AzFilesHybrid version below v0.2.2 and want to update to support AES-256 Kerberos encryption, see [this article](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption).-- Install and execute the module on a device that is domain joined to on-premises AD DS with AD DS credentials that have permissions to create a service logon account or a computer account in the target AD (such as domain admin).
+- [Download and unzip the latest version of the AzFilesHybrid module](https://github.com/Azure-Samples/azure-files-samples/releases). Note that AES-256 Kerberos encryption is supported on v0.2.2 or above, and is the default encryption method beginning in v0.2.5. If you've enabled the feature with an AzFilesHybrid version below v0.2.2 and want to update to support AES-256 Kerberos encryption, see [this article](./storage-troubleshoot-windows-file-connection-problems.md#azure-files-on-premises-ad-ds-authentication-support-for-aes-256-kerberos-encryption).
+- Install and execute the module on a device that is domain joined to on-premises AD DS with AD DS credentials that have permissions to create a computer account or service logon account in the target AD (such as domain admin).
### Run Join-AzStorageAccount
-The `Join-AzStorageAccount` cmdlet performs the equivalent of an offline domain join on behalf of the specified storage account. By default, the script uses the cmdlet to create a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) in your AD domain. If for whatever reason you can't use a computer account, you can alter the script to create a [service logon account](/windows/win32/ad/about-service-logon-accounts) instead. Note that service logon accounts don't currently support AES-256 encryption.
+The `Join-AzStorageAccount` cmdlet performs the equivalent of an offline domain join on behalf of the specified storage account. The script below uses this cmdlet to create a [computer account](/windows/security/identity-protection/access-control/active-directory-accounts#manage-default-local-accounts-in-active-directory) in your AD domain. If for whatever reason you can't use a computer account, you can alter the script to create a [service logon account](/windows/win32/ad/about-service-logon-accounts) instead. Using AES-256 encryption with service logon accounts is supported beginning with AzFilesHybrid version 0.2.5.
The AD DS account created by the cmdlet represents the storage account. If the AD DS account is created under an organizational unit (OU) that enforces password expiration, you must update the password before the maximum password age. Failing to update the account password before that date results in authentication failures when accessing Azure file shares. To learn how to update the password, see [Update AD DS account password](storage-files-identity-ad-ds-update-password.md).
The AD DS account created by the cmdlet represents the storage account. If the A
> [!NOTE] > If the account used to join the storage account in AD DS is an **Owner** or **Contributor** in the Azure subscription where the target resources are located, then that account is already enabled to perform the join and no further assignments are required.
-The AD DS credential must also have permissions to create a service logon account or computer account in the target AD. Replace the placeholder values with your own before executing the script.
+The AD DS credential must also have permissions to create a computer account or service logon account in the target AD. Replace the placeholder values with your own before executing the script.
```PowerShell # Change the execution policy to unblock importing AzFilesHybrid.psm1 module
$DomainAccountType = "<ComputerAccount|ServiceLogonAccount>" # Default is set as
# storage account is created under the root directory. $OuDistinguishedName = "<ou-distinguishedname-here>" # Specify the encryption algorithm used for Kerberos authentication. Using AES256 is recommended.
-# Note that ServiceLogonAccount does not support AES256 encryption.
$EncryptionType = "<AES256|RC4|AES256,RC4>" # Select the target subscription for the current session
Select-AzSubscription -SubscriptionId $SubscriptionId
# with -OrganizationalUnitDistinguishedName. You can choose to provide one of the two names to specify # the target OU. You can choose to create the identity that represents the storage account as either a # Service Logon Account or Computer Account (default parameter value), depending on your AD permissions
-# and preference. Run Get-Help Join-AzStorageAccountForAuth for more details on this cmdlet. Note that
-# Service Logon Accounts do not support AES256 encryption.
+# and preference. Run Get-Help Join-AzStorageAccountForAuth for more details on this cmdlet.
Join-AzStorageAccount ` -ResourceGroupName $ResourceGroupName `
Join-AzStorageAccount `
-OrganizationalUnitDistinguishedName $OuDistinguishedName ` -EncryptionType $EncryptionType
-# Run the command below to enable AES256 encryption. If you plan to use RC4, you can skip this step.
-# Note that if you set $DomainAccountType to ServiceLogonAccount, running this command will change
-# the account type to ComputerAccount because ServiceLogonAccount doesn't support AES256.
-Update-AzStorageAccountAuthForAES256 -ResourceGroupName $ResourceGroupName -StorageAccountName $StorageAccountName
- # You can run the Debug-AzStorageAccountAuth cmdlet to conduct a set of basic checks on your AD configuration # with the logged on AD user. This cmdlet is supported on AzFilesHybrid v0.1.2+ version. For more details on # the checks performed in this cmdlet, see Azure Files Windows troubleshooting guide.
Set-AzStorageAccount `
To enable AES-256 encryption, follow the steps in this section. If you plan to use RC4 encryption, skip this section. > [!IMPORTANT]
-> In order to enable AES-256 encryption, the domain object that represents your storage account must be a computer account in the on-premises AD domain. Service logon accounts don't currently support AES-256 encryption. If your domain object doesn't meet this requirement, delete it and create a new domain object that does.
+> In order to enable AES-256 encryption, the domain object that represents your storage account must be a computer account or service logon account in the on-premises AD domain. If your domain object doesn't meet this requirement, delete it and create a new domain object that does.
Replace `<domain-object-identity>` and `<domain-name>` with your values, then run the following cmdlet to configure AES-256 support. You must have AD PowerShell cmdlets installed and execute the cmdlet in PowerShell 5.1 with elevated privileges.
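The cmdlet itself isn't included in this excerpt. As an illustration only, assuming the ActiveDirectory PowerShell module and a computer account object, the configuration typically takes this shape:

```powershell
# Placeholders: supply the AD object that represents the storage account and your domain.
# -KerberosEncryptionType restricts the account to AES-256 Kerberos tickets.
Set-ADComputer -Identity '<domain-object-identity>' -Server '<domain-name>' -KerberosEncryptionType 'AES256'
```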
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
description: Learn how to mount an Azure file share over SMB on Linux and review
Previously updated : 11/03/2022 Last updated : 01/10/2023
uname -r
``` > [!Note]
-> SMB 2.1 support was added to Linux kernel version 3.7. If you are using a version of the Linux kernel after 3.7, it should support SMB 2.1.
+> SMB 2.1 support was added to Linux kernel version 3.7. If you're using a version of the Linux kernel after 3.7, it should support SMB 2.1.
## Applies to | File share type | SMB | NFS |
mntRoot="/mount"
sudo mkdir -p $mntRoot ```
-To mount an Azure file share on Linux, use the storage account name as the username of the file share, and the storage account key as the password. Since the storage account credentials may change over time, you should store the credentials for the storage account separately from the mount configuration.
+To mount an Azure file share on Linux, use the storage account name as the username of the file share, and the storage account key as the password. Because the storage account credentials may change over time, you should store the credentials for the storage account separately from the mount configuration.
The following example shows how to create a file to store the credentials. Remember to replace `<resource-group-name>` and `<storage-account-name>` with the appropriate information for your environment.
The final step is to restart the `autofs` service.
sudo systemctl restart autofs ```
+## Mount a file share snapshot
+
+If you want to mount a specific snapshot of an SMB Azure file share, you must supply the `snapshot` option as part of the `mount` command, where `snapshot` is the time that the particular snapshot was created in a format such as @GMT-2023.01.05-00.08.20. The `snapshot` option has been supported in the Linux kernel since version 4.19.
+
+After you've created the file share snapshot, follow these instructions to mount it.
+
+1. In the Azure portal, navigate to the storage account that contains the file share that you want to mount a snapshot of.
+2. Select **Data storage > File shares** and select the file share.
+3. Select **Operations > Snapshots** and take note of the name of the snapshot you want to mount. The snapshot name will be a GMT timestamp, such as in the screenshot below.
+
+ :::image type="content" source="media/storage-how-to-use-files-linux/mount-snapshot.png" alt-text="Screenshot showing how to locate a file share snapshot name and timestamp in the Azure portal." border="true" :::
+
+4. Convert the timestamp to the format expected by the `mount` command, which is **@GMT-year.month.day-hour.minutes.seconds**. In this example, you'd convert **2023-01-05T00:08:20.0000000Z** to **@GMT-2023.01.05-00.08.20**.
+5. Run the `mount` command using the GMT time to specify the `snapshot` value. Be sure to replace `<storage-account-name>`, `<file-share-name>`, and the GMT timestamp with your values. The .cred file contains the credentials to be used to mount the share (see [Automatically mount file shares](#automatically-mount-file-shares)).
+
+ ```bash
+ sudo mount -t cifs //<storage-account-name>.file.core.windows.net/<file-share-name> /mnt/<file-share-name>/snapshot1 -o credentials=/etc/smbcredentials/snapshottestlinux.cred,snapshot=@GMT-2023.01.05-00.08.20
+ ```
+
+6. If you're able to browse the snapshot under the path `/mnt/<file-share-name>/snapshot1`, then the mount succeeded.
+
+If the mount fails, see [Troubleshoot Azure Files problems in Linux (SMB)](storage-troubleshoot-linux-file-connection-problems.md).
+ ## Next steps See these links for more information about Azure Files:
storage Storage Troubleshoot Linux File Connection Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshoot-linux-file-connection-problems.md
description: Troubleshooting Azure Files problems in Linux. See common issues re
Previously updated : 09/12/2022 Last updated : 01/10/2023
In addition to the troubleshooting steps in this article, you can use [AzFileDia
Common causes for this problem are: 

-- You're using an Linux distribution with an outdated SMB client. See [Use Azure Files with Linux](storage-how-to-use-files-linux.md) for more information on common Linux distributions available in Azure that have compatible clients.
-- SMB utilities (cifs-utils) are not installed on the client.
-- The minimum SMB version, 2.1, is not available on the client.
-- SMB 3.x encryption is not supported on the client. The preceding table provides a list of Linux distributions that support mounting from on-premises and cross-region using encryption. Other distributions require kernel 4.11 and later versions.
-- You're trying to connect to an Azure file share from an Azure VM, and the VM is not in the same region as the storage account.
+- You're using a Linux distribution with an outdated SMB client. See [Use Azure Files with Linux](storage-how-to-use-files-linux.md) for more information on common Linux distributions available in Azure that have compatible clients.
+- SMB utilities (cifs-utils) aren't installed on the client.
+- The minimum SMB version, 2.1, isn't available on the client.
+- SMB 3.x encryption isn't supported on the client. The preceding table provides a list of Linux distributions that support mounting from on-premises and cross-region using encryption. Other distributions require kernel 4.11 and later versions.
+- You're trying to connect to an Azure file share from an Azure VM, and the VM isn't in the same region as the storage account.
- If the [Secure transfer required](../common/storage-require-secure-transfer.md) setting is enabled on the storage account, Azure Files will allow only connections that use SMB 3.x with encryption. 

### Solution
If virtual network (VNET) and firewall rules are configured on the storage accou
Verify virtual network and firewall rules are configured properly on the storage account. To test whether virtual network or firewall rules are causing the issue, temporarily change the setting on the storage account to **Allow access from all networks**. To learn more, see [Configure Azure Storage firewalls and virtual networks](../common/storage-network-security.md).
+<a id="mounterror22"></a>
+## "Mount error(22): Invalid argument" when trying to mount an Azure file share snapshot
+
+### Cause
+
+If the `snapshot` option for the `mount` command isn't passed in a recognized format, the `mount` command can fail with this error. To confirm, check kernel log messages (dmesg), and dmesg will show a log entry such as **cifs: Bad value for 'snapshot'**.
+
+### Solution
+
+Make sure you're passing the `snapshot` option for the `mount` command in the correct format. Refer to the mount.cifs manual page (e.g. `man mount.cifs`). A common error is passing the GMT timestamp in the wrong format, such as using hyphens or colons in place of periods. For more information, see [Mount a file share snapshot](storage-how-to-use-files-linux.md#mount-a-file-share-snapshot).
+
+<a id="badsnapshottoken"></a>
+## "Bad snapshot token" when trying to mount an Azure file share snapshot
+
+### Cause
+
+If the snapshot `mount` option is passed starting with @GMT, but the format is still wrong (such as using hyphens and colons instead of periods), the `mount` command can fail with this error.
+
+### Solution
+
+Make sure you're passing the GMT timestamp in the correct format, which is **@GMT-year.month.day-hour.minutes.seconds**. For more information, see [Mount a file share snapshot](storage-how-to-use-files-linux.md#mount-a-file-share-snapshot).
+
+<a id="mounterror2"></a>
+## "Mount error(2): No such file or directory" when trying to mount an Azure file share snapshot
+
+### Cause
+
+If the snapshot that you're attempting to mount doesn't exist, the `mount` command can fail with this error. To confirm, check kernel log messages (dmesg), and dmesg will show a log entry such as:
+
+```bash
+[Mon Dec 12 10:34:09 2022] CIFS: Attempting to mount \\snapshottestlinux.file.core.windows.net\snapshot-test-share1
+[Mon Dec 12 10:34:09 2022] CIFS: VFS: cifs_mount failed w/return code = -2
+```
+
+### Solution
+
+Make sure the snapshot you're attempting to mount exists. For more information on how to list the available snapshots for a given Azure file share, see [Mount a file share snapshot](storage-how-to-use-files-linux.md#mount-a-file-share-snapshot).
+ <a id="permissiondenied"></a> ## "[permission denied] Disk quota exceeded" when you try to open a file
storage Storage Troubleshooting Files Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-troubleshooting-files-performance.md
To determine whether most of your requests are metadata-centric, start by follow
![Screenshot of the metrics options for premium file shares, showing an "API name" property filter.](media/storage-troubleshooting-premium-fileshares/MetadataMetrics.png)
-#### Workaround
+#### Workarounds
- Check to see whether the application can be modified to reduce the number of metadata operations.
- Add a virtual hard disk (VHD) on the file share and mount the VHD from the client to perform file operations against the data. This approach works for single writer/reader scenarios or scenarios with multiple readers and no writers. Because the file system is owned by the client rather than Azure Files, this allows metadata operations to be local. The setup offers performance similar to that of local directly attached storage.
- - To mount a VHD on a Windows client, use the [`Mount-DiskImage`](/powershell/module/storage/mount-diskimage) PowerShell cmdlet.
- - To mount a VHD on Linux, consult the documentation for your Linux distribution. [Here's an example](https://man7.org/linux/man-pages/man5/nfs.5.html).
+- Separate the file share into multiple file shares within the same storage account.
+- Add a virtual hard disk (VHD) on the file share and mount the VHD from the client to perform file operations against the data. This approach works for single writer/reader scenarios or scenarios with multiple readers and no writers. Because the file system is owned by the client rather than Azure Files, this allows metadata operations to be local. The setup offers performance similar to that of local directly attached storage. However, because the data is in a VHD, it can't be accessed via any other means other than the SMB mount, such as REST API or through the Azure portal.
+ 1. From the machine which needs to access the Azure file share, mount the file share using the storage account key and map it to an available network drive (for example, Z:).
+ 1. Go to **Disk Management** and select **Action > Create VHD**.
+ 1. Set **Location** to the network drive that the Azure file share is mapped to, set **Virtual hard disk size** as needed, and select **Fixed size**.
+ 1. Select **OK**. Once the VHD creation is complete, it will automatically mount, and a new unallocated disk will appear.
+ 1. Right-click the new unknown disk and select **Initialize Disk**.
+ 1. Right-click the unallocated area and create a **New Simple Volume**.
+ 1. You should see a new drive letter appear in **Disk Management** representing this VHD with read/write access (for example, E:). In **File Explorer**, you should see the new VHD on the mapped Azure file share's network drive (Z: in this example). To be clear, there should be two drive letters present: the standard Azure file share network mapping on Z:, and the VHD mapping on the E: drive.
+ 1. There should be much better performance on heavy metadata operations against files on the VHD mapped drive (E:) versus the Azure file share mapped drive (Z:). If desired, it should be possible to disconnect the mapped network drive (Z:) and still access the mounted VHD drive (E:).
+
+ - To mount a VHD on a Windows client, you can also use the [`Mount-DiskImage`](/powershell/module/storage/mount-diskimage) PowerShell cmdlet, as shown in the sketch after this list.
+ - To mount a VHD on Linux, consult the documentation for your Linux distribution. [Here's an example](https://man7.org/linux/man-pages/man5/nfs.5.html).
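A compressed PowerShell sketch of steps 2-4 above, using hypothetical paths and sizes; it assumes the Azure file share is already mapped to drive Z: and that the Hyper-V PowerShell module, which provides `New-VHD`, is installed:

```powershell
# Create a fixed-size VHD directly on the mapped Azure file share (Z:).
New-VHD -Path 'Z:\metadata.vhdx' -SizeBytes 100GB -Fixed

# Attach the VHD. Initialize and format it afterwards in Disk Management (steps 5-7 above),
# or detach it later with Dismount-DiskImage -ImagePath 'Z:\metadata.vhdx'.
Mount-DiskImage -ImagePath 'Z:\metadata.vhdx'
```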
### Cause 3: Single-threaded application
synapse-analytics Query Delta Lake Format https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/query-delta-lake-format.md
In this article, you'll learn how to write a query using serverless Synapse SQL pool to read Delta Lake files. Delta Lake is an open-source storage layer that brings ACID (atomicity, consistency, isolation, and durability) transactions to Apache Spark and big data workloads.
+You can learn more from the [how to query delta lake tables video](https://www.youtube.com/watch?v=LSIVX0XxVfc).
The serverless SQL pool in Synapse workspace enables you to read the data stored in Delta Lake format, and serve it to reporting tools. A serverless SQL pool can read Delta Lake files that are created using Apache Spark, Azure Databricks, or any other producer of the Delta Lake format.
-Apache Spark pools in Azure Synapse enable data engineers to modify Delta Lake files using Scala, PySpark, and .NET. Serverless SQL pools help data analysts to create reports
-on Delta Lake files created by data engineers. You can learn more from the [how to query delta lake tables video](https://www.youtube.com/watch?v=LSIVX0XxVfc).
+Apache Spark pools in Azure Synapse enable data engineers to modify Delta Lake files using Scala, PySpark, and .NET. Serverless SQL pools help data analysts to create reports on Delta Lake files created by data engineers.
+
+> [!IMPORTANT]
+> Querying Delta Lake format using the serverless SQL pool is **Generally available** functionality. However, querying Spark Delta tables is still in public preview and not production ready. There are known issues that might happen if you query Delta tables created using the Spark pools. See the known issues in the [self-help page](resources-self-help-sql-on-demand.md#delta-lake).
## Quickstart example
virtual-desktop Rdp Shortpath https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-shortpath.md
All connections begin by establishing a TCP-based [reverse connect transport](ne
1. After establishing the RDP Shortpath transport, all Dynamic Virtual Channels (DVCs), including remote graphics, input, and device redirection, are moved to the new transport. However, if a firewall or network topology prevents the client from establishing direct UDP connectivity, RDP continues with a reverse connect transport.
-If your users have both RDP Shortpath for managed network and public networks available to them, then the first algorithm found will be used. Whichever connection gets established first is what the user will use for that session.
+If your users have both RDP Shortpath for managed network and public networks available to them, then the first-found algorithm will be used. The user will use whichever connection gets established first for that session.
# [Public networks](#tab/public-networks)
All connections begin by establishing a TCP-based [reverse connect transport](ne
1. After RDP establishes the RDP Shortpath transport, all Dynamic Virtual Channels (DVCs), including remote graphics, input, and device redirection move to the new transport.
-If your users have both RDP Shortpath for managed network and public networks available to them, then the first algorithm found will be used. Whichever connection gets established first is what the user will use for that session.
+If your users have both RDP Shortpath for managed network and public networks available to them, then the first-found algorithm will be used. The user will use whichever connection gets established first for that session.
> [!IMPORTANT] > When using a TCP-based transport, outbound traffic from session host to client is through the Azure Virtual Desktop Gateway. With RDP Shortpath, outbound traffic is established directly between session host and client over the internet. This removes a hop which improves latency and end user experience. However, due to the changes in data flow between session host and client where the Gateway is no longer used, there will be standard [Azure egress network charges](https://azure.microsoft.com/pricing/details/bandwidth/) billed in addition per subscription for the internet bandwidth consumed. To learn more about estimating the bandwidth used by RDP, see [RDP bandwidth requirements](rdp-bandwidth.md).
To support RDP Shortpath for public networks, you typically don't need any parti
As RDP Shortpath uses UDP to establish a data flow, if a firewall on your network blocks UDP traffic, RDP Shortpath will fail and the connection will fall back to TCP-based reverse connect transport. Azure Virtual Desktop uses STUN servers provided by Azure Communication Services and Microsoft Teams. By the nature of the feature, outbound connectivity from the session hosts to the client is required. Unfortunately, you can't predict where your users are located in most cases. Therefore, we recommend allowing outbound UDP connectivity from your session hosts to the internet. To reduce the number of ports required, you can [limit the port range used by clients](configure-rdp-shortpath-limit-ports-public-networks.md) for the UDP flow. Use the following tables for reference when configuring firewalls for RDP Shortpath.
-If your users are in a scenario where RDP Shortpath for both managed network and public networks is available to them, then the first algorithm found will be used. Whichever connection gets established first is what the user will use for that session.
+If your users are in a scenario where RDP Shortpath for both managed network and public networks is available to them, then the first-found algorithm will be used. The user will use whichever connection gets established first for that session.
> [!NOTE] > RDP Shortpath doesn't support Symmetric NAT, which is the mapping of a single private source *IP:Port* to a unique public destination *IP:Port*. This is because RDP Shortpath needs to reuse the same external port (or NAT binding) used in the initial connection. Where multiple paths are used, for example a highly available firewall pair, external port reuse cannot be guaranteed. Azure Firewall and Azure NAT Gateway use Symmetric NAT and so are not supported. For more information about NAT with Azure virtual networks, see [Source Network Address Translation with virtual networks](../virtual-network/nat-gateway/nat-gateway-resource.md#source-network-address-translation).
RDP Shortpath uses a TLS connection between the client and the session host usin
> [!NOTE] > The security offered by RDP Shortpath is the same as that offered by reverse connect transport.
+## Example scenarios
+
+Here are some example scenarios to show how connections are evaluated to decide whether RDP Shortpath is used across different network topologies.
+
+### Scenario 1
+
+A UDP connection can only be established between the client device and the session host over a public network (internet). A direct connection, such as a VPN, is not available.
++
+### Scenario 2
+
+A UDP connection can be established between the client device and the session host over a public network or over a direct VPN connection, but RDP Shortpath for managed networks is not enabled. When the client initiates the connection, the ICE/STUN protocol can see multiple routes and will evaluate each route and choose the one with the lowest latency.
+
+In this example, a UDP connection using RDP Shortpath for public networks over the direct VPN connection will be made as it has the lowest latency, as shown by the green line.
++
+### Scenario 3
+
+Both RDP Shortpath for public networks and managed networks are enabled. A UDP connection can be established between the client device and the session host over a public network or over a direct VPN connection. When the client initiates the connection, there are simultaneous attempts to connect using RDP Shortpath for managed networks through port 3390 (by default) and RDP Shortpath for public networks through the ICE/STUN protocol. The first-found algorithm will be used and the user will use whichever connection gets established first for that session.
+
+Since going over a public network involves additional hops, for example a NAT device, a load balancer, or a STUN server, it's likely that the connection using RDP Shortpath for managed networks will be established first and therefore be the one selected by the first-found algorithm.
++
+### Scenario 4
+
+A UDP connection can be established between the client device and the session host over a public network or over a direct VPN connection, but RDP Shortpath for managed networks is not enabled. To prevent ICE/STUN from using a particular route, an admin can block one of the routes for UDP traffic. Blocking a route would ensure the remaining path is always used.
+
+In this example, UDP is blocked on the direct VPN connection and the ICE/STUN protocol establishes a connection over the public network.
++
+### Scenario 5
+
+Both RDP Shortpath for public networks and managed networks are configured; however, a UDP connection couldn't be established. In this instance, RDP Shortpath will fail and the connection will fall back to TCP-based reverse connect transport.
++ ## Next steps - Learn how to [Configure RDP Shortpath](configure-rdp-shortpath.md).
virtual-machines Copy Files To Vm Using Scp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/copy-files-to-vm-using-scp.md
The `-r` flag instructs SCP to recursively copy the files and directories from t
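As a minimal sketch of that recursive copy (the local folder name, user name, and IP address are assumptions used only for illustration):

```bash
# Copy the local directory myLocalFiles, and everything under it, to the
# home directory of azureuser on the VM at 10.123.123.25.
scp -r ./myLocalFiles azureuser@10.123.123.25:~/myLocalFiles
```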
## Next steps
-* [Manage users, SSH, and check or repair disks on Azure Linux VMs using the 'VMAccess' Extension](/extensions/vmaccess.md)
+* [Manage users, SSH, and check or repair disks on Azure Linux VMs using the 'VMAccess' Extension](/azure/virtual-machines/extensions/vmaccess)
virtual-machines Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/dedicated-hosts.md
Previously updated : 12/07/2020 Last updated : 12/14/2022
-#Customer intent: As an IT administrator, I want to learn about more about using a dedicated host for my Azure virtual machines
+#Customer intent: As an IT administrator, I want to learn more about using a dedicated host for my Azure virtual machines
# Azure Dedicated Hosts **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
-Azure Dedicated Host is a service that provides physical servers - able to host one or more virtual machines - dedicated to one Azure subscription. Dedicated hosts are the same physical servers used in our data centers, provided as a resource. You can provision dedicated hosts within a region, availability zone, and fault domain. Then, you can place VMs directly into your provisioned hosts, in whatever configuration best meets your needs.
-
+Azure Dedicated Host is a service that provides physical servers able to host one or more virtual machines assigned to one Azure subscription. Dedicated hosts are the same physical servers used in our data centers, provided instead as a directly accessible hardware resource. You can provision dedicated hosts within a region, availability zone, and fault domain. You can then place VMs directly into your provisioned hosts in whatever configuration best meets your needs.
## Benefits
-Reserving the entire host provides the following benefits:
+Reserving the entire host provides several benefits beyond those of a standard shared virtual machine host:
+
+- Cost Optimization: With the Azure hybrid benefit, you can bring your own licenses for Windows and SQL to Azure. For more information, see [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/).
+
+- Reliability: You have near complete control over maintenance events initiated by the Azure platform. While most maintenance events have little to no impact on your virtual machines, there are some sensitive workloads where each second of pause can have an impact. With dedicated hosts, you can opt in to a maintenance window to reduce the impact to your service.
-- Hardware isolation at the physical server level. No other VMs will be placed on your hosts. Dedicated hosts are deployed in the same data centers and share the same network and underlying storage infrastructure as other, non-isolated hosts.-- Control over maintenance events initiated by the Azure platform. While the majority of maintenance events have little to no impact on your virtual machines, there are some sensitive workloads where each second of pause can have an impact. With dedicated hosts, you can opt in to a maintenance window to reduce the impact to your service.-- With the Azure hybrid benefit, you can bring your own licenses for Windows and SQL to Azure. Using the hybrid benefits provides you with additional benefits. For more information, see [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/).
+- Performance Efficiency: Because you have control over a physical host, you can choose which applications share physical resources such as memory and storage. This can speed up certain workloads that benefit from low latency and high throughput on the host machine.
+- Security: Hardware isolation at the physical server level allows for sensitive memory data to remain isolated within a physical host. No other customer's VMs will be placed on your hosts. Dedicated hosts are deployed in the same data centers and share the same network and underlying storage infrastructure as other, non-isolated hosts.
## Groups, hosts, and VMs
For high availability, you should deploy multiple VMs, spread across multiple ho
### Use Availability Zones for fault isolation
-Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. A host group is created in a single availability zone. Once created, all hosts will be placed within that zone. To achieve high availability across zones, you need to create multiple host groups (one per zone) and spread your hosts accordingly.
+Availability zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. A host group is created in a single availability zone. Once created, all hosts will be placed within that zone. To achieve high availability across zones, you need to create multiple host groups (one per zone) and spread your hosts between them accordingly.
If you assign a host group to an availability zone, all VMs created on that host must be created in the same zone. ### Use Fault Domains for fault isolation
-A host can be created in a specific fault domain. Just like VM in a scale set or availability set, hosts in different fault domains will be placed on different physical racks in the data center. When you create a host group, you are required to specify the fault domain count. When creating hosts within the host group, you assign fault domain for each host. The VMs do not require any fault domain assignment.
+A host can be created in a specific fault domain. Just like a VM in a scale set or availability set, hosts in different fault domains will be placed on different physical racks in the data center. When you create a host group, you're required to specify the fault domain count. When creating hosts within the host group, you assign a fault domain to each host. The VMs don't require any fault domain assignment.
-Fault domains are not the same as colocation. Having the same fault domain for two hosts does not mean they are in proximity with each other.
+Fault domains aren't the same as colocation. Having the same fault domain for two hosts doesn't mean they are in proximity with each other.
-Fault domains are scoped to the host group. You should not make any assumption on anti-affinity between two host groups (unless they are in different availability zones).
+Fault domains are scoped to the host group. You shouldn't make any assumption on anti-affinity between two host groups (unless they are in different availability zones).
VMs deployed to hosts with different fault domains will have their underlying managed disks serviced on multiple storage stamps, to increase the fault isolation protection. ### Using Availability Zones and Fault Domains
-You can use both capabilities together to achieve even more fault isolation. In this case, you will specify the availability zone and fault domain count in for each host group, assign a fault domain to each of your hosts in the group, and assign an availability zone to each of your VMs
+You can use both capabilities together to achieve even more fault isolation. To use both, specify the availability zone and fault domain count for each host group, assign a fault domain to each host in the group, then assign an availability zone to each VM.
The [Resource Manager sample template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md) uses zones and fault domains to spread hosts for maximum resiliency in a region.
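For illustration, here's a hedged Azure CLI sketch of that layout: a zonal host group with two fault domains and a single host placed in fault domain 0. The resource group, names, region, and host SKU are assumptions, not values from this article.

```bash
# Create a host group pinned to availability zone 1 with two fault domains.
az vm host group create \
  --resource-group myResourceGroup \
  --name myHostGroup \
  --location eastus \
  --zone 1 \
  --platform-fault-domain-count 2

# Create a dedicated host in that group, assigned to fault domain 0.
az vm host create \
  --resource-group myResourceGroup \
  --host-group myHostGroup \
  --name myHost \
  --sku DSv3-Type3 \
  --platform-fault-domain 0
```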
When creating a VM in Azure, you can select which dedicated host to use. You can
When creating a new host group, make sure the setting for automatic VM placement is selected. When creating your VM, select the host group and let Azure pick the best host for your VM.
-Host groups that are enabled for automatic placement do not require all the VMs to be automatically placed. You will still be able to explicitly pick a host, even when automatic placement is selected for the host group.
+Host groups that are enabled for automatic placement don't require all the VMs to be automatically placed. You'll still be able to explicitly pick a host, even when automatic placement is selected for the host group.
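A hedged sketch of letting Azure pick the host, assuming a host group that was created with automatic placement enabled (the names, image alias, and VM size are assumptions):

```bash
# Create a VM against the host group and let Azure choose the host.
# The VM size must belong to a family supported by the hosts in the group.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --size Standard_D4s_v3 \
  --host-group myHostGroup \
  --admin-username azureuser \
  --generate-ssh-keys
```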
### Limitations Known issues and limitations when using automatic VM placement: -- You will not be able to redeploy your VM.-- You will not be able to use DCv2, Lsv2, NVasv4, NVsv3, Msv2, or M-series VMs with dedicated hosts
+- You won't be able to redeploy your VM.
+- You won't be able to use DCv2, Lsv2, NVasv4, NVsv3, Msv2, or M-series VMs with dedicated hosts.
-## Virtual machine scale set support
+## Virtual Machine Scale Set support
-Virtual machine scale sets let you treat a group of virtual machines as a single resource, and apply availability, management, scaling and orchestration policies as a group. Your existing dedicated hosts can also be used for virtual machine scale sets.
+Virtual Machine Scale Sets let you treat a group of virtual machines as a single resource, and apply availability, management, scaling and orchestration policies as a group. Your existing dedicated hosts can also be used for Virtual Machine Scale Sets.
-When creating a virtual machine scale set you can specify an existing host group to have all of the VM instances created on dedicated hosts.
+When creating a Virtual Machine Scale Set, you can specify an existing host group to have all of the VM instances created on dedicated hosts.
-The following requirements apply when creating a virtual machine scale set in a dedicated host group:
+The following requirements apply when creating a Virtual Machine Scale Set in a dedicated host group:
- Automatic VM placement needs to be enabled. - The availability setting of your host group should match your scale set.
The following requirements apply when creating a virtual machine scale set in a
- The supported VM sizes for your dedicated hosts should match the one used for your scale set. Not all scale-set orchestration and optimizations settings are supported by dedicated hosts. Apply the following settings to your scale set:-- Overprovisioning is not recommended, and it is disabled by default. You can enable overprovisioning, but the scale set allocation will fail if the host group does not have capacity for all of the VMs, including the overprovisioned instances.
+- Overprovisioning isn't recommended, and it's disabled by default. You can enable overprovisioning, but the scale set allocation will fail if the host group doesn't have capacity for all of the VMs, including the overprovisioned instances.
- Use the ScaleSetVM orchestration mode-- Do not use proximity placement groups for co-location
+- Don't use proximity placement groups for co-location
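As a hedged example that follows the settings above (the names, image alias, and sizes are assumptions; `Uniform` corresponds to the ScaleSetVM orchestration mode mentioned in this list):

```bash
# Create a scale set whose instances land on the dedicated hosts in the group.
# Overprovisioning is left disabled, as recommended above.
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image Ubuntu2204 \
  --vm-sku Standard_D4s_v3 \
  --instance-count 2 \
  --orchestration-mode Uniform \
  --disable-overprovision \
  --host-group myHostGroup \
  --admin-username azureuser \
  --generate-ssh-keys
```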
## Maintenance control
-The infrastructure supporting your virtual machines may occasionally be updated to improve reliability, performance, security, and to launch new features. The Azure platform tries to minimize the impact of platform maintenance whenever possible, but customers with *maintenance sensitive* workloads can't tolerate even few seconds that the VM needs to be frozen or disconnected for maintenance.
+The infrastructure supporting your virtual machines may occasionally be updated to improve reliability, performance, security, and to launch new features. The Azure platform tries to minimize the impact of platform maintenance whenever possible; however, customers with *maintenance sensitive* workloads can't tolerate even a few seconds that the VM needs to be shut down for maintenance.
-**Maintenance Control** provides customers with an option to skip regular platform updates scheduled on their dedicated hosts, then apply it at the time of their choice within a 35-day rolling window. Within the maintenance window, you can apply maintenance directly at the host level, in any order. Once the maintenance window is over, Microsoft will move forward and apply the pending maintenance to the hosts in an order which may not follow the user defined fault domains.
+**Maintenance Control** provides customers with an option to skip regular platform updates scheduled on their dedicated hosts, then apply it at the time of their choice within a 35-day rolling window. Within the maintenance window, you can apply maintenance directly at the host level, in any order. Once the maintenance window is over, Microsoft will move forward and apply the pending maintenance to the hosts in an order that may not follow the user defined fault domains.
For more information, see [Managing platform updates with Maintenance Control](./maintenance-configurations.md). ## Capacity considerations
-Once a dedicated host is provisioned, Azure assigns it to physical server. This guarantees the availability of the capacity when you need to provision your VM. Azure uses the entire capacity in the region (or zone) to pick a physical server for your host. It also means that customers can expect to be able to grow their dedicated host footprint without the concern of running out of space in the cluster.
+Once a dedicated host is provisioned, Azure assigns it to a physical server. Doing so guarantees the availability of the capacity when you need to provision your VM. Azure uses the entire capacity in the region (or zone) to pick a physical server for your host. It also means that customers can expect to be able to grow their dedicated host footprint without the concern of running out of space in the cluster.
## Quotas
There are two types of quota that are consumed when you deploy a dedicated host.
To request a quota increase, create a support request in the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-Provisioning a dedicated host will consume both dedicated host vCPU and the VM family vCPU quota, but it will not consume the regional vCPU. VMs placed on a dedicated host will not count against VM family vCPU quota. Should a VM be moved off a dedicated host into a multi-tenant environment, the VM will consume VM family vCPU quota.
+Provisioning a dedicated host will consume both dedicated host vCPU and the VM family vCPU quota, but it won't consume the regional vCPU. VMs placed on a dedicated host won't count against VM family vCPU quota. Should a VM be moved off a dedicated host into a multi-tenant environment, the VM will consume VM family vCPU quota.
![Screenshot of the usage and quotas page in the portal](./media/virtual-machines-common-dedicated-hosts/quotas.png) For more information, see [Virtual machine vCPU quotas](./windows/quotas.md).
-Free trial and MSDN subscriptions do not have quota for Azure Dedicated Hosts.
+Free trial and MSDN subscriptions don't have quota for Azure Dedicated Hosts.
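To see how much of those quotas you're currently consuming in a region, here's a quick, hedged check with the Azure CLI (the region is an assumption):

```bash
# Lists compute usage and limits for the region, including dedicated host
# vCPUs and per-VM-family vCPU quotas.
az vm list-usage --location eastus --output table
```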
## Pricing
-Users are charged per dedicated host, regardless how many VMs are deployed. In your monthly statement you will see a new billable resource type of hosts. The VMs on a dedicated host will still be shown in your statement, but will carry a price of 0.
+Users are charged per dedicated host, regardless of how many VMs are deployed. In your monthly statement, you'll see a new billable resource type of hosts. The VMs on a dedicated host will still be shown in your statement, but will carry a price of 0.
The host price is set based on VM family, type (hardware size), and region. A host price is relative to the largest VM size supported on the host.
-Software licensing, storage and network usage are billed separately from the host and VMs. There is no change to those billable items.
+Software licensing, storage and network usage are billed separately from the host and VMs. There's no change to those billable items.
For more information, see [Azure Dedicated Host pricing](https://aka.ms/ADHPricing).
You can also save on costs with a [Reserved Instance of Azure Dedicated Hosts](p
## Sizes and hardware generations
-A SKU is defined for a host and it represents the VM size series and type. You can mix multiple VMs of different sizes within a single host as long as they are of the same size series.
+A SKU represents the VM size series and type on a given host. You can mix multiple VMs of different sizes within a single host as long as they are of the same size series.
The *type* is the hardware generation. Different hardware types for the same VM series will be from different CPU vendors and have different CPU generations and number of cores.
Azure monitors and manages the health status of your hosts. The following states
| Health State | Description | |-|-| | Host Available | There are no known issues with your host. |
-| Host Under Investigation | We're having some issues with the host which we're looking into. This is a transitional state required for Azure to try and identify the scope and root cause for the issue identified. Virtual machines running on the host may be impacted. |
+| Host Under Investigation | We're having some issues with the host that we're looking into. This transitional state is required for Azure to try to identify the scope and root cause for the issue identified. Virtual machines running on the host may be impacted. |
| Host Pending Deallocate | Azure can't restore the host back to a healthy state and asks you to redeploy your virtual machines out of this host. If `autoReplaceOnFailure` is enabled, your virtual machines are *service healed* to healthy hardware. Otherwise, your virtual machine may be running on a host that is about to fail.|
-| Host deallocated | All virtual machines have been removed from the host. You are no longer being charged for this host since the hardware was taken out of rotation. |
+| Host deallocated | All virtual machines have been removed from the host. You're no longer being charged for this host since the hardware was taken out of rotation. |
## Next steps - To deploy a dedicated host, see [Deploy VMs and scale sets to dedicated hosts](./dedicated-hosts-how-to.md). -- There is a [sample template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md) that uses both zones and fault domains for maximum resiliency in a region.
+- There's a [sample template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/vm-dedicated-hosts/README.md) that uses both zones and fault domains for maximum resiliency in a region.
- You can also save on costs with a [Reserved Instance of Azure Dedicated Hosts](prepay-dedicated-hosts-reserved-instances.md).
virtual-machines Image Version Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/image-version-encryption.md
Title: Create an image version encrypted with your own keys
+ Title: Create an encrypted image version with customer-managed keys
description: Create an image version in an Azure Compute Gallery, by using customer-managed encryption keys. Previously updated : 12/6/2022 Last updated : 1/9/2023 ms.devlang: azurecli
-# Use customer-managed keys for encrypting images
+# Create an encrypted image version with customer-managed keys
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
This article requires that you already have a disk encryption set in each region
## Limitations
-When you're using customer-managed keys for encrypting images in an Azure Compute Gallery, these limitations apply:
+When you're using customer-managed keys for encrypting images in an Azure Compute Gallery, these limitations apply:
- Encryption key sets must be in the same subscription as your image.
When you're using customer-managed keys for encrypting images in an Azure Comput
- After you've used your own keys to encrypt a disk or image, you can't go back to using platform-managed keys for encrypting those disks or images.
+- VM image version source doesn't currently support customer-managed key encryption.
## PowerShell
virtual-machines Add Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/add-disk.md
Previously updated : 12/08/2022 Last updated : 01/09/2023
ssh azureuser@10.123.123.25
### Find the disk
-Once connected to your VM, you need to find the disk. In this example, we are using `lsblk` to list the disks.
+Once you connect to your VM, find the disk. In this example, we're using `lsblk` to list the disks.
```bash lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
sdb 1:0:1:0 14G
sdc 3:0:0:0 50G ```
-Here, `sdc` is the disk that we want, because it is 50G. If you add multiple disks, and aren't sure which disk it is based on size alone, you can go to the VM page in the portal, select **Disks**, and check the LUN number for the disk under **Data disks**. Compare the LUN number from the portal to the last number of the **HTCL** portion of the output, which is the LUN.
+Here, `sdc` is the disk that we want, because it's 50G. If you add multiple disks and aren't sure which disk it is based on size alone, you can go to the VM page in the portal, select **Disks**, and check the LUN number for the disk under **Data disks**. Compare the LUN number from the portal to the last number of the **HCTL** portion of the output, which is the LUN.
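On most Azure marketplace Linux images, the platform's udev rules also expose data disks by LUN under `/dev/disk/azure`, which can be an easier way to confirm which device corresponds to which LUN. Treat this as a hint rather than part of the original procedure; the exact layout depends on your image.

```bash
# Each lunN entry is a symlink to the underlying block device for that LUN,
# for example lun0 pointing at sdc.
ls -l /dev/disk/azure/scsi1/
```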
### Format the disk
-Format the disk with `parted`, if the disk size is 2 tebibytes (TiB) or larger then you must use GPT partitioning, if it is under 2TiB, then you can use either MBR or GPT partitioning.
+Format the disk with `parted`. If the disk size is two tebibytes (TiB) or larger, you must use GPT partitioning; if it's under 2 TiB, you can use either MBR or GPT partitioning.
> [!NOTE] > It is recommended that you use the latest version of `parted` that is available for your distro. > If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning. If the disk size is under 2 TiB, then you can use either MBR or GPT partitioning.
-The following example uses `parted` on `/dev/sdc`, which is where the first data disk will typically be on most VMs. Replace `sdc` with the correct option for your disk. We are also formatting it using the [XFS](https://xfs.wiki.kernel.org/) filesystem.
+The following example uses `parted` on `/dev/sdc`, which is where the first data disk will typically be on most VMs. Replace `sdc` with the correct option for your disk. We're also formatting it using the [XFS](https://xfs.wiki.kernel.org/) filesystem.
```bash sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
sudo mount /dev/sdc1 /datadrive
### Persist the mount
-To ensure that the drive is remounted automatically after a reboot, it must be added to the */etc/fstab* file. It is also highly recommended that the UUID (Universally Unique Identifier) is used in */etc/fstab* to refer to the drive rather than just the device name (such as, */dev/sdc1*). If the OS detects a disk error during boot, using the UUID avoids the incorrect disk being mounted to a given location. Remaining data disks would then be assigned those same device IDs. To find the UUID of the new drive, use the `blkid` utility:
+To ensure that the drive is remounted automatically after a reboot, it must be added to the */etc/fstab* file. It's also highly recommended that the UUID (Universally Unique Identifier) is used in */etc/fstab* to refer to the drive rather than just the device name (such as, */dev/sdc1*). If the OS detects a disk error during boot, using the UUID avoids the incorrect disk being mounted to a given location. Remaining data disks would then be assigned those same device IDs. To find the UUID of the new drive, use the `blkid` utility:
```bash sudo blkid
The output looks similar to the following example:
> [!NOTE] > Improperly editing the **/etc/fstab** file could result in an unbootable system. If unsure, refer to the distribution's documentation for information on how to properly edit this file. It is also recommended that a backup of the /etc/fstab file is created before editing.
-Next, open the */etc/fstab* file in a text editor as follows:
-
-```bash
-sudo nano /etc/fstab
-```
-
-In this example, use the UUID value for the `/dev/sdc1` device that was created in the previous steps, and the mountpoint of `/datadrive`. Add the following line to the end of the `/etc/fstab` file:
+Next, open the **/etc/fstab** file in a text editor. Add a line to the end of the file, using the UUID value for the `/dev/sdc1` device that was created in the previous steps, and the mountpoint of `/datadrive`. Using the example from this article, the new line would look like the following:
```bash UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /datadrive xfs defaults,nofail 1 2 ```
-In this example, we are using the nano editor, so when you are done editing the file, use `Ctrl+O` to write the file and `Ctrl+X` to exit the editor.
+When you're done editing the file, save and close the editor.
> [!NOTE] > Later removing a data disk without editing fstab could cause the VM to fail to boot. Most distributions provide either the *nofail* and/or *nobootwait* fstab options. These options allow a system to boot even if the disk fails to mount at boot time. Consult your distribution's documentation for more information on these parameters.
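Before rebooting, you can sanity-check the new entry. This is an optional step, not part of the original procedure:

```bash
# Ask mount to process /etc/fstab again; a typo in the new line usually
# surfaces as an error here instead of at boot time.
sudo mount -a

# Confirm the data disk is mounted at the expected location.
df -h /datadrive
```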
There are two ways to enable TRIM support in your Linux VM. As usual, consult yo
- In some cases, the `discard` option may have performance implications. Alternatively, you can run the `fstrim` command manually from the command line, or add it to your crontab to run regularly:
- **Ubuntu**
+# [Ubuntu](#tab/ubuntu)
- ```bash
- sudo apt-get install util-linux
- sudo fstrim /datadrive
- ```
+```bash
+sudo apt-get install util-linux
+sudo fstrim /datadrive
+```
- **RHEL/CentOS**
+# [RHEL](#tab/rhel)
- ```bash
- sudo yum install util-linux
- sudo fstrim /datadrive
- ```
+```bash
+sudo yum install util-linux
+sudo fstrim /datadrive
+```
+
+# [SUSE](#tab/suse)
+
+```bash
+sudo fstrim /datadrive
+```
+ ## Troubleshooting
There are two ways to enable TRIM support in your Linux VM. As usual, consult yo
## Next steps * To ensure your Linux VM is configured correctly, review the [Optimize your Linux machine performance](/previous-versions/azure/virtual-machines/linux/optimization) recommendations.
-* Expand your storage capacity by adding additional disks and [configure RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) for additional performance.
+* Expand your storage capacity by adding more disks and [configure RAID](/previous-versions/azure/virtual-machines/linux/configure-raid) for extra performance.
virtual-machines Attach Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/attach-disk-portal.md
description: Use the portal to attach new or existing data disk to a Linux VM.
Previously updated : 08/13/2021 Last updated : 01/09/2023
Before you attach disks to your VM, review these tips:
:::image type="content" source="./medi.png" alt-text="Review disk settings.":::
-1. When you are done, select **Save** at the top of the page to create the managed disk and update the VM configuration.
+1. When you're done, select **Save** at the top of the page to create the managed disk and update the VM configuration.
## Attach an existing disk 1. On the **Disks** pane, under **Data disks**, select **Attach existing disks**.
-1. Click the drop-down menu for **Disk name** and select a disk from the list of available managed disks.
+1. Select the drop-down menu for **Disk name** and select a disk from the list of available managed disks.
-1. Click **Save** to attach the existing managed disk and update the VM configuration:
+1. Select **Save** to attach the existing managed disk and update the VM configuration:
## Connect to the Linux VM to mount the new disk
ssh azureuser@10.123.123.25
## Find the disk
-Once connected to your VM, you need to find the disk. In this example, we are using `lsblk` to list the disks.
+Once connected to your VM, you need to find the disk. In this example, we're using `lsblk` to list the disks.
```bash lsblk -o NAME,HCTL,SIZE,MOUNTPOINT | grep -i "sd"
sdb 1:0:1:0 14G
sdc 3:0:0:0 4G ```
-In this example, the disk that I added is `sdc`. It is a LUN 0 and is 4GB.
+In this example, the disk that was added was `sdc`. It's at LUN 0 and is 4 GB.
-For a more complex example, here is what multiple data disks look like in the portal:
+For a more complex example, here's what multiple data disks look like in the portal:
:::image type="content" source="./media/attach-disk-portal/find-disk.png" alt-text="Screenshot of multiple disks shown in the portal."::: In the image, you can see that there are 3 data disks: 4 GB at LUN 0, 16 GB at LUN 1, and 32 GB at LUN 2.
-Here is what that might look like using `lsblk`:
+Here's what that might look like using `lsblk`:
```bash sda 0:0:0:0 30G
From the output of `lsblk` you can see that the 4GB disk at LUN 0 is `sdc`, the
> [!IMPORTANT] > If you are using an existing disk that contains data, skip to [mounting the disk](#mount-the-disk).
-> The following instuctions will delete data on the disk.
+> The following instructions will delete data on the disk.
-If you are attaching a new disk, you need to partition the disk.
+If you're attaching a new disk, you need to partition the disk.
The `parted` utility can be used to partition and to format a data disk.-- It is recommended that you use the latest version `parted` that is available for your distro.
+- Use the latest version of `parted` that is available for your distro.
- If the disk size is 2 tebibytes (TiB) or larger, you must use GPT partitioning. If disk size is under 2 TiB, then you can use either MBR or GPT partitioning.
-The following example uses `parted` on `/dev/sdc`, which is where the first data disk will typically be on most VMs. Replace `sdc` with the correct option for your disk. We are also formatting it using the [XFS](https://xfs.wiki.kernel.org/) filesystem.
+The following example uses `parted` on `/dev/sdc`, which is where the first data disk will typically be on most VMs. Replace `sdc` with the correct option for your disk. We're also formatting it using the [XFS](https://xfs.wiki.kernel.org/) filesystem.
```bash sudo parted /dev/sdc --script mklabel gpt mkpart xfspart xfs 0% 100%
Use `mount` to then mount the filesystem. The following example mounts the */dev
sudo mount /dev/sdc1 /datadrive ```
-To ensure that the drive is remounted automatically after a reboot, it must be added to the */etc/fstab* file. It is also highly recommended that the UUID (Universally Unique Identifier) is used in */etc/fstab* to refer to the drive rather than just the device name (such as, */dev/sdc1*). If the OS detects a disk error during boot, using the UUID avoids the incorrect disk being mounted to a given location. Remaining data disks would then be assigned those same device IDs. To find the UUID of the new drive, use the `blkid` utility:
+To ensure that the drive is remounted automatically after a reboot, it must be added to the */etc/fstab* file. It's also highly recommended that the UUID (Universally Unique Identifier) is used in */etc/fstab* to refer to the drive rather than just the device name (such as, */dev/sdc1*). If the OS detects a disk error during boot, using the UUID avoids the incorrect disk being mounted to a given location. Remaining data disks would then be assigned those same device IDs. To find the UUID of the new drive, use the `blkid` utility:
```bash sudo blkid
The output looks similar to the following example:
``` > [!NOTE]
-> Improperly editing the **/etc/fstab** file could result in an unbootable system. If unsure, refer to the distribution's documentation for information on how to properly edit this file. It is also recommended that a backup of the /etc/fstab file is created before editing.
+> Improperly editing the **/etc/fstab** file could result in an unbootable system. If unsure, refer to the distribution's documentation for information on how to properly edit this file. You should create a backup of the **/etc/fstab** file before editing.
-Next, open the */etc/fstab* file in a text editor as follows:
-
-```bash
-sudo nano /etc/fstab
-```
-
-In this example, use the UUID value for the `/dev/sdc1` device that was created in the previous steps, and the mountpoint of `/datadrive`. Add the following line to the end of the `/etc/fstab` file:
+Next, open the **/etc/fstab** file in a text editor. Add a line to the end of the file, using the UUID value for the `/dev/sdc1` device that was created in the previous steps, and the mountpoint of `/datadrive`. Using the example from this article, the new line would look like the following:
```bash UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /datadrive xfs defaults,nofail 1 2 ```
-We used the nano editor, so when you are done editing the file, use `Ctrl+O` to write the file and `Ctrl+X` to exit the editor.
+When you're done editing the file, save and close the editor.
> [!NOTE] > Later removing a data disk without editing fstab could cause the VM to fail to boot. Most distributions provide either the *nofail* and/or *nobootwait* fstab options. These options allow a system to boot even if the disk fails to mount at boot time. Consult your distribution's documentation for more information on these parameters.
You can see that `sdc` is now mounted at `/datadrive`.
### TRIM/UNMAP support for Linux in Azure
-Some Linux kernels support TRIM/UNMAP operations to discard unused blocks on the disk. This feature is primarily useful in standard storage to inform Azure that deleted pages are no longer valid and can be discarded, and can save money if you create large files and then delete them.
+Some Linux kernels support TRIM/UNMAP operations to discard unused blocks on the disk. This feature is primarily useful to inform Azure that deleted pages are no longer valid and can be discarded. This feature can save money on disks that are billed based on the amount of consumed storage, such as unmanaged standard disks and disk snapshots.
There are two ways to enable TRIM support in your Linux VM. As usual, consult your distribution for the recommended approach:
There are two ways to enable TRIM support in your Linux VM. As usual, consult yo
``` * In some cases, the `discard` option may have performance implications. Alternatively, you can run the `fstrim` command manually from the command line, or add it to your crontab to run regularly:
- **Ubuntu**
-
- ```bash
- sudo apt-get install util-linux
- sudo fstrim /datadrive
- ```
-
- **RHEL/CentOS**
+# [Ubuntu](#tab/ubuntu)
- ```bash
- sudo yum install util-linux
- sudo fstrim /datadrive
- ```
+```bash
+sudo apt-get install util-linux
+sudo fstrim /datadrive
+```
+
+# [RHEL](#tab/rhel)
+
+```bash
+sudo yum install util-linux
+sudo fstrim /datadrive
+```
+
+# [SUSE](#tab/suse)
+
+```bash
+sudo fstrim /datadrive
+```
+ ## Next steps
virtual-machines Detach Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/detach-disk.md
Previously updated : 06/08/2022 Last updated : 01/09/2023
Edit the */etc/fstab* file to remove references to the disk.
> [!NOTE] > Improperly editing the **/etc/fstab** file could result in an unbootable system. If unsure, refer to the distribution's documentation for information on how to properly edit this file. It is also recommended that a backup of the /etc/fstab file is created before editing.
-Open the */etc/fstab* file in a text editor as follows:
-
-```bash
-sudo vi /etc/fstab
-```
-
-In this example, the following line needs to be deleted from the */etc/fstab* file:
+Open the **/etc/fstab** file in a text editor and remove the line containing the UUID of your disk. Using the example values in this article, the line would look like the following:
```bash UUID=33333333-3b3b-3c3c-3d3d-3e3e3e3e3e3e /datadrive ext4 defaults,nofail 1 2 ```
-Use `umount` to unmount the disk. The following example unmounts the */dev/sdc1* partition from the */datadrive* mount point:
+Save and close the file when you're done.
+
+Next, use `umount` to unmount the disk. The following example unmounts the */dev/sdc1* partition from the */datadrive* mount point:
```bash sudo umount /dev/sdc1 /datadrive
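# Note (not part of the original steps): after unmounting inside the OS, the
# disk still has to be detached from the VM itself. One hedged example using
# the Azure CLI follows; the resource group, VM, and disk names are assumptions.
# Detaching a managed data disk keeps the disk resource and its data.
az vm disk detach \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name myDataDisk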
virtual-machines Expand Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/expand-disks.md
Previously updated : 12/09/2022 Last updated : 01/06/2023
tmpfs tmpfs 65M 0 65M 0% /run/user/1000
user@ubuntu:~# ```
-# [SuSE](#tab/suse)
+# [SUSE](#tab/suse)
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15, and SUSE SLES 15 for SAP:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user: ```
- linux:~ # sudo -i
+ sudo -i
``` 1. Use the following command to install the **growpart** package, which will be used to resize the partition, if it is not already present: ```
- linux:~ # zypper install growpart
+ zypper install growpart
``` 1. Use the `lsblk` command to find the partition mounted on the root of the file system (**/**). In this case, we see that partition 4 of device **sda** is mounted on **/**:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Resize the required partition by using the `growpart` command and the partition number determined in the preceding step: ```
- linux:~ # growpart /dev/sda 4
+ growpart /dev/sda 4
CHANGED: partition=4 start=3151872 old: size=59762655 end=62914527 new: size=97511391 end=100663263 ```
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
The following output shows that the **/dev/sda4** partition has been resized to 46.5 GB: ```
- linux:~ # lsblk
+ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 48G 0 disk ├─sda1 8:1 0 2M 0 part
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Identify the type of file system on the OS disk by using the `lsblk` command with the `-f` flag: ```
- linux:~ # lsblk -f
+ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
Example output: ```
- linux:~ # xfs_growfs /
+ xfs_growfs /
meta-data=/dev/sda4 isize=512 agcount=4, agsize=1867583 blks = sectsz=512 attr=2, projid32bit=1 = crc=1 finobt=0 spinodes=0 rmapbt=0
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
For **ext4**, use this command: ```
- linux:~ #resize2fs /dev/sda4
+ resize2fs /dev/sda4
``` 1. Verify the increased file system size for **df -Th** by using this command: ```
- linux:~ #df -Thl
+ df -Thl
``` Example output: ```
- linux:~ # df -Thl
+ df -Thl
Filesystem Type Size Used Avail Use% Mounted on devtmpfs devtmpfs 445M 4.0K 445M 1% /dev tmpfs tmpfs 458M 0 458M 0% /dev/shm
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user: ```bash
- [root@rhel-lvm ~]# sudo -i
+ sudo -i
``` 1. Use the `lsblk` command to determine which logical volume (LV) is mounted on the root of the file system (**/**). In this case, we see that **rootvg-rootlv** is mounted on **/**. If a different filesystem is in need of resizing, substitute the LV and mount point throughout this section. ```shell
- [root@rhel-lvm ~]# lsblk -f
+ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT fd0 sda
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Check whether there is free space in the LVM volume group (VG) containing the root partition. If there is free space, skip to step 12. ```bash
- [root@rhel-lvm ~]# vgdisplay rootvg
+ vgdisplay rootvg
Volume group VG Name rootvg System ID
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk, and the gdisk handler for GPT disk layouts. This package is preinstalled on most marketplace images. ```bash
- [root@rhel-lvm ~]# yum install cloud-utils-growpart gdisk
+ yum install cloud-utils-growpart gdisk
``` 1. Determine which disk and partition holds the LVM physical volume (PV) or volumes in the volume group named **rootvg** by using the **pvscan** command. Note the size and free space listed between the brackets (**[** and **]**). ```bash
- [root@rhel-lvm ~]# pvscan
+ pvscan
PV /dev/sda4 VG rootvg lvm2 [<63.02 GiB / <38.02 GiB free] ``` 1. Verify the size of the partition by using `lsblk`. ```bash
- [root@rhel-lvm ~]# lsblk /dev/sda4
+ lsblk /dev/sda4
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda4 8:4 0 63G 0 part Γö£ΓöÇrootvg-tmplv 253:1 0 2G 0 lvm /tmp
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Expand the partition containing this PV using *growpart*, the device name, and partition number. Doing so will expand the specified partition to use all the free contiguous space on the device. ```bash
- [root@rhel-lvm ~]# growpart /dev/sda 4
+ growpart /dev/sda 4
CHANGED: partition=4 start=2054144 old: size=132161536 end=134215680 new: size=199272414 end=201326558 ``` 1. Verify that the partition has resized to the expected size by using the `lsblk` command again. Notice that in the example **sda4** has changed from 63G to 95G. ```bash
- [root@rhel-lvm ~]# lsblk /dev/sda4
+ lsblk /dev/sda4
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda4 8:4 0 95G 0 part Γö£ΓöÇrootvg-tmplv 253:1 0 2G 0 lvm /tmp
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Expand the PV to use the rest of the newly expanded partition ```bash
- [root@rhel-lvm ~]# pvresize /dev/sda4
+ pvresize /dev/sda4
Physical volume "/dev/sda4" changed 1 physical volume(s) resized or updated / 0 physical volume(s) not resized ```
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Verify the new size of the PV is the expected size, comparing to original **[size / free]** values. ```bash
- [root@rhel-lvm ~]# pvscan
+ pvscan
PV /dev/sda4 VG rootvg lvm2 [<95.02 GiB / <70.02 GiB free] ``` 1. Expand the LV by the required amount, which does not need to be all the free space in the volume group. In the following example, **/dev/mapper/rootvg-rootlv** is resized from 2 GB to 12 GB (an increase of 10 GB) through the following command. This command will also resize the file system on the LV. ```bash
- [root@rhel-lvm ~]# lvresize -r -L +10G /dev/mapper/rootvg-rootlv
+ lvresize -r -L +10G /dev/mapper/rootvg-rootlv
``` Example output: ```bash
- [root@rhel-lvm ~]# lvresize -r -L +10G /dev/mapper/rootvg-rootlv
+ lvresize -r -L +10G /dev/mapper/rootvg-rootlv
Size of logical volume rootvg/rootlv changed from 2.00 GiB (512 extents) to 12.00 GiB (3072 extents). Logical volume rootvg/rootlv successfully resized. meta-data=/dev/mapper/rootvg-rootlv isize=512 agcount=4, agsize=131072 blks
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
Example output: ```shell
- [root@rhel-lvm ~]# df -Th /
+ df -Th /
Filesystem Type Size Used Avail Use% Mounted on /dev/mapper/rootvg-rootlv xfs 12G 71M 12G 1% /
- [root@rhel-lvm ~]#
``` > [!NOTE]
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Access your VM as the **root** user by using the ```sudo``` command after logging in as another user: ```bash
- [root@rhel-raw ~]# sudo -i
+ sudo -i
``` 1. When the VM has restarted, perform the following steps:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Install the **cloud-utils-growpart** package to provide the **growpart** command, which is required to increase the size of the OS disk, and the gdisk handler for GPT disk layouts. This package is preinstalled on most marketplace images. ```bash
- [root@rhel-raw ~]# yum install cloud-utils-growpart gdisk
+ yum install cloud-utils-growpart gdisk
``` 1. Use the **lsblk -f** command to verify the partition and filesystem type holding the root (**/**) partition ```bash
- [root@rhel-raw ~]# lsblk -f
+ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT sda ├─sda1 xfs 2a7bb59d-6a71-4841-a3c6-cba23413a5d2 /boot
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. For verification, start by listing the partition table of the sda disk with **gdisk**. In this example, we see a 48.0 GiB disk with partition #2 sized 29.0 GiB. The disk was expanded from 30 GB to 48 GB in the Azure portal. ```bash
- [root@rhel-raw ~]# gdisk -l /dev/sda
+ gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.10 Partition table scan:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Expand the partition for root, in this case sda2 by using the **growpart** command. Using this command expands the partition to use all of the contiguous space on the disk. ```bash
- [root@rhel-raw ~]# growpart /dev/sda 2
+ growpart /dev/sda 2
CHANGED: partition=2 start=2050048 old: size=60862464 end=62912512 new: size=98613214 end=100663262 ``` 1. Now print the new partition table with **gdisk** again. Notice that partition 2 has is now sized 47.0 GiB ```bash
- [root@rhel-raw ~]# gdisk -l /dev/sda
+ gdisk -l /dev/sda
GPT fdisk (gdisk) version 0.8.10 Partition table scan:
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Expand the filesystem on the partition with **xfs_growfs**, which is appropriate for a standard marketplace-generated RedHat system: ```bash
- [root@rhel-raw ~]# xfs_growfs /
+ xfs_growfs /
meta-data=/dev/sda2 isize=512 agcount=4, agsize=1901952 blks = sectsz=4096 attr=2, projid32bit=1 = crc=1 finobt=0 spinodes=0
To increase the OS disk size in SUSE 12 SP4, SUSE SLES 12 for SAP, SUSE SLES 15,
1. Verify the new size is reflected with the **df** command ```bash
- [root@rhel-raw ~]# df -hl
+ df -hl
Filesystem Size Used Avail Use% Mounted on devtmpfs 452M 0 452M 0% /dev tmpfs 464M 0 464M 0% /dev/shm
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
# Create an Azure Image Builder Bicep or ARM JSON template
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
Azure Image Builder uses a Bicep file or an ARM JSON template file to pass information into the Image Builder service. In this article, we'll go over the sections of the files so you can build your own. For the latest API versions, see the [template reference](/azure/templates/microsoft.virtualmachineimages/imagetemplates?tabs=bicep&pivots=deployment-language-bicep). To see examples of full .json files, see the [Azure Image Builder GitHub](https://github.com/Azure/azvmimagebuilder/tree/main/quickquickstarts).
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/migrate-to-premium-storage-using-azure-site-recovery.md
Title: Migrate your Linux VMs to Azure Premium Storage with Azure Site Recovery description: Migrate your existing virtual machines to Azure Premium Storage by using Site Recovery. Premium Storage offers high-performance, low-latency disk support for I/O-intensive workloads running on Azure Virtual Machines.-+ Last updated 08/15/2017-+ # Use Site Recovery to migrate to Premium Storage
For specific scenarios for migrating virtual machines, see the following resourc
* [Migrate Azure Virtual Machines between Storage Accounts](https://azure.microsoft.com/blog/2014/10/22/migrate-azure-virtual-machines-between-storage-accounts/) * [Upload a Linux virtual hard disk](upload-vhd.md)
-* Migrating Virtual Machines from Amazon AWS to Microsoft Azure
+* [Migrating Virtual Machines from Amazon AWS to Microsoft Azure](/shows/it-ops-talk/migrate-your-aws-vms-to-azure-with-azure-migrate)
Also, see the following resources to learn more about Azure Storage and Azure Virtual Machines:
virtual-machines Spot Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/spot-cli.md
You can simulate an eviction of an Azure Spot Virtual Machine using REST, PowerS
In most cases, you will want to use the REST API [Virtual Machines - Simulate Eviction](/rest/api/compute/virtualmachines/simulateeviction) to help with automated testing of applications. For REST, a `Response Code: 204` means the simulated eviction was successful. You can combine simulated evictions with the [Scheduled Event service](scheduled-events.md), to automate how your app will respond when the VM is evicted.
-To see scheduled events in action, watch Azure Friday - Using Azure Scheduled Events to prepare for VM maintenance.
+To see scheduled events in action, watch Azure Friday - [Using Azure Scheduled Events to prepare for VM maintenance](https://youtu.be/ApsoXLVg_0U).
### Quick test
virtual-machines Managed Disks Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/managed-disks-overview.md
description: Overview of Azure managed disks, which handle the storage accounts
Previously updated : 10/19/2022 Last updated : 01/10/2023
Refer to our [design for high performance](premium-storage-performance.md) artic
## Next steps
-If you'd like a video going into more detail on managed disks, check out: [Better Azure VM Resiliency with Managed Disks).
+If you'd like a video going into more detail on managed disks, check out: [Better Azure VM Resiliency with Managed Disks](/shows/azure/managed-disks-azure-resiliency).
Learn more about the individual disk types Azure offers, which type is a good fit for your needs, and learn about their performance targets in our article on disk types.
virtual-machines Prepay Reserved Vm Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/prepay-reserved-vm-instances.md
Previously updated : 10/30/2017 Last updated : 01/09/2023 + # Save costs with Azure Reserved VM Instances
If you have questions or need help, [create a support request](https://portal.az
- [Understand reservation usage for a subscription with pay-as-you-go rates](../cost-management-billing/reservations/understand-reserved-instance-usage.md) - [Understand reservation usage for your Enterprise enrollment](../cost-management-billing/reservations/understand-reserved-instance-usage-ea.md) - [Windows software costs not included with reservations](../cost-management-billing/reservations/reserved-instance-windows-software-costs.md)
- - [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
+ - [Azure Reservations in Partner Center Cloud Solution Provider (CSP) program](/partner-center/azure-reservations)
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/migrate-to-premium-storage-using-azure-site-recovery.md
For specific scenarios for migrating virtual machines, see the following resourc
* [Migrate Azure Virtual Machines between Storage Accounts](https://azure.microsoft.com/blog/2014/10/22/migrate-azure-virtual-machines-between-storage-accounts/) * [Create and upload a Windows Server VHD to Azure](upload-generalized-managed.md)
-* Migrating Virtual Machines from Amazon AWS to Microsoft Azure
+* [Migrating Virtual Machines from Amazon AWS to Microsoft Azure](/shows/it-ops-talk/migrate-your-aws-vms-to-azure-with-azure-migrate)
Also, see the following resources to learn more about Azure Storage and Azure Virtual Machines:
virtual-machines Spot Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/spot-powershell.md
You can simulate an eviction of an Azure Spot Virtual Machine using REST, PowerS
In most cases, you will want to use the REST API [Virtual Machines - Simulate Eviction](/rest/api/compute/virtualmachines/simulateeviction) to help with automated testing of applications. For REST, a `Response Code: 204` means the simulated eviction was successful. You can combine simulated evictions with the [Scheduled Event service](scheduled-events.md), to automate how your app will respond when the VM is evicted.
-To see scheduled events in action, watch Azure Friday - Using Azure Scheduled Events to prepare for VM maintenance.
+To see scheduled events in action, watch Azure Friday - [Using Azure Scheduled Events to prepare for VM maintenance](https://youtu.be/ApsoXLVg_0U).
### Quick test
virtual-network-manager Create Virtual Network Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/create-virtual-network-manager-portal.md
Previously updated : 09/22/2022 Last updated : 01/10/2023
Now that the Network Group is created, and has the correct VNets, create a mesh
1. Select **Connectivity configuration** from the drop-down menu to begin creating a connectivity configuration.
- :::image type="content" source="./media/create-virtual-network-manager-portal/configuration-menu.png" alt-text="Screenshot of configuration drop-down menu.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-dropdown.png" alt-text="Screenshot of configuration drop-down menu.":::
1. On the **Basics** page, enter the following information, and select **Next: Topology >**.
virtual-network-manager How To Block High Risk Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-high-risk-ports.md
Previously updated : 06/28/2022 Last updated : 01/10/2023 # Protect high-risk network ports with Security Admin Rules in Azure Virtual Network Manager
-In this article, you'll learn to block high risk network ports using [Azure Virtual Network Manager](overview.md) and Security Admin Rules. You'll walk through the creation of an Azure Virtual Network Manager instance, group your virtual networks (VNets) with [network groups](concept-network-groups.md), and create & deploy security admin configurations for your organization. You'll deploy a general block rule for high risk ports. Then you'll create an exception for managing a specific application's VNet. This allows you to manage access to the application VNets using network security groups.
+In this article, you'll learn to block high risk network ports using [Azure Virtual Network Manager](overview.md) and Security Admin Rules. You'll walk through the creation of an Azure Virtual Network Manager instance, group your virtual networks (VNets) with [network groups](concept-network-groups.md), and create & deploy security admin configurations for your organization. You'll deploy a general block rule for high risk ports. Then you'll create an exception for managing a specific application's VNet using network security groups.
-While this article focuses on a single port, SSH, you can use protect any high-risk ports in your environment with the same steps. To learn more, review this list of [high risk ports](concept-security-admins.md#protect-high-risk-ports)
+While this article focuses on a single port, SSH, you can protect any high-risk ports in your environment with the same steps. To learn more, review this list of [high risk ports](concept-security-admins.md#protect-high-risk-ports).
> [!IMPORTANT] > Azure Virtual Network Manager is currently in public preview.
While this article focuses on a single port, SSH, you can use protect any high-r
* A group of virtual networks that can be split into network groups for applying granular security admin rules. ## Deploy virtual network environment-
-For this how-to, you'll need a virtual network environment that includes virtual networks that can be segregated for allowing and blocking specific network traffic. You may use the following table or your own configuration of virtual networks:
+You'll need a virtual network environment that includes virtual networks that can be segregated for allowing and blocking specific network traffic. You may use the following table or your own configuration of virtual networks:
| Name | IPv4 address space | subnet | | - | -| - |
In this section, you'll deploy a Virtual Network Manager instance with the Secur
## Create a network group
-With your virtual network manager created, you now create a network group to encapsulate the VNets you want to protect. This will include all of the VNets in the organization as a general all-encompassing rule to block high risk network ports is needed. You'll manually add all of the VNets.
+With your virtual network manager created, you now create a network group containing all of the VNets in the organization. You'll manually add all of the VNets.
1. Select **Network Groups**, under **Settings**. 1. Select **+ Create**, enter a *name* for the network group, and select **Add**. 1. On the *Network groups* page, select the network group you created.
It's time to construct our security admin rules within a configuration in orde
:::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of add a security admin configuration.":::
-1. Select **Security admin configuration** from the drop-down menu.
+1. Select **Security configuration** from the drop-down menu.
- :::image type="content" source="./media/how-to-block-network-traffic-portal/security-admin-drop-down.png" alt-text="Screenshot of add a configuration drop-down.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/security-admin-dropdown.png" alt-text="Screenshot of add a configuration drop-down.":::
1. On the **Basics** tab, enter a *Name* to identify this security configuration and select **Next: Rule collections**.
In this section, you define the security rule to block high-risk network traffic
1. Then select **Review + Create** and **Create** to complete the security configuration. ## Deploy a security admin configuration
-In this section, you deploy the newly created security admin configuration to block high-risk ports to your network group. This is how the security admin configuration will take effect on the virtual networks included in the network group
+In this section, you deploy the security admin configuration so that the rules you created take effect on the network group.
1. Select **Deployments** under *Settings*, then select **Deploy configuration**.
In this section, you deploy the newly created security admin configuration to bl
1. Select **Next** and **Deploy** to deploy the security admin configuration. ## Create a network group for exception virtual networks
-With traffic blocked across all of your VNets, you need an exception to allow traffic to specific virtual networks. To do this, you'll create a network group specifically for the VNets needing exclusion from the other security admin rule above.
+With traffic blocked across all of your VNets, you need an exception to allow traffic to specific virtual networks. You'll create a network group specifically for the VNets needing exclusion from the other security admin rule.
1. From your virtual network manager, select **Network Groups**, under **Settings**. 1. Select **+ Create**, enter a *name* for the application network group, and select **Add**.
With traffic blocked across all of your VNets, you need an exception to allow tr
## Create an exception Security Admin Rule collection and Rule
-In this section, you create a new rule collection that will allow high-risk traffic to a subset of virtual networks you've defined in a network group, and create security admin rule to add to our existing security admin configuration.
+In this section, you create a new rule collection and security admin rule that will allow high-risk traffic to the subset of virtual networks you've defined as exceptions. Next, you'll add it to your existing security admin configuration.
> [!IMPORTANT] > In order for your security admin rule to allow traffic to your application virtual networks, the priority needs to be set to a **lower number** than existing rules blocking traffic.
virtual-network-manager How To Block Network Traffic Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-network-traffic-portal.md
Previously updated : 07/01/2022 Last updated : 01/10/2023
Before you start to configure security admin rules, confirm that you've done the
:::image type="content" source="./media/create-virtual-network-manager-portal/add-configuration.png" alt-text="Screenshot of add a security admin configuration.":::
-1. Select **Security admin configuration** from the drop-down menu.
+1. Select **Security configuration** from the drop-down menu.
- :::image type="content" source="./media/how-to-block-network-traffic-portal/security-admin-drop-down.png" alt-text="Screenshot of add a configuration drop-down.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/security-admin-dropdown.png" alt-text="Screenshot of add a configuration drop-down.":::
1. On the **Basics** tab, enter a *Name* to identify this security configuration and select **Next: Rule collections**.
Before you start to configure security admin rules, confirm that you've done the
| Source IP addresses | This field will appear when you select the source type of *IP address*. Enter an IPv4 or IPv6 address or a range using CIDR notation. When defining more than one address or blocks of addresses separate using a comma. Leave blank for this example.| | Source service tag | This field will appear when you select the source type of *Service tag*. Select service tag(s) for services you want to specify as the source. See [Available service tags](../virtual-network/service-tags-overview.md#available-service-tags), for the list of supported tags. | | Source port | Enter a single port number or a port range such as (1024-65535). When defining more than one port or port ranges, separate them using a comma. To specify any port, enter *. Leave blank for this example.|
- |**Desination**| |
+ |**Destination**| |
| Destination type | Select the destination type of either **IP address** or **Service tags**. | | Destination IP addresses | This field will appear when you select the destination type of *IP address*. Enter an IPv4 or IPv6 address or a range using CIDR notation. When defining more than one address or blocks of addresses separate using a comma. | | Destination service tag | This field will appear when you select the destination type of *Service tag*. Select service tag(s) for services you want to specify as the destination. See [Available service tags](../virtual-network/service-tags-overview.md#available-service-tags), for the list of supported tags. |
virtual-network-manager How To Create Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-create-hub-and-spoke.md
Previously updated : 11/02/2021 Last updated : 1/10/2023
This section will guide you through how to create a hub-and-spoke configuration
:::image type="content" source="./media/how-to-create-hub-and-spoke/configuration-list.png" alt-text="Screenshot of the configurations list.":::
-1. Select **Connectivity** from the drop-down menu.
+1. Select **Connectivity configuration** from the drop-down menu to begin creating a connectivity configuration.
- :::image type="content" source="./media/create-virtual-network-manager-portal/configuration-menu.png" alt-text="Screenshot of configuration drop-down menu.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-dropdown.png" alt-text="Screenshot of configuration drop-down menu.":::
1. On the *Add a connectivity configuration* page, enter, or select the following information:
virtual-network-manager How To Exclude Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-exclude-elements.md
The advanced editor can be used to select virtual networks during the creation o
```json {
- "allOf": [
- {
- "field": "Name",
- "contains": "myVNet01"
- }
- ]
+ "field": "Name",
+ "contains": "myVNet01"
} ``` 1. After a few minutes, select your network group and select **Group Members** under **Settings**. You should only see myVNet01-WestUS and myVNet01-EastUS.
The advanced editor can be used to select virtual networks during the creation o
```json [
- {
- "allOf": [
- {
- "field": "Name",
- "contains": "myVNet01"
- }
- ]
- }
+ {
+ "field": "Name",
+ "contains": "myVNet01"
+ }
] ```
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Previously updated : 09/21/2022- Last updated : 01/10/2023+ # Tutorial: Create a secured hub and spoke network
Deploy a virtual network gateway into the hub virtual network. This virtual netw
1. Select **Configuration** under *Settings*, then select **+ Add a configuration**. Select **Connectivity** from the drop-down menu.
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/add-configuration.png" alt-text="Screenshot of add a configuration button for Network Manager.":::
+ :::image type="content" source="./media/create-virtual-network-manager-portal/connectivity-configuration-dropdown.png" alt-text="Screenshot of configuration drop-down menu.":::
1. On the **Basics** tab, enter and select the following information for the connectivity configuration:
virtual-network Public Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-addresses.md
Public IP addresses with a standard SKU can be created as non-zonal, zonal, or z
A zone-redundant IP is created in all zones for a region and can survive any single zone failure. A zonal IP is tied to a specific availability zone, and shares fate with the health of the zone. A "non-zonal" public IP addresses are placed into a zone for you by Azure and doesn't give a guarantee of redundancy.
-In regions without availability zones, all public IP addresses are created as non-zonal. Public IP addresses created in a region that is later upgraded to have availability zones remain non-zonal.
+In regions without availability zones, all public IP addresses are created as non-zonal. Public IP addresses created in a region that is later upgraded to have availability zones remain non-zonal. A public IP's availability zone can't be changed after the public IP's creation.
> [!NOTE] > All basic SKU public IP addresses are created as non-zonal. Any IP that is upgraded from a basic SKU to standard SKU remains non-zonal.
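As a point of reference, here's a hedged sketch of how these zonal options map to `az network public-ip create` for a standard SKU address; the resource group and IP names are placeholders:

```azurecli-interactive
# Zone-redundant standard public IP (replicated across zones 1, 2, and 3).
az network public-ip create --resource-group myResourceGroup --name myZoneRedundantIP --sku Standard --zone 1 2 3

# Zonal standard public IP pinned to a single availability zone.
az network public-ip create --resource-group myResourceGroup --name myZonalIP --sku Standard --zone 1

# Omitting --zone creates a non-zonal public IP address.
az network public-ip create --resource-group myResourceGroup --name myNonZonalIP --sku Standard
```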
virtual-network Move Across Regions Vnet Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/move-across-regions-vnet-portal.md
To export the virtual network and deploy the target virtual network by using the
1. Sign in to the [Azure portal](https://portal.azure.com), and then select **Resource Groups**. 1. Locate the resource group that contains the source virtual network, and then select it.
-1. Select **Settings** > **Export template**.
+1. Select **Automation** > **Export template**.
1. In the **Export template** pane, select **Deploy**. 1. To open the *parameters.json* file in your online editor, select **Template** > **Edit parameters**. 1. To edit the parameter of the virtual network name, change the **value** property under **parameters**:
In this tutorial, you moved an Azure virtual network from one region to another
- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)-- [Move Azure virtual machines to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
+- [Move Azure virtual machines to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
virtual-network Virtual Network Manage Peering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-manage-peering.md
az network vnet peering delete --resource-group myResourceGroup --name VNetBtoVN
- When creating a global peering, the peered virtual networks can exist in any Azure public cloud region or China cloud regions or Government cloud regions. You can't peer across clouds. For example, a VNet in Azure public cloud can't be peered to a VNet in Azure China cloud. -- Resources in one virtual network can't communicate with the front-end IP address of a Basic Internal Load Balancer in a globally peered virtual network. Support for Basic Load Balancer only exists within the same region. Support for Standard Load Balancer exists for both, VNet Peering and Global VNet Peering. Some services that use a Basic load balancer don't work over global virtual network peering. For more information, see [Constraints related to Global VNet Peering and Load Balancers](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers).
+- Resources in one virtual network can't communicate with the front-end IP address of a Basic Load Balancer (internal or public) in a globally peered virtual network. Support for Basic Load Balancer only exists within the same region. Support for Standard Load Balancer exists for both VNet Peering and Global VNet Peering. Some services that use a Basic load balancer don't work over global virtual network peering. For more information, see [Constraints related to Global VNet Peering and Load Balancers](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers).
- You can use remote gateways or allow gateway transit in globally peered virtual networks and locally peered virtual networks.
virtual-network Virtual Network Peering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-peering-overview.md
You can also try the [Troubleshoot virtual network peering issues](virtual-netwo
The following constraints apply only when virtual networks are globally peered:
-* Resources in one virtual network can't communicate with the front-end IP address of a Basic Internal Load Balancer (ILB) in a globally peered virtual network.
+* Resources in one virtual network can't communicate with the front-end IP address of a Basic Load Balancer (internal or public) in a globally peered virtual network.
* Some services that use a Basic load balancer don't work over global virtual network peering. For more information, see [What are the constraints related to Global VNet Peering and Load Balancers?](virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers). For more information, see [Requirements and constraints](virtual-network-manage-peering.md#requirements-and-constraints). To learn more about the supported number of peerings, see [Networking limits](../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits).
vpn-gateway Bgp How To Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/bgp-how-to-cli.md
Title: 'Configure BGP for VPN Gateway using CLI'
+ Title: 'Configure BGP for VPN Gateway: CLI'
description: Learn how to configure BGP for VPN gateways using CLI.
Previously updated : 09/02/2020 Last updated : 01/09/2023
-# How to configure BGP on an Azure VPN gateway by using CLI
+# How to configure BGP for Azure VPN Gateway: CLI
-This article helps you enable BGP on a cross-premises Site-to-Site (S2S) VPN connection and a VNet-to-VNet connection (that is, a connection between virtual networks) by using the Azure [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) and Azure CLI.
+This article helps you enable BGP on cross-premises site-to-site (S2S) VPN connections and VNet-to-VNet connections using Azure CLI. You can also create this configuration using the [Azure portal](bgp-howto.md) or [PowerShell](vpn-gateway-bgp-resource-manager-ps.md) steps.
-## About BGP
+BGP is the standard routing protocol commonly used in the Internet to exchange routing and reachability information between two or more networks. BGP enables the Azure VPN gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that will inform both gateways on the availability and reachability for those prefixes to go through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns from one BGP peer to all other BGP peers.
-BGP is the standard routing protocol commonly used on the internet to exchange routing and reachability information between two or more networks. BGP enables the VPN gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange routes. The routes inform both gateways about the availability and reachability for prefixes to go through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating the routes that a BGP gateway learns from one BGP peer, to all other BGP peers.
+For more information about the benefits of BGP and to understand the technical requirements and considerations of using BGP, see [About BGP and Azure VPN Gateway](vpn-gateway-bgp-overview.md).
-For more information on the benefits of BGP, and to understand the technical requirements and considerations of using BGP, see [Overview of BGP with Azure VPN gateways](vpn-gateway-bgp-overview.md).
+Each part of this article helps you form a basic building block for enabling BGP in your network connectivity. If you complete all three parts (configure BGP on the gateway, S2S connection, and VNet-to-VNet connection), you build the topology as shown in Diagram 1.
-This article helps you with the following tasks:
+**Diagram 1**
-* [Enable BGP for your VPN gateway](#enablebgp) (required)
-
- You can then complete either of the following sections, or both:
-
-* [Establish a cross-premises connection with BGP](#crossprembgp)
-* [Establish a VNet-to-VNet connection with BGP](#v2vbgp)
-
-Each of these three sections forms a basic building block for enabling BGP in your network connectivity. If you complete all three sections, you build the topology as shown in the following diagram:
-
-![BGP topology](./media/vpn-gateway-bgp-resource-manager-ps/bgp-crosspremv2v.png)
You can combine these sections to build a more complex multihop transit network that meets your needs.
-## <a name ="enablebgp"></a>Enable BGP for your VPN gateway
-This section is required before you perform any of the steps in the other two configuration sections. The following configuration steps set up the BGP parameters of the Azure VPN gateway as shown in the following diagram:
+## <a name ="enablebgp"></a>Enable BGP for the VPN gateway
-![BGP gateway](./media/vpn-gateway-bgp-resource-manager-ps/bgp-gateway.png)
+This section is required before you perform any of the steps in the other two configuration sections. The following configuration steps set up the BGP parameters of the Azure VPN gateway as shown in Diagram 2.
-### Before you begin
+**Diagram 2**
-Install the latest version of the CLI commands (2.0 or later). For information about installing the CLI commands, see [Install the Azure CLI](/cli/azure/install-azure-cli) and [Get Started with Azure CLI](/cli/azure/get-started-with-azure-cli).
-### Step 1: Create and configure TestVNet1
+### Create and configure TestVNet1
-#### <a name="Login"></a>1. Connect to your subscription
--
-#### 2. Create a resource group
+#### 1. Create a resource group
The following example creates a resource group named TestRG1 in the "eastus" location. If you already have a resource group in the region where you want to create your virtual network, you can use that one instead.
-```azurecli
-az group create --name TestBGPRG1 --location eastus
+```azurecli-interactive
+az group create --name TestRG1 --location eastus
```
-#### 3. Create TestVNet1
+#### 2. Create TestVNet1
The following example creates a virtual network named TestVNet1 and three subnets: GatewaySubnet, FrontEnd, and BackEnd. When you're substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails. The first command creates the front-end address space and the FrontEnd subnet. The second command creates an additional address space for the BackEnd subnet. The third and fourth commands create the BackEnd subnet and GatewaySubnet.
-```azurecli
-az network vnet create -n TestVNet1 -g TestBGPRG1 --address-prefix 10.11.0.0/16 -l eastus --subnet-name FrontEnd --subnet-prefix 10.11.0.0/24 
- 
-az network vnet update -n TestVNet1 --address-prefixes 10.11.0.0/16 10.12.0.0/16 -g TestBGPRG1 
+```azurecli-interactive
+az network vnet create -n TestVNet1 -g TestRG1 --address-prefix 10.11.0.0/16 --subnet-name FrontEnd --subnet-prefix 10.11.0.0/24
+```
+
+```azurecli-interactive
+az network vnet update -n TestVNet1 --address-prefixes 10.11.0.0/16 10.12.0.0/16 -g TestRG1
 
-az network vnet subnet create --vnet-name TestVNet1 -n BackEnd -g TestBGPRG1 --address-prefix 10.12.0.0/24 
+az network vnet subnet create --vnet-name TestVNet1 -n BackEnd -g TestRG1 --address-prefix 10.12.0.0/24
 
-az network vnet subnet create --vnet-name TestVNet1 -n GatewaySubnet -g TestBGPRG1 --address-prefix 10.12.255.0/27 
+az network vnet subnet create --vnet-name TestVNet1 -n GatewaySubnet -g TestRG1 --address-prefix 10.12.255.0/27
```
-### Step 2: Create the VPN gateway for TestVNet1 with BGP parameters
+### Create the VPN gateway for TestVNet1 with BGP parameters
#### 1. Create the public IP address Request a public IP address. The public IP address will be allocated to the VPN gateway that you create for your virtual network.
-```azurecli
-az network public-ip create -n GWPubIP -g TestBGPRG1 --allocation-method Dynamic 
+```azurecli-interactive
+az network public-ip create -n GWPubIP -g TestRG1 --allocation-method Dynamic 
``` #### 2. Create the VPN gateway with the AS number Create the virtual network gateway for TestVNet1. BGP requires a Route-Based VPN gateway. You also need the additional parameter `-Asn` to set the autonomous system number (ASN) for TestVNet1. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU.
-If you run this command by using the `--no-wait` parameter, you don't see any feedback or output. The `--no-wait` parameter allows the gateway to be created in the background. It does not mean that the VPN gateway is created immediately.
+If you run this command by using the `--no-wait` parameter, you don't see any feedback or output. The `--no-wait` parameter allows the gateway to be created in the background. It doesn't mean that the VPN gateway is created immediately.
-```azurecli
-az network vnet-gateway create -n VNet1GW -l eastus --public-ip-address GWPubIP -g TestBGPRG1 --vnet TestVNet1 --gateway-type Vpn --sku HighPerformance --vpn-type RouteBased --asn 65010 --no-wait
+```azurecli-interactive
+az network vnet-gateway create -n VNet1GW -l eastus --public-ip-address GWPubIP -g TestRG1 --vnet TestVNet1 --gateway-type Vpn --sku HighPerformance --vpn-type RouteBased --asn 65010 --no-wait
```
+After the gateway is created, you can use this gateway to establish a cross-premises connection or a VNet-to-VNet connection with BGP.
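Since `--no-wait` returns before provisioning finishes, you may want to check whether the gateway is ready before moving on. A small sketch using the same gateway name as this exercise:

```azurecli-interactive
# Poll until provisioningState reports "Succeeded" (gateway creation can take 45 minutes or more).
az network vnet-gateway show -n VNet1GW -g TestRG1 --query provisioningState -o tsv
```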
+ #### 3. Obtain the Azure BGP peer IP address After the gateway is created, you need to obtain the BGP peer IP address on the Azure VPN gateway. This address is needed to configure the VPN gateway as a BGP peer for your on-premises VPN devices.
-Run the following command and check the `bgpSettings` section at the top of the output:
+Run the following command.
-```azurecli
-az network vnet-gateway list -g TestBGPRG1 
- 
-  
+```azurecli-interactive
+az network vnet-gateway list -g TestRG1
+```
+
+Make a note of the `bgpSettings` section at the top of the output. You'll use this information in later steps.
+
+```azurecli-interactive
"bgpSettings": {        "asn": 65010,        "bgpPeeringAddress": "10.12.255.30", 
az network vnet-gateway list -g TestBGPRG1 
    } ```
-After the gateway is created, you can use this gateway to establish a cross-premises connection or a VNet-to-VNet connection with BGP.
+If you don't see the BgpPeeringAddress displayed as an IP address, your gateway is still being configured. Try again when the gateway is complete.
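To avoid scanning the full listing, you could also query only the BGP settings of the gateway; a sketch reusing the resource names from this exercise:

```azurecli-interactive
# Return only the BGP settings (ASN and peering address) for VNet1GW.
az network vnet-gateway show -n VNet1GW -g TestRG1 --query bgpSettings
```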
-## <a name ="crossprembgp"></a>Establish a cross-premises connection with BGP
+## Establish a cross-premises connection with BGP
-To establish a cross-premises connection, you need to create a local network gateway to represent your on-premises VPN device. Then you connect the Azure VPN gateway with the local network gateway. Although these steps are similar to creating other connections, they include the additional properties required to specify the BGP configuration parameters.
+To establish a cross-premises connection, you need to create a local network gateway to represent your on-premises VPN device. Then you connect the Azure VPN gateway with the local network gateway. Although these steps are similar to creating other connections, they include the additional properties required to specify the BGP configuration parameter, as shown in Diagram 3.
-![BGP for cross-premises](./media/vpn-gateway-bgp-resource-manager-ps/bgp-crossprem.png)
+**Diagram 3**
-### Step 1: Create and configure the local network gateway
+### Create and configure the local network gateway
This exercise continues to build the configuration shown in the diagram. Be sure to replace the values with the ones that you want to use for your configuration. When you're working with local network gateways, keep in mind the following things: * The local network gateway can be in the same location and resource group as the VPN gateway, or it can be in a different location and resource group. This example shows the gateways in different resource groups in different locations. * The minimum prefix that you need to declare for the local network gateway is the host address of your BGP peer IP address on your VPN device. In this case, it's a /32 prefix of 10.51.255.254/32.
-* As a reminder, you must use different BGP ASNs between your on-premises networks and the Azure virtual network. If they are the same, you need to change your VNet ASN if your on-premises VPN devices already use the ASN to peer with other BGP neighbors.
+* As a reminder, you must use different BGP ASNs between your on-premises networks and the Azure virtual network. If they're the same, you need to change your VNet ASN if your on-premises VPN devices already use the ASN to peer with other BGP neighbors.
-Before you proceed, make sure that you've completed the [Enable BGP for your VPN gateway](#enablebgp) section of this exercise and that you're still connected to Subscription 1. Notice that in this example, you create a new resource group. Also, notice the two additional parameters for the local network gateway: `Asn` and `BgpPeerAddress`.
+Before you proceed, make sure that you've completed the [Enable BGP for your VPN gateway](#enablebgp) section of this exercise. Notice that in this example, you create a new resource group. Also, notice the two additional parameters for the local network gateway: `Asn` and `BgpPeerAddress`.
-```azurecli
-az group create -n TestBGPRG5 -l eastus2 
+```azurecli-interactive
+az group create -n TestRG5 -l westus 
 
-az network local-gateway create --gateway-ip-address 23.99.221.164 -n Site5 -g TestBGPRG5 --local-address-prefixes 10.51.255.254/32 --asn 65050 --bgp-peering-address 10.51.255.254
+az network local-gateway create --gateway-ip-address 23.99.221.164 -n Site5 -g TestRG5 --local-address-prefixes 10.51.255.254/32 --asn 65050 --bgp-peering-address 10.51.255.254
```
-### Step 2: Connect the VNet gateway and local network gateway
+### Connect the VNet gateway and local network gateway
-In this step, you create the connection from TestVNet1 to Site5. You must specify the `--enable-bgp` parameter to enable BGP for this connection.
+In this step, you create the connection from TestVNet1 to Site5. You must specify the `--enable-bgp` parameter to enable BGP for this connection.
In this example, the virtual network gateway and local network gateway are in different resource groups. When the gateways are in different resource groups, you must specify the entire resource ID of the two gateways to set up a connection between the virtual networks.
In this example, the virtual network gateway and local network gateway are in di
Use the output from the following command to get the resource ID for VNet1GW:
-```azurecli
-az network vnet-gateway show -n VNet1GW -g TestBGPRG1
+```azurecli-interactive
+az network vnet-gateway show -n VNet1GW -g TestRG1
``` In the output, find the `"id":` line. You need the values within the quotation marks to create the connection in the next section.
Example output:
  "etag": "W/\"<your etag number>\"",    "gatewayDefaultSite": null,    "gatewayType": "Vpn", 
-  "id": "/subscriptions/<subscription ID>/resourceGroups/TestBGPRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW",
+  "id": "/subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW",
``` Copy the values after `"id":` to a text editor, such as Notepad, so that you can easily paste them when creating your connection.
Copy the values after `"id":` to a text editor, such as Notepad, so that you can
Use the following command to get the resource ID of Site5 from the output:
-```azurecli
-az network local-gateway show -n Site5 -g TestBGPRG5
+```azurecli-interactive
+az network local-gateway show -n Site5 -g TestRG5
``` #### 3. Create the TestVNet1-to-Site5 connection
-In this step, you create the connection from TestVNet1 to Site5. As discussed earlier, it is possible to have both BGP and non-BGP connections for the same Azure VPN gateway. Unless BGP is enabled in the connection property, Azure will not enable BGP for this connection, even though BGP parameters are already configured on both gateways. Replace the subscription IDs with your own.
+In this step, you create the connection from TestVNet1 to Site5. As discussed earlier, it's possible to have both BGP and non-BGP connections for the same Azure VPN gateway. Unless BGP is enabled in the connection property, Azure won't enable BGP for this connection, even though BGP parameters are already configured on both gateways. Replace the subscription IDs with your own.
-```azurecli
-az network vpn-connection create -n VNet1ToSite5 -g TestBGPRG1 --vnet-gateway1 /subscriptions/<subscription ID>/resourceGroups/TestBGPRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW --enable-bgp -l eastus --shared-key "abc123" --local-gateway2 /subscriptions/<subscription ID>/resourceGroups/TestBGPRG5/providers/Microsoft.Network/localNetworkGateways/Site5 --no-wait
+```azurecli-interactive
+az network vpn-connection create -n VNet1ToSite5 -g TestRG1 --vnet-gateway1 /subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW --enable-bgp -l eastus --shared-key "abc123" --local-gateway2 /subscriptions/<subscription ID>/resourceGroups/TestRG5/providers/Microsoft.Network/localNetworkGateways/Site5
```
-For this exercise, the following example lists the parameters to enter in the BGP configuration section of your on-premises VPN device:
+#### On-premises device configuration
+
+The following example lists the parameters you enter into the BGP configuration section on your on-premises VPN device for this exercise:
```
-Site5 ASN : 65050
-Site5 BGP IP : 10.52.255.254
-Prefixes to announce : (for example) 10.51.0.0/16 and 10.52.0.0/16
-Azure VNet ASN : 65010
-Azure VNet BGP IP : 10.12.255.30
-Static route : Add a route for 10.12.255.30/32, with nexthop being the VPN tunnel interface on your device
-eBGP Multihop : Ensure the "multihop" option for eBGP is enabled on your device if needed
+- Site5 ASN : 65050
+- Site5 BGP IP : 10.51.255.254
+- Prefixes to announce : (for example) 10.51.0.0/16
+- Azure VNet ASN : 65010
+- Azure VNet BGP IP : 10.12.255.30
+- Static route : Add a route for 10.12.255.30/32, with nexthop being the VPN tunnel interface on your device
+- eBGP Multihop : Ensure the "multihop" option for eBGP is enabled on your device if needed
``` The connection should be established after a few minutes. The BGP peering session starts after the IPsec connection is established.
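If you'd like to confirm both from the CLI, here's a hedged sketch that checks the tunnel state and the BGP session for the gateway used in this exercise:

```azurecli-interactive
# Check the IPsec tunnel status (expect "Connected" once the tunnel is up).
az network vpn-connection show -n VNet1ToSite5 -g TestRG1 --query connectionStatus -o tsv

# List the gateway's BGP peers and their session state.
az network vnet-gateway list-bgp-peer-status -n VNet1GW -g TestRG1 -o table
```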
-## <a name ="v2vbgp"></a>Establish a VNet-to-VNet connection with BGP
+## Establish a VNet-to-VNet connection with BGP
+
+This section adds a VNet-to-VNet connection with BGP, as shown in Diagram 4.
-This section adds a VNet-to-VNet connection with BGP, as shown in the following diagram:
+**Diagram 4**
-![BGP for VNet-to-VNet](./media/vpn-gateway-bgp-resource-manager-ps/bgp-vnet2vnet.png)
The following instructions continue from the steps in the preceding sections. To create and configure TestVNet1 and the VPN gateway with BGP, you must complete the [Enable BGP for your VPN gateway](#enablebgp) section.
-### Step 1: Create TestVNet2 and the VPN gateway
+### Create TestVNet2 and the VPN gateway
-It's important to make sure that the IP address space of the new virtual network, TestVNet2, does not overlap with any of your VNet ranges.
+It's important to make sure that the IP address space of the new virtual network, TestVNet2, doesn't overlap with any of your VNet ranges.
In this example, the virtual networks belong to the same subscription. You can set up VNet-to-VNet connections between different subscriptions. To learn more, see [Configure a VNet-to-VNet connection](vpn-gateway-howto-vnet-vnet-cli.md). Make sure that you add `--enable-bgp` when creating the connections to enable BGP. #### 1. Create a new resource group
-```azurecli
-az group create -n TestBGPRG2 -l westus
+```azurecli-interactive
+az group create -n TestRG2 -l eastus
``` #### 2. Create TestVNet2 in the new resource group The first command creates the front-end address space and the FrontEnd subnet. The second command creates an additional address space for the BackEnd subnet. The third and fourth commands create the BackEnd subnet and GatewaySubnet.
-```azurecli
-az network vnet create -n TestVNet2 -g TestBGPRG2 --address-prefix 10.21.0.0/16 -l westus --subnet-name FrontEnd --subnet-prefix 10.21.0.0/24 
- 
-az network vnet update -n TestVNet2 --address-prefixes 10.21.0.0/16 10.22.0.0/16 -g TestBGPRG2 
+```azurecli-interactive
+az network vnet create -n TestVNet2 -g TestRG2 --address-prefix 10.21.0.0/16 --subnet-name FrontEnd --subnet-prefix 10.21.0.0/24
+```
+
+```azurecli-interactive
+az network vnet update -n TestVNet2 --address-prefixes 10.21.0.0/16 10.22.0.0/16 -g TestRG2
 
-az network vnet subnet create --vnet-name TestVNet2 -n BackEnd -g TestBGPRG2 --address-prefix 10.22.0.0/24 
+az network vnet subnet create --vnet-name TestVNet2 -n BackEnd -g TestRG2 --address-prefix 10.22.0.0/24
 
-az network vnet subnet create --vnet-name TestVNet2 -n GatewaySubnet -g TestBGPRG2 --address-prefix 10.22.255.0/27
+az network vnet subnet create --vnet-name TestVNet2 -n GatewaySubnet -g TestRG2 --address-prefix 10.22.255.0/27
``` #### 3. Create the public IP address Request a public IP address. The public IP address will be allocated to the VPN gateway that you create for your virtual network.
-```azurecli
-az network public-ip create -n GWPubIP2 -g TestBGPRG2 --allocation-method Dynamic
+```azurecli-interactive
+az network public-ip create -n GWPubIP2 -g TestRG2 --allocation-method Dynamic
``` #### 4. Create the VPN gateway with the AS number Create the virtual network gateway for TestVNet2. You must override the default ASN on your Azure VPN gateways. The ASNs for the connected virtual networks must be different to enable BGP and transit routing.
- 
-```azurecli
-az network vnet-gateway create -n VNet2GW -l westus --public-ip-address GWPubIP2 -g TestBGPRG2 --vnet TestVNet2 --gateway-type Vpn --sku Standard --vpn-type RouteBased --asn 65020 --no-wait
+
+```azurecli-interactive
+az network vnet-gateway create -n VNet2GW -l eastus --public-ip-address GWPubIP2 -g TestRG2 --vnet TestVNet2 --gateway-type Vpn --sku Standard --vpn-type RouteBased --asn 65020 --no-wait
```
-### Step 2: Connect the TestVNet1 and TestVNet2 gateways
+### Connect the TestVNet1 and TestVNet2 gateways
In this step, you create the connection from TestVNet1 to TestVNet2. To enable BGP for this connection, you must specify the `--enable-bgp` parameter.
-In the following example, the virtual network gateway and local network gateway are in different resource groups. When the gateways are in different resource groups, you must specify the entire resource ID of the two gateways to set up a connection between the virtual networks. 
+In the following example, the virtual network gateway and local network gateway are in different resource groups. When the gateways are in different resource groups, you must specify the entire resource ID of the two gateways to set up a connection between the virtual networks.
#### 1. Get the resource ID of VNet1GW Get the resource ID of VNet1GW from the output of the following command:
-```azurecli
-az network vnet-gateway show -n VNet1GW -g TestBGPRG1
+```azurecli-interactive
+az network vnet-gateway show -n VNet1GW -g TestRG1
+```
+
+Example value for the gateway resource:
+
+```
+"/subscriptions/<subscripion ID value>/resourceGroups/TestRG2/providers/Microsoft.Network/virtualNetworkGateways/VNet2GW"
``` #### 2. Get the resource ID of VNet2GW Get the resource ID of VNet2GW from the output of the following command:
-```azurecli
-az network vnet-gateway show -n VNet2GW -g TestBGPRG2
+```azurecli-interactive
+az network vnet-gateway show -n VNet2GW -g TestRG2
``` #### 3. Create the connections
-Create the connection from TestVNet1 to TestVNet2, and the connection from TestVNet2 to TestVNet1. Replace the subscription IDs with your own.
+Create the connection from TestVNet1 to TestVNet2, and the connection from TestVNet2 to TestVNet1. These commands use the resource IDs. For this exercise, most of the resource ID is already in the example. Be sure to replace the subscription ID values with your own. The subscription ID is used in multiple places in the same command. When using this command for production, you'll replace the entire resource ID for each object you are referencing.
-```azurecli
-az network vpn-connection create -n VNet1ToVNet2 -g TestBGPRG1 --vnet-gateway1 /subscriptions/<subscription ID>/resourceGroups/TestBGPRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW --enable-bgp -l eastus --shared-key "efg456" --vnet-gateway2 /subscriptions/<subscription ID>/resourceGroups/TestBGPRG2/providers/Microsoft.Network/virtualNetworkGateways/VNet2GW
+```azurecli-interactive
+az network vpn-connection create -n VNet1ToVNet2 -g TestRG1 --vnet-gateway1 /subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW --enable-bgp -l eastus --shared-key "abc123" --vnet-gateway2 /subscriptions/<subscription ID>/resourceGroups/TestRG2/providers/Microsoft.Network/virtualNetworkGateways/VNet2GW
```
-```azurecli
-az network vpn-connection create -n VNet2ToVNet1 -g TestBGPRG2 --vnet-gateway1 /subscriptions/<subscription ID>/resourceGroups/TestBGPRG2/providers/Microsoft.Network/virtualNetworkGateways/VNet2GW --enable-bgp -l westus --shared-key "efg456" --vnet-gateway2 /subscriptions/<subscription ID>/resourceGroups/TestBGPRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW
+```azurecli-interactive
+az network vpn-connection create -n VNet2ToVNet1 -g TestRG2 --vnet-gateway1 /subscriptions/<subscription ID>/resourceGroups/TestRG2/providers/Microsoft.Network/virtualNetworkGateways/VNet2GW --enable-bgp -l eastus --shared-key "abc123" --vnet-gateway2 /subscriptions/<subscription ID>/resourceGroups/TestRG1/providers/Microsoft.Network/virtualNetworkGateways/VNet1GW
``` > [!IMPORTANT] > Enable BGP for *both* connections.
->
->
+>
After you complete these steps, the connection will be established in a few minutes. The BGP peering session will be up after the VNet-to-VNet connection is completed. ## Next steps
-After your connection is completed, you can add virtual machines to your virtual networks. For steps, see [Create a virtual machine](../virtual-machines/windows/quick-create-portal.md).
+For more information about BGP, see [About BGP and VPN Gateway](vpn-gateway-bgp-overview.md).
vpn-gateway Bgp Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/bgp-howto.md
Title: 'Configure BGP for VPN Gateway: Portal'
-description: Learn how to configure BGP for Azure VPN Gateway.
+description: Learn how to configure BGP for Azure VPN Gateway using the Azure portal.
Previously updated : 01/04/2023 Last updated : 01/09/2023 # How to configure BGP for Azure VPN Gateway
-This article helps you enable BGP on cross-premises site-to-site (S2S) VPN connections and VNet-to-VNet connections using the Azure portal.
+This article helps you enable BGP on cross-premises site-to-site (S2S) VPN connections and VNet-to-VNet connections using the Azure portal. You can also create this configuration using the [PowerShell](vpn-gateway-bgp-resource-manager-ps.md) or [CLI](bgp-how-to-cli.md) steps.
-BGP is the standard routing protocol commonly used in the Internet to exchange routing and reachability information between two or more networks. BGP enables the Azure VPN gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that will inform both gateways on the availability and reachability for those prefixes to go through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns from one BGP peer to all other BGP peers.
+BGP is the standard routing protocol commonly used in the Internet to exchange routing and reachability information between two or more networks. BGP enables the VPN gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that will inform both gateways on the availability and reachability for those prefixes to go through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns from one BGP peer to all other BGP peers.
For more information about the benefits of BGP and to understand the technical requirements and considerations of using BGP, see [About BGP and Azure VPN Gateway](vpn-gateway-bgp-overview.md).
Each part of this article helps you form a basic building block for enabling BGP
**Diagram 1** You can combine parts together to build a more complex, multi-hop, transit network that meets your needs.
You can combine parts together to build a more complex, multi-hop, transit netwo
Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
-## <a name ="config"></a>Configure BGP on the virtual network gateway
+## <a name ="config"></a>Enable BGP for the VPN gateway
-In this section, you create and configure a virtual network, create and configure a virtual network gateway with BGP parameters, and obtain the Azure BGP Peer IP address. Diagram 2 shows the configuration settings to use when working with the steps in this section.
+This section is required before you perform any of the steps in the other two configuration sections. The following configuration steps set up the BGP parameters of the VPN gateway as shown in Diagram 2.
**Diagram 2** ### 1. Create TestVNet1
In this step, you create a VPN gateway with the corresponding BGP parameters.
> [!IMPORTANT] >
- > * By default, Azure assigns a private IP address from the GatewaySubnet prefix range automatically as the Azure BGP IP address on the Azure VPN gateway. The custom Azure APIPA BGP address is needed when your on premises VPN devices use an APIPA address (169.254.0.1 to 169.254.255.254) as the BGP IP. Azure VPN Gateway will choose the custom APIPA address if the corresponding local network gateway resource (on-premises network) has an APIPA address as the BGP peer IP. If the local network gateway uses a regular IP address (not APIPA), Azure VPN Gateway will revert to the private IP address from the GatewaySubnet range.
+ > * By default, Azure assigns a private IP address from the GatewaySubnet prefix range automatically as the Azure BGP IP address on the VPN gateway. The custom Azure APIPA BGP address is needed when your on premises VPN devices use an APIPA address (169.254.0.1 to 169.254.255.254) as the BGP IP. VPN Gateway will choose the custom APIPA address if the corresponding local network gateway resource (on-premises network) has an APIPA address as the BGP peer IP. If the local network gateway uses a regular IP address (not APIPA), VPN Gateway will revert to the private IP address from the GatewaySubnet range.
>
- > * The APIPA BGP addresses must not overlap between the on-premises VPN devices and all connected Azure VPN gateways.
+ > * The APIPA BGP addresses must not overlap between the on-premises VPN devices and all connected VPN gateways.
>
- > * When APIPA addresses are used on Azure VPN gateways, the gateways do not initiate BGP peering sessions with APIPA source IP addresses. The on-premises VPN device must initiate BGP peering connections.
+ > * When APIPA addresses are used on VPN gateways, the gateways do not initiate BGP peering sessions with APIPA source IP addresses. The on-premises VPN device must initiate BGP peering connections.
> 1. Select **Review + create** to run validation. Once validation passes, select **Create** to deploy the VPN gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. You can see the deployment status on the Overview page for your gateway. ### 3. Get the Azure BGP Peer IP addresses
-Once the gateway is created, you can obtain the BGP Peer IP addresses on the Azure VPN gateway. These addresses are needed to configure your on-premises VPN devices to establish BGP sessions with the Azure VPN gateway.
+Once the gateway is created, you can obtain the BGP Peer IP addresses on the VPN gateway. These addresses are needed to configure your on-premises VPN devices to establish BGP sessions with the VPN gateway.
-On the virtual network gateway **Configuration** page, you can view the BGP configuration information on your Azure VPN gateway: ASN, Public IP address, and the corresponding BGP peer IP addresses on the Azure side (default and APIPA). You can also make the following configuration changes:
+On the virtual network gateway **Configuration** page, you can view the BGP configuration information on your VPN gateway: ASN, Public IP address, and the corresponding BGP peer IP addresses on the Azure side (default and APIPA). You can also make the following configuration changes:
* You can update the ASN or the APIPA BGP IP address if needed. * If you have an active-active VPN gateway, this page will show the Public IP address, default, and APIPA BGP IP addresses of the second VPN gateway instance.
To get the Azure BGP Peer IP address:
## <a name ="crosspremises"></a>Configure BGP on cross-premises S2S connections
-To establish a cross-premises connection, you need to create a *local network gateway* to represent your on-premises VPN device, and a *connection* to connect the VPN gateway with the local network gateway as explained in [Create site-to-site connection](tutorial-site-to-site-portal.md). The following sections contain the additional properties required to specify the BGP configuration parameters.
+To establish a cross-premises connection, you need to create a *local network gateway* to represent your on-premises VPN device, and a *connection* to connect the VPN gateway with the local network gateway as explained in [Create site-to-site connection](tutorial-site-to-site-portal.md). The following sections contain the additional properties required to specify the BGP configuration parameters, as shown in Diagram 3.
**Diagram 3** +
+Before proceeding, make sure you have enabled BGP for the VPN gateway.
### 1. Create a local network gateway
Configure a local network gateway with BGP settings.
#### Important configuration considerations * The ASN and the BGP peer IP address must match your on-premises VPN router configuration.
-* You can leave the **Address space** empty only if you're using BGP to connect to this network. Azure VPN gateway will internally add a route of your BGP peer IP address to the corresponding IPsec tunnel. If you're **NOT** using BGP between the Azure VPN gateway and this particular network, you **must** provide a list of valid address prefixes for the **Address space**.
-* You can optionally use an **APIPA IP address** (169.254.x.x) as your on-premises BGP peer IP if needed. But you'll also need to specify an APIPA IP address as described earlier in this article for your Azure VPN gateway, otherwise the BGP session can't establish for this connection.
+* You can leave the **Address space** empty only if you're using BGP to connect to this network. Azure VPN gateway will internally add a route of your BGP peer IP address to the corresponding IPsec tunnel. If you're **NOT** using BGP between the VPN gateway and this particular network, you **must** provide a list of valid address prefixes for the **Address space**.
+* You can optionally use an **APIPA IP address** (169.254.x.x) as your on-premises BGP peer IP if needed. But you'll also need to specify an APIPA IP address as described earlier in this article for your VPN gateway, otherwise the BGP session can't establish for this connection.
* You can enter the BGP configuration information during the creation of the local network gateway, or you can add or change BGP configuration from the **Configuration** page of the local network gateway resource. ### 2. Configure an S2S connection with BGP enabled
-In this step, you create a new connection that has BGP enabled. If you already have a connection and you want to enable BGP on it, you can [update an existing connection](#update).
+In this step, you create a new connection that has BGP enabled. If you already have a connection and you want to enable BGP on it, you can update it.
#### To create a connection
In this step, you create a new connection that has BGP enabled. If you already h
1. Select **Enable BGP** to enable BGP on this connection. 1. Click **OK** to save changes.
-#### <a name ="update"></a>To update an existing connection
+#### To update an existing connection
1. Go to your virtual network gateway **Connections** page. 1. Click the connection you want to modify.
In this step, you create a new connection that has BGP enabled. If you already h
1. Change the **BGP** setting to **Enabled**. 1. **Save** your changes.
-## <a name ="v2v"></a>Configure BGP on VNet-to-VNet connections
+#### On-premises device configuration
+
+The following example lists the parameters you enter into the BGP configuration section on your on-premises VPN device for this exercise:
+
+```
+- Site5 ASN : 65050
+- Site5 BGP IP : 10.51.255.254
+- Prefixes to announce : (for example) 10.51.0.0/16
+- Azure VNet ASN : 65010
+- Azure VNet BGP IP : 10.12.255.30
+- Static route : Add a route for 10.12.255.30/32, with nexthop being the VPN tunnel interface on your device
+- eBGP Multihop : Ensure the "multihop" option for eBGP is enabled on your device if needed
+```
+
+## Enable BGP on VNet-to-VNet connections
The steps to enable or disable BGP on a VNet-to-VNet connection are the same as the [S2S steps](#crosspremises). You can enable BGP when creating the connection, or update the configuration on an existing VNet-to-VNet connection.
->[!NOTE]
->A VNet-to-VNet connection without BGP will limit the communication to the two connected VNets only. Enable BGP to allow transit routing capability to other S2S or VNet-to-VNet connections of these two VNets.
->
+> [!NOTE]
+> A VNet-to-VNet connection without BGP will limit the communication to the two connected VNets only. Enable BGP to allow transit routing capability to other S2S or VNet-to-VNet connections of these two VNets.
-For context, referring to **Diagram 4**, if BGP were to be disabled between TestVNet2 and TestVNet1, TestVNet2 wouldn't learn the routes for the on-premises network, Site5, and therefore couldn't communicate with Site 5. Once you enable BGP, as shown in the Diagram 4, all three networks will be able to communicate over the IPsec and VNet-to-VNet connections.
+If you completed all three parts of this exercise, you have established the following network topology:
**Diagram 4** +
+For context, referring to **Diagram 4**, if BGP were to be disabled between TestVNet2 and TestVNet1, TestVNet2 wouldn't learn the routes for the on-premises network, Site5, and therefore couldn't communicate with Site5. Once you enable BGP, as shown in Diagram 4, all three networks will be able to communicate over the S2S IPsec and VNet-to-VNet connections.
## Next steps
vpn-gateway Vpn Gateway Bgp Resource Manager Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-bgp-resource-manager-ps.md
Title: 'Configure BGP for VPN Gateway using PowerShell'
+ Title: 'Configure BGP for VPN Gateway: PowerShell'
description: Learn how to configure BGP for VPN gateways using PowerShell.
Previously updated : 09/02/2020 Last updated : 01/09/2023
-# How to configure BGP on Azure VPN Gateways using PowerShell
+# How to configure BGP for VPN Gateway: PowerShell
-This article walks you through the steps to enable BGP on a cross-premises Site-to-Site (S2S) VPN connection and a VNet-to-VNet connection using PowerShell.
+This article helps you enable BGP on cross-premises site-to-site (S2S) VPN connections and VNet-to-VNet connections using Azure PowerShell. You can also create this configuration using the [Azure portal](bgp-howto.md) or [CLI](bgp-how-to-cli.md) steps.
-## About BGP
-BGP is the standard routing protocol commonly used in the Internet to exchange routing and reachability information between two or more networks. BGP enables the Azure VPN Gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that will inform both gateways on the availability and reachability for those prefixes to go through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns from one BGP peer to all other BGP peers.
+BGP is the standard routing protocol commonly used in the Internet to exchange routing and reachability information between two or more networks. BGP enables the VPN gateways and your on-premises VPN devices, called BGP peers or neighbors, to exchange "routes" that will inform both gateways on the availability and reachability for those prefixes to go through the gateways or routers involved. BGP can also enable transit routing among multiple networks by propagating routes a BGP gateway learns from one BGP peer to all other BGP peers.
-See [Overview of BGP with Azure VPN Gateways](vpn-gateway-bgp-overview.md) for more discussion on benefits of BGP and to understand the technical requirements and considerations of using BGP.
+For more information about the benefits of BGP and to understand the technical requirements and considerations of using BGP, see [About BGP and Azure VPN Gateway](vpn-gateway-bgp-overview.md).
-## Getting started with BGP on Azure VPN gateways
+## Getting started
-This article walks you through the steps to do the following tasks:
+Each part of this article helps you form a basic building block for enabling BGP in your network connectivity. If you complete all three parts (configure BGP on the gateway, S2S connection, and VNet-to-VNet connection), you build the topology as shown in Diagram 1. You can combine these sections to build a more complex multihop transit network that meets your needs.
-* [Part 1 - Enable BGP on your Azure VPN gateway](#enablebgp)
-* Part 2 - Establish a cross-premises connection with BGP
-* [Part 3 - Establish a VNet-to-VNet connection with BGP](#v2vbgp)
+**Diagram 1**
-Each part of the instructions forms a basic building block for enabling BGP in your network connectivity. If you complete all three parts, you build the topology as shown in the following diagram:
-![BGP topology](./media/vpn-gateway-bgp-resource-manager-ps/bgp-crosspremv2v.png)
+## <a name ="enablebgp"></a>Enable BGP for the VPN gateway
-You can combine parts together to build a more complex, multi-hop, transit network that meets your needs.
+This section is required before you perform any of the steps in the other two configuration sections. The following configuration steps set up the BGP parameters of the VPN gateway as shown in Diagram 2.
-## <a name ="enablebgp"></a>Part 1 - Configure BGP on the Azure VPN Gateway
-The configuration steps set up the BGP parameters of the Azure VPN gateway as shown in the following diagram:
+**Diagram 2**
-![BGP Gateway](./media/vpn-gateway-bgp-resource-manager-ps/bgp-gateway.png)
### Before you begin
-* Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
-* Install the Azure Resource Manager PowerShell cmdlets. For more information about installing the PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/).
-### Step 1 - Create and configure VNet1
+You can run the steps for this exercise using Azure Cloud Shell in your browser. If you want to use PowerShell directly from your computer instead, install the Azure Resource Manager PowerShell cmdlets. For more information about installing the PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/).
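If you run PowerShell locally, a quick way to check whether the Az modules are already present, and install them if not, is shown in the following sketch (module and repository names are the standard PowerShell Gallery defaults):

```azurepowershell-interactive
# Check whether the Az.Network module is available locally.
Get-Module -ListAvailable -Name Az.Network

# If nothing is returned, install the Az modules for the current user.
Install-Module -Name Az -Scope CurrentUser -Repository PSGallery
```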
+
+### Create and configure VNet1
+ #### 1. Declare your variables
-For this exercise, we start by declaring our variables. The following example declares the variables using the values for this exercise. Be sure to replace the values with your own when configuring for production. You can use these variables if you are running through the steps to become familiar with this type of configuration. Modify the variables, and then copy and paste into your PowerShell console.
-```powershell
+For this exercise, we start by declaring variables. The following example declares the variables using the values for this exercise. You can use the example variables (with the exception of subscription name) if you're running through the steps to become familiar with this type of configuration. Modify any variables, and then copy and paste into your PowerShell console. Be sure to replace the values with your own when configuring for production.
+
+```azurepowershell-interactive
$Sub1 = "Replace_With_Your_Subscription_Name"
-$RG1 = "TestBGPRG1"
+$RG1 = "TestRG1"
$Location1 = "East US"
$VNetName1 = "TestVNet1"
$FESubName1 = "FrontEnd"
$Connection15 = "VNet1toSite5"
```

#### 2. Connect to your subscription and create a new resource group
+
To use the Resource Manager cmdlets, make sure you switch to PowerShell mode. For more information, see [Using Windows PowerShell with Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md).
-Open your PowerShell console and connect to your account. Use the following sample to help you connect:
+If you use Azure Cloud Shell, you automatically connect to your account. If you use PowerShell from your computer, open your PowerShell console and connect to your account. Use the following sample to help you connect:
-```powershell
+```azurepowershell-interactive
Connect-AzAccount
Select-AzSubscription -SubscriptionName $Sub1
New-AzResourceGroup -Name $RG1 -Location $Location1
```
+Next, create a new resource group.
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name $RG1 -Location $Location1
+```
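Optionally, you can confirm the resource group exists before continuing. A quick sanity-check sketch, using the variables declared earlier:

```azurepowershell-interactive
# Returns the resource group details if it was created successfully.
Get-AzResourceGroup -Name $RG1
```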
#### 3. Create TestVNet1
+
The following sample creates a virtual network named TestVNet1 and three subnets, one called GatewaySubnet, one called FrontEnd, and one called Backend. When substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails.
-```powershell
+```azurepowershell-interactive
$fesub1 = New-AzVirtualNetworkSubnetConfig -Name $FESubName1 -AddressPrefix $FESubPrefix1
$besub1 = New-AzVirtualNetworkSubnetConfig -Name $BESubName1 -AddressPrefix $BESubPrefix1
$gwsub1 = New-AzVirtualNetworkSubnetConfig -Name $GWSubName1 -AddressPrefix $GWSubPrefix1
$gwsub1 = New-AzVirtualNetworkSubnetConfig -Name $GWSubName1 -AddressPrefix $GWS
New-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 -Location $Location1 -AddressPrefix $VNetPrefix11,$VNetPrefix12 -Subnet $fesub1,$besub1,$gwsub1
```
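If you want to verify the virtual network and its three subnets before creating the gateway, the following sketch (assuming the variables above are still set in your session) lists them:

```azurepowershell-interactive
# List the subnets of TestVNet1; you should see GatewaySubnet, FrontEnd, and Backend.
$vnet1 = Get-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1
$vnet1.Subnets | Select-Object Name, AddressPrefix
```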
-### Step 2 - Create the VPN Gateway for TestVNet1 with BGP parameters
+### Create the VPN gateway with BGP enabled
+ #### 1. Create the IP and subnet configurations
-Request a public IP address to be allocated to the gateway you will create for your VNet. You'll also define the required subnet and IP configurations.
-```powershell
+Request a public IP address to be allocated to the gateway you'll create for your VNet. You'll also define the required subnet and IP configurations.
+
+```azurepowershell-interactive
$gwpip1 = New-AzPublicIpAddress -Name $GWIPName1 -ResourceGroupName $RG1 -Location $Location1 -AllocationMethod Dynamic
$vnet1 = Get-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1
$gwipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName1 -Subnet $s
```

#### 2. Create the VPN gateway with the AS number
-Create the virtual network gateway for TestVNet1. BGP requires a Route-Based VPN gateway, and also the addition parameter, -Asn, to set the ASN (AS Number) for TestVNet1. If you do not set the ASN parameter, ASN 65515 is assigned. Creating a gateway can take a while (30 minutes or more to complete).
-```powershell
+Create the virtual network gateway for TestVNet1. BGP requires a Route-Based VPN gateway, and also an additional parameter *-Asn* to set the ASN (AS Number) for TestVNet1. If you don't set the ASN parameter, ASN 65515 is assigned. Creating a gateway can take a while (45 minutes or more to complete).
+
+```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 -Location $Location1 -IpConfigurations $gwipconf1 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn $VNet1ASN
```
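Because gateway creation can take 45 minutes or more, you can poll the provisioning state while you wait. A minimal sketch, assuming the variables above are still set:

```azurepowershell-interactive
# "Succeeded" indicates the gateway is ready; "Updating" means it's still being created.
Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 |
    Select-Object Name, ProvisioningState
```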
-#### 3. Obtain the Azure BGP Peer IP address
-Once the gateway is created, you need to obtain the BGP Peer IP address on the Azure VPN Gateway. This address is needed to configure the Azure VPN Gateway as a BGP Peer for your on-premises VPN devices.
+Once the gateway is created, you can use this gateway to establish cross-premises connection or VNet-to-VNet connection with BGP.
-```powershell
+#### 3. Get the Azure BGP Peer IP address
+
+Once the gateway is created, you need to obtain the BGP Peer IP address on the VPN gateway. This address is needed to configure the VPN gateway as a BGP Peer for your on-premises VPN devices.
+
+If you're using Azure Cloud Shell, you may need to reestablish your variables if the session timed out while your gateway was being created.
+
+Reestablish variables if necessary:
+
+```azurepowershell-interactive
+$RG1 = "TestRG1"
+$GWName1 = "VNet1GW"
+```
+
+Run the following command and note the "BgpPeeringAddress" value from the output.
+
+```azurepowershell-interactive
$vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
$vnet1gw.BgpSettingsText
```
-The last command shows the corresponding BGP configurations on the Azure VPN Gateway; for example:
+Example output:
-```powershell
+```PowerShell
$vnet1gw.BgpSettingsText
{
  "Asn": 65010,
$vnet1gw.BgpSettingsText
}
```
-Once the gateway is created, you can use this gateway to establish cross-premises connection or VNet-to-VNet connection with BGP. The following sections walk through the steps to complete the exercise.
+If you don't see the BgpPeeringAddress displayed as an IP address, your gateway is still being configured. Try again when the gateway is complete.
+
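If you prefer to capture just the peering address (for example, to reuse it later when configuring your on-premises device), the following sketch reads it from the gateway's BgpSettings property:

```azurepowershell-interactive
# Store only the BGP peering address of the VPN gateway for later use.
$azureBgpPeerIp = $vnet1gw.BgpSettings.BgpPeeringAddress
$azureBgpPeerIp
```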
+## Establish a cross-premises connection with BGP
-## <a name ="crossprembbgp"></a>Part 2 - Establish a cross-premises connection with BGP
+To establish a cross-premises connection, you need to create a *local network gateway* to represent your on-premises VPN device, and a *connection* to connect the VPN gateway with the local network gateway, as explained in [Create a site-to-site connection](tutorial-site-to-site-portal.md). The following sections contain the properties required to specify the BGP configuration parameters, shown in Diagram 3.
-To establish a cross-premises connection, you need to create a Local Network Gateway to represent your on-premises VPN device, and a Connection to connect the VPN gateway with the local network gateway. While there are articles that walk you through these steps, this article contains the additional properties required to specify the BGP configuration parameters.
+**Diagram 3**
-![BGP for Cross-Premises](./media/vpn-gateway-bgp-resource-manager-ps/bgp-crossprem.png)
-Before proceeding, make sure you have completed [Part 1](#enablebgp) of this exercise.
+Before proceeding, make sure you enabled BGP for the VPN gateway in the previous section.
### Step 1 - Create and configure the local network gateway

#### 1. Declare your variables
-This exercise continues to build the configuration shown in the diagram. Be sure to replace the values with the ones that you want to use for your configuration.
+This exercise continues to build the configuration shown in the diagram. Be sure to replace the values with the ones that you want to use for your configuration. For example, you need the IP address of your VPN device. For this exercise, you can substitute a valid IP address if you don't plan on connecting to your VPN device at this time. You can later replace the IP address.
-```powershell
-$RG5 = "TestBGPRG5"
-$Location5 = "East US 2"
+```azurepowershell-interactive
+$RG5 = "TestRG5"
+$Location5 = "West US"
$LNGName5 = "Site5"
-$LNGPrefix50 = "10.52.255.254/32"
-$LNGIP5 = "Your_VPN_Device_IP"
+$LNGPrefix50 = "10.51.255.254/32"
+$LNGIP5 = "4.3.2.1"
$LNGASN5 = 65050
-$BGPPeerIP5 = "10.52.255.254"
+$BGPPeerIP5 = "10.51.255.254"
```

A couple of things to note regarding the local network gateway parameters:

* The local network gateway can be in the same or different location and resource group as the VPN gateway. This example shows them in different resource groups in different locations.
-* The prefix you need to declare for the local network gateway is the host address of your BGP Peer IP address on your VPN device. In this case, it's a /32 prefix of "10.52.255.254/32".
-* As a reminder, you must use different BGP ASNs between your on-premises networks and Azure VNet. If they are the same, you need to change your VNet ASN if your on-premises VPN device already uses the ASN to peer with other BGP neighbors.
-
-Before you continue, make sure you are still connected to Subscription 1.
+* The prefix you need to declare for the local network gateway is the host address of your BGP Peer IP address on your VPN device. In this case, it's a /32 prefix of "10.51.255.254/32".
+* As a reminder, you must use different BGP ASNs between your on-premises networks and Azure VNet. If they're the same, you need to change your VNet ASN if your on-premises VPN device already uses the ASN to peer with other BGP neighbors.
#### 2. Create the local network gateway for Site5
-Be sure to create the resource group if it is not created, before you create the local network gateway. Notice the two additional parameters for the local network gateway: Asn and BgpPeerAddress.
+Create the resource group before you create the local network gateway.
-```powershell
+```azurepowershell-interactive
New-AzResourceGroup -Name $RG5 -Location $Location5
+```
+
+Create the local network gateway. Notice the two additional parameters for the local network gateway: Asn and BgpPeeringAddress.
+```azurepowershell-interactive
New-AzLocalNetworkGateway -Name $LNGName5 -ResourceGroupName $RG5 -Location $Location5 -GatewayIpAddress $LNGIP5 -AddressPrefix $LNGPrefix50 -Asn $LNGASN5 -BgpPeeringAddress $BGPPeerIP5
```
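To confirm that the ASN and BGP peer address were recorded on the local network gateway resource, you can inspect its BGP settings. A quick sketch using the variables above:

```azurepowershell-interactive
# Shows the Asn and BgpPeeringAddress configured for Site5.
$lng5 = Get-AzLocalNetworkGateway -Name $LNGName5 -ResourceGroupName $RG5
$lng5.BgpSettings
```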
New-AzLocalNetworkGateway -Name $LNGName5 -ResourceGroupName $RG5 -Location $Loc
#### 1. Get the two gateways
-```powershell
+```azurepowershell-interactive
$vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
$lng5gw = Get-AzLocalNetworkGateway -Name $LNGName5 -ResourceGroupName $RG5
```

#### 2. Create the TestVNet1 to Site5 connection
-In this step, you create the connection from TestVNet1 to Site5. You must specify "-EnableBGP $True" to enable BGP for this connection. As discussed earlier, it is possible to have both BGP and non-BGP connections for the same Azure VPN Gateway. Unless BGP is enabled in the connection property, Azure will not enable BGP for this connection even though BGP parameters are already configured on both gateways.
+In this step, you create the connection from TestVNet1 to Site5. You must specify "-EnableBGP $True" to enable BGP for this connection. As discussed earlier, it's possible to have both BGP and non-BGP connections for the same VPN gateway. Unless BGP is enabled in the connection property, Azure won't enable BGP for this connection even though BGP parameters are already configured on both gateways.
-```powershell
+Redeclare your variables if necessary:
+
+```azurepowershell-interactive
+$Connection15 = "VNet1toSite5"
+$Location1 = "East US"
+```
+
+Then run the following command:
+
+```azurepowershell-interactive
New-AzVirtualNetworkGatewayConnection -Name $Connection15 -ResourceGroupName $RG1 -VirtualNetworkGateway1 $vnet1gw -LocalNetworkGateway2 $lng5gw -Location $Location1 -ConnectionType IPsec -SharedKey 'AzureA1b2C3' -EnableBGP $True
```
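To check that the connection was created with BGP enabled and to watch its status, you can query it as shown in this sketch (using the variables declared earlier):

```azurepowershell-interactive
# ConnectionStatus moves to "Connected" once the IPsec tunnel is up; EnableBgp should be True.
Get-AzVirtualNetworkGatewayConnection -Name $Connection15 -ResourceGroupName $RG1 |
    Select-Object Name, ConnectionStatus, EnableBgp
```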
+#### On-premises device configuration
+
The following example lists the parameters you enter into the BGP configuration section on your on-premises VPN device for this exercise:

```
- Site5 ASN : 65050
-- Site5 BGP IP : 10.52.255.254
-- Prefixes to announce : (for example) 10.51.0.0/16 and 10.52.0.0/16
+- Site5 BGP IP : 10.51.255.254
+- Prefixes to announce : (for example) 10.51.0.0/16
- Azure VNet ASN : 65010
- Azure VNet BGP IP : 10.12.255.30
- Static route : Add a route for 10.12.255.30/32, with nexthop being the VPN tunnel interface on your device
The following example lists the parameters you enter into the BGP configuration
The connection is established after a few minutes, and the BGP peering session starts once the IPsec connection is established.
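Once the tunnel is up, you can check from the Azure side whether the BGP peering session with Site5 has been established. A sketch using the BGP peer status cmdlet; the exact output columns may vary by Az.Network version:

```azurepowershell-interactive
# Lists the gateway's BGP peers, their state (for example, Connected), and routes received.
Get-AzVirtualNetworkGatewayBGPPeerStatus -VirtualNetworkGatewayName $GWName1 -ResourceGroupName $RG1
```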
-## <a name ="v2vbgp"></a>Part 3 - Establish a VNet-to-VNet connection with BGP
+## Establish a VNet-to-VNet connection with BGP
+
+This section adds a VNet-to-VNet connection with BGP, as shown in Diagram 4.
-This section adds a VNet-to-VNet connection with BGP, as shown in the following diagram:
+**Diagram 4**
-![Diagram that shows a V Net to V Net connection.](./media/vpn-gateway-bgp-resource-manager-ps/bgp-vnet2vnet.png)
-The following instructions continue from the previous steps. You must complete [Part I](#enablebgp) to create and configure TestVNet1 and the VPN Gateway with BGP.
+The following instructions continue from the previous steps. You must first complete the steps in the [Enable BGP for the VPN gateway](#enablebgp) section to create and configure TestVNet1 and the VPN gateway with BGP.
### Step 1 - Create TestVNet2 and the VPN gateway
-It is important to make sure that the IP address space of the new virtual network, TestVNet2, does not overlap with any of your VNet ranges.
+It's important to make sure that the IP address space of the new virtual network, TestVNet2, doesn't overlap with any of your VNet ranges.
In this example, the virtual networks belong to the same subscription. You can set up VNet-to-VNet connections between different subscriptions. For more information, see [Configure a VNet-to-VNet connection](vpn-gateway-vnet-vnet-rm-ps.md). Make sure you add the "-EnableBgp $True" when creating the connections to enable BGP.
In this example, the virtual networks belong to the same subscription. You can s
Be sure to replace the values with the ones that you want to use for your configuration.
-```powershell
-$RG2 = "TestBGPRG2"
-$Location2 = "West US"
+```azurepowershell-interactive
+$RG2 = "TestRG2"
+$Location2 = "East US"
$VNetName2 = "TestVNet2"
$FESubName2 = "FrontEnd"
$BESubName2 = "Backend"
$Connection12 = "VNet1toVNet2"
#### 2. Create TestVNet2 in the new resource group
-```powershell
+```azurepowershell-interactive
New-AzResourceGroup -Name $RG2 -Location $Location2

$fesub2 = New-AzVirtualNetworkSubnetConfig -Name $FESubName2 -AddressPrefix $FESubPrefix2
New-AzVirtualNetwork -Name $VNetName2 -ResourceGroupName $RG2 -Location $Locatio
#### 3. Create the VPN gateway for TestVNet2 with BGP parameters
-Request a public IP address to be allocated to the gateway you will create for your VNet and define the required subnet and IP configurations.
+Request a public IP address to be allocated to the gateway you'll create for your VNet and define the required subnet and IP configurations.
-```powershell
+Declare your variables.
+
+```azurepowershell-interactive
$gwpip2 = New-AzPublicIpAddress -Name $GWIPName2 -ResourceGroupName $RG2 -Location $Location2 -AllocationMethod Dynamic
$vnet2 = Get-AzVirtualNetwork -Name $VNetName2 -ResourceGroupName $RG2
$subnet2 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetw
$gwipconf2 = New-AzVirtualNetworkGatewayIpConfig -Name $GWIPconfName2 -Subnet $subnet2 -PublicIpAddress $gwpip2
```
-Create the VPN gateway with the AS number. You must override the default ASN on your Azure VPN gateways. The ASNs for the connected VNets must be different to enable BGP and transit routing.
+Create the VPN gateway with the AS number. You must override the default ASN on your VPN gateways. The ASNs for the connected VNets must be different to enable BGP and transit routing.
-```powershell
+```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Location2 -IpConfigurations $gwipconf2 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1 -Asn $VNet2ASN
```
In this example, both gateways are in the same subscription. You can complete th
#### 1. Get both gateways
-Make sure you log in and connect to Subscription 1.
+Reestablish variables if necessary:
-```powershell
+```azurepowershell-interactive
+$GWName1 = "VNet1GW"
+$GWName2 = "VNet2GW"
+$RG1 = "TestRG1"
+$RG2 = "TestRG2"
+$Connection12 = "VNet1toVNet2"
+$Connection21 = "VNet2toVNet1"
+$Location1 = "East US"
+$Location2 = "East US"
+```
+
+Get both gateways.
+
+```azurepowershell-interactive
$vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
$vnet2gw = Get-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2
```
$vnet2gw = Get-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2
In this step, you create the connection from TestVNet1 to TestVNet2, and the connection from TestVNet2 to TestVNet1.
-```powershell
+TestVNet1 to TestVNet2 connection.
+
+```azurepowershell-interactive
New-AzVirtualNetworkGatewayConnection -Name $Connection12 -ResourceGroupName $RG1 -VirtualNetworkGateway1 $vnet1gw -VirtualNetworkGateway2 $vnet2gw -Location $Location1 -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3' -EnableBgp $True
+```
+
+TestVNet2 to TestVNet1 connection.
+```azurepowershell-interactive
New-AzVirtualNetworkGatewayConnection -Name $Connection21 -ResourceGroupName $RG2 -VirtualNetworkGateway1 $vnet2gw -VirtualNetworkGateway2 $vnet1gw -Location $Location2 -ConnectionType Vnet2Vnet -SharedKey 'AzureA1b2C3' -EnableBgp $True
```

> [!IMPORTANT]
> Be sure to enable BGP for BOTH connections.
->
->
After completing these steps, the connection is established after a few minutes. The BGP peering session is up once the VNet-to-VNet connection is completed.
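To confirm the transit behavior described below, you can list the routes that TestVNet2's gateway has learned over BGP; with all three parts in place, you'd expect to see Site5's prefixes learned through TestVNet1. A sketch, assuming the variables above are still set:

```azurepowershell-interactive
# Routes learned by the TestVNet2 gateway, including prefixes propagated from Site5 via TestVNet1.
Get-AzVirtualNetworkGatewayLearnedRoute -VirtualNetworkGatewayName $GWName2 -ResourceGroupName $RG2 |
    Select-Object Network, NextHop, AsPath, Origin
```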
-If you completed all three parts of this exercise, you have established the following network topology:
+If you completed all three parts of this exercise, you've established the following network topology:
+
+**Diagram 4**
+
-![BGP for VNet-to-VNet](./media/vpn-gateway-bgp-resource-manager-ps/bgp-crosspremv2v.png)
+For context, referring to **Diagram 4**, if BGP were to be disabled between TestVNet2 and TestVNet1, TestVNet2 wouldn't learn the routes for the on-premises network, Site5, and therefore couldn't communicate with Site5. Once you enable BGP, as shown in Diagram 4, all three networks will be able to communicate over the S2S IPsec and VNet-to-VNet connections.
## Next steps
-Once your connection is complete, you can add virtual machines to your virtual networks. See [Create a Virtual Machine](../virtual-machines/windows/quick-create-portal.md) for steps.
+For more information about BGP, see [About BGP and VPN Gateway](vpn-gateway-bgp-overview.md).
vpn-gateway Vpn Gateway Troubleshoot Site To Site Cannot Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-troubleshoot-site-to-site-cannot-connect.md
If the Internet-facing IP address of the VPN device is included in the **Local n
The perfect forward secrecy feature can cause disconnection problems. If the VPN device has perfect forward secrecy enabled, disable the feature. Then update the VPN gateway IPsec policy.
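For example, one way to apply a custom IPsec/IKE policy that doesn't use PFS is shown in the following sketch. The connection name `VNet1toSite5`, the resource group `TestRG1`, and the specific algorithm choices are assumptions for illustration; choose values your VPN device supports.

```azurepowershell-interactive
# Assumed connection and resource group names; replace with your own.
$connection = Get-AzVirtualNetworkGatewayConnection -Name "VNet1toSite5" -ResourceGroupName "TestRG1"

# Build a custom IPsec/IKE policy with PFS disabled (PfsGroup None). Algorithm choices are examples only.
$policy = New-AzIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA256 -DhGroup DHGroup14 `
    -IpsecEncryption AES256 -IpsecIntegrity SHA256 -PfsGroup None `
    -SALifeTimeSeconds 27000 -SADataSizeKilobytes 102400000

# Apply the policy to the existing connection.
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection -IpsecPolicies $policy
```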
+> [!NOTE]
+> VPN gateways do not reply to ICMP on their local address.
## Next steps

- [Configure a site-to-site connection to a virtual network](./tutorial-site-to-site-portal.md)