Updates from: 06/27/2023 01:09:28
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/billing.md
In this article, learn about MAU and Go Local billing, linking Azure AD B2C tena
## MAU overview
-A monthly active user (MAU) is a unique user that performs an authentication within a given month. A user that authenticates multiple times within a given month is counted as one MAU. Customers aren't charged for a MAU's subsequent authentications during the month, nor for inactive users. Authentications may include:
+A monthly active user (MAU) is a unique user that performs an authentication within a given month. A user that authenticates at least once within a given month is counted as one MAU. Customers aren't charged for a MAU's subsequent authentications during the month, nor for inactive users. Authentications may include:
- Active, interactive sign in by the user. For example, [sign-up or sign in](add-sign-up-and-sign-in-policy.md), [self-service password reset](add-password-reset-policy.md), or any type of [user flow](user-flow-overview.md) or [custom policy](custom-policy-overview.md).
- Passive, non-interactive sign in such as [single sign-on (SSO)](session-behavior.md), or any type of token acquisition. For example, authorization code flow, token refresh, or [resource owner password credentials flow](add-ropc-policy.md).
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
Previously updated : 10/06/2022 Last updated : 06/26/2023
Once a password expiration policy has been set, you must also configure force pa
### Password expiry duration
-The password expiry duration default value is **90** days. The value is configurable by using the [Set-MsolPasswordPolicy](/powershell/module/msonline/set-msolpasswordpolicy) cmdlet from the Azure Active Directory Module for Windows PowerShell. This command updates the tenant, so that all users' passwords expire after the number of days you configure.
+By default, the password is set not to expire. However, the value is configurable by using the [Set-MsolPasswordPolicy](/powershell/module/msonline/set-msolpasswordpolicy) cmdlet from the Azure Active Directory Module for Windows PowerShell. This command updates the tenant, so that all users' passwords expire after the number of days you configure.
## Next steps
active-directory Concept Continuous Access Evaluation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-continuous-access-evaluation.md
Token expiration and refresh are a standard mechanism in the industry. When a cl
Customers have expressed concerns about the lag between when conditions change for a user, and when policy changes are enforced. Azure AD has experimented with the "blunt object" approach of reduced token lifetimes but found they can degrade user experiences and reliability without eliminating risks.
-Timely response to policy violations or security issues really requires a "conversation" between the token issuer (Azure AD), and the relying party (enlightened app). This two-way conversation gives us two important capabilities. The relying party can see when properties change, like network location, and tell the token issuer. It also gives the token issuer a way to tell the relying party to stop respecting tokens for a given user because of account compromise, disablement, or other concerns. The mechanism for this conversation is continuous access evaluation (CAE). The goal for critical event evaluation is for response to be near real time, but latency of up to 15 minutes may be observed because of event propagation time; however, IP locations policy enforcement is instant.
+Timely response to policy violations or security issues really requires a "conversation" between the token issuer (Azure AD), and the relying party (enlightened app). This two-way conversation gives us two important capabilities. The relying party can see when properties change, like network location, and tell the token issuer. It also gives the token issuer a way to tell the relying party to stop respecting tokens for a given user because of account compromise, disablement, or other concerns. The mechanism for this conversation is continuous access evaluation (CAE), an industry standard based on [Open ID Continuous Access Evaluation Profile (CAEP)](https://openid.net/specs/openid-caep-specification-1_0-01.html). The goal for critical event evaluation is for response to be near real time, but latency of up to 15 minutes may be observed because of event propagation time; however, IP locations policy enforcement is instant.
The initial implementation of continuous access evaluation focuses on Exchange, Teams, and SharePoint Online.
active-directory Concept Filter For Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/conditional-access/concept-filter-for-applications.md
Follow the instructions in the article, [Add or deactivate custom security attri
:::image type="content" source="media/concept-filter-for-applications/custom-attributes.png" alt-text="A screenshot showing custom security attribute and predefined values in Azure AD." lightbox="media/concept-filter-for-applications/custom-attributes.png":::
> [!NOTE]
-> Conditional Access filters for devices only works with custom security attributes of type "string".
+> Conditional Access filters for devices work only with custom security attributes of type "string". Custom security attributes support creating the Boolean data type, but Conditional Access policy supports only "string".
## Create a Conditional Access policy
active-directory Concept Azure Ad Register https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/concept-azure-ad-register.md
The goal of Azure AD registered - also known as Workplace joined - devices is to
Azure AD registered devices are signed in to using a local account like a Microsoft account on a Windows 10 or newer device. These devices have an Azure AD account for access to organizational resources. Access to resources in the organization can be limited based on that Azure AD account and Conditional Access policies applied to the device identity.
-Administrators can secure and further control these Azure AD registered devices using Mobile Device Management (MDM) tools like Microsoft Intune. MDM provides a means to enforce organization-required configurations like requiring storage to be encrypted, password complexity, and security software kept updated.
+Administrators can further control these Azure AD registered devices by enrolling them into Mobile Device Management (MDM) tools like Microsoft Intune. MDM provides a means to enforce organization-required configurations like requiring storage to be encrypted, password complexity, and security software kept updated.
Azure AD registration can be accomplished when accessing a work application for the first time or manually using the Windows 10 or Windows 11 Settings menu.
active-directory Mural Identity Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mural-identity-provisioning-tutorial.md
This tutorial describes the steps you need to perform in both MURAL Identity and
## Capabilities Supported
> [!div class="checklist"]
> * Create users in MURAL Identity
+> * Remove users in MURAL Identity when they do not require access anymore.
> * Keep user attributes synchronized between Azure AD and MURAL Identity
+> * Provision groups and group memberships in MURAL Identity.
+> * [Single sign-on](mural-identity-tutorial.md) to MURAL Identity (recommended).
## Prerequisites
The scenario outlined in this tutorial assumes that you already have the followi
## Step 1. Plan your provisioning deployment
1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
-2. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. Determine what data to [map between Azure AD and MURAL Identity](../app-provisioning/customize-application-attributes.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and MURAL Identity](../app-provisioning/customize-application-attributes.md).
## Step 2. Configure MURAL Identity to support provisioning with Azure AD
This section guides you through the steps to configure the Azure AD provisioning
![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **MURAL Identity**.
+1. In the applications list, select **MURAL Identity**.
![The MURAL Identity link in the Applications list](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
![Provisioning tab](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Provisioning](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input your MURAL Identity Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to MURAL Identity. If the connection fails, ensure your MURAL Identity account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input your MURAL Identity Tenant URL and Secret Token. Click **Test Connection** to ensure Azure AD can connect to MURAL Identity. If the connection fails, ensure your MURAL Identity account has Admin permissions and try again.
![Token](common/provisioning-testconnection-tenanturltoken.png)
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
-7. Select **Save**.
+1. Select **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to MURAL Identity**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to MURAL Identity**.
-9. Review the user attributes that are synchronized from Azure AD to MURAL Identity in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in MURAL Identity for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the MURAL Identity API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to MURAL Identity in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in MURAL Identity for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the MURAL Identity API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|Supported for filtering|
- ||||
- |userName|String|✓
- |active|Boolean|
- |emails[type eq "work"].value|String|
- |name.givenName|String|
- |name.familyName|String|
- |externalId|String|
+ |Attribute|Type|Supported for filtering|Required by MURAL Identity|
+ |||||
+ |userName|String|✓|✓|
+ |emails[type eq "work"].value|String||✓|
+ |active|Boolean|||
+ |name.givenName|String|||
+ |name.familyName|String|||
+ |externalId|String|||
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to MURAL Identity**.
+1. Review the group attributes that are synchronized from Azure AD to MURAL Identity in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in MURAL Identity for update operations. Select the **Save** button to commit any changes.
+
+ |Attribute|Type|Supported for filtering|Required by MURAL Identity|
+ |||||
   |displayName|String|✓|✓|
   |members|Reference|||
   |externalId|String|||
10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
11. To enable the Azure AD provisioning service for MURAL Identity, change the **Provisioning Status** to **On** in the **Settings** section.
This operation starts the initial synchronization cycle of all users and groups
## Step 6. Monitor your deployment
Once you've configured provisioning, use the following resources to monitor your deployment:
-1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
-2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
## Troubleshooting Tips
* When provisioning a user, keep in mind that at MURAL we do not support numbers in the name fields (i.e. givenName or familyName).
* When filtering on **userName** in the GET endpoint, make sure that the email address is all lowercase; otherwise you will get an empty result. This is because we convert email addresses to lowercase while provisioning accounts.
* When de-provisioning an end-user (setting the active attribute to false), the user is soft-deleted and loses access to all their workspaces. When that same de-provisioned end-user is later activated again (setting the active attribute to true), the user will not have access to the workspaces they previously belonged to. The end-user will see the error message "You've been deactivated from this workspace", with an option to request reactivation, which the workspace admin must approve.
-* If you have any other issues, please reach out to [MURAL Identity support team](mailto:support@mural.co)
+* If you have any other issues, please reach out to [MURAL Identity support team](mailto:support@mural.co).
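Since MURAL converts email addresses to lowercase during provisioning, a simple guard when filtering on **userName** is to normalize the address before querying. A minimal sketch, using a hypothetical address (not a real account):

```shell
# Hypothetical address, for illustration only.
EMAIL="Alice.Smith@Contoso.com"

# MURAL stores provisioned email addresses in lowercase, so lowercase
# the value before using it in a userName filter to avoid empty results.
FILTER_EMAIL="$(printf '%s' "$EMAIL" | tr '[:upper:]' '[:lower:]')"

echo "$FILTER_EMAIL"
```

The normalized value is what the GET endpoint matches against, regardless of how the address was cased in Azure AD.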
+
+## Change log
+06/22/2023 - Added support for **Group Provisioning**.
## More resources
active-directory Shopify Plus Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shopify-plus-provisioning-tutorial.md
This section guides you through the steps to configure the Azure AD provisioning
9. Review the user attributes that are synchronized from Azure AD to Shopify Plus in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Shopify Plus for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Shopify Plus API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- |Attribute|Type|Supported for Filtering|
- ||||
- |userName|String|✓|
+ |Attribute|Type|Supported for Filtering|Required by Shopify Plus|
+ |||||
+ |userName|String|✓|✓|
+ |roles|String|||
|active|Boolean|
- |name.givenName|String|
- |name.familyName|String|
+ |name.givenName|String||✓|
+ |name.familyName|String||✓|
10. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
Once you've configured provisioning, use the following resources to monitor your
1. Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
2. Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
-3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+3. If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
+
+## Change log
+06/22/2023 - Added support for **roles**.
## Additional resources
aks Csi Migrate In Tree Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md
Before proceeding, verify the following:
DISK_URI="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.azureDisk.diskURI}')"
TARGET_RESOURCE_GROUP="$(cut -d'/' -f5 <<<"$DISK_URI")"
echo $DISK_URI
+ SUBSCRIPTION_ID="$(echo $DISK_URI | grep -o 'subscriptions/[^/]*' | sed 's#subscriptions/##g')"
echo $TARGET_RESOURCE_GROUP
PERSISTENT_VOLUME_RECLAIM_POLICY="$(kubectl get pv $PV -n $NAMESPACE -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
- az snapshot create --resource-group $TARGET_RESOURCE_GROUP --name $PVC-$FILENAME --source "$DISK_URI"
- SNAPSHOT_PATH=$(az snapshot list --resource-group $TARGET_RESOURCE_GROUP --query "[?name == '$PVC-$FILENAME'].id | [0]")
+ az snapshot create --resource-group $TARGET_RESOURCE_GROUP --name $PVC-$FILENAME --source "$DISK_URI" --subscription ${SUBSCRIPTION_ID}
+ SNAPSHOT_PATH=$(az snapshot list --resource-group $TARGET_RESOURCE_GROUP --query "[?name == '$PVC-$FILENAME'].id | [0]" --subscription ${SUBSCRIPTION_ID})
SNAPSHOT_HANDLE=$(echo "$SNAPSHOT_PATH" | tr -d '"')
echo $SNAPSHOT_HANDLE
sleep 10
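To see how the URI parsing in the migration script behaves, here's a small standalone sketch with a made-up disk URI (the subscription ID, resource group, and disk name are placeholders): field 5 of the slash-separated URI is the resource group, and the segment after `subscriptions/` is the subscription ID.

```shell
# Made-up disk URI, for illustration only.
DISK_URI="/subscriptions/00000000-1111-2222-3333-444455556666/resourceGroups/MC_myGroup_myCluster_eastus/providers/Microsoft.Compute/disks/pvc-disk-01"

# Field 5 of the slash-separated URI is the node resource group.
TARGET_RESOURCE_GROUP="$(cut -d'/' -f5 <<<"$DISK_URI")"

# The segment following "subscriptions/" is the subscription ID.
SUBSCRIPTION_ID="$(echo "$DISK_URI" | grep -o 'subscriptions/[^/]*' | sed 's#subscriptions/##g')"

echo "$TARGET_RESOURCE_GROUP"
echo "$SUBSCRIPTION_ID"
```

Passing `--subscription ${SUBSCRIPTION_ID}` to the `az snapshot` commands, as the updated script does, keeps the snapshot in the disk's own subscription even when the CLI's default subscription differs.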
aks Dapr Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/dapr-settings.md
az k8s-extension upgrade --cluster-type managedClusters \
--resource-group myResourceGroup \ --name dapr \ --extension-type Microsoft.Dapr \
---set global.tag=1.10.0-AzureLinux
+--set global.tag=1.10.0-mariner
```
-- [Learn more about using AzureLinux-based images with Dapr][dapr-azurelinux].
+- [Learn more about using Mariner-based images with Dapr][dapr-mariner].
- [Learn more about deploying AzureLinux on AKS][aks-azurelinux].
Once you have successfully provisioned Dapr in your AKS cluster, try deploying a
[dapr-supported-version]: https://docs.dapr.io/operations/support/support-release-policy/#supported-versions
[dapr-troubleshooting]: https://docs.dapr.io/operations/troubleshooting/common_issues/
[supported-cloud-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc
-[dapr-azurelinux]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-deploy/#using-azurelinux-based-images
+[dapr-mariner]: https://docs.dapr.io/operations/hosting/kubernetes/kubernetes-deploy/#using-mariner-based-images
aks Start Stop Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/start-stop-cluster.md
You may not need to continuously run your Azure Kubernetes Service (AKS) workloa
To better optimize your costs during these periods, you can turn off, or stop, your cluster. This action stops your control plane and agent nodes, allowing you to save on all the compute costs, while maintaining all objects except standalone pods. The cluster state is stored for when you start it again, allowing you to pick up where you left off.
+> [!NOTE]
+> AKS start operations will restore all objects from etcd, with the exception of standalone pods, with the same names and ages. This means that a pod's age will continue to be calculated from its original creation time, and this count will keep increasing over time, regardless of whether the cluster is in a stopped state.
+
## Before you begin
This article assumes you have an existing AKS cluster. If you need an AKS cluster, you can create one using [Azure CLI][aks-quickstart-cli], [Azure PowerShell][aks-quickstart-powershell], or the [Azure portal][aks-quickstart-portal].
app-service Configure Authentication Provider Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-openid-connect.md
You can configure your app to use one or more OIDC providers. Each must be given
Your provider will require you to register the details of your application with it. One of these steps involves specifying a redirect URI. This redirect URI will be of the form `<app-url>/.auth/login/<provider-name>/callback`. Each identity provider should provide more instructions on how to complete these steps. `<provider-name>` will refer to the friendly name you give to the OpenID provider name in Azure. > [!NOTE]
-> Some providers may require additional steps for their configuration and how to use the values they provide. For example, Apple provides a private key which is not itself used as the OIDC client secret, and you instead must use it craft a JWT which is treated as the secret you provide in your app config (see the "Creating the Client Secret" section of the [Sign in with Apple documentation](https://developer.apple.com/documentation/sign_in_with_apple/generate_and_validate_tokens))
+> Some providers may require additional steps for their configuration and how to use the values they provide. For example, Apple provides a private key which is not itself used as the OIDC client secret, and you instead must use it to craft a JWT which is treated as the secret you provide in your app config (see the "Creating the Client Secret" section of the [Sign in with Apple documentation](https://developer.apple.com/documentation/sign_in_with_apple/generate_and_validate_tokens))
> You will need to collect a **client ID** and **client secret** for your application.
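The redirect-URI shape described above can be assembled mechanically from the app URL and the provider's friendly name. A minimal sketch with placeholder values (`APP_URL` and `PROVIDER_NAME` are hypothetical, not a real app or provider):

```shell
# Placeholder values, for illustration only.
APP_URL="https://contoso-app.azurewebsites.net"
PROVIDER_NAME="contoso-oidc"

# Redirect URI shape: <app-url>/.auth/login/<provider-name>/callback
REDIRECT_URI="${APP_URL}/.auth/login/${PROVIDER_NAME}/callback"

echo "$REDIRECT_URI"
```

This is the exact value to register with the OIDC provider; a mismatch (trailing slash, different casing of the provider name) will cause the callback to be rejected.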
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
description: Take the first steps toward upgrading to App Service Environment v3
Previously updated : 05/12/2023 Last updated : 06/26/2023
# Upgrade to App Service Environment v3
This page is your one-stop shop for guidance and resources to help you upgrade s
|**2**|**Migrate**|Based on results of your review, either upgrade using the migration feature or follow the manual steps.<br><br>- [Use the automated migration feature](how-to-migrate.md)<br>- [Migrate manually](migration-alternatives.md)|
|**3**|**Testing and troubleshooting**|Upgrading using the automated migration feature requires a 3-6 hour service window. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
|**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)|
-|**5**|**Learn more**|Join the [free live webinar](https://msit.events.teams.microsoft.com/event/2c472562-426a-48d6-b963-21c73d6e6cb0@72f988bf-86f1-41af-91ab-2d7cd011db47) with FastTrack Architects.<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
+|**5**|**Learn more**|Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
## Additional information
app-service Overview Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-managed-identity.md
Title: Managed identities
description: Learn how managed identities work in Azure App Service and Azure Functions, how to configure a managed identity and generate a token for a back-end resource. Previously updated : 01/27/2022 Last updated : 06/27/2023
This article shows you how to create a managed identity for App Service and Azure Functions applications and how to use it to access other resources.
> [!IMPORTANT]
-> Managed identities for App Service and Azure Functions won't behave as expected if your app is migrated across subscriptions/tenants. The app needs to obtain a new identity, which is done by [disabling](#remove) and re-enabling the feature. Downstream resources also need to have access policies updated to use the new identity.
+> Because [managed identities don't support cross-directory scenarios](../active-directory/managed-identities-azure-resources/managed-identities-faq.md#can-i-use-a-managed-identity-to-access-a-resource-in-a-different-directorytenant), they won't behave as expected if your app is migrated across subscriptions or tenants. To recreate the managed identities after such a move, see [Will managed identities be recreated automatically if I move a subscription to another directory?](../active-directory/managed-identities-azure-resources/managed-identities-faq.md#will-managed-identities-be-recreated-automatically-if-i-move-a-subscription-to-another-directory). Downstream resources also need to have access policies updated to use the new identity.
> [!NOTE]
> Managed identities are not available for [apps deployed in Azure Arc](overview-arc-integration.md).
[!INCLUDE [app-service-managed-identities](../../includes/app-service-managed-identities.md)]
-The managed identity configuration is specific to the slot. To configure a managed identity for a deployment slot in the portal, navigate to the slot first. To find the managed identity for your web app or deployment slot in your Azure Active Directory tenant from the Azure portal, search for it directly from the **Overview** page of your tenant. Usually, the slot name is similar to `<app name>/slots/<slot name>`.
+The managed identity configuration is specific to the slot. To configure a managed identity for a deployment slot in the portal, navigate to the slot first. To find the managed identity for your web app or deployment slot in your Azure Active Directory tenant from the Azure portal, search for it directly from the **Overview** page of your tenant. Usually, the slot name is similar to `<app-name>/slots/<slot-name>`.
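As a quick illustration of the slot-name shape mentioned above, this sketch builds the display name you'd search for on the tenant's **Overview** page (the app and slot names are placeholders):

```shell
# Placeholder names, for illustration only.
APP_NAME="contoso-app"
SLOT_NAME="staging"

# A deployment slot's managed identity usually appears in the tenant
# under the name <app-name>/slots/<slot-name>.
IDENTITY_NAME="${APP_NAME}/slots/${SLOT_NAME}"

echo "$IDENTITY_NAME"
```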
## Add a system-assigned identity
For example, a web app's template might look like the following JSON:
```json
{
- "apiVersion": "2016-08-01",
+ "apiVersion": "2022-03-01",
  "type": "Microsoft.Web/sites",
  "name": "[variables('appName')]",
  "location": "[resourceGroup().location]",
When the site is created, it has the following additional properties:
```json
"identity": {
  "type": "SystemAssigned",
- "tenantId": "<TENANTID>",
- "principalId": "<PRINCIPALID>"
+ "tenantId": "<tenant-id>",
+ "principalId": "<principal-id>"
}
```
First, you'll need to create a user-assigned identity resource.
1. Select **Identity**.
-1. Within the **User assigned** tab, click **Add**.
+1. Select **User assigned** > **Add**.
-1. Search for the identity you created earlier and select it. Click **Add**.
+1. Search for the identity you created earlier, select it, and select **Add**.
![Managed identity in App Service](media/app-service-managed-service-identity/user-assigned-managed-identity-in-azure-portal.png)
-> [!IMPORTANT]
-> If you select **Add** after you select a user-assigned identity to add, your application will restart.
+ Once you select **Add**, the app restarts.
# [Azure CLI](#tab/cli)
Adding a user-assigned identity in App Service is currently not supported.
An Azure Resource Manager template can be used to automate deployment of your Azure resources. To learn more about deploying to App Service and Functions, see [Automating resource deployment in App Service](../app-service/deploy-complex-application-predictably.md) and [Automating resource deployment in Azure Functions](../azure-functions/functions-infrastructure-as-code.md).
-Any resource of type `Microsoft.Web/sites` can be created with an identity by including the following block in the resource definition, replacing `<RESOURCEID>` with the resource ID of the desired identity:
+Any resource of type `Microsoft.Web/sites` can be created with an identity by including the following block in the resource definition, replacing `<resource-id>` with the resource ID of the desired identity:
```json
"identity": {
  "type": "UserAssigned",
  "userAssignedIdentities": {
- "<RESOURCEID>": {}
+ "<resource-id>": {}
  }
}
```
For example, a web app's template might look like the following JSON:
```json
{
- "apiVersion": "2016-08-01",
+ "apiVersion": "2022-03-01",
  "type": "Microsoft.Web/sites",
  "name": "[variables('appName')]",
  "location": "[resourceGroup().location]",
When the site is created, it has the following additional properties:
"identity": {
  "type": "UserAssigned",
  "userAssignedIdentities": {
- "<RESOURCEID>": {
- "principalId": "<PRINCIPALID>",
- "clientId": "<CLIENTID>"
+ "<resource-id>": {
+ "principalId": "<principal-id>",
+ "clientId": "<client-id>"
    }
  }
}
When you remove a system-assigned identity, it's deleted from Azure Active Direc
1. Select **Identity**. Then follow the steps based on the identity type:
    - **System-assigned identity**: Within the **System assigned** tab, switch **Status** to **Off**. Click **Save**.
- - **User-assigned identity**: Click the **User assigned** tab, select the checkbox for the identity, and click **Remove**. Click **Yes** to confirm.
+ - **User-assigned identity**: Select the **User assigned** tab, select the checkbox for the identity, and select **Remove**. Select **Yes** to confirm.
# [Azure CLI](#tab/cli)
azure-maps Choose Map Style https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/choose-map-style.md
map.setStyle({
});
```
-For a fully functional sample that shows how the different styles affect how the map is rendered, see [Map style options] in the [Azure Maps Samples].
+For a fully functional sample that shows how the different styles affect how the map is rendered, see [Map style options] in the [Azure Maps Samples]. For the source code for this sample, see [Map style options source code].
<!-- <br/>
See the following articles for more code samples to add to your maps:
[Add a symbol layer]: map-add-pin.md
[Add a bubble layer]: map-add-bubble-layer.md
[Map style options]: https://samples.azuremaps.com/map/map-style-options
+[Map style options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Map/Map%20style%20options/Map%20style%20options.html
[Azure Maps Samples]: https://samples.azuremaps.com
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md
A bubble layer is a great way to render clustered points. Use expressions to sca
To display the size of the cluster on top of the bubble, use a symbol layer with text, and don't use an icon.
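A minimal sketch of this pattern, assuming the Azure Maps Web SDK (`atlas`) is loaded and the function is called once the map's `ready` event has fired; the expression values and layer options are illustrative, not the sample's exact code:

```javascript
// Step expression: radius 20 below 100 points, 30 from 100 points, 40 from 750.
var clusterRadiusExpression = [
    'step', ['get', 'point_count'],
    20,
    100, 30,
    750, 40
];

// Call after the map's 'ready' event fires.
function addClusterLayers(map) {
    var datasource = new atlas.source.DataSource(null, {
        cluster: true,
        clusterRadius: 45
    });
    map.sources.add(datasource);

    // Scaled bubbles for the clusters only.
    map.layers.add(new atlas.layer.BubbleLayer(datasource, null, {
        radius: clusterRadiusExpression,
        filter: ['has', 'point_count']
    }));

    // Cluster size as text on top of the bubble; no icon.
    map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
        iconOptions: { image: 'none' },
        textOptions: { textField: ['get', 'point_count_abbreviated'], offset: [0, 0.4] }
    }));
    return datasource;
}
```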
-For a complete working sample of how to implement displaying clusters using a bubble layer, see [Point Clusters in Bubble Layer] in the [Azure Maps Samples].
+For a complete working sample of how to implement displaying clusters using a bubble layer, see [Point Clusters in Bubble Layer] in the [Azure Maps Samples]. For the source code for this sample, see [Point Clusters in Bubble Layer source code].
:::image type="content" source="./media/cluster-point-data-web-sdk/display-clusters-using-bubble-layer.png" alt-text="Screenshot showing a map displaying clusters using a bubble layer.":::
When visualizing data points, the symbol layer automatically hides symbols that
Use clustering to show the data points density while keeping a clean user interface. The following sample shows you how to add custom symbols and represent clusters and individual data points using the symbol layer.
-For a complete working sample of how to implement displaying clusters using a symbol layer, see [Display clusters with a Symbol Layer] in the [Azure Maps Samples].
+For a complete working sample of how to implement displaying clusters using a symbol layer, see [Display clusters with a Symbol Layer] in the [Azure Maps Samples]. For the source code for this sample, see [Display clusters with a Symbol Layer source code].
:::image type="content" source="./media/cluster-point-data-web-sdk/display-clusters-using-symbol-layer.png" alt-text="Screenshot showing a map displaying clusters with a symbol layer.":::
For a complete working sample of how to implement displaying clusters using a sy
Heat maps are a great way to display the density of data on the map. This visualization method can handle a large number of data points on its own. If the data points are clustered and the cluster size is used as the weight of the heat map, then the heat map can handle even more data. To do this, set the `weight` option of the heat map layer to `['get', 'point_count']`. When the cluster radius is small, the heat map looks nearly identical to a heat map using the unclustered data points, but it performs better. The smaller the cluster radius, the more accurate the heat map, but the smaller the performance benefit.
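A sketch of a cluster-weighted heat map, assuming `atlas` is loaded and the function runs after the map's `ready` event; the radius values are illustrative:

```javascript
// Call after the map's 'ready' event fires.
function addClusteredHeatMap(map) {
    var datasource = new atlas.source.DataSource(null, {
        cluster: true,
        clusterRadius: 15 // a small radius keeps the heat map accurate
    });
    map.sources.add(datasource);

    map.layers.add(new atlas.layer.HeatMapLayer(datasource, null, {
        // Weight each cluster by the number of points it contains.
        weight: ['get', 'point_count'],
        radius: 20
    }));
    return datasource;
}
```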
-For a complete working sample that demonstrates how to create a heat map that uses clustering on the data source, see [Cluster weighted Heat Map] in the [Azure Maps Samples].
+For a complete working sample that demonstrates how to create a heat map that uses clustering on the data source, see [Cluster weighted Heat Map] in the [Azure Maps Samples]. For the source code for this sample, see [Cluster weighted Heat Map source code].
:::image type="content" source="./media/cluster-point-data-web-sdk/cluster-weighted-heat-map.png" alt-text="Screenshot showing a heat map that uses clustering on the data source.":::
function clusterClicked(e) {
The point data that a cluster represents is spread over an area. In this sample, when the mouse is hovered over a cluster, two main behaviors occur. First, the individual data points contained in the cluster are used to calculate a convex hull. Then, the convex hull is displayed on the map to show an area. A convex hull is a polygon that wraps a set of points like an elastic band and can be calculated using the `atlas.math.getConvexHull` method. All points contained in a cluster can be retrieved from the data source using the `getClusterLeaves` method.
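A sketch of that flow, assuming `atlas` is loaded, the function runs after the map's `ready` event, and `clusterLayer`, `pointSource`, and `hullSource` (a second data source feeding a polygon layer) already exist; the names are illustrative:

```javascript
function showClusterArea(map, clusterLayer, pointSource, hullSource) {
    map.events.add('mouseover', clusterLayer, function (e) {
        var cluster = e.shapes[0];

        // Retrieve every point the cluster represents from the data source...
        pointSource.getClusterLeaves(cluster.properties.cluster_id, Number.MAX_SAFE_INTEGER, 0).then(function (leaves) {
            // ...wrap them in a convex hull polygon, and render it.
            hullSource.setShapes([atlas.math.getConvexHull(leaves)]);
        });
    });
}
```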
-For a complete working sample that demonstrates how to do this, see [Display cluster area with Convex Hull] in the [Azure Maps Samples].
+For a complete working sample that demonstrates how to do this, see [Display cluster area with Convex Hull] in the [Azure Maps Samples]. For the source code for this sample, see [Display cluster area with Convex Hull source code].
:::image type="content" source="./media/cluster-point-data-web-sdk/display-cluster-area.png" alt-text="Screenshot showing a map that displays cluster areas represented by drop pins that show Convex Hull marking the cluster area when selected.":::
For a complete working sample that demonstrates how to do this, see [Display clu
Often clusters are represented using a symbol with the number of points that are within the cluster. But sometimes it's desirable to customize the style of clusters with more metrics. With cluster aggregates, custom properties can be created and populated using an [aggregate expression] calculation. Cluster aggregates can be defined in the `clusterProperties` option of the `DataSource`.
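A sketch of such an aggregate, assuming `atlas` is loaded and the function runs after the map's `ready` event; the `EntityType` property and `gasStationCount` name are illustrative:

```javascript
// Aggregate expression: [accumulator, per-point map expression]. This one
// counts points whose (illustrative) EntityType property equals 'Gas Station'.
var gasStationAggregate = [
    '+',
    ['case', ['==', ['get', 'EntityType'], 'Gas Station'], 1, 0]
];

// Call after the map's 'ready' event fires.
function createAggregatingSource(map) {
    var datasource = new atlas.source.DataSource(null, {
        cluster: true,
        clusterProperties: {
            // Each cluster gets a gasStationCount property.
            gasStationCount: gasStationAggregate
        }
    });
    map.sources.add(datasource);
    return datasource;
}
```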
-The [Cluster aggregates] sample uses an aggregate expression. The code calculates a count based on the entity type property of each data point in a cluster. When a user selects a cluster, a popup shows with additional information about the cluster.
+The [Cluster aggregates] sample uses an aggregate expression. The code calculates a count based on the entity type property of each data point in a cluster. When a user selects a cluster, a popup shows with additional information about the cluster. For the source code for this sample, see [Cluster aggregates source code].
:::image type="content" source="./media/cluster-point-data-web-sdk/cluster-aggregates.png" alt-text="Screenshot showing a map that uses clustering defined using data-driven style expression calculation. These calculations aggregate values across all points contained within the cluster.":::
See code examples to add functionality to your app:
[aggregate expression]: data-driven-style-expressions-web-sdk.md#aggregate-expression
[Azure Maps Samples]: https://samples.azuremaps.com
[Point Clusters in Bubble Layer]: https://samples.azuremaps.com/bubble-layer/point-clusters-in-bubble-layer
+[Point Clusters in Bubble Layer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Bubble%20Layer/Point%20Clusters%20in%20Bubble%20Layer/Point%20Clusters%20in%20Bubble%20Layer.html
[Display clusters with a Symbol Layer]: https://samples.azuremaps.com/symbol-layer/display-clusters-with-a-symbol-layer
+[Display clusters with a Symbol Layer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Symbol%20Layer/Display%20clusters%20with%20a%20Symbol%20layer/Display%20clusters%20with%20a%20Symbol%20layer.html
[Cluster weighted Heat Map]: https://samples.azuremaps.com/heat-map-layer/cluster-weighted-heat-map
+[Cluster weighted Heat Map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Heat%20Map%20Layer/Cluster%20weighted%20Heat%20Map/Cluster%20weighted%20Heat%20Map.html
[Display cluster area with Convex Hull]: https://samples.azuremaps.com/spatial-math/display-cluster-area-with-convex-hull
+[Display cluster area with Convex Hull source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Spatial%20Math/Display%20cluster%20area%20with%20Convex%20Hull/Display%20cluster%20area%20with%20Convex%20Hull.html
[Cluster aggregates]: https://samples.azuremaps.com/bubble-layer/cluster-aggregates
+[Cluster aggregates source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Bubble%20Layer/Cluster%20aggregates/Cluster%20aggregates.html
azure-maps Create Data Source Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md
var flowLayer = new atlas.layer.LineLayer(source, null, {
map.layers.add(flowLayer, 'labels'); ```
-For a complete working sample of how to display data from a vector tile source on the map, see [Vector tile line layer] in the [Azure Maps Samples].
+For a complete working sample of how to display data from a vector tile source on the map, see [Vector tile line layer] in the [Azure Maps Samples]. For the source code for this sample, see [Vector tile line layer sample code].
:::image type="content" source="./media/create-data-source-web-sdk/vector-tile-line-layer.png" alt-text="Screenshot showing a map displaying data from a vector tile source.":::
See the following articles for more code samples to add to your maps:
[Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec
[Vector tile line layer]: https://samples.azuremaps.com/vector-tiles/vector-tile-line-layer
+[Vector tile line layer sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Vector%20tiles/Vector%20tile%20line%20layer/Vector%20tile%20line%20layer.html
[Azure Maps Samples]: https://samples.azuremaps.com
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
Title: Drawing tool events | Microsoft Azure Maps
+ Title: Drawing tools events | Microsoft Azure Maps
description: This article demonstrates how to add a drawing toolbar to a map using Microsoft Azure Maps Web SDK
-# Drawing tool events
+# Drawing tools events
When using drawing tools on a map, it's useful to react to certain events as the user draws on the map. This table lists all events supported by the `DrawingManager` class.
When using drawing tools on a map, it's useful to react to certain events as the
| `drawingmodechanged` | Fired when the drawing mode has changed. The new drawing mode is passed into the event handler. |
| `drawingstarted` | Fired when the user starts drawing a shape or puts a shape into edit mode. |
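A sketch of wiring up a few of these events, assuming `atlas`, the drawing tools module, and an initialized map are available (function called after the `ready` event); the toolbar options and handler bodies are illustrative:

```javascript
// Call after the map's 'ready' event fires.
function wireDrawingEvents(map) {
    var drawingManager = new atlas.drawing.DrawingManager(map, {
        toolbar: new atlas.control.DrawingToolbar({ position: 'top-right' })
    });

    // Fired repeatedly while a shape is being drawn or edited.
    map.events.add('drawingchanging', drawingManager, function (shape) {
        console.log('changing', shape.getType());
    });

    // Fired once the shape is complete.
    map.events.add('drawingcomplete', drawingManager, function (shape) {
        console.log('complete', shape.getType());
    });

    // Fired when the drawing mode changes; the new mode is the event argument.
    map.events.add('drawingmodechanged', drawingManager, function (mode) {
        console.log('mode', mode);
    });
    return drawingManager;
}
```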
-For a complete working sample of how to display data from a vector tile source on the map, see [Drawing tool events] in the [Azure Maps Samples]. In this sample you can draw shapes on the map and watch as the events fire.
+For a complete working sample that demonstrates the drawing tools events, see [Drawing tools events] in the [Azure Maps Samples]. In this sample, you can draw shapes on the map and watch as the events fire. For the source code for this sample, see [Drawing tools events sample code].
The following image shows a screenshot of the complete working sample that demonstrates how the events in the Drawing Tools module work.
Let's see some common scenarios that use the drawing tools events.
This code demonstrates how to monitor the events fired as a user draws shapes. For this example, the code monitors polygon, rectangle, and circle shapes. Then, it determines which data points on the map are within the drawn area. The `drawingcomplete` event is used to trigger the select logic. In the select logic, the code loops through all the data points on the map and checks whether each point intersects the area of the drawn shape. This example makes use of the open-source [Turf.js](https://turfjs.org/) library to perform a spatial intersection calculation.
-For a complete working sample of how to use the drawing tools to draw polygon areas on the map with points within them that can be selected, see [Select data in drawn polygon area] in the [Azure Maps Samples].
+For a complete working sample of how to use the drawing tools to draw polygon areas on the map with points within them that can be selected, see [Select data in drawn polygon area] in the [Azure Maps Samples]. For the source code for this sample, see [Select data in drawn polygon area sample code].
:::image type="content" source="./media/drawing-tools-events/select-data-in-drawn-polygon-area.png" alt-text="Screenshot showing a map displaying points within polygon areas.":::
For a complete working sample of how to use the drawing tools to draw polygon ar
This code searches for points of interest inside the area of a shape after the user finishes drawing the shape. You can modify and execute the code by selecting 'Edit on CodePen' in the top-right corner of the frame. The `drawingcomplete` event is used to trigger the search logic. If the user draws a rectangle or polygon, a search inside geometry is performed. If a circle is drawn, the radius and center position are used to perform a point of interest search. The `drawingmodechanged` event is used to determine when the user switches to the drawing mode, and this event clears the drawing canvas.
-For a complete working sample of how to use the drawing tools to search for points of interests within drawn areas, see [Draw and search polygon area] in the [Azure Maps Samples].
+For a complete working sample of how to use the drawing tools to search for points of interest within drawn areas, see [Draw and search polygon area] in the [Azure Maps Samples]. For the source code for this sample, see [Draw and search polygon area sample code].
:::image type="content" source="./media/drawing-tools-events/draw-and-search-polygon-area.png" alt-text="Screenshot showing a map displaying the Draw and search in polygon area sample.":::
For a complete working sample of how to use the drawing tools to search for poin
The following code shows how the drawing events can be used to create a measuring tool. The `drawingchanging` event is used to monitor the shape as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after it has been drawn. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information.
-For a complete working sample of how to use the drawing tools to measure distances and areas, see [Create a measuring tool] in the [Azure Maps Samples].
+For a complete working sample of how to use the drawing tools to measure distances and areas, see [Create a measuring tool] in the [Azure Maps Samples]. For the source code for this sample, see [Create a measuring tool sample code].
:::image type="content" source="./media/drawing-tools-events/create-a-measuring-tool.png" alt-text="Screenshot showing a map displaying the measuring tool sample.":::
Check out more code samples:
> [Code sample page](https://aka.ms/AzureMapsSamples)

[Azure Maps Samples]: https://samples.azuremaps.com
-[Drawing tool events]: https://samples.azuremaps.com/drawing-tools-module/drawing-tools-events
+[Drawing tools events]: https://samples.azuremaps.com/drawing-tools-module/drawing-tools-events
+[Drawing tools events sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Drawing%20tools%20events/Drawing%20tools%20events.html
[Select data in drawn polygon area]: https://samples.azuremaps.com/drawing-tools-module/select-data-in-drawn-polygon-area
+[Select data in drawn polygon area sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Select%20data%20in%20drawn%20polygon%20area/Select%20data%20in%20drawn%20polygon%20area.html
[Draw and search polygon area]: https://samples.azuremaps.com/drawing-tools-module/draw-and-search-polygon-area
+[Draw and search polygon area sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Draw%20and%20search%20polygon%20area/Draw%20and%20search%20polygon%20area.html
[Create a measuring tool]: https://samples.azuremaps.com/drawing-tools-module/create-a-measuring-tool
+[Create a measuring tool sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Create%20a%20measuring%20tool/Create%20a%20measuring%20tool.html
azure-maps How To Use Image Templates Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-image-templates-web-sdk.md
The [Symbol layer with built-in icon template] sample demonstrates how to do thi
:::image type="content" source="./media/how-to-use-image-templates-web-sdk/symbol-layer-with-built-in-icon-template.png" alt-text="Screenshot showing a map displaying a symbol layer using the marker-flat image template with a teal primary color and a white secondary color.":::
+For the source code for this sample, see [Symbol layer with built-in icon template sample code].
+ <!-- <br/> <iframe height="500" scrolling="no" title="Symbol layer with built-in icon template" src="//codepen.io/azuremaps/embed/VoQMPp/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true">
The [Symbol layer with built-in icon template] sample demonstrates how to do thi
Once an image template is loaded into the map image sprite, it can be rendered along the path of a line by adding a LineString to a data source and using a symbol layer with the `lineSpacing` option, referencing the ID of the image resource in the `image` option of the `iconOptions`.
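A sketch of the approach, assuming `atlas` is loaded and the function runs after the map's `ready` event; the coordinates, spacing, and colors are illustrative:

```javascript
// Call after the map's 'ready' event fires.
function addIconsAlongLine(map) {
    // Load a built-in template into the image sprite, then render it along the line.
    map.imageSprite.createFromTemplate('car-icon', 'car', 'DodgerBlue', '#fff').then(function () {
        var datasource = new atlas.source.DataSource();
        map.sources.add(datasource);
        datasource.add(new atlas.data.LineString([
            [-122.18822, 47.63208],
            [-122.18204, 47.63196],
            [-122.17243, 47.62976]
        ]));

        map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
            lineSpacing: 100,   // pixels between symbols
            placement: 'line',  // place symbols along the line path
            iconOptions: {
                image: 'car-icon',
                allowOverlap: true,
                rotationAlignment: 'map'
            }
        }));
    });
}
```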
-The [Line layer with built-in icon template] demonstrates how to do this. As show in the following screenshot, it renders a red line on the map and uses a symbol layer using the `car` image template with a dodger blue primary color and a white secondary color.
+The [Line layer with built-in icon template] demonstrates how to do this. As shown in the following screenshot, it renders a red line on the map and uses a symbol layer with the `car` image template with a dodger blue primary color and a white secondary color. For the source code for this sample, see [Line layer with built-in icon template sample code].
:::image type="content" source="./media/how-to-use-image-templates-web-sdk/line-layer-with-built-in-icon-template.png" alt-text="Screenshot showing a map displaying a line layer marking the route with car icons along the route.":::
The [Line layer with built-in icon template] demonstrates how to do this. As sho
Once an image template is loaded into the map image sprite, it can be rendered as a fill pattern in a polygon layer by referencing the image resource ID in the `fillPattern` option of the layer.
-The [Fill polygon with built-in icon template] sample demonstrates how to render a polygon layer using the `dot` image template with a red primary color and a transparent secondary color, as shown in the following screenshot.
+The [Fill polygon with built-in icon template] sample demonstrates how to render a polygon layer using the `dot` image template with a red primary color and a transparent secondary color, as shown in the following screenshot. For the source code for this sample, see [Fill polygon with built-in icon template sample code].
:::image type="content" source="./media/how-to-use-image-templates-web-sdk/fill-polygon-with-built-in-icon-template.png" alt-text="Screenshot showing a map displaying a polygon layer using the dot image template with a red primary color and a transparent secondary color.":::
The [Fill polygon with built-in icon template] sample demonstrates how to render
An image template can be retrieved using the `atlas.getImageTemplate` function and used as the content of an HTML marker. The template can be passed into the `htmlContent` option of the marker, and then customized using the `color`, `secondaryColor`, and `text` options.
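A sketch, assuming `atlas` is loaded and the function runs after the map's `ready` event; the position is illustrative, while the template name and option values follow the marker-arrow example described in this section:

```javascript
// Call after the map's 'ready' event fires.
function addTemplatedMarker(map) {
    var marker = new atlas.HtmlMarker({
        // Retrieve the built-in SVG template by name.
        htmlContent: atlas.getImageTemplate('marker-arrow'),
        color: 'red',
        secondaryColor: 'pink',
        text: '00',
        position: [-122.33, 47.6]
    });
    map.markers.add(marker);
    return marker;
}
```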
-The [HTML Marker with built-in icon template] sample demonstrates this using the `marker-arrow` template with a red primary color, a pink secondary color, and a text value of "00", as shown in the following screenshot.
+The [HTML Marker with built-in icon template] sample demonstrates this using the `marker-arrow` template with a red primary color, a pink secondary color, and a text value of "00", as shown in the following screenshot. For the source code for this sample, see [HTML Marker with built-in icon template sample code].
:::image type="content" source="./media/how-to-use-image-templates-web-sdk/html-marker-with-built-in-icon-template.png" alt-text="Screenshot showing a map displaying the marker-arrow template with a red primary color, a pink secondary color, and a text value of 00 inside the red arrow.":::
SVG image templates support the following placeholder values:
| `{scale}` | The SVG image is converted to an png image when added to the map image sprite. This placeholder can be used to scale a template before it's converted to ensure it renders clearly. | | `{text}` | The location to render text when used with an HTML Marker. |
-The [Add custom icon template to atlas namespace] sample demonstrates how to take an SVG template, and add it to the Azure Maps web SDK as a reusable icon template, as shown in the following screenshot.
+The [Add custom icon template to atlas namespace] sample demonstrates how to take an SVG template, and add it to the Azure Maps web SDK as a reusable icon template, as shown in the following screenshot. For the source code for this sample, see [Add custom icon template to atlas namespace sample code].
:::image type="content" source="./media/how-to-use-image-templates-web-sdk/add-custom-icon-template-to-atlas-namespace.png" alt-text="Screenshot showing a map displaying a polygon layer in the shape of a big green triangle with multiple images of blue anchors inside.":::
See the following articles for more code samples where image templates can be us
[Fill polygon with built-in icon template]: https://samples.azuremaps.com/polygons/fill-polygon-with-built-in-icon-template
[HTML Marker with built-in icon template]: https://samples.azuremaps.com/html-markers/html-marker-with-built-in-icon-template
[Add custom icon template to atlas namespace]: https://samples.azuremaps.com/map/add-custom-icon-template-to-atlas-namespace
+[Symbol layer with built-in icon template sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Symbol%20Layer/Symbol%20layer%20with%20built-in%20icon%20template/Symbol%20layer%20with%20built-in%20icon%20template.html
+[Line layer with built-in icon template sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Line%20Layer/Line%20layer%20with%20built-in%20icon%20template/Line%20layer%20with%20built-in%20icon%20template.html
+[Fill polygon with built-in icon template sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Polygons/Fill%20polygon%20with%20built-in%20icon%20template/Fill%20polygon%20with%20built-in%20icon%20template.html
+[HTML Marker with built-in icon template sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/HTML%20Markers/HTML%20Marker%20with%20built-in%20icon%20template/HTML%20Marker%20with%20built-in%20icon%20template.html
+[Add custom icon template to atlas namespace sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Map/Add%20custom%20icon%20template%20to%20atlas%20namespace/Add%20custom%20icon%20template%20to%20atlas%20namespace.html
azure-maps How To Use Services Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-services-module.md
Title: Use the Azure Maps Services module | Microsoft Azure Maps
+ Title: Use the Azure Maps Services module
+ description: Learn about the Azure Maps services module. See how to load and use this helper library to access Azure Maps REST services in web or Node.js applications.
- Previously updated : 03/25/2019
+ Last updated : 06/26/2023
The Azure Maps Web SDK provides a *services module*. This module is a helper lib
import * as service from "azure-maps-rest"; ```
-1. Create an authentication pipeline. The pipeline must be created before you can initialize a service URL client endpoint. Use your own Azure Maps account key or Azure Active Directory (Azure AD) credentials to authenticate an Azure Maps Search service client. In this example, the Search service URL client will be created.
+1. Create an authentication pipeline. The pipeline must be created before you can initialize a service URL client endpoint. Use your own Azure Maps account key or Azure Active Directory (Azure AD) credentials to authenticate an Azure Maps Search service client. In this example, the Search service URL client will be created.
If you use a subscription key for authentication:
The Azure Maps Web SDK provides a *services module*. This module is a helper lib
}); ```
- Here's the full, running code sample:
+Here's the full, running code sample:
-<br/>
+```html
+<html>
+ <head>
+
+ <script src="https://atlas.microsoft.com/sdk/javascript/service/2/atlas-service.min.js"></script>
+
+ <script type="text/javascript">
+
+ // Get an Azure Maps key at https://azure.com/maps.
+ var subscriptionKey = '{Your-Azure-Maps-Subscription-key}';
+
+ // Use SubscriptionKeyCredential with a subscription key.
+ var subscriptionKeyCredential = new atlas.service.SubscriptionKeyCredential(subscriptionKey);
+
+ // Use subscriptionKeyCredential to create a pipeline.
+ var pipeline = atlas.service.MapsURL.newPipeline(subscriptionKeyCredential, {
+ retryOptions: { maxTries: 4 } // Retry options
+ });
+ // Create an instance of the SearchURL client.
+ var searchURL = new atlas.service.SearchURL(pipeline);
+
+ // Search for "1 microsoft way, redmond, wa".
+ searchURL.searchAddress(atlas.service.Aborter.timeout(10000), '1 microsoft way, redmond, wa')
+ .then(response => {
+ var html = [];
+
+ // Display the total results.
+ html.push('Total results: ', response.summary.numResults, '<br/><br/>');
+
+ // Create a table of the results.
+ html.push('<table><tr><td></td><td>Result</td><td>Latitude</td><td>Longitude</td></tr>');
+
+ for(var i=0;i<response.results.length;i++){
+ html.push('<tr><td>', (i+1), '.</td><td>',
+ response.results[i].address.freeformAddress,
+ '</td><td>',
+ response.results[i].position.lat,
+ '</td><td>',
+ response.results[i].position.lon,
+ '</td></tr>');
+ }
+
+ html.push('</table>');
+
+ // Add the resulting HTML to the body of the page.
+ document.body.innerHTML = html.join('');
+ });
+
+ </script>
+</head>
+
+<style>
+ table {
+ border: 1px solid black;
+ border-collapse: collapse;
+ }
+ td, th {
+ border: 1px solid black;
+ padding: 5px;
+ }
+</style>
+
+<body> </body>
+
+</html>
+```
+
+The following image is a screenshot showing the results of this sample code, a table with the address searched for, along with the resulting coordinates.
++
+<!--
<iframe height="500" scrolling="no" title="Using the Services Module" src="//codepen.io/azuremaps/embed/zbXGMR/?height=500&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/zbXGMR/'>Using the Services Module</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
-
-<br/>
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
## Azure Government cloud support
For more code samples that use the services module, see these articles:
> [Get information from a coordinate](./map-get-information-from-coordinate.md) > [!div class="nextstepaction"]
-> [Show directions from A to B](./map-route.md)
+> [Show directions from A to B](./map-route.md)
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
Any additional information that is placed on the base map should have correspond
A marker or symbol is often used to represent a location on the map. Additional information about the location is typically displayed in a popup when the user interacts with the marker. In most applications, popups appear when a user selects a marker. However, clicking and tapping require the user to use a mouse and a touch screen, respectively. A good practice is to make popups accessible when using a keyboard. This functionality can be achieved by creating a popup for each data point and adding it to the map.
-The [Accessible popups] example loads points of interests on the map using a symbol layer and adds a popup to the map for each point of interest. A reference to each popup is stored in the properties of each data point. It can also be retrieved for a marker, such as when a marker is selected. When focused on the map, pressing the tab key allows the user to step through each popup on the map.
+The [Accessible popups] example loads points of interest on the map using a symbol layer and adds a popup to the map for each point of interest. A reference to each popup is stored in the properties of each data point. It can also be retrieved for a marker, such as when a marker is selected. When focused on the map, pressing the tab key allows the user to step through each popup on the map. For the source code for this sample, see [Accessible popups source code].
:::image type="content" source="./media/map-accessibility/accessible-popups.png" alt-text="A screenshot showing a maps with accessible popups.":::
Take a look at these useful accessibility tools:
> [No Coffee Vision Simulator](https://uxpro.cc/toolbox/nocoffee/)

[Accessible popups]: https://samples.azuremaps.com/popups/accessible-popups
+[Accessible popups source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Popups/Accessible%20popups/Accessible%20popups.html
[Accessibility Conformance Reports]: https://cloudblogs.microsoft.com/industry-blog/government/2018/09/11/accessibility-conformance-reports/
[Accessible Rich Internet Applications (ARIA)]: https://www.w3.org/WAI/standards-guidelines/aria/
azure-maps Map Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer.md
This code shows you how to use a bubble layer to render a point on the map and a
## Customize a bubble layer
-The Bubble layer only has a few styling options. Use the [Bubble Layer Options] sample to try them out.
+The Bubble layer only has a few styling options. Use the [Bubble Layer Options] sample to try them out. For the source code for this sample, see [Bubble Layer Options source code].
:::image type="content" source="./media/map-add-bubble-layer/bubble-layer-options.png" alt-text="Screenshot showing the Bubble Layer Options sample that shows a map with bubbles and selectable bubble layer options to the left of the map.":::
See the following articles for more code samples to add to your maps:
> [Code samples](/samples/browse/?products=azure-maps)

[Bubble Layer Options]: https://samples.azuremaps.com/bubble-layer/bubble-layer-options
+[Bubble Layer Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Bubble%20Layer/Bubble%20Layer%20Options/Bubble%20Layer%20Options.html
[bubble layer]: /javascript/api/azure-maps-control/atlas.layer.bubblelayer
azure-maps Map Add Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls.md
The style picker control is defined by the [StyleControl] class. For more inform
## Customize controls
-The [Navigation Control Options] sample is a tool to test out the various options for customizing the controls.
+The [Navigation Control Options] sample is a tool to test out the various options for customizing the controls. For the source code for this sample, see [Navigation Control Options source code].
:::image type="content" source="./media/map-add-controls/map-navigation-control-options.png" alt-text="Screenshot showing the Map Navigation Control Options sample, which contains a map displaying zoom, compass, pitch and style controls and options on the left side of the screen that enable you to change the Control Position, Control Style, Zoom Delta, Pitch Delta, Compass Rotation Delta, Picker Styles, and Style Picker Layout properties.":::
See the following articles for full code:
[CompassControl]: /javascript/api/azure-maps-control/atlas.control.compasscontrol [StyleControl]: /javascript/api/azure-maps-control/atlas.control.stylecontrol [Navigation Control Options]: https://samples.azuremaps.com/controls/map-navigation-control-options
+[Navigation Control Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Controls/Map%20Navigation%20Control%20Options/Map%20Navigation%20Control%20Options.html
[choose a map style]: choose-map-style.md [Add a pin]: map-add-pin.md [Add a popup]: map-add-popup.md
azure-maps Map Add Custom Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-custom-html.md
map.events.add('click',marker, () => {
}); ```
-For a complete working sample of how to add an HTML marker, see [Simple HTML Marker] in the [Azure Maps Samples].
+For a complete working sample of how to add an HTML marker, see [Simple HTML Marker] in the [Azure Maps Samples]. For the source code for this sample, see [Simple HTML Marker source code].
:::image type="content" source="./media/map-add-custom-html/simple-html-marker.png" alt-text="Screenshot showing a map of the world with a simple HtmlMarker.":::
For a complete working sample of how to add an HTML marker, see [Simple HTML Mar
The default `htmlContent` of an HTML marker is an SVG template with the placeholders `{color}` and `{text}` in it. You can create custom SVG strings and add these same placeholders into your SVG so that setting the `color` and `text` options of the marker updates these placeholders in your SVG.
-For a complete working sample of how to create a custom SVG template and use it with the HtmlMarker class, see [HTML Marker with Custom SVG Template] in the [Azure Maps Samples]. When running this sample, select the button in the upper left hand side of the window labeled **Update Marker Options** to change the `color` and `text` options from the SVG template used in the HtmlMarker.
+For a complete working sample of how to create a custom SVG template and use it with the HtmlMarker class, see [HTML Marker with Custom SVG Template] in the [Azure Maps Samples]. When running this sample, select the button in the upper left hand side of the window labeled **Update Marker Options** to change the `color` and `text` options from the SVG template used in the HtmlMarker. For the source code for this sample, see [HTML Marker with Custom SVG Template source code].
:::image type="content" source="./media/map-add-custom-html/html-marker-with-custom-svg-template.png" alt-text="Screenshot showing a map of the world with a custom SVG template used with the HtmlMarker class and a button labeled update marker options, that when selected changes the color and text options from the SVG template used in the HtmlMarker. ":::
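The placeholder substitution described above can be sketched in plain JavaScript. This is an illustrative sketch of the idea only, not the SDK's internal implementation; `customSvg` and `fillTemplate` are hypothetical names used for this example.

```javascript
// Illustrative sketch of the placeholder idea, not the SDK's internal code.
// customSvg and fillTemplate are hypothetical names used for this example.
var customSvg = '<svg xmlns="http://www.w3.org/2000/svg" width="30" height="30">' +
    '<circle cx="15" cy="15" r="12" fill="{color}"/>' +
    '<text x="15" y="19" text-anchor="middle">{text}</text></svg>';

// Replace the {color} and {text} placeholders with the marker options,
// the way setting the color and text options of an HtmlMarker would.
function fillTemplate(template, options) {
    return template
        .replace(/{color}/g, options.color)
        .replace(/{text}/g, options.text);
}

var html = fillTemplate(customSvg, { color: 'DodgerBlue', text: 'A' });
```

The resulting string could then be passed as the `htmlContent` option of an `atlas.HtmlMarker`.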
The CSS:
</style> ```
-For a complete working sample of how to use CSS and HTML to create a marker on the map, see [CSS Styled HTML Marker] in the [Azure Maps Samples].
+For a complete working sample of how to use CSS and HTML to create a marker on the map, see [CSS Styled HTML Marker] in the [Azure Maps Samples]. For the source code for this sample, see [CSS Styled HTML Marker source code].
:::image type="content" source="./media/map-add-custom-html/css-styled-html-marker.gif" alt-text="Screenshot showing a CSS styled HTML marker. ":::
For a complete working sample of how to use CSS and HTML to create a marker on t
This sample shows how to make an HTML marker draggable. HTML markers support `drag`, `dragstart`, and `dragend` events.
-For a complete working sample of how to use CSS and HTML to create a marker on the map, see [Draggable HTML Marker] in the [Azure Maps Samples].
+For a complete working sample of how to make an HTML marker draggable, see [Draggable HTML Marker] in the [Azure Maps Samples]. For the source code for this sample, see [Draggable HTML Marker source code].
:::image type="content" source="./media/map-add-custom-html/draggable-html-marker.gif" alt-text="Screenshot showing a map of the United States with a yellow thumb tack being dragged to demonstrate a draggable HTML marker. ":::
For a complete working sample of how to use CSS and HTML to create a marker on t
## Add mouse events to HTML markers
-For a complete working sample of how to add mouse and drag events to an HTML marker, see [HTML Marker events] in the [Azure Maps Samples].
+For a complete working sample of how to add mouse and drag events to an HTML marker, see [HTML Marker events] in the [Azure Maps Samples]. For the source code for this sample, see [HTML Marker events source code].
:::image type="content" source="./media/map-add-custom-html/html-marker-events.gif" alt-text="Screenshot showing a map of the world with an HtmlMarker and a list of HtmlMarker events that become highlighted in green when that event fires.":::
For more code examples to add to your maps, see the following articles:
[Draggable HTML Marker]: https://samples.azuremaps.com/html-markers/draggable-html-marker [HTML Marker events]: https://samples.azuremaps.com/html-markers/html-marker-events
+[Simple HTML Marker source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/HTML%20Markers/Simple%20HTML%20Marker/Simple%20HTML%20Marker.html
+[HTML Marker with Custom SVG Template source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/HTML%20Markers/HTML%20Marker%20with%20Custom%20SVG%20Template/HTML%20Marker%20with%20Custom%20SVG%20Template.html
+[CSS Styled HTML Marker source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/HTML%20Markers/CSS%20Styled%20HTML%20Marker/CSS%20Styled%20HTML%20Marker.html
+[Draggable HTML Marker source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/HTML%20Markers/Draggable%20HTML%20Marker/Draggable%20HTML%20Marker.html
+[HTML Marker events source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/HTML%20Markers/HTML%20Marker%20events/HTML%20Marker%20events.html
+ [HtmlMarker]: /javascript/api/azure-maps-control/atlas.htmlmarker [HtmlMarkerOptions]: /javascript/api/azure-maps-control/atlas.htmlmarkeroptions [HtmlMarkerManager]: /javascript/api/azure-maps-control/atlas.htmlmarkermanager
azure-maps Map Add Drawing Toolbar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-drawing-toolbar.md
Last updated 06/05/2023
- # Add a drawing tools toolbar to a map
drawingManager = new atlas.drawing.DrawingManager(map, {
}); ```
-For a complete working sample that demonstrates how to add a drawing toolbar to your map, see [Add drawing toolbar to map] in the [Azure Maps Samples].
+For a complete working sample that demonstrates how to add a drawing toolbar to your map, see [Add drawing toolbar to map] in the [Azure Maps Samples]. For the source code for this sample, see [Add drawing toolbar to map source code].
:::image type="content" source="./media/map-add-drawing-toolbar/add-drawing-toolbar.png" alt-text="Screenshot showing the drawing toolbar on a map."::: <! <iframe height="500" scrolling="no" title="Add drawing toolbar" src="//codepen.io/azuremaps/embed/ZEzLeRg/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/ZEzLeRg/'>Add drawing toolbar</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
> ## Limit displayed toolbar options
-The following code creates an instance of the drawing manager and displays the toolbar with just a polygon drawing tool on the map.
+The following code creates an instance of the drawing manager and displays the toolbar with just a polygon drawing tool on the map.
```javascript //Create an instance of the drawing manager and display the drawing toolbar with polygon drawing tool.
The following screenshot shows a sample of an instance of the drawing manager th
<! <iframe height="500" scrolling="no" title="Add a polygon drawing tool" src="//codepen.io/azuremaps/embed/OJLWWMy/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/OJLWWMy/'>Add a polygon drawing tool</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
> ## Change drawing rendering style
drawingManager.setOptions({
}); ```
-For a complete working sample that demonstrates how to customize the rendering of the drawing shapes in the drawing manager by accessing the rendering layers, see [Change drawing rendering style] in the [Azure Maps Samples].
+For a complete working sample that demonstrates how to customize the rendering of the drawing shapes in the drawing manager by accessing the rendering layers, see [Change drawing rendering style] in the [Azure Maps Samples]. For the source code for this sample, see [Change drawing rendering style source code].
:::image type="content" source="./media/map-add-drawing-toolbar/change-drawing-rendering-style.png" alt-text="Screenshot showing different drawing shaped rendered on a map."::: <! <iframe height="500" scrolling="no" title="Change drawing rendering style" src="//codepen.io/azuremaps/embed/OJLWpyj/?height=265&theme-id=0&default-tab=js,result&editable=true" frameborder='no' loading="lazy" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/OJLWpyj/'>Change drawing rendering style</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
> > [!NOTE]
Learn more about the classes and methods used in this article:
[Azure Maps Samples]: https://samples.azuremaps.com [Add drawing toolbar to map]: https://samples.azuremaps.com/drawing-tools-module/add-drawing-toolbar-to-map
-[Change drawing rendering style]: https://samples.azuremaps.com/drawing-tools-module/change-drawing-rendering-style
+[Change drawing rendering style]: https://samples.azuremaps.com/drawing-tools-module/change-drawing-rendering-style
+
+[Add drawing toolbar to map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Add%20drawing%20toolbar%20to%20map/Add%20drawing%20toolbar%20to%20map.html
+[Change drawing rendering style source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Change%20drawing%20rendering%20style/Change%20drawing%20rendering%20style.html
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md
map.layers.add(new atlas.layer.HeatMapLayer(datasource, null, {
}), 'labels'); ```
-The [Simple Heat Map Layer] sample demonstrates how to create a simple heat map from a data set of point features.
+The [Simple Heat Map Layer] sample demonstrates how to create a simple heat map from a data set of point features. For the source code for this sample, see [Simple Heat Map Layer source code].
:::image type="content" source="./media/map-add-heat-map-layer/add-a-heat-map-layer.png" alt-text="Screenshot showing a map displaying a heat map.":::
The previous example customized the heat map by setting the radius and opacity o
However, if you use an expression, the weight of each data point can be based on its properties. For example, suppose each data point represents an earthquake. The magnitude value is an important metric for each earthquake data point. Earthquakes happen all the time, but most have a low magnitude and aren't noticed. Use the magnitude value in an expression to assign the weight to each data point. By using the magnitude value to assign the weight, you get a better representation of the significance of earthquakes within the heat map. - `source` and `source-layer`: Enable you to update the data source.
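As a sketch of the magnitude-based weighting described above, the heat map layer's `weight` option can be given a data-driven expression that reads a property from each point. The options object below is illustrative; the property name `magnitude` and the other values are assumptions for this example.

```javascript
// Illustrative heat map layer options; 'magnitude' is an assumed
// property name on each point feature.
var heatMapOptions = {
    // Weight each point by its magnitude so stronger earthquakes
    // contribute more heat to the map than weak ones.
    weight: ['get', 'magnitude'],
    radius: 20,
    opacity: 0.8
};
```

These options would be passed when constructing an `atlas.layer.HeatMapLayer`.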
-The [Heat Map Layer Options] sample shows how the different options of the heat map layer that affects rendering.
+The [Heat Map Layer Options] sample shows how the different options of the heat map layer affect rendering. For the source code for this sample, see [Heat Map Layer Options source code].
:::image type="content" source="./media/map-add-heat-map-layer/heat-map-layer-options.png" alt-text="Screenshot showing a map displaying a heat map, and a panel with editable settings that show how the different options of the heat map layer affect rendering.":::
Use a `zoom` expression to scale the radius for each zoom level, such that each
Scaling the radius so that it doubles with each zoom level creates a heat map that looks consistent on all zoom levels. To apply this scaling, use `zoom` with a base 2 `exponential interpolation` expression, with the pixel radius set for the minimum zoom level and a scaled radius for the maximum zoom level calculated as `2 * Math.pow(2, minZoom - maxZoom)` as shown in the following sample. Zoom the map to see how the heat map scales with the zoom level.
-The [Consistent zoomable Heat Map] sample shows how to create a heat map where the radius of each data point covers the same physical area on the ground, creating a more consistent user experience when zooming the map. The heat map in this sample scales consistently between zoom levels 10 and 22. Each zoom level of the map has twice as many pixels vertically and horizontally as the previous zoom level. Doubling the radius with each zoom level creates a heat map that looks consistent across all zoom levels.
+The [Consistent zoomable Heat Map] sample shows how to create a heat map where the radius of each data point covers the same physical area on the ground, creating a more consistent user experience when zooming the map. The heat map in this sample scales consistently between zoom levels 10 and 22. Each zoom level of the map has twice as many pixels vertically and horizontally as the previous zoom level. Doubling the radius with each zoom level creates a heat map that looks consistent across all zoom levels. For the source code for this sample, see [Consistent zoomable Heat Map source code].
:::image type="content" source="./media/map-add-heat-map-layer/consistent-zoomable-heat-map.png" alt-text="Screenshot showing a map displaying a heat map that uses a zoom expression that scales the radius for each zoom level.":::
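The doubling behavior described above can be checked with a little arithmetic. The sketch below evaluates the closed form of a base-2 exponential interpolation between the minimum and maximum zoom levels; the 10 to 22 zoom range matches the sample, while `baseRadius` is an illustrative value.

```javascript
// Closed form of a base-2 exponential interpolation between
// (minZoom, baseRadius) and (maxZoom, baseRadius * 2^(maxZoom - minZoom)).
// baseRadius is an assumed pixel radius at the minimum zoom level.
var minZoom = 10, maxZoom = 22, baseRadius = 5;

function radiusAtZoom(zoom) {
    return baseRadius * Math.pow(2, zoom - minZoom);
}

// Each +1 zoom level doubles the pixel radius, so each data point
// covers the same ground area at every zoom level.
var r10 = radiusAtZoom(10); // 5 pixels
var r11 = radiusAtZoom(11); // 10 pixels
```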
For more code examples to add to your maps, see the following articles:
[Simple Heat Map Layer]: https://samples.azuremaps.com/heat-map-layer/simple-heat-map-layer [Heat Map Layer Options]: https://samples.azuremaps.com/heat-map-layer/heat-map-layer-options [Consistent zoomable Heat Map]: https://samples.azuremaps.com/heat-map-layer/consistent-zoomable-heat-map
+[Simple Heat Map Layer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Heat%20Map%20Layer/Simple%20Heat%20Map%20Layer/Simple%20Heat%20Map%20Layer.html
+[Heat Map Layer Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Heat%20Map%20Layer/Heat%20Map%20Layer%20Options/Heat%20Map%20Layer%20Options.html
+[Consistent zoomable Heat Map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Heat%20Map%20Layer/Consistent%20zoomable%20Heat%20Map/Consistent%20zoomable%20Heat%20Map.html
azure-maps Map Add Image Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer.md
map.layers.add(new atlas.layer.ImageLayer({
})); ```
-For a fully functional sample that shows how to overlay an image of a map of Newark New Jersey from 1922 as an Image layer, see [Simple Image Layer] in the [Azure Maps Samples].
+For a fully functional sample that shows how to overlay an image of a map of Newark New Jersey from 1922 as an Image layer, see [Simple Image Layer] in the [Azure Maps Samples]. For the source code for this sample, see [Simple Image Layer source code].
:::image type="content" source="./media/map-add-image-layer/simple-image-layer.png" alt-text="A screenshot showing a map with an image of a map of Newark New Jersey from 1922 as an Image layer.":::
This sample demonstrates how to add KML ground overlay information as an image l
The code uses the static `getCoordinatesFromEdges` function from the [ImageLayer](/javascript/api/azure-maps-control/atlas.layer.imagelayer) class. It calculates the four corners of the image using the north, south, east, west, and rotation information of the KML ground overlay.
-For a fully functional sample that shows how to use a KML Ground Overlay as Image Layer, see [KML Ground Overlay as Image Layer] in the [Azure Maps Samples].
+For a fully functional sample that shows how to use a KML Ground Overlay as Image Layer, see [KML Ground Overlay as Image Layer] in the [Azure Maps Samples]. For the source code for this sample, see [KML Ground Overlay as Image Layer source code].
:::image type="content" source="./media/map-add-image-layer/kml-ground-overlay-as-image-layer.png" alt-text="A screenshot showing a map with a KML Ground Overlay appearing as Image Layer.":::
For a fully functional sample that shows how to use a KML Ground Overlay as Imag
## Customize an image layer
-The image layer has many styling options. For a fully functional sample that shows how the different options of the image layer affect rendering, see [Image Layer Options] in the [Azure Maps Samples].
+The image layer has many styling options. For a fully functional sample that shows how the different options of the image layer affect rendering, see [Image Layer Options] in the [Azure Maps Samples]. For the source code for this sample, see [Image Layer Options source code].
:::image type="content" source="./media/map-add-image-layer/image-layer-options.png" alt-text="A screenshot showing a map with a panel that has the different options of the image layer that affect rendering. In this sample, you can change styling options and see the effect it has on the map.":::
See the following articles for more code samples to add to your maps:
[Azure Maps Samples]: https://samples.azuremaps.com [KML Ground Overlay as Image Layer]: https://samples.azuremaps.com/image-layer/kml-ground-overlay-as-image-layer [Image Layer Options]: https://samples.azuremaps.com/image-layer/image-layer-options
+[Simple Image Layer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Image%20Layer/Simple%20Image%20Layer/Simple%20Image%20Layer.html
+[KML Ground Overlay as Image Layer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Image%20Layer/KML%20Ground%20Overlay%20as%20Image%20Layer/KML%20Ground%20Overlay%20as%20Image%20Layer.html
+[Image Layer Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Image%20Layer/Image%20Layer%20Options/Image%20Layer%20Options.html
azure-maps Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-line-layer.md
This code will create a map that appears as follows:
You may apply a single stroke color to a line. You can also fill a line with a gradient of colors to show transition from one line segment to the next. For example, line gradients can represent changes over time and distance, or different temperatures across a connected line of objects. To apply this feature to a line, the data source must have the `lineMetrics` option set to `true`; a color gradient expression can then be passed to the `strokeColor` option of the line. The stroke gradient expression has to reference the `['line-progress']` data expression, which exposes the calculated line metrics to the expression.
-For a fully functional sample that shows how to apply a stroke gradient to a line on the map, see [Line with Stroke Gradient] in the [Azure Maps Samples].
+For a fully functional sample that shows how to apply a stroke gradient to a line on the map, see [Line with Stroke Gradient] in the [Azure Maps Samples]. For the source code for this sample, see [Line with Stroke Gradient source code].
:::image type="content" source="./media/map-add-line-layer/line-with-stroke-gradient.png"alt-text="A screenshot showing a line with a stroke gradient on the map.":::
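A minimal sketch of the stroke gradient expression shape described above, with assumed, illustrative color stops; the data source must be created with `lineMetrics: true` for `['line-progress']` to be populated.

```javascript
// Illustrative stroke gradient expression; the color stops are assumed
// values for this sketch. The expression interpolates along the line
// from its start (line-progress 0) to its end (line-progress 1).
var datasourceOptions = { lineMetrics: true };

var strokeGradient = [
    'interpolate',
    ['linear'],
    ['line-progress'],
    0, 'blue',
    0.5, 'purple',
    1, 'red'
];
```

The expression would then be passed as the `strokeColor` option of the line layer.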
For a fully functional sample that shows how to apply a stroke gradient to a lin
## Customize a line layer
-The Line layer has several styling options. For a fully functional sample that interactively demonstrates the line options, see [Line Layer Options] in the [Azure Maps Samples].
+The Line layer has several styling options. For a fully functional sample that interactively demonstrates the line options, see [Line Layer Options] in the [Azure Maps Samples]. For the source code for this sample, see [Line Layer Options source code].
:::image type="content" source="./media/map-add-line-layer/line-layer-options.png"alt-text="A screenshot showing the Line Layer Options sample that shows how the different options of the line layer affect rendering.":::
See the following articles for more code samples to add to your maps:
[Line with Stroke Gradient]: https://samples.azuremaps.com/line-layer/line-with-stroke-gradient [Azure Maps Samples]: https://samples.azuremaps.com [Line Layer Options]: https://samples.azuremaps.com/line-layer/line-layer-options
+[Line with Stroke Gradient source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Line%20Layer/Line%20with%20Stroke%20Gradient/Line%20with%20Stroke%20Gradient.html
+[Line Layer Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Line%20Layer/Line%20Layer%20Options/Line%20Layer%20Options.html
azure-maps Map Add Pin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-pin.md
# Add a symbol layer to a map
-Connect a symbol to a data source, and use it to render an icon or a text at a given point.
+Connect a symbol to a data source, and use it to render an icon or text at a given point.
Symbol layers are rendered using WebGL. Use a symbol layer to render large collections of points on the map. Compared to HTML markers, the symbol layer renders large amounts of point data on the map with better performance. However, the symbol layer doesn't support traditional CSS and HTML elements for styling.
function InitMap()
## Customize a symbol layer
-The symbol layer has many styling options available. The [Symbol Layer Options] sample shows how the different options of the symbol layer that affects rendering.
+The symbol layer has many styling options available. The [Symbol Layer Options] sample shows how the different options of the symbol layer affect rendering. For the source code for this sample, see [Symbol Layer Options source code].
:::image type="content" source="./media/map-add-pin/symbol-layer-options.png" alt-text="A screenshot of map with a panel on the left side of the map with the various symbol options that can be interactively set.":::
See the following articles for more code samples to add to your maps:
> [Add HTML Markers](map-add-bubble-layer.md) [Symbol Layer Options]: https://samples.azuremaps.com/?search=symbol%20layer&sample=symbol-layer-options
+[Symbol Layer Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Symbol%20Layer/Symbol%20Layer%20Options/Symbol%20Layer%20Options.html
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
map.events.add('mouseleave', symbolLayer, function (){
There are cases in which the best approach is to create one popup and reuse it. For example, you may have a large number of points and want to show only one popup at a time. By reusing the popup, the number of DOM elements created by the application is greatly reduced, which can provide better performance. The following sample creates three point features. If you click on any of them, a popup is displayed with the content for that point feature.
-For a fully functional sample that shows how to create one popup and reuse it rather than creating a popup for each point feature, see [Reusing Popup with Multiple Pins] in the [Azure Maps Samples].
+For a fully functional sample that shows how to create one popup and reuse it rather than creating a popup for each point feature, see [Reusing Popup with Multiple Pins] in the [Azure Maps Samples]. For the source code for this sample, see [Reusing Popup with Multiple Pins source code].
:::image type="content" source="./media/map-add-popup/reusing-popup-with-multiple-pins.png"alt-text="A screenshot of map with three blue pins.":::
For a fully functional sample that shows how to create one popup and reuse it ra
By default, the popup has a white background, a pointer arrow on the bottom, and a close button in the top-right corner. The following sample changes the background color to black using the `fillColor` option of the popup. The close button is removed by setting the `CloseButton` option to false. The HTML content of the popup uses padding of 10 pixels from the edges of the popup. The text is made white, so it shows up nicely on the black background.
-For a fully functional sample that shows how to customize the look of a popup, see [Customize a popup] in the [Azure Maps Samples].
+For a fully functional sample that shows how to customize the look of a popup, see [Customize a popup] in the [Azure Maps Samples]. For the source code for this sample, see [Customize a popup source code].
:::image type="content" source="./media/map-add-popup/customize-popup.png"alt-text="A screenshot of map with a custom popup in the center of the map with the caption 'hello world'.":::
function InitMap()
Similar to reusing popups, you can reuse popup templates. This approach is useful when you only want to show one popup template at a time, for multiple points. By reusing the popup template, the number of DOM elements created by the application is reduced, which then improves your application performance. The following sample uses the same popup template for three points. If you click on any of them, a popup is displayed with the content for that point feature.
-For a fully functional sample that shows hot to reuse a single popup template with multiple features that share a common set of property fields, see [Reuse a popup template] in the [Azure Maps Samples].
+For a fully functional sample that shows how to reuse a single popup template with multiple features that share a common set of property fields, see [Reuse a popup template] in the [Azure Maps Samples]. For the source code for this sample, see [Reuse a popup template source code].
:::image type="content" source="./media/map-add-popup/reuse-popup-template.png"alt-text="A screenshot of a map showing Seattle with three blue pins to demonstrating how to reuse popup templates.":::
For a fully functional sample that shows hot to reuse a single popup template wi
Popups can be opened, closed, and dragged. The popup class provides events to help developers react to these events. The following sample highlights which events fire when the user opens, closes, or drags the popup.
-For a fully functional sample that shows how to add events to popups, see [Popup events] in the [Azure Maps Samples].
+For a fully functional sample that shows how to add events to popups, see [Popup events] in the [Azure Maps Samples]. For the source code for this sample, see [Popup events source code].
:::image type="content" source="./media/map-add-popup/popup-events.png"alt-text="A screenshot of a map of the world with a popup in the center and a list of events in the upper left that are highlighted when the user opens, closes, or drags the popup.":::
See the following great articles for full code samples:
[Customize a popup]: https://samples.azuremaps.com/popups/customize-a-popup [Reuse a popup template]: https://samples.azuremaps.com/popups/reuse-a-popup-template [Popup events]: https://samples.azuremaps.com/popups/popup-events
+[Reusing Popup with Multiple Pins source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Popups/Reusing%20Popup%20with%20Multiple%20Pins/Reusing%20Popup%20with%20Multiple%20Pins.html
+[Customize a popup source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Popups/Customize%20a%20popup/Customize%20a%20popup.html
+[Reuse a popup template source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Popups/Reuse%20a%20popup%20template/Reuse%20a%20popup%20template.html
+[Popup events source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Popups/Popup%20events/Popup%20events.html
azure-maps Map Add Shape https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-shape.md
function InitMap()
In addition to filling a polygon with a color, you may use an image pattern to fill the polygon. Load an image pattern into the maps image sprite resources and then reference this image with the `fillPattern` property of the polygon layer.
-For a fully functional sample that shows how to use an image template as a fill pattern in a polygon layer, see [Fill polygon with built-in icon template] in the [Azure Maps Samples].
+For a fully functional sample that shows how to use an image template as a fill pattern in a polygon layer, see [Fill polygon with built-in icon template] in the [Azure Maps Samples]. For the source code for this sample, see [Fill polygon with built-in icon template source code].
:::image type="content" source="./media/map-add-shape/fill-polygon-with-built-in-icon-template.png" alt-text="A screenshot of a map of the world with red dots forming a triangle in the center of the map.":::
var shape1 = new atlas.Shape(new atlas.data.Point([0,0]), { myProperty: 1 });
var shape2 = new atlas.Shape(new atlas.data.Feature(new atlas.data.Point([0,0]), { myProperty: 1 })); ```
-The [Make a geometry easy to update] sample shows how to wrap a circle GeoJSON object with a shape class. As the value of the radius changes in the shape, the circle renders automatically on the map.
+The [Make a geometry easy to update] sample shows how to wrap a circle GeoJSON object with a shape class. As the value of the radius changes in the shape, the circle renders automatically on the map. For the source code for this sample, see [Make a geometry easy to update source code].
:::image type="content" source="./media/map-add-shape/easy-to-update-geometry.png" alt-text="A screenshot of a map showing a red circle in New York City with a slider bar titled Circle Radius and as you slide the bar to the right or left, the value of the radius changes and the circle size adjusts automatically on the map.":::
Additional resources:
[Fill polygon with built-in icon template]: https://samples.azuremaps.com/?sample=fill-polygon-with-built-in-icon-template [Azure Maps Samples]: https://samples.azuremaps.com
-[Make a geometry easy to update]: https://samples.azuremaps.com/?sample=make-a-geometry-easy-to-update
+[Make a geometry easy to update]: https://samples.azuremaps.com/?sample=make-a-geometry-easy-to-update
+[Fill polygon with built-in icon template source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Polygons/Fill%20polygon%20with%20built-in%20icon%20template/Fill%20polygon%20with%20built-in%20icon%20template.html
+[Make a geometry easy to update source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Polygons/Make%20a%20geometry%20easy%20to%20update/Make%20a%20geometry%20easy%20to%20update.html
azure-maps Map Add Snap Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-snap-grid.md
The resolution of the snapping grid is in pixels. The grid is square and relativ
Create a snap grid using the `atlas.drawing.SnapGridManager` class and pass in a reference to the map you want to connect the manager to. Set the `showGrid` option to `true` if you want to make the grid visible. To snap a shape to the grid, pass it into the snap grid manager's `snapShape` function. If you want to snap an array of positions, pass it into the `snapPositions` function.
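As a rough illustration of the snapping concept only (not the SDK's internal implementation), snapping a pixel position to a square grid amounts to rounding each coordinate to the nearest multiple of the grid resolution:

```javascript
// Illustrative sketch only — not the SDK's internal code.
// Snap an [x, y] pixel position to the nearest corner of a square grid.
function snapToGrid(position, resolution) {
  return position.map(v => Math.round(v / resolution) * resolution);
}

console.log(snapToGrid([103, 251], 50)); // [ 100, 250 ]
```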
-The [Use a snapping grid] sample snaps an HTML marker to a grid when it's dragged. Drawing tools are used to snap drawn shapes to the grid when the `drawingcomplete` event fires.
+The [Use a snapping grid] sample snaps an HTML marker to a grid when it's dragged. Drawing tools are used to snap drawn shapes to the grid when the `drawingcomplete` event fires. For the source code for this sample, see [Use a snapping grid source code].
:::image type="content" source="./media/map-add-snap-grid/use-snapping-grid.png" alt-text="A screenshot that shows the snap grid on map.":::
## Snap grid options
-The [Snap grid options] sample shows the different customization options available for the snap grid manager. The grid line styles can be customized by retrieving the underlying line layer using the snap grid managers `getGridLayer` function.
+The [Snap grid options] sample shows the different customization options available for the snap grid manager. The grid line styles can be customized by retrieving the underlying line layer using the snap grid manager's `getGridLayer` function. For the source code for this sample, see [Snap grid options source code].
:::image type="content" source="./media/map-add-snap-grid/snap-grid-options.png" alt-text="A screenshot of map with snap grid enabled and an options panel on the left where you can set various options and see the results in the map.":::
Learn how to use other features of the drawing tools module:
[Use a snapping grid]: https://samples.azuremaps.com/drawing-tools-module/use-a-snapping-grid
[Snap grid options]: https://samples.azuremaps.com/drawing-tools-module/snap-grid-options
+[Use a snapping grid source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Use%20a%20snapping%20grid/Use%20a%20snapping%20grid.html
+[Snap grid options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Snap%20grid%20options/Snap%20grid%20options.html
azure-maps Map Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md
map.layers.add(new atlas.layer.TileLayer({
}), 'labels'); ```
-For a fully functional sample that shows how to create a tile layer that points to a set of tiles using the x, y, zoom tiling system, see the [Tile Layer using X, Y, and Z] sample in the [Azure Maps Samples]. The source of the tile layer in this sample is a nautical chart from the [OpenSeaMap project], an OpenStreetMaps project licensed under ODbL.
+For a fully functional sample that shows how to create a tile layer that points to a set of tiles using the x, y, zoom tiling system, see the [Tile Layer using X, Y, and Z] sample in the [Azure Maps Samples]. The source of the tile layer in this sample is a nautical chart from the [OpenSeaMap project], an OpenStreetMaps project licensed under ODbL. For the source code for this sample, see [Tile Layer using X, Y, and Z source code].
:::image type="content" source="./media/map-add-tile-layer/tile-layer.png" alt-text="A screenshot of map with a tile layer that points to a set of tiles using the x, y, zoom tiling system. The source of this tile layer is the OpenSeaMap project.":::
A web-mapping service (WMS) is an Open Geospatial Consortium (OGC) standard for serving images of map data. There are many open data sets available in this format that you can use with Azure Maps. This type of service can be used with a tile layer if the service supports the `EPSG:3857` coordinate reference system (CRS). When using a WMS service, set the width and height parameters to the value supported by the service, and be sure to set the same value in the `tileSize` option. In the formatted URL, set the `BBOX` parameter of the service with the `{bbox-epsg-3857}` placeholder.
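For instance, a tile URL for a hypothetical WMS endpoint (the host, layer name, and parameter values below are made up for illustration) would embed the placeholder and a matching tile size like this:

```javascript
// Hypothetical WMS endpoint for illustration; substitute a real service.
const tileSize = 512; // must match the WIDTH/HEIGHT values the service supports
const tileUrl =
  'https://example.com/wms?service=WMS&request=GetMap&version=1.3.0' +
  '&layers=mylayer&format=image/png&crs=EPSG:3857' +
  `&width=${tileSize}&height=${tileSize}` +
  '&bbox={bbox-epsg-3857}'; // the placeholder is filled in per tile at render time
```

The same `tileSize` value would then be passed in the tile layer's options so the requested image matches the rendered tile.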
-For a fully functional sample that shows how to create a tile layer that points to a Web Mapping Service (WMS), see the [WMS Tile Layer] sample in the [Azure Maps Samples].
+For a fully functional sample that shows how to create a tile layer that points to a Web Mapping Service (WMS), see the [WMS Tile Layer] sample in the [Azure Maps Samples]. For the source code for this sample, see [WMS Tile Layer source code].
The following screenshot shows the [WMS Tile Layer] sample that overlays a web-mapping service of geological data from the [U.S. Geological Survey (USGS)] on top of the map and below the labels.
A web-mapping tile service (WMTS) is an Open Geospatial Consortium (OGC) standar
* `{TileRow}` => `{y}`
* `{TileCol}` => `{x}`
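A small sketch of that substitution (the WMTS endpoint below is hypothetical, and the `{TileMatrix}` => `{z}` mapping is the usual companion assumption for the zoom placeholder):

```javascript
// Rewrite WMTS RESTful placeholders into the {x}/{y}/{z} form a tile layer expects.
function wmtsToTileUrl(template) {
  return template
    .replace('{TileMatrix}', '{z}')
    .replace('{TileRow}', '{y}')
    .replace('{TileCol}', '{x}');
}

console.log(wmtsToTileUrl('https://example.com/wmts/tile/{TileMatrix}/{TileRow}/{TileCol}.png'));
// https://example.com/wmts/tile/{z}/{y}/{x}.png
```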
-For a fully functional sample that shows how to create a tile layer that points to a Web Mapping Tile Service (WMTS), see the [WMTS Tile Layer] sample in the [Azure Maps Samples].
+For a fully functional sample that shows how to create a tile layer that points to a Web Mapping Tile Service (WMTS), see the [WMTS Tile Layer] sample in the [Azure Maps Samples]. For the source code for this sample, see [WMTS Tile Layer source code].
The following screenshot shows the [WMTS Tile Layer] sample overlaying a web-mapping tile service of imagery from the [U.S. Geological Survey (USGS) National Map] on top of a map, below roads and labels.
## Customize a tile layer
-The tile layer class has many styling options. The [Tile Layer Options] sample is a tool to try them out.
+The tile layer class has many styling options. The [Tile Layer Options] sample is a tool to try them out. For the source code for this sample, see [Tile Layer Options source code].
:::image type="content" source="./media/map-add-tile-layer/tile-layer-options.png" alt-text="A screenshot of Tile Layer Options sample.":::
See the following articles for more code samples to add to your maps:
> [Add an image layer](./map-add-image-layer.md)

[Azure Maps Samples]: https://samples.azuremaps.com
+[Tile Layer Options]: https://samples.azuremaps.com/tile-layers/tile-layer-options
+[WMS Tile Layer]: https://samples.azuremaps.com/tile-layers/wms-tile-layer
+[WMTS Tile Layer]: https://samples.azuremaps.com/tile-layers/wmts-tile-layer
[Tile Layer using X, Y, and Z]: https://samples.azuremaps.com/tile-layers/tile-layer-using-x,-y-and-z
+[Tile Layer Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Tile%20Layers/Tile%20Layer%20Options/Tile%20Layer%20Options.html
+[WMS Tile Layer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Tile%20Layers/WMS%20Tile%20Layer/WMS%20Tile%20Layer.html
+[WMTS Tile Layer source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Tile%20Layers/WMTS%20Tile%20Layer/WMTS%20Tile%20Layer.html
+[Tile Layer using X, Y, and Z source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Tile%20Layers/Tile%20Layer%20using%20X,%20Y%20and%20Z/Tile%20Layer%20using%20X,%20Y%20and%20Z.html
+ [OpenSeaMap project]: https://openseamap.org/index.php
-[WMS Tile Layer]: https://samples.azuremaps.com/tile-layers/wms-tile-layer
[U.S. Geological Survey (USGS)]: https://mrdata.usgs.gov/
-[WMTS Tile Layer]: https://samples.azuremaps.com/tile-layers/wmts-tile-layer
-[U.S. Geological Survey (USGS) National Map]:https://viewer.nationalmap.gov/services
-[Tile Layer Options]: https://samples.azuremaps.com/tile-layers/tile-layer-options
+[U.S. Geological Survey (USGS) National Map]:https://viewer.nationalmap.gov/services
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
This article shows you ways to create a map and animate a map.
To load a map, create a new instance of the [Map class](/javascript/api/azure-maps-control/atlas.map). When initializing the map, pass a DIV element ID to render the map and pass a set of options to use when loading the map. If default authentication information isn't specified on the `atlas` namespace, this information needs to be specified in the map options when loading the map. The map loads several resources asynchronously for performance. As such, after creating the map instance, attach a `ready` or `load` event to the map and then add any additional code that interacts with the map to the event handler. The `ready` event fires as soon as the map has enough resources loaded to be interacted with programmatically. The `load` event fires after the initial map view has finished loading completely.
-You can also load multiple maps on the same page, for sample code that demonstrates loading multiple maps on the same page, see [Multiple Maps] in the [Azure Maps Samples].
+You can also load multiple maps on the same page. For sample code that demonstrates loading multiple maps on the same page, see [Multiple Maps] in the [Azure Maps Samples]. For the source code for this sample, see [Multiple Maps source code].
:::image type="content" source="./media/map-create/multiple-maps.png" alt-text="A screenshot that shows multiple maps rendered on the same page.":::
See code examples to add functionality to your app:
> [!div class="nextstepaction"] > [Code samples](/samples/browse/?products=azure-maps)
-[Multiple Maps]: https://samples.azuremaps.com/map/multiple-maps
[Azure Maps Samples]: https://samples.azuremaps.com
+[Multiple Maps]: https://samples.azuremaps.com/map/multiple-maps
+[Multiple Maps source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Map/Multiple%20Maps/Multiple%20Maps.html
azure-maps Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-events.md
This article shows you how to use [map events class](/javascript/api/azure-maps-
## Interact with the map
-The [Map Events] sample highlights the name of the events that are firing as you interact with the map.
+The [Map Events] sample highlights the name of the events that are firing as you interact with the map. For the source code for this sample, see [Map Events source code].
:::image type="content" source="./media/map-events/map-events.png" alt-text="A screenshot showing a map with a list of map events that are highlighted anytime your actions on the map trigger that event.":::
## Interact with map layers
-The [Layer Events] sample highlights the name of the events that are firing as you interact with the Symbol Layer. The symbol, bubble, line, and polygon layer all support the same set of events. The heat map and tile layers don't support any of these events.
+The [Layer Events] sample highlights the name of the events that are firing as you interact with the Symbol Layer. The symbol, bubble, line, and polygon layers all support the same set of events. The heat map and tile layers don't support any of these events. For the source code for this sample, see [Layer Events source code].
:::image type="content" source="./media/map-events/layer-events.png" alt-text="A screenshot showing a map with a list of layer events that are highlighted anytime you interact with the Symbol Layer.":::
## Interact with HTML Marker
-The [HTML marker layer events] sample highlights the name of the events that are firing as you interact with the HTML marker layer.
+The [HTML marker layer events] sample highlights the name of the events that are firing as you interact with the HTML marker layer. For the source code for this sample, see [HTML marker layer events source code].
:::image type="content" source="./media/map-events/html-marker-layer-events.png" alt-text="A screenshot showing a map with a list of HTML marker layer events that are highlighted anytime your actions on the map trigger that event.":::
See the following articles for full code examples:
[Map Events]: https://samples.azuremaps.com/map/map-events
[Layer Events]: https://samples.azuremaps.com/symbol-layer/symbol-layer-events
[HTML marker layer events]: https://samples.azuremaps.com/html-markers/html-marker-layer-events
+[Map Events source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Map/Map%20Events/Map%20Events.html
+[Layer Events source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Symbol%20Layer/Symbol%20layer%20events/Symbol%20layer%20events.html
+[HTML marker layer events source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/HTML%20Markers/HTML%20marker%20layer%20events/HTML%20marker%20layer%20events.html
azure-maps Map Extruded Polygon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md
function InitMap()
A choropleth map can be rendered using the polygon extrusion layer. Set the `height` and `fillColor` properties of the extrusion layer to the measurement of the statistical variable in the `Polygon` and `MultiPolygon` feature geometries.
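As a sketch of the idea (the numbers and the linear scale below are illustrative assumptions, not taken from the sample), the statistical value can be scaled into an extrusion height:

```javascript
// Illustrative only: scale a population density value into an extrusion
// height in meters, relative to the largest density in the data set.
function densityToHeight(density, maxDensity, maxHeightMeters) {
  return (density / maxDensity) * maxHeightMeters;
}

console.log(densityToHeight(250, 1000, 50000)); // 12500
```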
-The [Create a Choropleth Map] sample shows an extruded choropleth map of the United States based on the measurement of the population density by state.
+The [Create a Choropleth Map] sample shows an extruded choropleth map of the United States based on the measurement of the population density by state. For the source code for this sample, see [Create a Choropleth Map source code].
:::image type="content" source="./media/map-extruded-polygon/choropleth-map.png" alt-text="A screenshot of a map showing a choropleth map rendered using the polygon extrusion layer.":::
function InitMap()
## Customize a polygon extrusion layer
-The Polygon Extrusion layer has several styling options. The [Polygon Extrusion Layer Options] sample is a tool to try them out.
+The Polygon Extrusion layer has several styling options. The [Polygon Extrusion Layer Options] sample is a tool to try them out. For the source code for this sample, see [Polygon Extrusion Layer Options source code].
:::image type="content" source="./media/map-extruded-polygon/polygon-extrusion-layer-options.png" alt-text="A screenshot of the Azure Maps code sample that shows how the different options of the polygon extrusion layer affect rendering."::: <!
Additional resources:
[Create a Choropleth Map]: https://samples.azuremaps.com/?sample=create-a-choropleth-map
[Polygon Extrusion Layer Options]: https://samples.azuremaps.com/?sample=polygon-extrusion-layer-options
+[Create a Choropleth Map source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Demos/Create%20a%20Choropleth%20Map/Create%20a%20Choropleth%20Map.html
+[Polygon Extrusion Layer Options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Polygons/Polygon%20Extrusion%20Layer%20Options/Polygon%20Extrusion%20Layer%20Options.html
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
Title: Azure Maps Web SDK best practices
description: Learn tips & tricks to optimize your use of the Azure Maps Web SDK. - Previously updated : 04/13/2023-+ Last updated : 06/23/2023+
Similarly, when the map initially loads often it's desired to load data on it as
### Lazy load the Azure Maps Web SDK If the map isn't needed right away, lazy load the Azure Maps Web SDK until it's needed. This delays the loading of the JavaScript and CSS files used by the Azure Maps Web SDK until needed. A common scenario where this occurs is when the map is loaded in a tab or flyout panel that isn't displayed on page load.
-The following code sample shows how to delay the loading the Azure Maps Web SDK until a button is pressed.
-<br/>
+The [Lazy Load the Map] code sample shows how to delay loading the Azure Maps Web SDK until a button is pressed. For the source code, see [Lazy Load the Map sample code].
+<!--
<iframe height="500" scrolling="no" title="Lazy load the map" src="https://codepen.io/azuremaps/embed/vYEeyOv?height=500&theme-id=default&default-tab=js,result" frameborder="no" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/vYEeyOv'>Lazy load the map</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
### Add a placeholder for the map
The bubble layer renders points as circles on the map and can easily have their
### Use HTML markers and Popups sparingly
-Unlike most layers in the Azure Maps Web control that use WebGL for rendering, HTML Markers and Popups use traditional DOM elements for rendering. As such, the more HTML markers and Popups added a page, the more DOM elements there are. Performance can degrade after adding a few hundred HTML markers or popups. For larger data sets, consider either clustering your data or using a symbol or bubble layer. For popups, a common strategy is to create a single popup and reuse it by updating its content and position as shown in the following example:
+Unlike most layers in the Azure Maps Web control that use WebGL for rendering, HTML Markers and Popups use traditional DOM elements for rendering. As such, the more HTML markers and Popups added to a page, the more DOM elements there are. Performance can degrade after adding a few hundred HTML markers or popups. For larger data sets, consider either clustering your data or using a symbol or bubble layer.
-<br/>
+The [Reusing Popup with Multiple Pins] code sample shows how to create a single popup and reuse it by updating its content and position. For the source code, see [Reusing Popup with Multiple Pins sample code].
-<iframe height='500' scrolling='no' title='Reusing Popup with Multiple Pins' src='//codepen.io/azuremaps/embed/rQbjvK/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/rQbjvK/'>Reusing Popup with Multiple Pins</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+
+<!--
+<iframe height='500' scrolling='no' title='Reusing Popup with Multiple Pins' src='//codepen.io/azuremaps/embed/rQbjvK/?height=500&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/rQbjvK/'>Reusing Popup with Multiple Pins</a> by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
That said, if you only have a few points to render on the map, the simplicity of HTML markers may be preferred. Additionally, HTML markers can easily be made draggable if needed.

### Combine layers
-The map is capable of rendering hundreds of layers, however, the more layers there are, the more time it takes to render a scene. One strategy to reduce the number of layers is to combine layers that have similar styles or can be styled using a [data-driven styles].
+The map is capable of rendering hundreds of layers, however, the more layers there are, the more time it takes to render a scene. One strategy to reduce the number of layers is to combine layers that have similar styles or can be styled using [data-driven styles].
For example, consider a data set where all features have a `isHealthy` property that can have a value of `true` or `false`. If creating a bubble layer that renders different colored bubbles based on this property, there are several ways to do this as shown in the following list, from least performant to most performant.
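Where that list lands on a single data-driven layer, the color can come from a `case` expression evaluated per feature. A minimal sketch (the colors are arbitrary; the expression shape follows the Web SDK's data-driven style syntax):

```javascript
// A data-driven color expression: one bubble layer instead of two.
// ['get', 'isHealthy'] reads the boolean property from each feature.
const colorExpression = [
  'case',
  ['get', 'isHealthy'], 'limegreen', // when isHealthy is true
  'red'                              // fallback for all other features
];
```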
Symbol layers have collision detection enabled by default. This collision detect
Both of these options are set to `false` by default. When animating a symbol, the collision detection calculations run on each frame of the animation, which can slow down the animation and make it look less fluid. To smooth out the animation, set these options to `true`.
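A minimal sketch of the relevant options object (the names `allowOverlap` and `ignorePlacement` are taken from the Web SDK's symbol layer icon options; verify them against the reference docs for your SDK version):

```javascript
// Symbol layer options for smoother animation: disable per-frame
// collision work so animated icons don't flicker or stutter.
const symbolLayerOptions = {
  iconOptions: {
    allowOverlap: true,    // keep icons visible even when they collide
    ignorePlacement: true  // skip collision-detection calculations
  }
};
```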
-The following code sample a simple way to animate a symbol layer.
+The [Simple Symbol Animation] code sample demonstrates a simple way to animate a symbol layer. For the source code for this sample, see [Simple Symbol Animation sample code].
-<br/>
+<!--
<iframe height="500" scrolling="no" title="Symbol layer animation" src="https://codepen.io/azuremaps/embed/oNgGzRd?height=500&theme-id=default&default-tab=js,result" frameborder="no" allowtransparency="true" allowfullscreen="true"> See the Pen <a href='https://codepen.io/azuremaps/pen/oNgGzRd'>Symbol layer animation</a> by Azure Maps
- (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
-</iframe>
+ (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.</iframe>
+-->
### Specify zoom level range
Learn more about the terminology used by Azure Maps and the geospatial industry.
[supported browser]: supported-browsers.md
[Tippecanoe]: https://github.com/mapbox/tippecanoe
[useful tools for working with GeoJSON data]: https://github.com/tmcw/awesome-geojson
+[Lazy Load the Map]: https://samples.azuremaps.com/map/lazy-load-the-map
+[Lazy Load the Map sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Map/Lazy%20Load%20the%20Map/Lazy%20Load%20the%20Map.html
+[Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/popups/reusing-popup-with-multiple-pins
+[Reusing Popup with Multiple Pins sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Popups/Reusing%20Popup%20with%20Multiple%20Pins/Reusing%20Popup%20with%20Multiple%20Pins.html
+[Simple Symbol Animation]: https://samples.azuremaps.com/animations/simple-symbol-animation
+[Simple Symbol Animation sample code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Animations/Simple%20Symbol%20Animation/Simple%20Symbol%20Animation.html
azure-monitor Basic Logs Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-configure.md
Configure a table for Basic logs if:
| API Management | [ApiManagementGatewayLogs](/azure/azure-monitor/reference/tables/ApiManagementGatewayLogs)<br>[ApiManagementWebSocketConnectionLogs](/azure/azure-monitor/reference/tables/ApiManagementWebSocketConnectionLogs) | | Application Insights | [AppTraces](/azure/azure-monitor/reference/tables/apptraces) | | Chaos Experiments | [ChaosStudioExperimentEventLogs](/azure/azure-monitor/reference/tables/ChaosStudioExperimentEventLogs) |
+ | Cloud HSM | [CHSMManagementAuditLogs](/azure/azure-monitor/reference/tables/CHSMManagementAuditLogs) |
| Container Apps | [ContainerAppConsoleLogs](/azure/azure-monitor/reference/tables/containerappconsoleLogs) | | Container Insights | [ContainerLogV2](/azure/azure-monitor/reference/tables/containerlogv2) | | Container Apps Environments | [AppEnvSpringAppConsoleLogs](/azure/azure-monitor/reference/tables/AppEnvSpringAppConsoleLogs) | | Communication Services | [ACSCallAutomationIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallAutomationIncomingOperations)<br>[ACSCallAutomationMediaSummary](/azure/azure-monitor/reference/tables/ACSCallAutomationMediaSummary)<br>[ACSCallRecordingIncomingOperations](/azure/azure-monitor/reference/tables/ACSCallRecordingIncomingOperations)<br>[ACSCallRecordingSummary](/azure/azure-monitor/reference/tables/ACSCallRecordingSummary)<br>[ACSRoomsIncomingOperations](/azure/azure-monitor/reference/tables/acsroomsincomingoperations) | | Confidential Ledgers | [CCFApplicationLogs](/azure/azure-monitor/reference/tables/CCFApplicationLogs) |
- | Custom tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) |
+ | Custom log tables | All custom tables created with or migrated to the [data collection rule (DCR)-based logs ingestion API.](logs-ingestion-api-overview.md) |
| Data Manager for Energy | [OEPDataplaneLogs](/azure/azure-monitor/reference/tables/OEPDataplaneLogs) | | Dedicated SQL Pool | [SynapseSqlPoolSqlRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolsqlrequests)<br>[SynapseSqlPoolRequestSteps](/azure/azure-monitor/reference/tables/synapsesqlpoolrequeststeps)<br>[SynapseSqlPoolExecRequests](/azure/azure-monitor/reference/tables/synapsesqlpoolexecrequests)<br>[SynapseSqlPoolDmsWorkers](/azure/azure-monitor/reference/tables/synapsesqlpooldmsworkers)<br>[SynapseSqlPoolWaits](/azure/azure-monitor/reference/tables/synapsesqlpoolwaits) | | Dev Center | [DevCenterDiagnosticLogs](/azure/azure-monitor/reference/tables/DevCenterDiagnosticLogs) |
backup Azure Backup Architecture For Sap Hana Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-architecture-for-sap-hana-backup.md
Title: Azure Backup Architecture for SAP HANA Backup description: Learn about Azure Backup architecture for SAP HANA backup. Previously updated : 09/07/2022 Last updated : 06/20/2023
See the [high-level architecture of Azure Backup for SAP HANA databases](./sap-h
### Backup flow
-This section provides you an understanding about the backup process of an HANA database running on an Azure VM.
+This section provides you with an understanding of the backup process of a HANA database running on an Azure VM.
1. The scheduled backups are managed by crontab entries created on the HANA VM, while the on-demand backups are directly triggered by the Azure Backup service.
In the following sections you'll learn about different SAP HANA setups and their
:::image type="content" source="./media/azure-backup-architecture-for-sap-hana-backup/azure-network-with-udr-and-nva-or-azure-firewall-and-private-endpoint-or-service-endpoint.png" alt-text="Diagram showing the SAP HANA setup if Azure network with UDR + NVA / Azure Firewall + Private Endpoint or Service Endpoint.":::
-### Backup architecture for database with HANA System Replication (preview)
+### Backup architecture for database with HANA System Replication
The backup service resides in both the physical nodes of the HSR setup. Once you confirm that these nodes are in a replication group (using the [pre-registration script](sap-hana-database-with-hana-system-replication-backup.md#run-the-preregistration-script)), Azure Backup groups the nodes logically, and creates a single backup item during protection configuration.
In the following sections, you'll learn about the backup flow for new/existing m
##### New machines
-This section provides you an understanding about the backup process of an HANA database with HANA System replication enabled running on a new Azure VM.
+This section provides you with an understanding of the backup process of a HANA database with HANA System Replication enabled running on a new Azure VM.
1. Create a custom user and `hdbuserstore` key on all the nodes. 1. Run the pre-registration script on both the nodes with the custom user as the backup user to implement an ID, which indicates that both the nodes belong to a unique/common group.
This section provides you an understanding about the backup process of an HANA d
##### Existing machines
-This section provides you an understanding about the backup process of an HANA database with HANA System replication enabled running on an existing Azure VM.
+This section provides you with an understanding of the backup process of a HANA database with HANA System Replication enabled running on an existing Azure VM.
1. Stop protection and retain data for both the nodes. 1. Run the pre-registration script on both the nodes with the custom user as the backup user to mention an ID, which indicates that both the nodes belong to a unique/common group.
backup Quick Backup Hana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-hana-cli.md
+
+ Title: Quickstart - Back up an SAP HANA database with Azure CLI
+description: In this quickstart, learn how to create a Recovery Services vault, enable protection on an SAP HANA System Replication database, and create the initial recovery point with Azure CLI.
+ms.devlang: azurecli
+ Last updated : 06/20/2023++++++
+# Quickstart: Back up SAP HANA System Replication on Azure VMs using Azure CLI
+
+This quickstart describes how to protect SAP HANA System Replication (HSR) using Azure CLI.
+
+SAP HANA databases are critical workloads that require a low recovery-point objective (RPO) and long-term retention. This article describes how you can back up SAP HANA databases that are running on Azure virtual machines (VMs) to an Azure Backup Recovery Services vault by using Azure Backup.
+
+For more information about the supported configurations and scenarios, see [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
+
+## Create a Recovery Services vault
+
+A Recovery Services vault is a logical container that stores the backup data for each protected resource, such as SAP HANA database data. When the backup job for a protected resource runs, it creates a recovery point in the Recovery Services vault. You can then use one of these recovery points to restore data to a given point in time.
+
+To create a Recovery Services vault, run the following command:
+
+```azurecli-interactive
+az backup vault create --resource-group hanarghsr2 --name hanavault10 --location westus2
+```
+
+By default, the Recovery Services vault is set for geo-redundant storage, which ensures your backup data is replicated to a secondary Azure region that's hundreds of miles away from the primary region. If you need to modify the storage redundancy setting, use the [az backup vault backup-properties set](/cli/azure/backup/vault/backup-properties#az-backup-vault-backup-properties-set) command.
+
+## Register and protect SAP HANA running on Azure VM
+
+When a failover occurs, the users are replicated to the new primary, but `hdbuserstore` isn't replicated. So, you need to create the same key in all nodes of the HSR setup, which allows the Azure Backup service to connect to any new primary node automatically, without any manual intervention.
+Follow these steps:
+
+1. To register and protect the SAP HANA database running on primary Azure VM, run the following command:
+
+ ```azurecli
+ az backup container register --resource-group hanarghsr2 --vault-name hanavault10 --workload-type SAPHANA --backup-management-type AzureWorkload --resource-id "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/hanarghsr2/providers/Microsoft.Compute/virtualMachines/hsr-primary"
+ ```
+
+1. To register and protect the SAP HANA database running on secondary Azure VM, run the following command:
+
+ ```azurecli
+ az backup container register --resource-group hanarghsr2 --vault-name hanavault10 --workload-type SAPHANA --backup-management-type AzureWorkload --resource-id "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/hanarghsr2/providers/Microsoft.Compute/virtualMachines/hsr-secondary"
+ ```
+
+To identify `resource-id`, run the following command:
+
+```azurecli
+az vm show --name hsr-primary --resource-group hanarghsr2
+```
+
+For example, `id` is `/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/hanarghsr2/providers/Microsoft.Compute/virtualMachines/hsr-primary`.
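Because the resource ID always follows the same `/subscriptions/.../resourceGroups/.../providers/...` pattern, you can also assemble it from its parts rather than copying it from the `az vm show` output. A minimal sketch using the sample values from this article:

```shell
# Sketch: build the VM resource ID from its components
# (subscription, resource group, and VM name from the article's examples).
subscription_id="ef4ab5a7-c2c0-4304-af80-af49f48af3d1"
resource_group="hanarghsr2"
vm_name="hsr-primary"

resource_id="/subscriptions/${subscription_id}/resourceGroups/${resource_group}/providers/Microsoft.Compute/virtualMachines/${vm_name}"
echo "$resource_id"
```

Alternatively, `az vm show --name hsr-primary --resource-group hanarghsr2 --query id --output tsv` prints only the `id` field, which avoids copying it out of the full JSON output.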
+
+## Check the registration of primary and secondary servers to the vault
+
+To check if primary and secondary servers are registered to the vault, run the following command:
+
+```azurecli
+az backup container list --resource-group hanarghsr2 --vault-name hanavault10 --output table --backup-management-type AzureWorkload
+Name Friendly Name Resource Group Type Registration Status
+----------------------------------------------  ---------------  ----------------  -------------  ---------------------
+VMAppContainer;Compute;hanarghsr2;hsr-primary hsr-primary hanarghsr2 AzureWorkload Registered
+VMAppContainer;Compute;hanarghsr2;hsr-secondary hsr-secondary hanarghsr2 AzureWorkload Registered
+```
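Each container `Name` in the table above packs the container type, provider, resource group, and VM name into one semicolon-separated string. Assuming that format, POSIX parameter expansions are enough to pull the parts back out, for example when scripting against the list output:

```shell
# Sketch: split a registered container name of the form
# "VMAppContainer;Compute;<resourceGroup>;<vmName>" into its parts.
container="VMAppContainer;Compute;hanarghsr2;hsr-primary"

vm=${container##*;}    # text after the last semicolon -> VM name
rest=${container%;*}   # drop the VM name segment
rg=${rest##*;}         # last remaining segment -> resource group
echo "$rg/$vm"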
+
+## View the item list for protection
+
+To check the items that you can protect, run the following command:
+
+```azurecli
+az backup protectable-item list --resource-group hanarghsr2 --vault-name hanavault10 --workload-type SAPHANA --output table
+
+Name Protectable Item Type ParentName ServerName IsProtected
+---------------------------------  ---------------------  ----------  -----------  -----------
+saphanasystem;arv SAPHanaSystem ARV hsr-primary NotProtected
+saphanasystem;arv SAPHanaSystem ARV hsr-secondary NotProtected
+hanahsrcontainer;hsrtestps2 HanaHSRContainer HsrTestP2 hsr-primary NotProtected
+saphanadatabase;hsrtestps2;arv SAPHanaDatabase HsrTestP2 hsr-primary NotProtected
+saphanadatabase;hsrtestps2;2;DB1 SAPHanaDatabase HsrTestP2 hsr-primary NotProtected
+saphanadatabase;hsrtestps2;systemdb SAPHanaDatabase HsrTestP2 hsr-primary NotProtected
+```
+
+## Rediscover the database
+
+If the database doesn't appear in the list of protectable items, or if you need to rediscover it, reinitiate discovery on the physical primary VM by running the following command:
+
+```azurecli
+az backup protectable-item initialize --resource-group hanarghsr2 --vault-name hanavault10 --container-name "VMAppContainer;Compute;hanarghsr2;hsr-primary" --workload-type SAPHanaDatabase
+```
+
+## Enable protection for the database
+
+To enable protection for the databases listed under the HSR system with the required backup policy, run the following commands:
+
+```azurecli
+az backup protection enable-for-azurewl --resource-group hanarghsr2 --vault-name hanavault10 --policy-name hanahsr --protectable-item-name "saphanadatabase;hsrtestps2;DB1" --protectable-item-type SAPHanaDatabase --workload-type SAPHanaDatabase --output table --server-name HsrTestP2
+
+az backup protection enable-for-azurewl --resource-group hanarghsr2 --vault-name hanavault10 --policy-name hanahsr --protectable-item-name "saphanadatabase;hsrtestps2;systemdb" --protectable-item-type SAPHanaDatabase --workload-type SAPHanaDatabase --output table --server-name hsr-secondary
+```
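When several databases need the same policy, the two commands above can be generated from a loop. This is a hedged dry-run sketch that only echoes each `az` invocation for review (remove the `echo`, or pipe the output to `sh`, to actually execute; item and server names reuse the article's samples):

```shell
# Dry-run sketch: build one enable-protection command per database.
plan=$(
  for item in "saphanadatabase;hsrtestps2;DB1" "saphanadatabase;hsrtestps2;systemdb"; do
    echo "az backup protection enable-for-azurewl --resource-group hanarghsr2 --vault-name hanavault10 --policy-name hanahsr --protectable-item-name '$item' --protectable-item-type SAPHanaDatabase --workload-type SAPHanaDatabase --server-name HsrTestP2 --output table"
  done
)
echo "$plan"
```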
+
+## Run an on-demand backup
+
+To initiate a backup job manually, run the following command:
+
+```azurecli
+az backup protection backup-now --resource-group hanarghsr2 --item-name "saphanadatabase;hsrtestps2;db1" --container-name "hanahsrcontainer;hsrtestps2" --vault-name hanavault10 --backup-type Full --retain-until 01-01-2030 --output table
+
+Name Operation Status Item Name Backup Management Type Start Time UTC Duration
+------------------------------------  -------------  ----------  -----------------  ----------------------  ----------------  ---------
+
+591f1840-4d6a-4464-8f3a-18e586f11bfc Backup (Full) InProgress ARV [hsr-primary] AzureWorkload 2023-04
+```
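The `--retain-until` value above uses the DD-MM-YYYY form. Rather than hard-coding a far-future date, you can compute a relative retention date; a small sketch assuming GNU `date` (on BSD/macOS the equivalent would be `date -v+30d +%d-%m-%Y`):

```shell
# Sketch: compute a retain-until date 30 days from today, in the
# DD-MM-YYYY form shown in the command above (GNU date assumed).
retain_until=$(date -d "+30 days" +%d-%m-%Y)
echo "$retain_until"
```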
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Quickstart: Restore SAP HANA System Replication databases on Azure VMs using Azure CLI](quick-restore-hana-cli.md)
backup Quick Restore Hana Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-restore-hana-cli.md
+
+ Title: Quickstart - Restore an SAP HANA database with Azure CLI
+description: In this quickstart, learn how to restore SAP HANA System Replication database with Azure CLI.
+ms.devlang: azurecli
+ Last updated : 06/20/2023
+# Quickstart: Restore SAP HANA System Replication on Azure VMs using Azure CLI
+
+This quickstart describes how to restore SAP HANA System Replication (HSR) using Azure CLI.
+
+SAP HANA databases are critical workloads that require a low recovery-point objective (RPO) and long-term retention. This article describes how you can back up SAP HANA databases that are running on Azure virtual machines (VMs) to an Azure Backup Recovery Services vault by using Azure Backup.
+
+>[!Note]
+>- Original Location Recovery (OLR) is currently not supported for HSR.
+>- Restore to an HSR instance isn't supported. However, you can restore to a standalone HANA instance.
+
+For more information about the supported configurations and scenarios, see [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
+
+## View the restore points for a protected database
+
+Before you restore the database, view the available restore points of the protected database by running the following command:
+
+```azurecli
+az backup recoverypoint list --resource-group hanarghsr2 --vault-name hanavault10 --container-name "hanahsrcontainer;hsrtestps2" --item-name "saphanadatabase;hsrtestps2;db1" --output table
+```
+
+The list of recovery points will look as follows:
+
+```Output
+Name Time BackupManagementType Item Name RecoveryPointType
+-------------------------  --------------------------------  --------------------  ------------------------------  -----------------
+62640091676331 2023-05-04T08:13:09.469000+00:00 AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Full
+68464937558101 2023-05-04T07:49:02.988000+00:00 AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Full
+56015648627567 2023-05-04T07:27:54.425000+00:00 AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Full
+DefaultRangeRecoveryPoint AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Log
+```
+
+>[!Note]
+>If the command fails to extract the backup management type, check whether the specified container name is complete, or try using the container's friendly name instead.
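The value in the `Name` column is what you pass as `--rp-name` when restoring to a specific recovery point. Assuming the table layout shown above (rows listed newest-first), a small sketch to pick out the most recent `Full` recovery point:

```shell
# Sketch: grab the newest Full recovery point name from the sample
# table output above (rows are listed newest-first).
cat <<'EOF' > recovery_points.txt
62640091676331 2023-05-04T08:13:09.469000+00:00 AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Full
68464937558101 2023-05-04T07:49:02.988000+00:00 AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Full
56015648627567 2023-05-04T07:27:54.425000+00:00 AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Full
EOF
latest_full=$(awk '$5 == "Full" {print $1; exit}' recovery_points.txt)
echo "$latest_full"
```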
+
+## Restore to an alternate location
+
+To restore the database using Alternate Location Restore (ALR), run the following command:
+
+```azurecli
+az backup recoveryconfig show --resource-group hanarghsr2 --vault-name hanavault10 --container-name "hanahsrcontainer;hsrtestps2" --item-name "saphanadatabase;hsrtestps2;db1" --restore-mode AlternateWorkloadRestore --log-point-in-time 04-05-2023-08:27:54 --target-item-name restored_DB_pradeep --target-server-name hsr-primary --target-container-name hsr-primary --target-server-type HANAInstance --backup-management-type AzureWorkload --workload-type SAPHANA --output json > recoveryInput.json
+
+cat recoveryInput.json
+{
+ "alternate_directory_paths": null,
+ "container_id": "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/hanarghsr2/providers/Microsoft.RecoveryServices/vaults/hanavault10/backupFabrics/Azure/protectionContainers/vmappcontainer;compute;hanarghsr2;hsr-primary",
+ "container_uri": "HanaHSRContainer;hsrtestps2",
+ "database_name": "ARV/restored_DB_p2",
+ "filepath": null,
+ "item_type": "SAPHana",
+ "item_uri": "SAPHanaDatabase;hsrtestps2;db1",
+ "log_point_in_time": "04-05-2023-08:27:54",
+ "recovery_mode": null,
+ "recovery_point_id": "DefaultRangeRecoveryPoint",
+ "restore_mode": "AlternateLocation",
+ "source_resource_id": null,
+ "workload_type": "SAPHanaDatabase"
+}
+
+az backup restore restore-azurewl --resource-group hanarghsr2 --vault-name hanavault10 --recovery-config recoveryInput.json --output table
+```
+
+## Restore as files
+
+To restore the database as files, generate the recovery configuration and then trigger the restore:
+
+```azurecli
+az backup recoveryconfig show --resource-group hanarghsr2 \
+    --vault-name hanavault10 \
+    --container-name "hanahsrcontainer;hsrtestps2" \
+    --item-name "saphanadatabase;hsrtestps2;arv" \
+    --restore-mode RestoreAsFiles \
+    --log-point-in-time 18-04-2023-09:53:00 \
+    --rp-name DefaultRangeRecoveryPoint \
+    --target-container-name "VMAppContainer;Compute;hanarghsr2;hsr-primary" \
+    --filepath /home/abc \
+    --output json > recoveryconfig.json
+
+az backup restore restore-azurewl --resource-group hanarghsr2 \
+    --vault-name hanavault10 \
+    --recovery-config recoveryconfig.json \
+    --output json
+```
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Troubleshoot backup of SAP HANA databases on Azure](backup-azure-sap-hana-database-troubleshoot.md)
backup Sap Hana Database About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-about.md
Title: About SAP HANA database backup on Azure VMs description: In this article, you'll learn about backing up SAP HANA databases that are running on Azure virtual machines. Previously updated : 10/06/2022 Last updated : 06/25/2023
You can use [an Azure VM backup](backup-azure-vms-introduction.md) to back up th
1. Restore the database into the VM from the [Azure SAP HANA database backup](sap-hana-db-restore.md#restore-to-a-point-in-time-or-to-a-recovery-point) to your intended point in time.
-## Back up a HANA system with replication enabled (preview)
+## Back up a HANA system with replication enabled
-Azure Backup now supports backing up databases that have HSR enabled (preview). This means that backups are managed automatically when a failover occurs, which eliminates the necessity for manual intervention. Backup also offers immediate protection with no remedial full backups, so you can protect HANA instances or HSR setup nodes as a single HSR container.
+Azure Backup now supports backing up databases that have HSR enabled. This means that backups are managed automatically when a failover occurs, which eliminates the necessity for manual intervention. Backup also offers immediate protection with no remedial full backups, so you can protect HANA instances or HSR setup nodes as a single HSR container.
Although there are multiple physical nodes (primary and secondary), the backup service now considers them a single HSR container.
->[!Note]
->Because the feature is in preview, there are no Protected Instance charges for a logical HSR container. However, you are charged for the underlying storage of the backups.
-
## Back up database instance snapshots (preview)

As databases grow in size, the time it takes to restore them becomes a factor when you're dealing with streaming backups. Also, during backup, the time the database takes to generate Backint streams can grow in proportion to the churn, which can be a factor as well.
backup Sap Hana Database Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-manage.md
Title: Manage backed up SAP HANA databases on Azure VMs description: In this article, you'll learn common tasks for managing and monitoring SAP HANA databases that are running on Azure virtual machines. Previously updated : 12/23/2022 Last updated : 06/30/2023
This article describes common tasks for managing and monitoring SAP HANA databas
You'll learn how to monitor jobs and alerts, trigger an on-demand backup, edit policies, stop and resume database protection, and unregister a VM from backups.

>[!Note]
->Support for HANA instance snapshots and support for HANA System Replication mode are in preview.
+>Support for HANA instance snapshots is in preview.
If you haven't configured backups yet for your SAP HANA databases, see [Back up SAP HANA databases on Azure VMs](./backup-azure-sap-hana-database.md). To learn more about the supported configurations and scenarios, see [Support matrix for backup of SAP HANA databases on Azure VMs](sap-hana-backup-support-matrix.md).
To run on-demand backups, follow these steps:
1. Monitor the Azure portal notifications. To do so, on the Recovery Services vault dashboard, select **Backup Jobs**, and then select **In progress**.

>[!Note]
- >Based on the size of your database, creating the initial backup might take a while.
+ >- Based on the size of your database, creating the initial backup might take a while.
+ >- Before a planned failover, ensure that both VMs/Nodes are registered to the vault (physical and logical registration). [Learn more](#verify-the-registration-status-of-vms-or-nodes-to-the-vault).
## Monitor manual backup jobs
Unregister an SAP HANA instance after you disable protection but before you dele
![Select unregister](./media/sap-hana-db-manage/unregister.png)
+### Verify the registration status of VMs or Nodes to the vault
+
+Before a planned failover, ensure that both VMs/Nodes are registered to the vault (physical and logical registration). If backups fail after failover/fallback, ensure that physical/logical registration is complete. Otherwise, [rediscover the VMs/Nodes](sap-hana-database-with-hana-system-replication-backup.md#discover-the-databases).
+
+**Confirm the physical registration**
+
+Go to the *Recovery Services vault* > **Manage** > **Backup Infrastructure** > **Workload in Azure VM**.
+
+The status of both primary and secondary VMs should be **registered**.
+++
+**Confirm the logical registration**
+
+Follow these steps:
+
+1. Go to *Recovery services vault* > **Backup Items** > **SAP HANA in Azure VM**.
+
+2. Under **HANA System**, select the name of the HANA instance.
+
+ :::image type="content" source="./media/sap-hana-db-manage/select-database-name.png" alt-text="Screenshot shows how to select the database name." lightbox="./media/sap-hana-db-manage/select-database-name.png":::
+
+ Two VMs/Nodes appear under **FQDN** and are in **registered** state.
+
+ :::image type="content" source="./media/sap-hana-db-manage/confirm-logical-registration-status.png" alt-text="Screenshot shows the logical registration status." lightbox="./media/sap-hana-db-manage/confirm-logical-registration-status.png":::
+
+>[!Note]
+>If the status is **Not registered**, you need to [rediscover the VMs/Nodes](sap-hana-database-with-hana-system-replication-backup.md#discover-the-databases) and check the status again.
+
## Manage operations using SAP HANA native clients

This section describes how to manage various operations from non-Azure clients, such as HANA Studio.
backup Sap Hana Database Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-restore.md
Title: Restore SAP HANA databases on Azure VMs description: In this article, you'll learn how to restore SAP HANA databases that are running on Azure virtual machines. You can also use Cross Region Restore to restore your databases to a secondary region.- Previously updated : 10/07/2022+ Last updated : 06/20/2023
This article describes how to restore SAP HANA databases that are running on Azure virtual machines (VMs) and that the Azure Backup service has backed up to a Recovery Services vault. You can use the restored data to create copies for development and test scenarios or to return to a previous state.
-Azure Backup now supports backup and restore of SAP HANA System Replication (HSR) databases (preview).
+Azure Backup now supports backup and restore of SAP HANA System Replication (HSR) databases.
>[!Note]
->The restore process for HANA databases with HSR is the same as the restore process for HANA databases without HSR. As per SAP advisories, you can restore databases with HSR mode as *standalone* databases. If the target system has the HSR mode enabled, first disable the mode, and then restore the database.
+>- The restore process for HANA databases with HSR is the same as the restore process for HANA databases without HSR. As per SAP advisories, you can restore databases with HSR mode as *standalone* databases. If the target system has the HSR mode enabled, first disable the mode, and then restore the database.
+>- Original Location Recovery (OLR) is currently not supported for HSR.
+>- Restore to an HSR instance isn't supported. However, you can restore to a standalone HANA instance.
For information about the supported configurations and scenarios, see the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
To restore a database, you need the following permissions:
1. In **Select restore point**, select **Logs (Point in Time)** to [restore to a specific point in time](#restore-to-a-specific-point-in-time). Or select **Full & Differential** to [restore to a specific recovery point](#restore-to-a-specific-recovery-point).
-### Restore and overwrite
-
-1. On the **Restore** pane, under **Where and how to Restore?**, select **Overwrite DB**, and then select **OK**.
-
- :::image type="content" source="./media/sap-hana-db-restore/hana-overwrite-database.png" alt-text="Screenshot that shows where to overwrite the database.":::
-
-1. On the **Select restore point** pane, do either of the following:
-
- * To [restore to a specific point in time](#restore-to-a-specific-point-in-time), select **Logs (Point in Time)**.
- * To [restore to a specific recovery point](#restore-to-a-specific-recovery-point), select **Full & Differential**.
-
### Restore as files

>[!Note]
backup Sap Hana Database With Hana System Replication Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/sap-hana-database-with-hana-system-replication-backup.md
Title: Back up SAP HANA System Replication databases on Azure VMs (preview)
+ Title: Back up SAP HANA System Replication databases on Azure VMs
description: In this article, discover how to back up SAP HANA databases with HANA System Replication enabled. Last updated 03/08/2023
-# Back up SAP HANA System Replication databases on Azure VMs (preview)
+# Back up SAP HANA System Replication databases on Azure VMs
SAP HANA databases are critical workloads that require a low recovery-point objective (RPO) and long-term retention. This article describes how you can back up SAP HANA databases that are running on Azure virtual machines (VMs) to an Azure Backup Recovery Services vault by using [Azure Backup](backup-overview.md).
SAP HANA databases are critical workloads that require a low recovery-point obje
- Identify/create a Recovery Services vault in the same region and subscription as the two VMs/nodes of the HANA System Replication (HSR) database. - Allow connectivity from each of the VMs/nodes to the internet for communication with Azure.
+- Run the preregistration script on both VMs or nodes that are part of HANA System Replication (HSR). You can download the latest preregistration script [from here](https://aka.ms/ScriptForPermsOnHANA). You can also download it from the link under *Recovery Services vault* > **Backup** > **Discover DBs in VMs** > **Start Discovery**.
>[!Important]
>Ensure that the combined length of the SAP HANA Server VM name and the resource group name doesn't exceed 84 characters for Azure Resource Manager VMs and 77 characters for classic VMs. This limitation is because some characters are reserved by the service.
SAP HANA databases are critical workloads that require a low recovery-point obje
[!INCLUDE [Create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]
-## Discover the databases
-
-To discover the HSR database, follow these steps:
-
-1. In the Azure portal, go to **Backup center**, and then select **+ Backup**.
-
- :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/initiate-database-discovery.png" alt-text="Screenshot that shows how to start database discovery.":::
-
-1. Select **SAP HANA in Azure VM** as the data source type, select the Recovery Services vault to use for the backup, and then select **Continue**.
-
- :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/configure-backup.png" alt-text="Screenshot that shows how to configure a database backup.":::
-
-1. Select **Start Discovery** to initiate the discovery of unprotected Linux VMs in the vault region.
- - After discovery, unprotected VMs appear in the portal, listed by name and resource group.
- - If a VM isn't listed as expected, check to see whether it's already backed up in a vault.
- - Multiple VMs can have the same name, but they must belong to different resource groups.
-
- :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/discover-hana-database.png" alt-text="Screenshot that shows how to discover a HANA database.":::
-
-1. On the **Select Virtual Machines** pane, at the bottom, select the **this** link in **Run this script on the SAP HANA VMs to provide these permissions to Azure Backup service**.
-
- :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/download-script.png" alt-text="Screenshot that highlights the link for downloading the script.":::
-
-1. Run the script on each VM that hosts SAP HANA databases that you want to back up.
-
-1. On the **Select Virtual Machines** pane, after you run the script on the VMs, select the VMs, and then select **Discover DBs**.
-
- Azure Backup discovers all SAP HANA databases on the VM. During discovery, Azure Backup registers the VM with the vault and installs an extension on the VM. It doesn't install any agent on the database.
-
- To view the details about all the databases of each discovered VM, select **View details** under the **Step 1: Discover DBs in VMs section**.
## Run the preregistration script
When a failover occurs, the users are replicated to the new primary, but *hdbuse
`-bk CUSTOM_BACKUP_KEY_NAME` or `-backup-key CUSTOM_BACKUP_KEY_NAME`
- If the password of this custom backup key expires, it could lead to the backup and restore operations failure.
+ If the password of this custom backup key expires, the backup and restore operations will fail.
-1. Create the same *Custom backup user* (with the same password) and key (in *hdbuserstore*) on both VMs/nodes.
+ **Example**:
-1. Run the SAP HANA backup configuration script (preregistration script) in the VMs where HANA is installed as the root user. This script sets up the HANA system for backup. For more information about the script actions, see the [What the preregistration script does](tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) section.
+ ```HDBSQL
+ hdbuserstore set SYSTEMKEY localhost:30013@SYSTEMDB <custom-user> '<some-password>'
+ hdbuserstore set SYSTEMKEY <load balancer host/ip>:30013@SYSTEMDB <custom-user> '<some-password>'
+ ```
+
+ >[!Note]
+ >You can create a custom backup key using the load balancer host/IP instead of local host to use Virtual IP (VIP).
- There's no HANA-generated unique ID for an HSR setup. So, you need to provide a unique ID that helps the backup service to group all nodes of an HSR as a single data source.
+1. Create the same *Custom backup user* (with the same password) and key (in *hdbuserstore*) on both VMs/nodes.
1. Provide a unique HSR ID as input to the script:
When a failover occurs, the users are replicated to the new primary, but *hdbuse
- For SDC, use the format `3<instancenumber>15`.

1. If your HANA setup uses private endpoints, run the preregistration script with the `-sn` or `--skip-network-checks` parameter. After the preregistration script has run successfully, proceed to the next steps.
+
+1. Run the SAP HANA backup configuration script (preregistration script) in the VMs where HANA is installed as the root user. This script sets up the HANA system for backup. For more information about the script actions, see the [What the preregistration script does](tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) section.
+
+ There's no HANA-generated unique ID for an HSR setup. So, you need to provide a unique ID that helps the backup service to group all nodes of an HSR as a single data source.
+ To set up the database for backup, see the [prerequisites](tutorial-backup-sap-hana-db.md#prerequisites) and the [What the preregistration script does](tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) sections.
+
+## Discover the databases
+
+To discover the HSR database, follow these steps:
+
+1. In the Azure portal, go to **Backup center**, and then select **+ Backup**.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/initiate-database-discovery.png" alt-text="Screenshot that shows how to start database discovery.":::
+
+1. Select **SAP HANA in Azure VM** as the data source type, select the Recovery Services vault to use for the backup, and then select **Continue**.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/configure-backup.png" alt-text="Screenshot that shows how to configure a database backup.":::
+
+1. Select **Start Discovery** to initiate the discovery of unprotected Linux VMs in the vault region.
+ - After discovery, unprotected VMs appear in the portal, listed by name and resource group.
+ - If a VM isn't listed as expected, check to see whether it's already backed up in a vault.
+ - Multiple VMs can have the same name, but they must belong to different resource groups.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/discover-hana-database.png" alt-text="Screenshot that shows how to discover a HANA database.":::
+
+1. On the **Select Virtual Machines** pane, at the bottom, select the **this** link in **Run this script on the SAP HANA VMs to provide these permissions to Azure Backup service**.
+
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/download-script.png" alt-text="Screenshot that highlights the link for downloading the script." lightbox="./media/sap-hana-database-with-hana-system-replication-backup/download-script.png":::
+
+1. Run the script on each VM that hosts SAP HANA databases that you want to back up.
+
+1. On the **Select Virtual Machines** pane, after you run the script on the VMs, select the VMs, and then select **Discover DBs**.
+
+ Azure Backup discovers all SAP HANA databases on the VM. During discovery, Azure Backup registers the VM with the vault and installs an extension on the VM. It doesn't install any agent on the database.
+
+ To view the details about all the databases of each discovered VM, select **View details** under the **Step 1: Discover DBs in VMs section**.
+
## Configure backup

To enable the backup, follow these steps:
To enable the backup, follow these steps:
1. On the **Select items to back up** pane, select all the databases you want to protect, and then select **OK**.
- :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/select-virtual-machines-for-protection.png" alt-text="Screenshot that shows a list of virtual machines available to be backed up.":::
+ :::image type="content" source="./media/sap-hana-database-with-hana-system-replication-backup/select-virtual-machines-for-protection.png" alt-text="Screenshot that shows a list of virtual machines available to be backed up." lightbox="./media/sap-hana-database-with-hana-system-replication-backup/select-virtual-machines-for-protection.png":::
1. In the **Backup policy** dropdown list, select the policy you want to use, and then select **Add**.
To enable the backup, follow these steps:
1. To track the backup configuration progress, go to **Notifications** in the Azure portal.
+>[!Note]
+>During the *Configure system DB backup* stage, you need to set the `[inifile_checker]/replicate` parameter on the primary node. This enables replication of the parameters from the primary node to the secondary node or VM.
+
## Create a backup policy

A backup policy defines the backup schedules and the backup retention duration.
To configure the policy settings, follow these steps:
Backups run in accordance with the policy schedule. Learn how to [run an on-demand backup](sap-hana-database-manage.md#run-on-demand-backups).
+>[!Note]
+>Before a planned failover, ensure that both VMs/Nodes are registered to the vault (physical and logical registration). [Learn more](sap-hana-database-manage.md#verify-the-registration-status-of-vms-or-nodes-to-the-vault).
+ ## Run SAP HANA native clients backup on a database with Azure Backup You can run an on-demand backup using SAP HANA native clients to local file-system instead of Backint. Learn more how to [manage operations using SAP native clients](sap-hana-database-manage.md#manage-operations-using-sap-hana-native-clients). + ## Next steps -- [Restore SAP HANA System Replication databases on Azure VMs (preview)](sap-hana-database-restore.md)-- [About backing up SAP HANA System Replication databases on Azure VMs (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled-preview)
+- [Restore SAP HANA System Replication databases on Azure VMs](sap-hana-database-restore.md)
+- [About backing up SAP HANA System Replication databases on Azure VMs](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled)
backup Tutorial Sap Hana Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-backup-cli.md
Title: Tutorial - SAP HANA DB backup on Azure using Azure CLI description: In this tutorial, learn how to back up SAP HANA databases running on an Azure VM to an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 08/11/2022 Last updated : 06/20/2023
# Tutorial: Back up SAP HANA databases in an Azure VM using Azure CLI
+This tutorial describes how to back up SAP HANA database and SAP HANA System Replication (HSR) instances using Azure CLI.
Azure CLI is used to create and manage Azure resources from the command line or through scripts. This documentation details how to back up an SAP HANA database and trigger on-demand backups - all using Azure CLI. You can also perform these steps using the [Azure portal](./backup-azure-sap-hana-database.md).
-This document assumes that you already have an SAP HANA database installed on an Azure VM. (You can also [create a VM using Azure CLI](../virtual-machines/linux/quick-create-cli.md)). By the end of this tutorial, you'll be able to:
+Azure Backup also supports backup and restore of SAP HANA System Replication (HSR).
-> [!div class="checklist"]
->
-> * Create a Recovery Services vault
-> * Register SAP HANA instance and discover database(s) on it
-> * Enable backup on an SAP HANA database
-> * Trigger an on-demand backup
+This document assumes that you already have an SAP HANA database installed on an Azure VM. (You can also [create a VM using Azure CLI](../virtual-machines/linux/quick-create-cli.md)).
-Check out the [scenarios that we currently support](./sap-hana-backup-support-matrix.md#scenario-support) for SAP HANA.
+For more information on the supported scenarios, see the [support matrix](./sap-hana-backup-support-matrix.md#scenario-support) for SAP HANA.
[!INCLUDE [azure-cli-prepare-your-environment.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment.md)]
A Recovery Services vault is a logical container that stores the backup data for
Create a Recovery Services vault with [az backup vault create](/cli/azure/backup/vault#az-backup-vault-create). Specify the same resource group and location as the VM you wish to protect. Learn how to create a VM using Azure CLI with this [VM quickstart](../virtual-machines/linux/quick-create-cli.md).
-For this tutorial, we'll be using the following:
+**Choose a database type**:
+# [HANA database](#tab/hana-database)
++
+For this tutorial, we'll be using:
* a resource group named *saphanaResourceGroup*
* a VM named *saphanaVM*
```output
Location    Name          ResourceGroup
----------  ------------  --------------------
westus2     saphanaVault  saphanaResourceGroup
```
+# [HSR database](#tab/hsr-database)
+
+To create the Recovery Services vault for HSR database instance protection, run the following command:
+
+```azurecli
+az backup vault create --resource-group hanarghsr2 --name hanavault10 --location westus2
+```
+++

## Register and protect the SAP HANA instance

For the SAP HANA instance (the VM with SAP HANA installed on it) to be discovered by the Azure services, a [pre-registration script](https://aka.ms/scriptforpermsonhana) must be run on the SAP HANA machine. Make sure that all the [prerequisites](./tutorial-backup-sap-hana-db.md#prerequisites) are met before running the script. To learn more about what the script does, refer to the [What the pre-registration script does](tutorial-backup-sap-hana-db.md#what-the-pre-registration-script-does) section.
-Once the script is run, the SAP HANA instance can be registered with the Recovery Services vault we created earlier. To register the instance, use the [az backup container register](/cli/azure/backup/container#az-backup-container-register) cmdlet. *VMResourceId* is the resource ID of the VM that you created to install SAP HANA.
+Once the script is run, the SAP HANA instance can be registered with the Recovery Services vault we created earlier.
-```azurecli-interactive
-az backup container register --resource-group saphanaResourceGroup \
- --vault-name saphanaVault \
- --workload-type SAPHANA \
- --backup-management-type AzureWorkload \
- --resource-id VMResourceId
-```
+**Choose a database type**:
->[!NOTE]
->If the VM isn't in the same resource group as the vault, then *saphanaResourceGroup* refers to the resource group where the vault was created.
+# [HANA database](#tab/hana-database)
-Registering the SAP HANA instance automatically discovers all its current databases. However, to discover any new databases that may be added in the future refer to the [Discovering new databases added to the registered SAP HANA](tutorial-sap-hana-manage-cli.md#protect-new-databases-added-to-an-sap-hana-instance) instance section.
+To register and protect the database instance, follow these steps:
-To check if the SAP HANA instance is successfully registered with your vault, use the [az backup container list](/cli/azure/backup/container#az-backup-container-list) cmdlet. You'll see the following response:
+1. To register the instance, use the [az backup container register](/cli/azure/backup/container#az-backup-container-register) command. *VMResourceId* is the resource ID of the VM that you created to install SAP HANA.
-```output
-Name Friendly Name Resource Group Type Registration Status
- -- -- -
-VMAppContainer;Compute;saphanaResourceGroup;saphanaVM saphanaVM saphanaResourceGroup AzureWorkload Registered
-```
+ ```azurecli-interactive
+ az backup container register --resource-group saphanaResourceGroup \
+ --vault-name saphanaVault \
+ --workload-type SAPHANA \
+ --backup-management-type AzureWorkload \
+ --resource-id VMResourceId
+ ```
->[!NOTE]
-> The column "name" in the above output refers to the container name. This container name will be used in the next sections to enable backups and trigger them. Which in this case, is *VMAppContainer;Compute;saphanaResourceGroup;saphanaVM*.
+ >[!NOTE]
+ >If the VM isn't in the same resource group as the vault, then *saphanaResourceGroup* refers to the resource group where the vault was created.
+
+ Registering the SAP HANA instance automatically discovers all its current databases. However, to discover any new databases that may be added in the future refer to the [Discovering new databases added to the registered SAP HANA](tutorial-sap-hana-manage-cli.md#protect-new-databases-added-to-an-sap-hana-instance) instance section.
+
+1. To check if the SAP HANA instance is successfully registered with your vault, use the [az backup container list](/cli/azure/backup/container#az-backup-container-list) cmdlet. You'll see the following response:
+
+ ```output
+ Name Friendly Name Resource Group Type Registration Status
+ -- -- -
+ VMAppContainer;Compute;saphanaResourceGroup;saphanaVM saphanaVM saphanaResourceGroup AzureWorkload Registered
+ ```
+
+ >[!NOTE]
+ > The column "name" in the above output refers to the container name. This container name is used in the next sections to enable and trigger backups. In this case, it's *VMAppContainer;Compute;saphanaResourceGroup;saphanaVM*.
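The semicolon-delimited container name can also be split into its parts in a shell. A minimal sketch, assuming the `<Type>;<Fabric>;<ResourceGroup>;<VMName>` pattern inferred from the sample output above (it isn't an official contract):

```bash
# Split an Azure Backup container name of the assumed form
# <Type>;<Fabric>;<ResourceGroup>;<VMName> into its components.
container="VMAppContainer;Compute;saphanaResourceGroup;saphanaVM"

IFS=';' read -r ctype fabric rg vm <<EOF
$container
EOF

echo "VM: $vm (resource group: $rg)"
```

Keeping the full container string in a variable avoids retyping it in later `--container-name` arguments.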
++
+# [HSR database](#tab/hsr-database)
+
+To register and protect the database instance, follow these steps:
+
+1. To register and protect the SAP HANA database running on primary Azure VM, run the following command:
+
+ ```azurecli
+ az backup container register --resource-group hanarghsr2 --vault-name hanavault10 --workload-type SAPHANA --backup-management-type AzureWorkload --resource-id "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/hanarghsr2/providers/Microsoft.Compute/virtualMachines/hsr-primary"
+ ```
+
+1. To register and protect the SAP HANA database running on secondary Azure VM, run the following command:
+
+ ```azurecli
+ az backup container register --resource-group hanarghsr2 --vault-name hanavault10 --workload-type SAPHANA --backup-management-type AzureWorkload --resource-id "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/hanarghsr2/providers/Microsoft.Compute/virtualMachines/hsr-secondary"
+ ```
+
+ To identify `resource-id`, run the following command:
+
+ ```azurecli
+ az vm show --name hsr-primary --resource-group hanarghsr2
+ ```
+
+ For example, `id` is `/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/hanarghsr2/providers/Microsoft.Compute/virtualMachines/hsr-primary`.
+
+1. To check if primary and secondary servers are registered to the vault, run the following command:
+
+ ```azurecli
+ az backup container list --resource-group hanarghsr2 --vault-name hanavault10 --output table --backup-management-type AzureWorkload
+ Name Friendly Name Resource Group Type Registration Status
+ -- - -
+ VMAppContainer;Compute;hanarghsr2;hsr-primary hsr-primary hanarghsr2 AzureWorkload Registered
+ VMAppContainer;Compute;hanarghsr2;hsr-secondary hsr-secondary hanarghsr2 AzureWorkload Registered
+ ```
++

## Enable backup on SAP HANA database

The [az backup protectable-item list](/cli/azure/backup/protectable-item#az-backup-protectable-item-list) cmdlet lists out all the databases discovered on the SAP HANA instance that you registered in the previous step.
-```azurecli-interactive
-az backup protectable-item list --resource-group saphanaResourceGroup \
- --vault-name saphanaVault \
- --workload-type SAPHANA \
- --output table
-```
+**Choose a database type**:
-You should find the database that you want to back up in this list, which will look as follows:
+# [HANA database](#tab/hana-database)
-```output
-Name Protectable Item Type ParentName ServerName IsProtected
- --
-saphanasystem;hxe SAPHanaSystem HXE hxehost NotProtected
-saphanadatabase;hxe;systemdb SAPHanaDatabase HXE hxehost NotProtected
-saphanadatabase;hxe;hxe SAPHanaDatabase HXE hxehost NotProtected
-```
+To enable database instance backup, follow these steps:
-As you can see from the above output, the SID of the SAP HANA system is HXE. In this tutorial, we'll configure backup for the *saphanadatabase;hxe;hxe* database that resides on the *hxehost* server.
+1. To list the database to be protected, run the following command:
-To protect and configure backup on a database, one at a time, we use the [az backup protection enable-for-azurewl](/cli/azure/backup/protection#az-backup-protection-enable-for-azurewl) cmdlet. Provide the name of the policy that you want to use. To create a policy using CLI, use the [az backup policy create](/cli/azure/backup/policy#az-backup-policy-create) cmdlet. For this tutorial, we'll be using the *sapahanaPolicy* policy.
+ ```azurecli-interactive
+ az backup protectable-item list --resource-group saphanaResourceGroup \
+ --vault-name saphanaVault \
+ --workload-type SAPHANA \
+ --output table
+ ```
-```azurecli-interactive
-az backup protection enable-for-azurewl --resource-group saphanaResourceGroup \
- --vault-name saphanaVault \
- --policy-name saphanaPolicy \
- --protectable-item-name "saphanadatabase;hxe;hxe" \
- --protectable-item-type SAPHANADatabase \
- --server-name hxehost \
- --workload-type SAPHANA \
- --output table
-```
+ You should find the database that you want to back up in this list, which will look as follows:
-You can check if the above backup configuration is complete using the [az backup job list](/cli/azure/backup/job#az-backup-job-list) cmdlet. The output will display as follows:
+ ```output
+ Name Protectable Item Type ParentName ServerName IsProtected
+ -- - --
+ saphanasystem;hxe SAPHanaSystem HXE hxehost NotProtected
+ saphanadatabase;hxe;systemdb SAPHanaDatabase HXE hxehost NotProtected
+ saphanadatabase;hxe;hxe SAPHanaDatabase HXE hxehost NotProtected
+ ```
-```output
-Name Operation Status Item Name Start Time UTC
- - -
-e0f15dae-7cac-4475-a833-f52c50e5b6c3 ConfigureBackup Completed hxe 2019-12-03T03:09:210831+00:00
-```
+ As you can see from the above output, the SID of the SAP HANA system is HXE. In this tutorial, we'll configure backup for the `saphanadatabase;hxe;hxe` database that resides on the `hxehost` server.
+
+1. To protect and configure the backups on a database, one at a time, we use the [az backup protection enable-for-azurewl](/cli/azure/backup/protection#az-backup-protection-enable-for-azurewl) cmdlet. Provide the name of the policy that you want to use. To create a policy using CLI, use the [az backup policy create](/cli/azure/backup/policy#az-backup-policy-create) cmdlet. For this tutorial, we'll be using the *saphanaPolicy* policy.
+
+ ```azurecli-interactive
+ az backup protection enable-for-azurewl --resource-group saphanaResourceGroup \
+ --vault-name saphanaVault \
+ --policy-name saphanaPolicy \
+ --protectable-item-name "saphanadatabase;hxe;hxe" \
+ --protectable-item-type SAPHANADatabase \
+ --server-name hxehost \
+ --workload-type SAPHANA \
+ --output table
+ ```
+
+1. To check if the above backup configuration is complete, use the [az backup job list](/cli/azure/backup/job#az-backup-job-list) cmdlet. The output will display as follows:
+
+ ```output
+ Name Operation Status Item Name Start Time UTC
+ - -
+ e0f15dae-7cac-4475-a833-f52c50e5b6c3 ConfigureBackup Completed hxe 2019-12-03T03:09:210831+00:00
+ ```
The [az backup job list](/cli/azure/backup/job#az-backup-job-list) cmdlet lists out all the backup jobs (scheduled or on-demand) that have run or are currently running on the protected database, in addition to other operations like register, configure backup, and delete backup data.
The [az backup job list](/cli/azure/backup/job#az-backup-job-list) cmdlet lists
>
>Modify the policy manually as needed.
-## Get the container name
+### Get the container name
To get the container name, run the following command. [Learn about this CLI command](/cli/azure/backup/container#az-backup-container-list).
To get container name, run the following command. [Learn about this CLI command]
```
+# [HSR database](#tab/hsr-database)
+
+To enable database instance backup, follow these steps:
+
+1. To check the items that you can protect, run the following command:
+
+ ```azurecli
+ az backup protectable-item list --resource-group hanarghsr2 --vault-name hanavault10 --workload-type SAPHANA --output table
+
+ Name Protectable Item Type ParentName ServerName IsProtected
+ -- - -
+ saphanasystem;arv SAPHanaSystem ARV hsr-primary NotProtected
+ saphanasystem;arv SAPHanaSystem ARV hsr-secondary NotProtected
+ hanahsrcontainer;hsrtestps2 HanaHSRContainer HsrTestP2 hsr-primary NotProtected
+ saphanadatabase;hsrtestps2;arv SAPHanaDatabase HsrTestP2 hsr-primary NotProtected
+ saphanadatabase;hsrtestps2;2;DB1 SAPHanaDatabase HsrTestP2 hsr-primary NotProtected
+ saphanadatabase;hsrtestps2;systemdb SAPHanaDatabase HsrTestP2 hsr-primary NotProtected
+ ```
+
+ If the database doesn't appear in the list of protectable items, or if you need to rediscover the database, reinitiate discovery on the physical primary VM by running the following command:
+
+ ```azurecli
+ az backup protectable-item initialize --resource-group hanarghsr2 --vault-name hanavault10 --container-name "VMAppContainer;Compute;hanarghsr2;hsr-primary" --workload-type SAPHanaDatabase
+ ```
+
+1. To enable protection for the database listed under the HSR system with the required backup policy, run the following command:
+
+ ```azurecli
+ az backup protection enable-for-azurewl --resource-group hanarghsr2 --vault-name hanavault10 --policy-name hanahsr --protectable-item-name "saphanadatabase;hsrtestps2;DB1" --protectable-item-type SAPHanaDatabase --workload-type SAPHanaDatabase --output table --server-name HsrTestP2
+
+ az backup protection enable-for-azurewl --resource-group hanarghsr2 --vault-name hanavault10 --policy-name hanahsr --protectable-item-name "saphanadatabase;hsrtestps2;systemdb" --protectable-item-type SAPHanaDatabase --workload-type SAPHanaDatabase --output table --server-name hsr-secondary
+ ```
+++

## Trigger an on-demand backup

While the section above details how to configure a scheduled backup, this section talks about triggering an on-demand backup. To do this, we use the [az backup protection backup-now](/cli/azure/backup/protection#az-backup-protection-backup-now) command.
While the section above details how to configure a scheduled backup, this sectio
>- *On-demand differential backups* are retained as per the *log retention set in the policy*.
>- *On-demand incremental backups* aren't currently supported.
+**Choose a database type**:
+
+# [HANA database](#tab/hana-database)
+
+To run an on-demand backup, run the following command:
```azurecli-interactive
az backup protection backup-now --resource-group saphanaResourceGroup \
    --item-name saphanadatabase;hxe;hxe \
The response will give you the job name. This job name can be used to track the
>[!NOTE]
>Log backups are automatically triggered and managed by SAP HANA internally.
+# [HSR database](#tab/hsr-database)
+
+To run an on-demand backup, run the following command:
+
+```azurecli
+az backup protection backup-now --resource-group hanarghsr2 --item-name "saphanadatabase;hsrtestps2;db1" --container-name "hanahsrcontainer;hsrtestp2" --vault-name hanavault10 --backup-type Full --retain-until 01-01-2030 --output table
+```
+
+The output will display as follows:
+
+```Output
+Name Operation Status Item Name Backup Management Type Start Time UTC Duration
+ - - -- -- --
+
+591f1840-4d6a-4464-8f3a-18e586f11bfc Backup (Full) InProgress ARV [hsr-primary] AzureWorkload 2023-04
+```
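The `--retain-until` argument in the commands above takes a `dd-mm-yyyy` value. Rather than hard-coding a date, it can be computed relative to today; a sketch assuming GNU `date`, as available in Azure Cloud Shell:

```bash
# Compute a --retain-until value 45 days from now in the dd-mm-yyyy form
# used by the on-demand backup command above.
retain_until=$(date -u -d "+45 days" +%d-%m-%Y)
echo "$retain_until"
```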
+++

## Next steps

* To learn how to restore an SAP HANA database in Azure VM using CLI, continue to the tutorial: [Restore an SAP HANA database in Azure VM using CLI](tutorial-sap-hana-restore-cli.md)
backup Tutorial Sap Hana Restore Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-restore-cli.md
Title: Tutorial - SAP HANA DB restore on Azure using CLI description: In this tutorial, learn how to restore SAP HANA databases running on an Azure VM from an Azure Backup Recovery Services vault using Azure CLI. Previously updated : 08/11/2022 Last updated : 06/20/2023
# Tutorial: Restore SAP HANA databases in an Azure VM using Azure CLI
+This tutorial describes how to restore SAP HANA database and SAP HANA System Replication (HSR) instances using Azure CLI.
+ Azure CLI is used to create and manage Azure resources from the command line or through scripts. This documentation details how to restore a backed-up SAP HANA database on an Azure VM - using Azure CLI. You can also perform these steps using the [Azure portal](./sap-hana-db-restore.md).
-Use [Azure Cloud Shell](tutorial-sap-hana-backup-cli.md) to run CLI commands.
+Azure Backup also supports backup and restore of SAP HANA System Replication (HSR).
-By the end of this tutorial you'll be able to:
+>[!Note]
+>- Original Location Recovery (OLR) is currently not supported for HSR.
+>- Restore to an HSR instance isn't supported. However, restore to a HANA instance is supported.
-> [!div class="checklist"]
->
-> * View restore points for a backed-up database
-> * Restore a database
+Use [Azure Cloud Shell](tutorial-sap-hana-backup-cli.md) to run CLI commands.
This tutorial assumes you have an SAP HANA database running on Azure VM that's backed-up using Azure Backup. If you've used [Back up an SAP HANA database in Azure using CLI](tutorial-sap-hana-backup-cli.md) to back up your SAP HANA database, then you're using the following resources:

* A resource group named *saphanaResourceGroup*
* A vault named *saphanaVault*
-* Protected container named *VMAppContainer;Compute;saphanaResourceGroup;saphanaVM*
-* Backed-up database/item named *saphanadatabase;hxe;hxe*
+* Protected container named `VMAppContainer;Compute;saphanaResourceGroup;saphanaVM`
+* Backed-up database/item named `saphanadatabase;hxe;hxe`
* Resources in the *westus2* region
->[!Note]
->See the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md) to know more about the supported configurations and scenarios.
+For more information on the supported configurations and scenarios, see the [SAP HANA backup support matrix](sap-hana-backup-support-matrix.md).
## View restore points for a backed-up database To view the list of all the recovery points for a database, use the [az backup recoverypoint list](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-show-log-chain) cmdlet as follows:
+**Choose a database type**:
+
+# [HANA database](#tab/hana-database)
+
+To view the available recovery points, run the following command:
```azurecli-interactive
az backup recoverypoint list --resource-group saphanaResourceGroup \
    --vault-name saphanaVault \
DefaultRangeRecoveryPoint AzureWorkload
As you can see, the list above contains three recovery points: one each for full, differential, and log backup.
+# [HSR database](#tab/hsr-database)
+
+To view the available recovery points, run the following command:
+
+```azurecli
+az backup recoverypoint list --resource-group hanarghsr2 --vault-name hanavault10 --container-name "hanahsrcontainer;hsrtestps2" --item-name "saphanadatabase;hsrtestps2;db1" --output table
+```
+
+The list of recovery points will look as follows:
+
+```Output
+Name Time BackupManagementType Item Name RecoveryPointType
+- -- - -- -
+62640091676331 2023-05-04T08:13:09.469000+00:00 AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Full
+68464937558101 2023-05-04T07:49:02.988000+00:00 AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Full
+56015648627567 2023-05-04T07:27:54.425000+00:00 AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Full
+DefaultRangeRecoveryPoint AzureWorkload SAPHanaDatabase;hsrtestps2;db1 Log
+```
+++

>[!NOTE]
>You can also view the start and end points of every unbroken log backup chain, using the [az backup recoverypoint show-log-chain](/cli/azure/backup/recoverypoint#az-backup-recoverypoint-show-log-chain) cmdlet.
To restore a database to an alternate location, use **AlternateWorkloadRestore**
In this tutorial, you'll restore to a previous restore point. [View the list of restore points](#view-restore-points-for-a-backed-up-database) for the database and choose the point you want to restore to. This tutorial will use the restore point with the name *7660777527047692711*.
-Using the above restore point name and the restore mode, let's create the recovery config object using the [az backup recoveryconfig show](/cli/azure/backup/recoveryconfig#az-backup-recoveryconfig-show) cmdlet. Let's look at what each of the remaining parameters in this cmdlet mean:
+By using the above restore point name and the restore mode, let's create the recovery config object using the [az backup recoveryconfig show](/cli/azure/backup/recoveryconfig#az-backup-recoveryconfig-show) cmdlet. Let's look at what each of the remaining parameters in this cmdlet means:
* **--target-item-name** This is the name that the restored database will be using. In this case, we used the name *restored_database*.
* **--target-server-name** This is the name of an SAP HANA server that's successfully registered to a Recovery Services vault and lies in the same region as the database to be restored. For this tutorial, we'll restore the database to the same SAP HANA server that we've protected, named *hxehost*.
* **--target-server-type** For the restore of SAP HANA databases, **HANAInstance** must be used.
+**Choose a database type**:
+
+# [HANA database](#tab/hana-database)
+
+To start the restore operation, run the following command:
```azurecli-interactive
az backup recoveryconfig show --resource-group saphanaResourceGroup \
az backup recoveryconfig show --resource-group saphanaResourceGroup \
The response to the above query will be a recovery config object that looks something like this:
-```output
+```Output
{"restore_mode": "AlternateLocation", "container_uri": " VMAppContainer;Compute;saphanaResourceGroup;saphanaVM ", "item_uri": "SAPHanaDatabase;hxe;hxe", "recovery_point_id": "7660777527047692711", "item_type": "SAPHana", "source_resource_id": "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/saphanaResourceGroup/providers/Microsoft.Compute/virtualMachines/saphanavm", "database_name": null, "container_id": null, "alternate_directory_paths": null}
```
Name Resource
The response will give you the job name. This job name can be used to track the job status using [az backup job show](/cli/azure/backup/job#az-backup-job-show) cmdlet.
+# [HSR database](#tab/hsr-database)
+
+To start the restore operation, run the following command:
+
+```azurecli
+az backup recoveryconfig show --resource-group hanarghsr2 --vault-name hanavault10 --container-name "hanahsrcontainer;hsrtestps2" --item-name "saphanadatabase;hsrtestps2;db1" --restore-mode AlternateWorkloadRestore --log-point-in-time 04-05-2023-08:27:54 --target-item-name restored_DB_pradeep --target-server-name hsr-primary --target-container-name hsr-primary --target-server-type HANAInstance --backup-management-type AzureWorkload --workload-type SAPHANA --output json > recoveryInput.json
+
+cat recoveryInput.json
+{
+ "alternate_directory_paths": null,
+ "container_id": "/subscriptions/ef4ab5a7-c2c0-4304-af80-af49f48af3d1/resourceGroups/hanarghsr2/providers/Microsoft.RecoveryServices/vaults/hanavault10/backupFabrics/Azure/protectionContainers/vmappcontainer;compute;hanarghsr2;hsr-primary",
+ "container_uri": "HanaHSRContainer;hsrtestps2",
+ "database_name": "ARV/restored_DB_p2",
+ "filepath": null,
+ "item_type": "SAPHana",
+ "item_uri": "SAPHanaDatabase;hsrtestps2;db1",
+ "log_point_in_time": "04-05-2023-08:27:54",
+ "recovery_mode": null,
+ "recovery_point_id": "DefaultRangeRecoveryPoint",
+ "restore_mode": "AlternateLocation",
+ "source_resource_id": null,
+ "workload_type": "SAPHanaDatabase"
+}
+
+az backup restore restore-azurewl --resource-group hanarghsr2 --vault-name hanavault10 --recovery-config recoveryInput.json --output table
+```
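The `--log-point-in-time` values above use a `dd-mm-yyyy-hh:mm:ss` layout. A sketch (assuming GNU `date`) that converts a more familiar ISO-style UTC timestamp into that form:

```bash
# Convert an ISO-style UTC timestamp into the dd-mm-yyyy-hh:mm:ss layout
# used by --log-point-in-time in the commands above.
point_in_time=$(date -u -d "2023-05-04 08:27:54 UTC" +%d-%m-%Y-%H:%M:%S)
echo "$point_in_time"   # 04-05-2023-08:27:54
```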
+
+### Restore as files
+
+To restore the database as files, run the following command:
+
+```azurecli
+az backup recoveryconfig show --resource-group hanarghsr2 \
+    --vault-name hanavault10 \
+    --container-name "hanahsrcontainer;hsrtestps2" \
+    --item-name "saphanadatabase;hsrtestps2;arv" \
+    --restore-mode RestoreAsFiles \
+    --log-point-in-time 18-04-2023-09:53:00 \
+    --rp-name DefaultRangeRecoveryPoint \
+    --target-container-name "VMAppContainer;Compute;hanarghsr2;hsr-primary" \
+    --filepath /home/abc \
+    --output json > recoveryconfig.json
+
+az backup restore restore-azurewl --resource-group hanarghsr2 \
+    --vault-name hanavault10 \
+    --recovery-config recoveryconfig.json \
+    --output json
+```
+++

## Restore and overwrite

To restore to the original location, we'll use **OriginalWorkloadRestore** as the restore mode. You must then choose the restore point, which could either be a previous point-in-time or any of the previous restore points.
backup Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/whats-new.md
Title: What's new in Azure Backup description: Learn about new features in Azure Backup. Previously updated : 03/20/2023 Last updated : 06/25/2023
You can learn more about the new releases by bookmarking this page or by [subscr
## Updates summary
+- June 2023
+ - [Support for backup of SAP HANA System Replication is now generally available](#support-for-backup-of-sap-hana-system-replication-is-now-generally-available)
- April 2023
  - [Microsoft Azure Backup Server v4 is now generally available](#microsoft-azure-backup-server-v4-is-now-generally-available)
- March 2023
You can learn more about the new releases by bookmarking this page or by [subscr
- [Backup for Azure Blobs (in preview)](#backup-for-azure-blobs-in-preview)
+## Support for backup of SAP HANA System Replication is now generally available
+
+Azure Backup now supports backup of HANA databases with HANA System Replication. Log backups from the new primary node are accepted immediately, which provides continuous, automatic protection of the database.
+
+This eliminates the need for manual intervention to continue backups on the new primary node during a failover. Because a full backup isn't required after every failover, you save costs and reduce the time to resume protection.
+
+For more information, see [Back up a HANA system with replication enabled](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled).
+
## Microsoft Azure Backup Server v4 is now generally available

Azure Backup now provides Microsoft Azure Backup Server (MABS) v4, the latest edition of the on-premises backup solution.
For more information, see [Back up databases' instance snapshots (preview)](sap-
Azure Backup now supports backup of HANA databases with HANA System Replication. Log backups from the new primary node are accepted immediately, which provides continuous, automatic protection of the database.
-This eliminates the need of manual intervention to continue backups on the new primary node during a failover. With the elimination of the need to trigger full backups for every failover, you can save costs and reduce time for continue protection
+This eliminates the need for manual intervention to continue backups on the new primary node during a failover. Because a full backup isn't required after every failover, you save costs and reduce the time to resume protection.
-For more information, see [Back up a HANA system with replication enabled (preview)](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled-preview).
+For more information, see [Back up a HANA system with replication enabled](sap-hana-database-about.md#back-up-a-hana-system-with-replication-enabled).
## Built-in Azure Monitor alerting for Azure Backup is now generally available
container-instances Container Instances Region Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-region-availability.md
The following regions and maximum resources are available to container groups wi
> [!NOTE] > Some regions don't support availability zones (denoted by a 'N/A' in the table), and some regions have availability zones, but ACI doesn't currently leverage the capability (denoted by an 'N' in the table). For more information, see [Azure regions with availability zones][az-region-support].
-| Region | Max CPU | Max memory (GB) | VNET max CPU | VNET max memory (GB) | Storage (GB) | GPU SKUs (preview) | Availability Zone support | Confidential SKU (preview) | Spot containers (preview) |
+| Region | Max CPU | Max memory (GB) | VNET max CPU | VNET max memory (GB) | Storage (GB) | GPU SKUs (preview) | Availability Zone support | Confidential SKU | Spot containers (preview) |
| ------ | :-----: | :-------------: | :----------: | :------------------: | :----------: | :----------------: | :-----------------------: | :--------------: | :-----------------------: |
| Australia East | 4 | 16 | 4 | 16 | 50 | N/A | Y | N | N |
| Australia Southeast | 4 | 16 | 4 | 16 | 50 | N/A | N | N | N |
container-instances Container Instances Virtual Network Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-virtual-network-concepts.md
Container groups deployed into an Azure virtual network enable scenarios like:
* To deploy container groups to a subnet, the subnet and the container group must be on the same Azure subscription.
* You can't enable a [liveness probe](container-instances-liveness-probe.md) or [readiness probe](container-instances-readiness-probe.md) in a container group deployed to a virtual network.
* Due to the additional networking resources involved, deployments to a virtual network are typically slower than deploying a standard container instance.
-* Outbound connections to port 25 and 19390 are not supported at this time. Port 19390 needs to be opened in your Firewall for connecting to ACI from Azure ortal when container groups are deployed in virtual networks.
+* Outbound connections to port 25 and 19390 are not supported at this time. Port 19390 needs to be opened in your Firewall for connecting to ACI from Azure portal when container groups are deployed in virtual networks.
* For inbound connections, the firewall should also allow all IP addresses within the virtual network.
* If you are connecting your container group to an Azure Storage Account, you must add a [service endpoint](../virtual-network/virtual-network-service-endpoints-overview.md) to that resource.
* [IPv6 addresses](../virtual-network/ip-services/ipv6-overview.md) are not supported at this time.
defender-for-cloud Concept Cloud Security Posture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-cloud-security-posture-management.md
The optional Defender CSPM plan, provides advanced posture management capabiliti
### Plan pricing

> [!NOTE]
-> The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1 2023. Billing will apply for compute, database, and storage resources. Billable workloads will be VMs, Storage Accounts, OSS DBs, and SQL PaaS & Servers on Machines.
+> The Microsoft Defender CSPM plan protects across multicloud workloads. With Defender CSPM generally available (GA), the plan will remain free until billing starts on August 1 2023. Billing will apply for Servers, Database, and Storage resources. Billable workloads will be VMs, Storage accounts, OSS DBs, SQL PaaS, & SQL servers on machines.
- Microsoft Defender CSPM protects across all your multicloud workloads, but billing only applies for Servers, Databases and Storage accounts at $5/billable resource/month. The underlying compute services for AKS are regarded as servers for billing purposes.
+Microsoft Defender CSPM protects across all your multicloud workloads, but billing only applies for Servers, Database, and Storage accounts at $5/billable resource/month. The underlying compute services for AKS are regarded as servers for billing purposes.
## Plan availability
defender-for-cloud Plan Defender For Servers Select Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-select-plan.md
You can choose from two Defender for Servers paid plans:
|[Network map](protect-network-resources.md) | Provides a geographical view of recommendations for hardening your network resources. | Not supported in Plan 1| :::image type="icon" source="./media/icons/yes-icon.png"::: |
|[Agentless scanning](concept-agentless-data-collection.md) | Scans Azure virtual machines by using cloud APIs to collect data. | Not supported in Plan 1 | :::image type="icon" source="./media/icons/yes-icon.png":::|
+>[!Note]
+>Once a plan is enabled, a 30-day trial period begins. There is no way to stop, pause, or extend this trial period.
+>To enjoy the full 30-day trial, plan ahead so that the trial period covers your evaluation needs.
+ ## Select a vulnerability assessment solution

A couple of vulnerability assessment options are available in Defender for Servers:
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
The native cloud connector requires:
(Optional) Select **Management account** to create a connector to a management account. Connectors will be created for each member account discovered under the provided management account. Auto-provisioning will be enabled for all of the newly onboarded accounts.
+ > [!NOTE]
+ > Defender for Cloud can be connected to each AWS account or management account only once.
+ 1. Select **Next: Select plans**.<a name="cloudtrail-implications-note"></a>

> [!NOTE]
The native cloud connector requires:
1. Select **Next: Configure access**.
-1. Download the CloudFormation template.
+ a. Choose a deployment type: **Default access** or **Least privilege access**.
+
+ - Default access - Allows Defender for Cloud to scan your resources and automatically include future capabilities.
+ - Least privileged access - Grants Defender for Cloud access only to the current permissions needed for the selected plans. If you select the least privileged permissions, you'll receive notifications about any new roles and permissions required for full functionality in the connector health section.
+
+ b. Choose deployment method: **AWS CloudFormation** or **Terraform**.
+
 :::image type="content" source="media/quickstart-onboard-aws/aws-configure-access.png" alt-text="Screenshot that shows the Configure access tab, with its deployment options and instructions.":::
-1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen. If you're onboarding a management account, you'll need to run the CloudFormation template both as Stack and as StackSet. Connectors will be created for the member accounts up to 24 hours after the onboarding.
+1. Follow the on-screen instructions for the selected deployment method to complete the required dependencies on AWS. If you're onboarding a management account, you'll need to run the CloudFormation template both as Stack and as StackSet. Connectors will be created for the member accounts up to 24 hours after the onboarding.
1. Select **Next: Review and generate**.
If you have any existing connectors created with the classic cloud connectors ex
:::image type="content" source="media/quickstart-onboard-gcp/classic-connectors-experience.png" alt-text="Switching back to the classic cloud connectors experience in Defender for Cloud.":::
-1. For each connector, select the three dot button **…** at the end of the row, and select **Delete**.
+1. For each connector, select the three-dot button **…** at the end of the row, and select **Delete**.
1. On AWS, delete the role ARN, or the credentials created for the integration.
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Follow the steps below to create your GCP cloud connector.
(Optional) If you select **Organization**, a management project and an organization custom role will be created on your GCP project for the onboarding process. Auto-provisioning will be enabled for the onboarding of new projects.
+ > [!NOTE]
+ > Defender for Cloud can be connected to each GCP project or organization only once.
1. Select **Next: Select Plans**.
1. Toggle the plans you want to connect to **On**. By default, all necessary prerequisites and components will be provisioned. (Optional) Learn how to [configure each plan](#optional-configure-selected-plans).
1. (**Containers only**) Ensure you've fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for the Defender for Containers plan.
-1. Select the **Next: Configure access**.
+1. Select **Next: Configure access**.
-1. Select **Copy**.
+ a. Choose a deployment type: **Default access** or **Least privilege access**.
- :::image type="content" source="media/quickstart-onboard-gcp/copy-button.png" alt-text="Screenshot showing the location of the copy button.":::
+ - Default access - Allows Defender for Cloud to scan your resources and automatically include future capabilities.
+ - Least privileged access - Grants Defender for Cloud access only to the current permissions needed for the selected plans. If you select the least privileged permissions, you'll receive notifications about any new roles and permissions required for full functionality in the connector health section.
- > [!NOTE]
- > To discover GCP resources and for the authentication process, the following APIs must be enabled: `iam.googleapis.com`, `sts.googleapis.com`, `cloudresourcemanager.googleapis.com`, `iamcredentials.googleapis.com`, `compute.googleapis.com`. If these APIs are not enabled, we'll enable them during the onboarding process by running the GCloud script.
+ b. Choose deployment method: **GCP Cloud Shell** or **Terraform**.
-1. Select the **GCP Cloud Shell >**.
+1. Follow the on-screen instructions for the selected deployment method to complete the required dependencies on GCP.
-1. The GCP Cloud Shell will open.
+ :::image type="content" source="media/quickstart-onboard-gcp/gcp-configure-access.png" alt-text="Screenshot that shows the Configure access tab, with its deployment options and instructions.":::
-1. Paste the script into the Cloud Shell terminal and run it.
+ > [!NOTE]
+ > To discover GCP resources and for the authentication process, the following APIs must be enabled: `iam.googleapis.com`, `sts.googleapis.com`, `cloudresourcemanager.googleapis.com`, `iamcredentials.googleapis.com`, `compute.googleapis.com`. If these APIs are not enabled, we'll enable them during the onboarding process by running the GCloud script.
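The note above lists the APIs that must be enabled. As a hedged convenience sketch, the snippet below only builds and prints the single `gcloud services enable` call for them; the project ID is a hypothetical placeholder, and actually running the printed command requires the gcloud CLI and appropriate project permissions.

```python
# The five APIs from the onboarding note; PROJECT_ID is a placeholder, not a real project.
APIS = [
    "iam.googleapis.com",
    "sts.googleapis.com",
    "cloudresourcemanager.googleapis.com",
    "iamcredentials.googleapis.com",
    "compute.googleapis.com",
]
PROJECT_ID = "my-gcp-project"

# Build (but don't execute) one gcloud call that enables them all.
command = "gcloud services enable " + " ".join(APIS) + " --project " + PROJECT_ID
print(command)
```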
1. Ensure that the following resources were created:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Updates in June include:
|Date |Update |
|--|--|
+| June 26 | [Streamlined multicloud account onboarding with enhanced settings](#streamlined-multicloud-account-onboarding-with-enhanced-settings) |
| June 25 | [Private Endpoint support for Malware Scanning in Defender for Storage](#private-endpoint-support-for-malware-scanning-in-defender-for-storage) |
-| June 21 | [Recommendation released for preview: Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)](#recommendation-released-for-preview-running-container-images-should-have-vulnerability-findings-resolved-powered-by-microsoft-defender-vulnerability-management) |
| June 15 | [Control updates were made to the NIST 800-53 standards in regulatory compliance](#control-updates-were-made-to-the-nist-800-53-standards-in-regulatory-compliance) |
|June 11 | [Planning of cloud migration with an Azure Migrate business case now includes Defender for Cloud](#planning-of-cloud-migration-with-an-azure-migrate-business-case-now-includes-defender-for-cloud) |
|June 7 | [Express configuration for vulnerability assessments in Defender for SQL is now Generally Available](#express-configuration-for-vulnerability-assessments-in-defender-for-sql-is-now-generally-available) |
|June 6 | [More scopes added to existing Azure DevOps Connectors](#more-scopes-added-to-existing-azure-devops-connectors) |
-|June 5 | [Onboarding directly (without Azure Arc) to Defender for Servers is now Generally Available](#onboarding-directly-without-azure-arc-to-defender-for-servers-is-now-generally-available) |
|June 4 | [Replacing agent-based discovery with agentless discovery for containers capabilities in Defender CSPM](#replacing-agent-based-discovery-with-agentless-discovery-for-containers-capabilities-in-defender-cspm) |
+### Streamlined multicloud account onboarding with enhanced settings
+
+June 26, 2023
+
+Defender for Cloud has improved the onboarding experience to include a new streamlined user interface and instructions, in addition to new capabilities that allow you to onboard your AWS and GCP environments while providing access to advanced onboarding features.
+
+For organizations that have adopted HashiCorp Terraform for automation, Defender for Cloud now includes the ability to use Terraform as the deployment method alongside AWS CloudFormation or GCP Cloud Shell. You can now customize the required role names when creating the integration. You can also select between:
+
+- **Default access** - Allows Defender for Cloud to scan your resources and automatically include future capabilities.
+
+- **Least privileged access** - Grants Defender for Cloud access only to the current permissions needed for the selected plans.
+
+If you select the least privileged permissions, you'll only receive notifications on any new roles and permissions that are required to get full functionality on the connector health.
+
+Defender for Cloud allows you to distinguish between your cloud accounts by their native names from the cloud vendors, such as AWS account aliases and GCP project names.
+ ### Private Endpoint support for Malware Scanning in Defender for Storage

June 25, 2023
-Private Endpoint support is now available as part of the Malware Scanning public preview in Defender for Storage. This capability allows enabling Malware Scanning on storage accounts that are using private endpoints. No additional configuration is needed.
+Private Endpoint support is now available as part of the Malware Scanning public preview in Defender for Storage. This capability allows enabling Malware Scanning on storage accounts that are using private endpoints. No other configuration is needed.
-[Malware Scanning (Preview)](defender-for-storage-malware-scan.md) in Defender for Storage helps protect your storage accounts from malicious content by performing a full malware scan on uploaded content in near real-time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements for handling untrusted content. It is an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale.
+[Malware Scanning (Preview)](defender-for-storage-malware-scan.md) in Defender for Storage helps protect your storage accounts from malicious content by performing a full malware scan on uploaded content in near real-time, using Microsoft Defender Antivirus capabilities. It's designed to help fulfill security and compliance requirements for handling untrusted content. It's an agentless SaaS solution that allows simple setup at scale, with zero maintenance, and supports automating response at scale.
Private endpoints provide secure connectivity to your Azure Storage services, effectively eliminating public internet exposure, and are considered a security best practice.
-For storage accounts with private endpoints that have Malware Scanning already enabled, you will need to disable and [enable the plan with Malware Scanning](https://learn.microsoft.com/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription) for this to work.
+For storage accounts with private endpoints that have Malware Scanning already enabled, you'll need to disable and [enable the plan with Malware Scanning](/azure/storage/common/azure-defender-storage-configure?toc=%2Fazure%2Fdefender-for-cloud%2Ftoc.json&tabs=enable-subscription) for this to work.
-Learn more about using [private endpoints](https://learn.microsoft.com/azure/private-link/private-endpoint-overview) in [Defender for Storage](defender-for-storage-introduction.md) and how to secure your storage services further.
+Learn more about using [private endpoints](/azure/private-link/private-endpoint-overview) in [Defender for Storage](defender-for-storage-introduction.md) and how to secure your storage services further.
### Recommendation released for preview: Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management)
For more information on compliance controls, see [Tutorial: Regulatory complianc
June 11, 2023
-Now you can discover potential cost savings in security by leveraging Defender for Cloud within the context of an [Azure Migrate business case](/azure/migrate/how-to-build-a-business-case).
+Now you can discover potential cost savings in security by applying Defender for Cloud within the context of an [Azure Migrate business case](/azure/migrate/how-to-build-a-business-case).
### Express configuration for vulnerability assessments in Defender for SQL is now Generally Available
defender-for-iot How To Investigate All Enterprise Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-all-enterprise-sensor-detections-in-a-device-inventory.md
To export device inventory data, select the **Import/Export file** :::image type
Save the exported file locally.
+> [!NOTE]
+> In the exported file, date values are based on the region settings for the machine you're using to access the OT sensor. We recommend exporting data only from a machine with the same region settings as the sensor that detected your data. For more information, see [Synchronize time zones on an OT sensor](how-to-manage-individual-sensors.md#synchronize-time-zones-on-an-ot-sensor).
+>
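To illustrate why the region-settings warning in the note matters, here is a minimal Python sketch, using a hypothetical exported value rather than a real sensor export: the same date string names two different calendar days depending on which region convention produced it.

```python
from datetime import datetime

raw = "03/04/2023"                                 # hypothetical exported value
month_first = datetime.strptime(raw, "%m/%d/%Y")   # month/day/year settings -> 4 March
day_first = datetime.strptime(raw, "%d/%m/%Y")     # day/month/year settings -> 3 April
print(month_first.date(), day_first.date())        # two different days, one string
```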
+ ## Add to and enhance device inventory data

Use information from other sources, such as CMDBs, DNS, firewalls, and Web APIs, to enhance the data shown in your device inventory. For example, use enhanced data to present information about the following items:
defender-for-iot How To Investigate Sensor Detections In A Device Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-investigate-sensor-detections-in-a-device-inventory.md
To export device inventory data, on the **Device inventory** page, select **Expo
The device inventory is exported with any filters currently applied, and you can save the file locally.
+> [!NOTE]
+> In the exported file, date values are based on the region settings for the machine you're using to access the OT sensor. We recommend exporting data only from a machine with the same region settings as your sensor. For more information, see [Synchronize time zones on an OT sensor](how-to-manage-individual-sensors.md#synchronize-time-zones-on-an-ot-sensor).
+>
+ ## Merge devices

You may need to merge duplicate devices if the sensor has discovered separate network entities that are associated with a single, unique device.
energy-data-services Troubleshoot Manifest Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/troubleshoot-manifest-ingestion.md
Check the Airflow task logs for `update_status_running_task` or `update_status_f
Sample Kusto query:

```kusto
- AirflowTaskLogs
+ OEPAirFlowTask
| where DagName == "Osdu_ingest"
| where DagTaskName == "update_status_running_task"
| where LogLevel == "ERROR" // ERROR/DEBUG/INFO/WARNING
Check the Airflow task logs for `validate_manifest_schema_task` or `process_mani
Sample Kusto query:

```kusto
- AirflowTaskLogs
+ OEPAirFlowTask
| where DagName has "Osdu_ingest"
| where DagTaskName == "validate_manifest_schema_task" or DagTaskName has "process_manifest_task"
| where LogLevel == "ERROR"
Check the Airflow task logs for `provide_manifest_integrity_task` or `process_ma
Sample Kusto query:

```kusto
- AirflowTaskLogs
+ OEPAirFlowTask
| where DagName has "Osdu_ingest"
| where DagTaskName == "provide_manifest_integrity_task" or DagTaskName has "process_manifest_task"
| where Content has 'Search query "' or Content has 'response ids: ['
Check the Airflow task logs for `process_single_manifest_file_task` or `process_
Sample Kusto query:

```kusto
- AirflowTaskLogs
+ OEPAirFlowTask
| where DagName has "Osdu_ingest"
| where DagTaskName == "process_single_manifest_file_task" or DagTaskName has "process_manifest_task"
| where LogLevel == "ERROR"
expressroute Expressroute Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-routing.md
In addition to the BGP tag for each region, Microsoft also tags prefixes based o
| Azure Active Directory | 12076:5060 |
| Azure Resource Manager | 12076:5070 |
| Other Office 365 Online services** | 12076:5100 |
-| Microsoft Defender for Identity | 12076:5520 |
+| Microsoft Defender for Identity | 12076:5220 |
(1) Azure Global Services includes only Azure DevOps at this time.
frontdoor Tier Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/tier-migration.md
The default Azure Front Door tier selected for migration gets determined by the
**Classic WAF policy with only custom rules** - the new Azure Front Door profile defaults to Standard tier and can be upgraded to Premium during the migration. If you use the portal for migration, Azure creates custom WAF rules for Standard. If you upgrade to Premium during migration, custom WAF rules are created as part of the migration process. You'll need to add managed WAF rules manually after migration if you want to use managed rules.
-**Classic WAF policy with only managed WAF rules, or both managed and custom WAF rules** - the new Azure Front Door profile defaults to Premium tier and can't be downgraded during the migration. If you want to use Standard tier, then you need to remove the WAF policy association or delete the manage WAF rules from the Front Door (classic) WAF policy.
+**Classic WAF policy with only managed WAF rules, or both managed and custom WAF rules** - the new Azure Front Door profile defaults to Premium tier and can't be downgraded during the migration. If you want to use Standard tier, then you need to remove the WAF policy association or delete the managed WAF rules from the Front Door (classic) WAF policy.
> [!NOTE] > To avoid creating duplicate WAF policies during migration, the migration capability provides the option to either create copies or use an existing Azure Front Door Standard or Premium WAF policy.
hdinsight Apache Hadoop Connect Hive Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-hive-power-bi.md
description: Learn how to use Microsoft Power BI to visualize Hive data processe
Previously updated : 05/06/2022 Last updated : 06/26/2023 # Visualize Apache Hive data with Microsoft Power BI using ODBC in Azure HDInsight
hdinsight Hdinsight Sales Insights Etl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-sales-insights-etl.md
description: Learn how to use create ETL pipelines with Azure HDInsight to deriv
Previously updated : 05/13/2022 Last updated : 06/26/2023 # Tutorial: Create an end-to-end data pipeline to derive sales insights in Azure HDInsight
hdinsight Hdinsight Use Oozie Linux Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-use-oozie-linux-mac.md
description: Use Hadoop Oozie in Linux-based HDInsight. Learn how to define an O
Previously updated : 05/09/2022 Last updated : 06/26/2023 # Use Apache Oozie with Apache Hadoop to define and run a workflow on Linux-based Azure HDInsight
hdinsight Tutorial Cli Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/tutorial-cli-rest-proxy.md
Title: 'Tutorial: Create an Apache Kafka REST proxy enabled cluster in HDInsight
description: Learn how to perform Apache Kafka operations using a Kafka REST proxy on Azure HDInsight. Previously updated : 05/13/2022 Last updated : 06/26/2023
hdinsight Apache Spark Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-known-issues.md
description: Learn about issues related to Apache Spark clusters in Azure HDInsi
Previously updated : 05/10/2022 Last updated : 06/26/2023 # Known issues for Apache Spark cluster on HDInsight
hdinsight Safely Manage Jar Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/safely-manage-jar-dependency.md
description: This article discusses best practices for managing Java Archive (JA
Previously updated : 05/13/2022 Last updated : 06/26/2022 # Safely manage jar dependencies
industrial-iot Tutorial Deploy Industrial Iot Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-deploy-industrial-iot-platform.md
The deployment script allows you to select which set of components to deploy.
- App Service Plan (shared with microservices), [App Service](https://azure.microsoft.com/services/app-service/) for hosting the Industrial IoT Engineering Tool cloud application
- Simulation:
  - [Virtual machine](https://azure.microsoft.com/services/virtual-machines/), Virtual network, IoT Edge used for a factory simulation to show the capabilities of the platform and to generate sample telemetry
-- [Azure Kubernetes Service](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-aks.md) should be used to host the cloud microservices
+- [Azure Kubernetes Service](/azure/aks/learn/quick-kubernetes-deploy-cli) should be used to host the cloud microservices
## Deploy Azure IIoT Platform using the deployment script
The deployment script allows you to select which set of components to deploy.
Other hosting and deployment methods:

-- For production deployments that require staging, rollback, scaling, and resilience, the platform can be deployed into [Azure Kubernetes Service (AKS)](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-aks.md)
-- Deploying Azure Industrial IoT Platform microservices into an existing Kubernetes cluster using [Helm](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-helm.md).
+- For production deployments that require staging, rollback, scaling, and resilience, the platform can be deployed into [Azure Kubernetes Service (AKS)](/azure/aks/learn/quick-kubernetes-deploy-cli)
+- Deploying Azure Industrial IoT Platform microservices into an existing Kubernetes cluster using [Helm](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/api-management/how-to-deploy-self-hosted-gateway-kubernetes-helm.md).
- Deploying [Azure Kubernetes Service (AKS) cluster on top of Azure Industrial IoT Platform created by deployment script and adding Azure Industrial IoT components into the cluster](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-add-aks-to-ps1.md).

References:

-- [Deploying Azure Industrial IoT Platform](https://github.com/Azure/Industrial-IoT/tree/main/docs/deploy)
-- [How to deploy all-in-one](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-all-in-one.md)
-- [How to deploy platform into AKS](https://github.com/Azure/Industrial-IoT/blob/main/docs/deploy/howto-deploy-aks.md)
+- [Deploying Azure Industrial IoT Platform](/azure/industrial-iot/tutorial-deploy-industrial-iot-platform)
+- [How to deploy platform into AKS](/azure/aks/learn/quick-kubernetes-deploy-cli)
## Next steps
industrial-iot Tutorial Publisher Configure Opc Publisher https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/industrial-iot/tutorial-publisher-configure-opc-publisher.md
To persist the security configuration of OPC Publisher, the certificate store mu
## Configuration via Configuration File
-The simplest way to configure OPC Publisher is via a configuration file. An example configuration file and format documentation are provided in the file [`publishednodes.json`](https://raw.githubusercontent.com/Azure/Industrial-IoT/main/components/opc-ua/src/Microsoft.Azure.IIoT.OpcUa.Edge.Publisher/tests/Engine/publishednodes.json) in this repository.
+The simplest way to configure OPC Publisher is via a configuration file. An example configuration file and format documentation are provided in the file [`publishednodes.json`](https://raw.githubusercontent.com/Azure/Industrial-IoT/main/src/Azure.IIoT.OpcUa.Publisher/tests/Publisher/publishednodes.json) in this repository.
Configuration file syntax has changed over time. OPC Publisher can still read old formats, but converts them into the latest format when writing the file. A basic configuration file looks like this:
OPC Publisher implements the following [IoT Hub Direct Methods](../iot-hub/iot-h
>[!NOTE] > This feature is only available in version 2.6 and above of OPC Publisher.
-A cloud-based, companion microservice with a REST interface is described and available [here](https://github.com/Azure/Industrial-IoT/blob/main/docs/services/publisher.md). It can be used to configure OPC Publisher via an OpenAPI-compatible interface, for example through Swagger.
+A cloud-based, companion microservice with a REST interface is described and available [here](https://github.com/Azure/Industrial-IoT/tree/main/docs/opc-publisher). It can be used to configure OPC Publisher via an OpenAPI-compatible interface, for example through Swagger.
## Configuration of the simple JSON telemetry format via Separate Configuration File
iot-central Overview Iot Central Solution Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-solution-builder.md
A typical IoT solution:
- Extracts business value from your device data.
- Is composed of multiple services and applications.

When you use IoT Central to create an IoT solution, tasks include:
iot-central Tutorial Industrial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-industrial-end-to-end.md
The IoT Edge deployment manifest defines four custom modules:
- [azuremetricscollector](../../iot-edge/how-to-collect-and-transport-metrics.md?view=iotedge-2020-11&tabs=iotcentral&preserve-view=true) - sends metrics from the IoT Edge device to the IoT Central application.
- [opcplc](https://github.com/Azure-Samples/iot-edge-opc-plc) - generates simulated OPC-UA data.
-- [opcpublisher](https://github.com/Azure/Industrial-IoT/blob/main/docs/modules/publisher.md) - forwards OPC-UA data from an OPC-UA server to the **miabgateway**.
+- [opcpublisher](https://github.com/Azure/Industrial-IoT/tree/main/docs/opc-publisher) - forwards OPC-UA data from an OPC-UA server to the **miabgateway**.
- [miabgateway](https://github.com/iot-for-all/iotc-miab-gateway) - gateway to send OPC-UA data to your IoT Central app and handle commands sent from your IoT Central app.

You can see the deployment manifest in the tool configuration file. The tool assigns the deployment manifest to the IoT Edge device it registers in your IoT Central application.
To learn more about how to use the REST API to deploy and configure the IoT Edge
The [opcplc](https://github.com/Azure-Samples/iot-edge-opc-plc) module on the IoT Edge device generates simulated OPC-UA data for the solution. This module implements an OPC-UA server with multiple nodes that generate random data and anomalies. The module also lets you configure user defined nodes.
-The [opcpublisher](https://github.com/Azure/Industrial-IoT/blob/main/docs/modules/publisher.md) module on the IoT Edge device forwards OPC-UA data from an OPC-UA server to the **miabgateway** module.
+The [opcpublisher](https://github.com/Azure/Industrial-IoT/tree/main/docs/opc-publisher) module on the IoT Edge device forwards OPC-UA data from an OPC-UA server to the **miabgateway** module.
### IoT Central application
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
The core of a machine learning pipeline is to split a complete machine learning
Machine learning operations (MLOps) automates the process of building machine learning models and taking them to production. This is a complex process. It usually requires collaboration from different teams with different skills. A well-defined machine learning pipeline can abstract this complex process into a multiple-step workflow, mapping each step to a specific task such that each team can work independently.
-For example, a typical machine learning project includes the steps of data collection, data preparation, model training, model evaluation, and model deployment. Usually, the data engineers concentrate on data steps, data scientists spend most time on model training and evaluation, the machine learning engineers focus on model deployment and automation of the entire workflow. By leveraging machine learning pipeline, each team only needs to work on building their own steps. The best way of building steps is using [Azure Machine Learning component](concept-component.md), a self-contained piece of code that does one step in a machine learning pipeline. All these steps built by different users are finally integrated into one workflow through the pipeline definition. The pipeline is a collaboration tool for everyone in the project. The process of defining a pipeline and all its steps can be standardized by each company's preferred DevOps practice. The pipeline can be further versioned and automated. If the ML projects are described as a pipeline, then the best MLOps practice is already applied.
+For example, a typical machine learning project includes the steps of data collection, data preparation, model training, model evaluation, and model deployment. Usually, data engineers concentrate on the data steps, data scientists spend most of their time on model training and evaluation, and machine learning engineers focus on model deployment and automation of the entire workflow. By leveraging machine learning pipelines, each team only needs to work on building its own steps. The best way to build steps is to use an [Azure Machine Learning component (v2)](concept-component.md), a self-contained piece of code that does one step in a machine learning pipeline. All these steps built by different users are finally integrated into one workflow through the pipeline definition. The pipeline is a collaboration tool for everyone in the project. The process of defining a pipeline and all its steps can be standardized by each company's preferred DevOps practice. The pipeline can be further versioned and automated. If the ML projects are described as a pipeline, then the best MLOps practice is already applied.
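The steps-as-components idea above can be sketched without any SDK: each function below stands in for a self-contained step owned by a different team, and the pipeline is just their composition into one workflow. This is a hedged, library-free illustration of the concept; the function names and toy data are hypothetical, not the Azure Machine Learning API.

```python
def collect_data():
    return [1.0, 2.0, 3.0, 4.0]                    # data-engineering step

def prepare_data(raw):
    peak = max(raw)
    return [x / peak for x in raw]                 # normalize to [0, 1]

def train_model(features):
    return sum(features) / len(features)           # toy "model": the mean

def evaluate_model(model, features):
    return max(abs(x - model) for x in features)   # worst-case error

def pipeline():
    # The pipeline definition integrates the independently built steps.
    features = prepare_data(collect_data())
    model = train_model(features)
    return evaluate_model(model, features)

print(pipeline())  # 0.375
```

Because each step has a clear input/output contract, any one of them can be re-versioned or reused without touching the others, which is the collaboration benefit the paragraph describes.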
### Training efficiency and cost reduction
Once the teams get familiar with pipelines and want to do more machine learning
Once a team has built a collection of machine learning pipelines and reusable components, they can start to build new machine learning pipelines by cloning previous pipelines or tying existing reusable components together. At this stage, the team's overall productivity improves significantly. Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using the [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with Python, we recommend writing pipelines using the [Azure Machine Learning SDK v2](how-to-create-component-pipeline-python.md). Users who prefer a UI can use the [designer to build pipelines by using registered components](how-to-create-component-pipelines-ui.md).
The Azure cloud provides several types of pipeline, each with a different purpos
Azure Machine Learning pipelines are a powerful facility that begins delivering value in the early development stages.

+ [Define pipelines with the Azure Machine Learning CLI v2](./how-to-create-component-pipelines-cli.md)
+ [Define pipelines with the Azure Machine Learning SDK v2](./how-to-create-component-pipeline-python.md)
+ [Define pipelines with Designer](./how-to-create-component-pipelines-ui.md)
+ Try out the [CLI v2 pipeline example](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components)
+ Try out the [Python SDK v2 pipeline example](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/pipelines)
+ [Create and run machine learning pipelines](v1/how-to-create-machine-learning-pipelines.md)
machine-learning How To Setup Mlops Azureml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-azureml.md
This step deploys the training pipeline to the Azure Machine Learning workspace
1. Select the repository that you cloned in the previous section, `mlops-v2-ado-demo`
-1. Select **Existing Azure Pipeline YAML File**
+1. Select **Existing Azure Pipelines YAML file**
![Screenshot of Azure DevOps Pipeline page on configure step.](./media/how-to-setup-mlops-azureml/ADO-configure-pipelines.png)
This training pipeline contains the following steps:
1. Select the repository that you cloned in the previous section, `mlopsv2`
-1. Select **Existing Azure Pipeline YAML File**
+1. Select **Existing Azure Pipelines YAML file**
![Screenshot of ADO Pipeline page on configure step.](./media/how-to-setup-mlops-azureml/ADO-configure-pipelines.png)
This scenario includes prebuilt workflows for two approaches to deploying a trai
1. Select the repository that you cloned in the previous section, `mlopsv2`
-1. Select **Existing Azure Pipeline YAML File**
+1. Select **Existing Azure Pipelines YAML file**
![Screenshot of Azure DevOps Pipeline page on configure step.](./media/how-to-setup-mlops-azureml/ADO-configure-pipelines.png)
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
All metrics and parameters are also returned when querying runs. However, for me
By default, experiments are ordered descending by `start_time`, which is the time the experiment was queued in Azure Machine Learning. However, you can change this default by using the parameter `order_by`.
-* Order runs by the attribute `start_time`:
+* Order runs by attributes, like `start_time`:
  ```python
  mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
                     order_by=["attributes.start_time DESC"])
  ```
-* Order and retrieve the last run:
+* Order runs and limit results. The following example returns the most recent run in the experiment:
  ```python
  mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
                     max_results=1,
                     order_by=["attributes.start_time DESC"])
  ```
+* Order runs by the attribute `duration`:
+
+ ```python
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ order_by=["attributes.duration DESC"])
+ ```
+
+ > [!TIP]
+ > `attributes.duration` is not present in MLflow OSS, but provided in Azure Machine Learning for convenience.
+ * Order runs by metric values: ```python
By default, experiments are ordered descending by `start_time`, which is the tim
``` > [!WARNING]
- > Using `order_by` with expressions containing `metrics.*` in the parameter `order_by` is not supported by the moment. Please use `order_values` method from Pandas as shown in the next example.
+ > Expressions containing `metrics.*`, `params.*`, or `tags.*` in the parameter `order_by` are not supported at the moment. Instead, use the `sort_values` method from Pandas as shown in the example.
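Because the backend can't order by metric expressions, sort the returned results client-side instead. `search_runs` returns a Pandas DataFrame, where `sort_values` plays this role; the same idea in plain Python, on toy data rather than real MLflow output:

```python
# Toy stand-in for search_runs output: one dict per run, with a metric column.
runs = [
    {"run_id": "a", "metrics.accuracy": 0.91},
    {"run_id": "b", "metrics.accuracy": 0.97},
    {"run_id": "c", "metrics.accuracy": 0.88},
]

# Sort descending by the metric, as sort_values(..., ascending=False) would
# on the DataFrame returned by mlflow.search_runs.
ordered = sorted(runs, key=lambda r: r["metrics.accuracy"], reverse=True)
print([r["run_id"] for r in ordered])  # ['b', 'a', 'c']
```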
### Filtering runs
You can also look for a run with a specific combination in the hyperparameters u
> [!TIP] > Notice that for the key `attributes`, values should always be strings and hence encoded between quotes.
-
+
+* Search runs taking longer than one hour:
+
+ ```python
+ duration = 3600 * 1000 # one hour, in milliseconds
+ mlflow.search_runs(experiment_ids=[ "1234-5678-90AB-CDEFG" ],
+ filter_string=f"attributes.duration > '{duration}'")
+ ```
+
+ > [!TIP]
+ > `attributes.duration` is not present in MLflow OSS, but provided in Azure Machine Learning for convenience.
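Since `attributes.duration` is measured in milliseconds, it's easy to get the unit conversion wrong. A small hypothetical helper makes the conversion explicit; the attribute name and string-quoting convention follow the example above:

```python
# Hypothetical helper (not part of MLflow): build a filter_string for runs
# that ran longer than a given number of hours. attributes.duration is in
# milliseconds, and attribute values are encoded between quotes.
def duration_filter(hours: float) -> str:
    millis = int(hours * 60 * 60 * 1000)
    return f"attributes.duration > '{millis}'"

print(duration_filter(1))  # attributes.duration > '3600000'
```

The result can then be passed as `filter_string` to `mlflow.search_runs`.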
+
* Search runs having the ID in a given set: ```python
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
spec:
storage: 1Gi ``` > [!IMPORTANT]
-> Only the job pods in the same Kubernetes namespace with the PVC(s) will be mounted the volume. Data scientist is able to access the `mount path` specified in the PVC annotation in the job.
+> Only training job pods and batch-deployment pods will have access to the PVC(s); managed and Kubernetes online-deployment pods will not. In addition, the volume is mounted only for pods in the same Kubernetes namespace as the PVC(s). The data scientist can access the `mount path` specified in the PVC annotation in the job.
## Supported Azure Machine Learning taints and tolerations
operator-nexus Quickstarts Kubernetes Cluster Deployment Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-arm.md
+
+ Title: Create an Azure Nexus Kubernetes cluster by using an Azure Resource Manager template (ARM template)
+description: Learn how to create an Azure Nexus Kubernetes cluster by using an Azure Resource Manager template (ARM template).
+++++ Last updated : 05/14/2023++
+# Quickstart: Deploy an Azure Nexus Kubernetes cluster by using an Azure Resource Manager template (ARM template)
+
+* Deploy an Azure Nexus Kubernetes cluster using an Azure Resource Manager template.
+
+This quickstart describes how to use an Azure Resource Manager template (ARM template) to create an Azure Nexus Kubernetes cluster.
+++
+## Prerequisites
++
+## Review the template
+
+Before deploying the Kubernetes template, let's review the content to understand its structure.
++
+Once you have reviewed and saved the template file named ```kubernetes-deploy.json```, proceed to the next section to deploy the template.
+
+## Deploy the template
+
+1. Create a file named ```kubernetes-deploy-parameters.json``` and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
++
+2. Deploy the template.
+
+```azurecli
+ az deployment group create \
+ --resource-group myResourceGroup \
+ --template-file kubernetes-deploy.json \
+ --parameters @kubernetes-deploy-parameters.json
+```
+
+## Review deployed resources
++
+## Connect to the cluster
++
+## Add an agent pool
+The cluster created in the previous step has a single node pool. Let's add a second agent pool using the ARM template. The following example creates an agent pool named ```myNexusAKSCluster-nodepool-2```:
+
+1. Review the template.
+
+Before adding the agent pool template, let's review the content to understand its structure.
++
+Once you have reviewed and saved the template file named ```kubernetes-add-agentpool.json```, proceed to the next section to deploy the template.
+
+1. Create a file named ```kubernetes-nodepool-parameters.json``` and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
++
+2. Deploy the template.
+
+```azurecli
+ az deployment group create \
+ --resource-group myResourceGroup \
+ --template-file kubernetes-add-agentpool.json \
+ --parameters @kubernetes-nodepool-parameters.json
+```
++
+## Clean up resources
++
+## Next steps
++
operator-nexus Quickstarts Kubernetes Cluster Deployment Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-bicep.md
+
+ Title: Quickstart - Create an Azure Nexus Kubernetes cluster by using Bicep
+description: Learn how to create an Azure Nexus Kubernetes cluster by using Bicep.
+++++ Last updated : 05/13/2023++
+# Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep
+
+* Deploy an Azure Nexus Kubernetes cluster using Bicep.
++
+## Prerequisites
++
+## Review the Bicep file
+
+Before deploying the Kubernetes template, let's review the content to understand its structure.
++
+Once you have reviewed and saved the template file named ```kubernetes-deploy.bicep```, proceed to the next section to deploy the template.
+
+## Deploy the Bicep file
+
+1. Create a file named ```kubernetes-deploy-parameters.json``` and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
++
+2. Deploy the template.
+
+```azurecli
+ az deployment group create \
+ --resource-group myResourceGroup \
+ --template-file kubernetes-deploy.bicep \
+ --parameters @kubernetes-deploy-parameters.json
+```
+
+## Review deployed resources
++
+## Connect to the cluster
++
+## Add an agent pool
+The cluster created in the previous step has a single node pool. Let's add a second agent pool using the Bicep template. The following example creates an agent pool named ```myNexusAKSCluster-nodepool-2```:
+
+1. Review the template.
+
+Before adding the agent pool template, let's review the content to understand its structure.
++
+Once you have reviewed and saved the template file named ```kubernetes-add-agentpool.bicep```, proceed to the next section to deploy the template.
+
+1. Create a file named ```kubernetes-nodepool-parameters.json``` and add the required parameters in JSON format. You can use the following example as a starting point. Replace the values with your own.
++
+2. Deploy the template.
+
+```azurecli
+ az deployment group create \
+ --resource-group myResourceGroup \
+ --template-file kubernetes-add-agentpool.bicep \
+ --parameters @kubernetes-nodepool-parameters.json
+```
++
+## Clean up resources
++
+## Next steps
+
operator-nexus Quickstarts Kubernetes Cluster Deployment Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/quickstarts-kubernetes-cluster-deployment-cli.md
+
+ Title: Create an Azure Nexus Kubernetes cluster by using Azure CLI
+description: Learn how to create an Azure Nexus Kubernetes cluster by using Azure CLI.
+++++ Last updated : 05/13/2023++
+# Quickstart: Create an Azure Nexus Kubernetes cluster by using Azure CLI
+
+* Deploy an Azure Nexus Kubernetes cluster using Azure CLI.
+
+## Before you begin
++
+## Create an Azure Nexus Kubernetes cluster
+
+The following example creates a cluster named *myNexusAKSCluster* in resource group *myResourceGroup* in the *eastus* location.
+
+Before you run the commands, you need to set several variables to define the configuration for your cluster. Here are the variables you need to set, along with some default values you can use for certain variables:
+
+| Variable | Description |
+| -- | |
+| LOCATION | The Azure region where you want to create your cluster. |
+| RESOURCE_GROUP | The name of the Azure resource group where you want to create the cluster. |
+| SUBSCRIPTION_ID | The ID of your Azure subscription. |
+| CUSTOM_LOCATION | This argument specifies a custom location of the Nexus instance. |
+| CSN_ARM_ID | CSN ID is the unique identifier for the cloud services network you want to use. |
+| CNI_ARM_ID | CNI ID is the unique identifier for the network interface to be used by the container runtime. |
+| AAD_ADMIN_GROUP_OBJECT_ID | The object ID of the Azure Active Directory group that should have admin privileges on the cluster. |
+| CLUSTER_NAME | The name you want to give to your Nexus Kubernetes cluster. |
+| K8S_VERSION | The version of Kubernetes you want to use. |
+| ADMIN_USERNAME | The username for the cluster administrator. |
+| SSH_PUBLIC_KEY | The SSH public key that is used for secure communication with the cluster. |
+| CONTROL_PLANE_COUNT | The number of control plane nodes for the cluster. |
+| CONTROL_PLANE_VM_SIZE | The size of the virtual machine for the control plane nodes. |
+| INITIAL_AGENT_POOL_NAME | The name of the initial agent pool. |
+| INITIAL_AGENT_POOL_COUNT | The number of nodes in the initial agent pool. |
+| INITIAL_AGENT_POOL_VM_SIZE | The size of the virtual machine for the initial agent pool. |
+| POD_CIDR | The network range for the Kubernetes pods in the cluster, in CIDR notation. |
+| SERVICE_CIDR | The network range for the Kubernetes services in the cluster, in CIDR notation. |
+| DNS_SERVICE_IP | The IP address for the Kubernetes DNS service. |
++
+Once you've defined these variables, you can run the Azure CLI command to create the cluster. Append the ```--debug``` flag if you want more detailed output for troubleshooting purposes.
+
+To define these variables, use the following set commands and replace the example values with your preferred values. You can also use the default values for some of the variables, as shown in the following example:
+
+```bash
+RESOURCE_GROUP="myResourceGroup"
+LOCATION="$(az group show --name $RESOURCE_GROUP --query location | tr -d '\"')"
+SUBSCRIPTION_ID="$(az account show -o tsv --query id)"
+CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
+CSN_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/cloudServicesNetworks/<csn-name>"
+CNI_ARM_ID="/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.NetworkCloud/l3Networks/<l3Network-name>"
+AAD_ADMIN_GROUP_OBJECT_ID="00000000-0000-0000-0000-000000000000"
+CLUSTER_NAME="myNexusAKSCluster"
+K8S_VERSION="v1.24.9"
+ADMIN_USERNAME="azureuser"
+SSH_PUBLIC_KEY="$(cat ~/.ssh/id_rsa.pub)"
+CONTROL_PLANE_COUNT="1"
+CONTROL_PLANE_VM_SIZE="NC_G2_v1"
+INITIAL_AGENT_POOL_NAME="${CLUSTER_NAME}-nodepool-1"
+INITIAL_AGENT_POOL_COUNT="1"
+INITIAL_AGENT_POOL_VM_SIZE="NC_M4_v1"
+POD_CIDR="10.244.0.0/16"
+SERVICE_CIDR="10.96.0.0/16"
+DNS_SERVICE_IP="10.96.0.10"
+```
+> [!NOTE]
+> It is essential that you replace the placeholders for CUSTOM_LOCATION, CSN_ARM_ID, CNI_ARM_ID, and AAD_ADMIN_GROUP_OBJECT_ID with your actual values before running these commands.
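Before running the create command, the network values can be sanity-checked locally with Python's standard `ipaddress` module. This is a sketch using the example values above; substitute your own ranges:

```python
import ipaddress

# Example values from the variable block above.
pod_cidr = "10.244.0.0/16"
service_cidr = "10.96.0.0/16"
dns_service_ip = "10.96.0.10"

pods = ipaddress.ip_network(pod_cidr)
services = ipaddress.ip_network(service_cidr)

# The DNS service IP must fall inside the service CIDR, and the pod and
# service ranges must not overlap.
assert ipaddress.ip_address(dns_service_ip) in services
assert not pods.overlaps(services)
print("network configuration looks consistent")
```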
+
+After defining these variables, you can create the Kubernetes cluster by executing the following Azure CLI command:
+
+```azurecli
+az networkcloud kubernetescluster create \
+--name "${CLUSTER_NAME}" \
+--resource-group "${RESOURCE_GROUP}" \
+--subscription "${SUBSCRIPTION_ID}" \
+--extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \
+--location "${LOCATION}" \
+--kubernetes-version "${K8S_VERSION}" \
+--aad-configuration admin-group-object-ids="[${AAD_ADMIN_GROUP_OBJECT_ID}]" \
+--admin-username "${ADMIN_USERNAME}" \
+--ssh-key-values "${SSH_PUBLIC_KEY}" \
+--control-plane-node-configuration \
+ count="${CONTROL_PLANE_COUNT}" \
+ vm-sku-name="${CONTROL_PLANE_VM_SIZE}" \
+--initial-agent-pool-configurations "[{count:${INITIAL_AGENT_POOL_COUNT},mode:System,name:${INITIAL_AGENT_POOL_NAME},vm-sku-name:${INITIAL_AGENT_POOL_VM_SIZE}}]" \
+--network-configuration \
+ cloud-services-network-id="${CSN_ARM_ID}" \
+ cni-network-id="${CNI_ARM_ID}" \
+ pod-cidrs="[${POD_CIDR}]" \
+ service-cidrs="[${SERVICE_CIDR}]" \
+ dns-service-ip="${DNS_SERVICE_IP}"
+```
+
+After a few minutes, the command completes and returns information about the cluster. For more advanced options, see [Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep](./quickstarts-kubernetes-cluster-deployment-bicep.md).
+
+## Review deployed resources
++
+## Connect to the cluster
++
+## Add an agent pool
+The cluster created in the previous step has a single node pool. Let's add a second agent pool using the ```az networkcloud kubernetescluster agentpool create``` command. The following example creates an agent pool named ```myNexusAKSCluster-nodepool-2```:
+
+You can also use the default values for some of the variables, as shown in the following example:
+
+```bash
+RESOURCE_GROUP="myResourceGroup"
+CUSTOM_LOCATION="/subscriptions/<subscription_id>/resourceGroups/<managed_resource_group>/providers/microsoft.extendedlocation/customlocations/<custom-location-name>"
+CLUSTER_NAME="myNexusAKSCluster"
+AGENT_POOL_NAME="${CLUSTER_NAME}-nodepool-2"
+AGENT_POOL_VM_SIZE="NC_M4_v1"
+AGENT_POOL_COUNT="1"
+AGENT_POOL_MODE="System"
+```
+After defining these variables, you can add an agent pool by executing the following Azure CLI command:
+
+```azurecli
+az networkcloud kubernetescluster agentpool create \
+ --name "${AGENT_POOL_NAME}" \
+ --kubernetes-cluster-name "${CLUSTER_NAME}" \
+ --resource-group "${RESOURCE_GROUP}" \
+ --extended-location name="${CUSTOM_LOCATION}" type=CustomLocation \
+ --count "${AGENT_POOL_COUNT}" \
+ --mode "${AGENT_POOL_MODE}" \
+ --vm-sku-name "${AGENT_POOL_VM_SIZE}"
+```
+
+After a few minutes, the command completes and returns information about the agent pool. For more advanced options, see [Quickstart: Deploy an Azure Nexus Kubernetes cluster using Bicep](./quickstarts-kubernetes-cluster-deployment-bicep.md).
++
+## Clean up resources
++
+## Next steps
+
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md
Title: Scheduled maintenance - Azure Database for PostgreSQL - Flexible server
-description: This article describes the scheduled maintenance feature in Azure Database for PostgreSQL - Flexible server.
+ Title: Scheduled maintenance - Azure Database for PostgreSQL - Flexible Server
+description: This article describes the scheduled maintenance feature in Azure Database for PostgreSQL - Flexible Server.
Last updated 11/30/2021
-# Scheduled maintenance in Azure Database for PostgreSQL – Flexible server
+# Scheduled maintenance in Azure Database for PostgreSQL – Flexible Server
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Azure Database for PostgreSQL - Flexible server performs periodic maintenance to keep your managed database secure, stable, and up-to-date. During maintenance, the server gets new features, updates, and patches.
+Azure Database for PostgreSQL - Flexible Server performs periodic maintenance to keep your managed database secure, stable, and up-to-date. During maintenance, the server gets new features, updates, and patches.
## Selecting a maintenance window
When specifying preferences for the maintenance schedule, you can pick a day of
> > However, in case of a critical emergency update such as a severe vulnerability, the notification window could be shorter than five days or be omitted. The critical update may be applied to your server even if a successful scheduled maintenance was performed in the last 30 days.
-You can update scheduling settings at any time. If there is a maintenance scheduled for your Flexible server and you update scheduling preferences, the current rollout will proceed as scheduled and the scheduling settings change will become effective upon its successful completion for the next scheduled maintenance.
+You can update scheduling settings at any time. If there is maintenance scheduled for your flexible server and you update scheduling preferences, the current rollout proceeds as scheduled, and the new settings become effective upon its successful completion, starting with the next scheduled maintenance.
You can define system-managed schedule or custom schedule for each flexible server in your Azure subscription. * With custom schedule, you can specify your maintenance window for the server by choosing the day of the week and a one-hour time window.
postgresql Concepts Query Performance Insight https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-performance-insight.md
Title: Query Performance Insight - Azure Database for PostgreSQL - Flexible server
-description: This article describes the Query Performance Insight feature in Azure Database for PostgreSQL - Flexible server.
+ Title: Query Performance Insight - Azure Database for PostgreSQL - Flexible Server
+description: This article describes the Query Performance Insight feature in Azure Database for PostgreSQL - Flexible Server.
Last updated 4/1/2023
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Query Performance Insight provides intelligent query analysis for Azure Postgres Flexible server databases. It helps identify the top resource consuming and long-running queries in your workload. This helps you find the queries to optimize to improve overall workload performance and efficiently use the resource that you are paying for. Query Performance Insight helps you spend less time troubleshooting database performance by providing:
+Query Performance Insight provides intelligent query analysis for Azure Database for PostgreSQL - Flexible Server databases. It helps identify the top resource-consuming and long-running queries in your workload. This helps you find the queries to optimize to improve overall workload performance and to efficiently use the resources that you're paying for. Query Performance Insight helps you spend less time troubleshooting database performance by providing:
>[!div class="checklist"] > * Identify your long-running queries, and how they change over time.
postgresql How To Manage Azure Ad Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-azure-ad-users.md
select * from pgaadauth_list_principals(true);
## Create a role using Azure AD principal name ```sql
-select * from pgaadauth_create_principal('mary@contoso.com', false, false);
+select * from pgaadauth_create_principal('<roleName>', <isAdmin>, <isMfa>);
+
+-- For example:
+select * from pgaadauth_create_principal('mary@contoso.com', false, false);
``` **Parameters:**
select * from pgaadauth_create_principal('mary@contoso.com', false, false);
## Create a role using Azure AD object identifier ```sql
-select * from pgaadauth_create_principal_with_oid('accounting_application', '00000000-0000-0000-0000-000000000000', 'service', false, false);
+select * from pgaadauth_create_principal_with_oid('<roleName>', '<objectId>', '<objectType>', <isAdmin>, <isMfa>);
+
+-- For example:
+select * from pgaadauth_create_principal_with_oid('accounting_application', '00000000-0000-0000-0000-000000000000', 'service', false, false);
``` **Parameters:**
select * from pgaadauth_create_principal_with_oid('accounting_application', '000
- *objectId* - Unique object identifier of the Azure AD object: - For **Users**, **Groups** and **Managed Identities** the ObjectId can be found by searching for the object name in Azure AD page in Azure portal. [See this guide as example](/partner-center/find-ids-and-domain-names) - For **Applications**, Objectid of the corresponding **Service Principal** must be used. In Azure portal the required ObjectId can be found on **Enterprise Applications** page.-- *objectType* - Type of the Azure AD object to link to this role.
+- *objectType* - Type of the Azure AD object to link to this role: service, user, group.
- *isAdmin* - Set to **true** if when creating an admin user and **false** for a regular user. Admin user created this way has the same privileges as one created via portal or API. - *isMfa* - Flag if Multi Factor Authentication must be enforced for this role.
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
az postgres flexible-server migration create [--subscription]
[--resource-group] [--name] [--migration-name]
+ [--migration-mode]
[--properties] ```
az postgres flexible-server migration create [--subscription]
|`resource-group` | Resource group of the Flexible Server target. | |`name` | Name of the Flexible Server target. | |`migration-name` | Unique identifier to migrations attempted to Flexible Server. This field accepts only alphanumeric characters and does not accept any special characters, except a hyphen (`-`). The name can't start with `-`, and no two migrations to a Flexible Server target can have the same name. |
+|`migration-mode` | Optional parameter. Default value: Offline. Offline migration involves copying your source databases at a point in time to your target server. |
|`properties` | Absolute path to a JSON file that has the information about the Single Server source. | For example: ```azurecli-interactive
-az postgres flexible-server migration create --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON"
+az postgres flexible-server migration create --subscription 11111111-1111-1111-1111-111111111111 --resource-group my-learning-rg --name myflexibleserver --migration-name migration1 --properties "C:\Users\Administrator\Documents\migrationBody.JSON" --migration-mode offline
``` The `migration-name` argument used in the `create` command will be used in other CLI commands, such as `update`, `delete`, and `show.` In all those commands, it uniquely identifies the migration attempt in the corresponding actions.
The structure of the JSON is:
```bash { "properties": {
- "SourceDBServerResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<src_ rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
+ "sourceDbServerResourceId":"/subscriptions/<subscriptionid>/resourceGroups/<src_ rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
-"SecretParameters": {
- "AdminCredentials":
+"secretParameters": {
+ "adminCredentials":
{
- "SourceServerPassword": "<password>",
- "TargetServerPassword": "<password>"
+ "sourceServerPassword": "<password>",
+ "targetServerPassword": "<password>"
} },
-"DBsToMigrate":
+"dbsToMigrate":
[ "<db1>","<db2>" ],
-"OverwriteDBsInTarget":"true"
+"overwriteDbsInTarget":"true"
}
The `create` parameters that go into the json file format are as shown below:
| Parameter | Type | Description | | - | - | - |
-| `SourceDBServerResourceId` | Required | This parameter is the resource ID of the Single Server source and is mandatory. |
-| `SecretParameters` | Required | This parameter lists passwords for admin users for both the Single Server source and the Flexible Server target. These passwords help to authenticate against the source and target servers.
-| `DBsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. You can include a maximum of eight database names at a time. |
-| `OverwriteDBsinTarget` | Required | When set to true (default), if the target server happens to have an existing database with the same name as the one you're trying to migrate, migration tool automatically overwrites the database. |
+| `sourceDbServerResourceId` | Required | This parameter is the resource ID of the Single Server source and is mandatory. |
+| `secretParameters` | Required | This parameter lists passwords for admin users for both the Single Server source and the Flexible Server target. These passwords help to authenticate against the source and target servers.
+| `dbsToMigrate` | Required | Specify the list of databases that you want to migrate to Flexible Server. You can include a maximum of eight database names at a time. |
+| `overwriteDbsInTarget` | Required | When set to true (default), if the target server happens to have an existing database with the same name as the one you're trying to migrate, migration tool automatically overwrites the database. |
| `SetupLogicalReplicationOnSourceDBIfNeeded` | Optional | You can enable logical replication on the source server automatically by setting this property to `true`. This change in the server settings requires a server restart with a downtime of two to three minutes. | | `SourceDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution for a virtual network. Provide the FQDN of the Single Server source according to the custom DNS server for this property. | | `TargetDBServerFullyQualifiedDomainName` | Optional | Use it when a custom DNS server is used for name resolution inside a virtual network. Provide the FQDN of the Flexible Server target according to the custom DNS server. <br> `SourceDBServerFullyQualifiedDomainName` and `TargetDBServerFullyQualifiedDomainName` are included as a part of the JSON only in the rare scenario that a custom DNS server is used for name resolution instead of Azure-provided DNS. Otherwise, don't include these parameters as a part of the JSON file. |
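Because the parameter keys are camelCase-sensitive, the properties file can also be generated programmatically to avoid casing mistakes. A hypothetical Python sketch (the helper name is illustrative, and the values are placeholders, not real resource IDs or passwords):

```python
import json

# Hypothetical helper: assemble the migration properties JSON described in
# the table above. Only the required parameters are included here.
def migration_body(source_id, dbs, src_pw, tgt_pw, overwrite="true"):
    return {
        "properties": {
            "sourceDbServerResourceId": source_id,
            "secretParameters": {
                "adminCredentials": {
                    "sourceServerPassword": src_pw,
                    "targetServerPassword": tgt_pw,
                }
            },
            "dbsToMigrate": list(dbs),
            "overwriteDbsInTarget": overwrite,
        }
    }

body = migration_body(
    "/subscriptions/<subscriptionid>/resourceGroups/<src_rg_name>/providers/Microsoft.DBforPostgreSQL/servers/<source server name>",
    ["<db1>", "<db2>"], "<password>", "<password>")
print(json.dumps(body, indent=2))
```

Write the output to a file and pass its absolute path as the `properties` argument of `az postgres flexible-server migration create`.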
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
In the SAP workload documentation space, you can find the following areas:
## Change Log
+- Jun 23, 2023: Updated Azure scheduled events for SLES in [Pacemaker set up guide](./high-availability-guide-suse-pacemaker.md#configure-pacemaker-for-azure-scheduled-events).
- June 1, 2023: Included virtual machine scale set with flexible orchestration guidelines in SAP workload [planning guide](./planning-guide.md).
- June 1, 2023: Updated high availability guidelines in [HA architecture and scenarios](./sap-high-availability-architecture-scenarios.md), and added an additional deployment option in [configuring optimal network latency with SAP applications](./proximity-placement-scenarios.md).
- June 1, 2023: Release of [virtual machine scale set with flexible orchestration support for SAP workload](./virtual-machine-scale-set-sap-deployment-guide.md).
sap High Availability Guide Suse Pacemaker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-pacemaker.md
Make sure to assign the custom role to the service principal at all VM (cluster
## Configure Pacemaker for Azure scheduled events
-Azure offers [scheduled events]((../../virtual-machines/linux/scheduled-events.md). Scheduled events are provided via the metadata service and allow time for the application to prepare for such events. Resource agent [azure-events-az](https://github.com/ClusterLabs/resource-agents/pull/1161) monitors for scheduled Azure events. If events are detected and the resource agent determines that another cluster node is available, it sets a cluster health attribute. When the cluster health attribute is set for a node, the location constraint triggers and all resources, whose name doesnΓÇÖt start with ΓÇ£health-ΓÇ£ are migrated away from the node with scheduled event. Once the affected cluster node is free of running cluster resources, scheduled event is acknowledged and can execute its action, such as restart.
+Azure offers [scheduled events](../../virtual-machines/linux/scheduled-events.md). Scheduled events are provided via the metadata service and allow time for the application to prepare for such events. The resource agent [azure-events-az](https://github.com/ClusterLabs/resource-agents/pull/1161) monitors for scheduled Azure events. If events are detected and the resource agent determines that another cluster node is available, it sets a cluster health attribute. When the cluster health attribute is set for a node, the location constraint triggers and all resources whose names don't start with "health-" are migrated away from the node with the scheduled event. Once the affected cluster node is free of running cluster resources, the scheduled event is acknowledged and can execute its action, such as a restart.
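The decision the resource agent makes can be illustrated with a small parser for the scheduled-events payload. This is a conceptual sketch, not the azure-events-az agent's actual code; the `Events`/`Resources` keys follow the Azure scheduled-events metadata API:

```python
import json

# Conceptual sketch: given a scheduled-events payload from the metadata
# service, decide whether this cluster node is affected (and so should have
# its resources migrated away before the event executes).
def node_affected(payload: str, hostname: str) -> bool:
    doc = json.loads(payload)
    return any(hostname in e.get("Resources", []) for e in doc.get("Events", []))

sample = '{"Events": [{"EventType": "Reboot", "Resources": ["node1"]}]}'
print(node_affected(sample, "node1"))  # True
```

In the real agent, a positive result leads to setting the cluster health attribute, after which Pacemaker's location constraint moves resources off the node.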
> [!IMPORTANT] > Previously, this document described the use of resource agent [azure-events](https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/azure-events.in). New resource agent [azure-events-az](https://github.com/ClusterLabs/resource-agents/blob/main/heartbeat/azure-events-az.in) fully supports Azure environments deployed in different availability zones.
Azure offers [scheduled events]((../../virtual-machines/linux/scheduled-events.m
First time query execution for scheduled events [can take up to 2 minutes](../../virtual-machines/linux/scheduled-events.md#enabling-and-disabling-scheduled-events). Pacemaker testing with scheduled events can use reboot or redeploy actions for the cluster VMs. For more information, see [scheduled events](../../virtual-machines/linux/scheduled-events.md) documentation. > [!NOTE]
- >
> After you've configured the Pacemaker resources for the azure-events agent, if you place the cluster in or out of maintenance mode, you might get warning messages such as: >
- > WARNING: cib-bootstrap-options: unknown attribute 'hostName_ **hostname**'
- > WARNING: cib-bootstrap-options: unknown attribute 'azure-events_globalPullState'
- > WARNING: cib-bootstrap-options: unknown attribute 'hostName_ **hostname**'
+ > WARNING: cib-bootstrap-options: unknown attribute 'hostName_ **hostname**'
+ > WARNING: cib-bootstrap-options: unknown attribute 'azure-events_globalPullState'
+ > WARNING: cib-bootstrap-options: unknown attribute 'hostName_ **hostname**'
> These warning messages can be ignored. ## Next steps
sap Sap Hana Availability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-availability-overview.md
These articles provide a good overview of using SAP HANA in Azure:
It's also a good idea to be familiar with these articles about SAP HANA: - [High availability for SAP HANA](https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/6d252db7cdd044d19ad85b46e6c294a4.html)-- [FAQ: High availability for SAP HANA](https://www.sap.com/documents/2016/05/c6f37cb5-737c-0010-82c7-eda71af511fa.html)
+- [FAQ: High availability for SAP HANA](https://help.sap.com/docs/SAP_HANA_PLATFORM/6b94445c94ae495c83a19646e7c3fd56/6d252db7cdd044d19ad85b46e6c294a4.html)
- [Perform system replication for SAP HANA](https://www.sap.com/documents/2017/07/606a676e-c97c-0010-82c7-eda71af511fa.html) - [SAP HANA 2.0 SPS 01 WhatΓÇÖs new: High availability](https://blogs.sap.com/2017/05/15/sap-hana-2.0-sps-01-whats-new-high-availability-by-the-sap-hana-academy/) - [Network recommendations for SAP HANA system replication](https://www.sap.com/documents/2016/06/18079a1c-767c-0010-82c7-eda71af511fa.html)
sentinel Ai Analyst Darktrace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ai-analyst-darktrace.md
Title: "AI Analyst Darktrace connector for Microsoft Sentinel"
description: "Learn how to install the connector AI Analyst Darktrace to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 06/22/2023
The Darktrace connector lets users connect Darktrace Model Breaches in real-time
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog (Darktrace)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Darktrace](https://www.darktrace.com/en/contact/) | ## Query samples
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
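The collector installed above forwards Common Event Format (CEF) messages, which Microsoft Sentinel ingests into the CommonSecurityLog table. As a minimal illustration of the format (not part of the installer), the following sketch splits a CEF header into its seven fields plus the extension string; the sample message is invented, and escaped pipes (`\|`) inside header values are not handled.

```python
def parse_cef(line):
    """Split a CEF message into its header fields and the extension string.

    Simplified sketch: escaped pipes (\\|) in header values are not handled.
    """
    body = line[line.index("CEF:") + len("CEF:"):]
    parts = body.split("|", 7)
    keys = ["Version", "DeviceVendor", "DeviceProduct", "DeviceVersion",
            "DeviceEventClassID", "Name", "Severity"]
    record = dict(zip(keys, parts))
    record["Extensions"] = parts[7] if len(parts) > 7 else ""
    return record

# Invented example message for illustration only.
sample = ("CEF:0|Darktrace|AIAnalyst|5.1|modelbreach|Model Breach|5|"
          "src=10.0.0.5 dst=10.0.0.9 msg=Example only")
print(parse_cef(sample)["DeviceVendor"])  # → Darktrace
```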
2. Forward Common Event Format (CEF) logs to Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
4. Secure your machine
sentinel Apache Http Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/apache-http-server.md
Title: "Apache HTTP Server connector for Microsoft Sentinel"
description: "Learn how to install the connector Apache HTTP Server to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 06/22/2023
The Apache HTTP Server data connector provides the capability to ingest [Apache
| Connector attribute | Description | | | |
-| **Kusto function alias** | ApacheHTTPServer |
-| **Kusto function url** | https://aka.ms/sentinel-apachehttpserver-parser |
| **Log Analytics table(s)** | ApacheHTTPServer_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
ApacheHTTPServer
> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-apachehttpserver-parser) to create the Kusto Functions alias, **ApacheHTTPServer**
+ > This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click Functions, search for the alias ApacheHTTPServer, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/ApacheHTTPServer/Parsers/ApacheHTTPServer.txt). The function usually takes 10-15 minutes to activate after solution installation/update.
1. Install and onboard the agent for Linux or Windows
sentinel Apache Tomcat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/apache-tomcat.md
Title: "Apache Tomcat connector for Microsoft Sentinel"
description: "Learn how to install the connector Apache Tomcat to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 06/22/2023 # Apache Tomcat connector for Microsoft Sentinel
-The Apache Tomcat data connector provides the capability to ingest [Apache Tomcat](http://tomcat.apache.org/) events into Microsoft Sentinel. Refer to [Apache Tomcat documentation](http://tomcat.apache.org/tomcat-10.0-doc/logging.html) for more information.
+The Apache Tomcat solution provides the capability to ingest [Apache Tomcat](http://tomcat.apache.org/) events into Microsoft Sentinel. Refer to [Apache Tomcat documentation](http://tomcat.apache.org/tomcat-10.0-doc/logging.html) for more information.
## Connector attributes | Connector attribute | Description | | | |
-| **Kusto function alias** | TomcatEvent |
-| **Kusto function url** | https://aka.ms/sentinel-ApacheTomcat-parser |
| **Log Analytics table(s)** | Tomcat_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com) |
TomcatEvent
## Vendor installation instructions
-> [!NOTE]
- > This data connector depends on a parser based on a Kusto Function to work as expected. [Follow these steps](https://aka.ms/sentinel-ApacheTomcat-parser) to create the Kusto Functions alias, **TomcatEvent**
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click Functions, search for the alias TomcatEvent, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Tomcat/Parsers/TomcatEvent.txt). The function usually takes 10-15 minutes to activate after solution installation/update.
> [!NOTE]
sentinel Aruba Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/aruba-clearpass.md
Title: "Aruba ClearPass connector for Microsoft Sentinel"
description: "Learn how to install the connector Aruba ClearPass to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 06/22/2023
The [Aruba ClearPass](https://www.arubanetworks.com/products/security/network-ac
| Connector attribute | Description | | | |
-| **Kusto function alias** | ArubaClearPass |
-| **Kusto function url** | https://aka.ms/sentinel-arubaclearpass-parser |
| **Log Analytics table(s)** | CommonSecurityLog (ArubaClearPass)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | ## Query samples
ArubaClearPass
## Vendor installation instructions
->This data connector depends on a parser based on a Kusto Function to work as expected. [Follow the steps](https://aka.ms/sentinel-arubaclearpass-parser) to use the Kusto function alias, **ArubaClearPass**
+**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click Functions, search for the alias ArubaClearPass, and load the function code, or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Aruba%20ClearPass/Parsers/ArubaClearPass.txt). The function usually takes 10-15 minutes to activate after solution installation/update.
1. Linux Syslog agent configuration
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
2. Forward Aruba ClearPass logs to a Syslog agent
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
4. Secure your machine
-Make sure to configure the machine's security according to your organization’s security policy
+Make sure to configure the machine's security according to your organization's security policy
[Learn more >](https://aka.ms/SecureCEF)
sentinel Awake Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/awake-security.md
Title: "Awake Security connector for Microsoft Sentinel"
description: "Learn how to install the connector Awake Security to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 06/22/2023
The Awake Security CEF connector allows users to send detection model matches fr
| Connector attribute | Description | | | | | **Log Analytics table(s)** | CommonSecurityLog (AwakeSecurity)<br/> |
-| **Data collection rules support** | [Workspace transform DCR](../../azure-monitor/logs/tutorial-workspace-transformations-portal.md) |
+| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |
| **Supported by** | [Arista - Awake Security](https://awakesecurity.com/) | ## Query samples
Install the Microsoft Monitoring Agent on your Linux machine and configure the m
Run the following command to install and apply the CEF collector:
- sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py python cef_installer.py {0} {1}
+ `sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py {0} {1}`
2. Forward Awake Adversarial Model match results to a CEF collector.
If the logs are not received, run the following connectivity validation script:
Run the following command to validate your connectivity:
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py python cef_troubleshoot.py {0}
+ `sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py {0}`
4. Secure your machine
sentinel Azure Ddos Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/azure-ddos-protection.md
Title: "Azure DDoS Protection connector for Microsoft Sentinel"
description: "Learn how to install the connector Azure DDoS Protection to connect your data source to Microsoft Sentinel." Previously updated : 06/06/2023 Last updated : 06/22/2023 # Azure DDoS Protection connector for Microsoft Sentinel
-Connect to Azure DDoS Protection logs via Public IP Address Diagnostic Logs. In addition to the core DDoS protection in the platform, Azure DDoS Protection provides advanced DDoS mitigation capabilities against network attacks. It's automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219760&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
+Connect to Azure DDoS Protection Standard logs via Public IP Address Diagnostic Logs. In addition to the core DDoS protection in the platform, Azure DDoS Protection Standard provides advanced DDoS mitigation capabilities against network attacks. It's automatically tuned to protect your specific Azure resources. Protection is simple to enable during the creation of new virtual networks. It can also be done after creation and requires no application or resource changes. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2219760&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci).
## Connector attributes
sentinel Box Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/box-using-azure-function.md
Title: "Box (using Azure Functions) connector for Microsoft Sentinel"
description: "Learn how to install the connector Box (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 06/22/2023
sentinel Cyberark Enterprise Password Vault Epv Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cyberark-enterprise-password-vault-epv-events.md
Title: "CyberArk Enterprise Password Vault (EPV) Events connector for Microsoft
description: "Learn how to install the connector CyberArk Enterprise Password Vault (EPV) Events to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 06/22/2023 # CyberArk Enterprise Password Vault (EPV) Events connector for Microsoft Sentinel
-CyberArk Enterprise Password Vault generates an xml Syslog message for every action taken against the Vault. The EPV will send the xml messages through the Sentinel.xsl translator to be converted into CEF standard format and sent to a syslog staging server of your choice (syslog-ng, rsyslog). The Log Analytics agent installed on your syslog staging server will import the messages into Microsoft Log Analytics. Refer to the [CyberArk documentation](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) for more guidance on SIEM integrations.
+CyberArk Enterprise Password Vault generates an xml Syslog message for every action taken against the Vault. The EPV will send the xml messages through the Microsoft Sentinel.xsl translator to be converted into CEF standard format and sent to a syslog staging server of your choice (syslog-ng, rsyslog). The Log Analytics agent installed on your syslog staging server will import the messages into Microsoft Log Analytics. Refer to the [CyberArk documentation](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) for more guidance on SIEM integrations.
## Connector attributes
sentinel Digital Shadows Searchlight Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/digital-shadows-searchlight-using-azure-function.md
Title: "Digital Shadows Searchlight (using Azure Functions) connector for Micros
description: "Learn how to install the connector Digital Shadows Searchlight (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 06/22/2023
The Digital Shadows data connector provides ingestion of the incidents and alert
| Connector attribute | Description | | | |
-| **Application settings** | apiUsername<br/>apipassword<br/>apiToken<br/>workspaceID<br/>workspaceKey<br/>uri<br/>logAnalyticsUri (optional)(add any other settings required by the Function App)Set the <code>uri</code> value to: <code>&lt;add uri value&gt;</code> |
-| **Azure functions app code** | Add%20GitHub%20link%20to%20Function%20App%20code |
+| **Application settings** | DigitalShadowsAccountID<br/>WorkspaceID<br/>WorkspaceKey<br/>DigitalShadowsKey<br/>DigitalShadowsSecret<br/>HistoricalDays<br/>DigitalShadowsURL<br/>ClassificationFilterOperation<br/>HighVariabilityClassifications<br/>FUNCTION_NAME<br/>logAnalyticsUri (optional) (add any other settings required by the Function App). Set the <code>DigitalShadowsURL</code> value to: <code>https://api.searchlight.app/v1</code>. Set the <code>HighVariabilityClassifications</code> value to: <code>exposed-credential,marked-document</code>. Set the <code>ClassificationFilterOperation</code> value to: <code>exclude</code> for the exclude function app or <code>include</code> for the include function app |
+| **Azure function app code** | https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Digital%20Shadows/Data%20Connectors/Digital%20Shadows/digitalshadowsConnector.zip |
| **Log Analytics table(s)** | DigitalShadows_CL<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Digital Shadows](https://www.digitalshadows.com/) |
DigitalShadows_CL
To integrate with Digital Shadows Searchlight (using Azure Functions) make sure you have: -- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
- **REST API Credentials/permissions**: **Digital Shadows account ID, secret and key** is required. See the documentation to learn more about API on the `https://portal-digitalshadows.com/learn/searchlight-api/overview/description`.
Use this method for automated deployment of the 'Digital Shadows Searchlight' co
1. Create a Function App 1. From the Azure Portal, navigate to [Function App](https://portal.azure.com/#blade/HubsExtension/BrowseResource/resourceType/Microsoft.Web%2Fsites/kind/functionapp).
-2. Click **+ Add** at the top.
-3. In the **Basics** tab, ensure Runtime stack is set to **'Add Required Language'**.
-4. In the **Hosting** tab, ensure **Plan type** is set to **'Add Plan Type'**.
-5. 'Add other required configurations'.
+2. Click **+ Create** at the top.
+3. In the **Basics** tab, ensure Runtime stack is set to **python 3.8**.
+4. In the **Hosting** tab, ensure **Plan type** is set to **'Consumption (Serverless)'**.
+5. Select a Storage account.
+6. 'Add other required configurations'.
7. 'Make other preferable configuration changes', if needed, then click **Create**.
-2. Import Function App Code
+2. Import Function App Code(Zip deployment)
-1. In the newly created Function App, select **Functions** from the navigation menu and click **+ Add**.
-2. Select **Timer Trigger**.
-3. Enter a unique Function **Name** in the New Function field and leave the default cron schedule of every 5 minutes, then click **Create Function**.
-4. Click on the function name and click **Code + Test** from the left pane.
-5. Copy the Function App Code and paste into the Function App `run.ps1` editor.
-6. Click **Save**.
+1. Install Azure CLI
+2. From the terminal, type `az functionapp deployment source config-zip -g <ResourceGroup> -n <FunctionApp> --src <Zip File>` and press Enter. Set the `ResourceGroup` value to: your resource group name. Set the `FunctionApp` value to: your newly created function app name. Set the `Zip File` value to: `digitalshadowsConnector.zip` (path to your zip file). Note: Download the zip file from the link - [Function App Code](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/Digital%20Shadows/Data%20Connectors/Digital%20Shadows/digitalshadowsConnector.zip)
3. Configure the Function App 1. In the Function App screen, click the Function App name and select **Configuration**. 2. In the **Application settings** tab, select **+ New application setting**. 3. Add each of the following 'x (number of)' application settings individually, under Name, with their respective string values (case-sensitive) under Value:
- apiUsername
- apipassword
- apiToken
- workspaceID
- workspaceKey
- uri
+ DigitalShadowsAccountID
+ WorkspaceID
+ WorkspaceKey
+ DigitalShadowsKey
+ DigitalShadowsSecret
+ HistoricalDays
+ DigitalShadowsURL
+ ClassificationFilterOperation
+ HighVariabilityClassifications
+ FUNCTION_NAME
logAnalyticsUri (optional) (add any other settings required by the Function App)
-Set the `uri` value to: `<add uri value>`
+Set the `DigitalShadowsURL` value to: `https://api.searchlight.app/v1`
+Set the `HighVariabilityClassifications` value to: `exposed-credential,marked-document`
+Set the `ClassificationFilterOperation` value to: `exclude` for exclude function app or `include` for include function app
>Note: If using Azure Key Vault secrets for any of the values above, use the`@Microsoft.KeyVault(SecretUri={Security Identifier})`schema in place of the string values. Refer to [Azure Key Vault references documentation](/azure/app-service/app-service-key-vault-references) for further details.
+ - Use logAnalyticsUri to override the log analytics API endpoint for a dedicated cloud. For example, for public cloud, leave the value empty; for the Azure GovUS cloud environment, specify the value in the following format: https://CustomerId.ods.opinsights.azure.us.
4. Once all application settings have been entered, click **Save**.
sentinel Forescout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/forescout.md
Title: "Forescout connector for Microsoft Sentinel"
description: "Learn how to install the connector Forescout to connect your data source to Microsoft Sentinel." Previously updated : 03/25/2023 Last updated : 06/22/2023
Install the agent on the Server where the Forescout logs are forwarded.
-2. Configure Forescout event forwarding
+2. Configure the logs to be collected
+
+Configure the facilities you want to collect and their severities.
+ 1. Under workspace advanced settings **Configuration**, select **Data** and then **Syslog**.
+ 2. Select **Apply below configuration to my machines** and select the facilities and severities.
+ 3. Click **Save**.
++
+3. Configure Forescout event forwarding
Follow the configuration steps below to get Forescout logs into Microsoft Sentinel. 1. [Select an Appliance to Configure.](https://docs.forescout.com/bundle/syslog-3-6-1-h/page/syslog-3-6-1-h.Select-an-Appliance-to-Configure.html)
sentinel Google Workspace G Suite Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-workspace-g-suite-using-azure-function.md
Title: "Google Workspace (G Suite) (using Azure Function) connector for Microsoft Sentinel"
-description: "Learn how to install the connector Google Workspace (G Suite) (using Azure Function) to connect your data source to Microsoft Sentinel."
+ Title: "Google Workspace (G Suite) (using Azure Functions) connector for Microsoft Sentinel"
+description: "Learn how to install the connector Google Workspace (G Suite) (using Azure Functions) to connect your data source to Microsoft Sentinel."
Previously updated : 03/25/2023 Last updated : 06/22/2023
-# Google Workspace (G Suite) (using Azure Function) connector for Microsoft Sentinel
+# Google Workspace (G Suite) (using Azure Functions) connector for Microsoft Sentinel
The [Google Workspace](https://workspace.google.com/) data connector provides the capability to ingest Google Workspace Activity events into Microsoft Sentinel through the REST API. The connector provides ability to get [events](https://developers.google.com/admin-sdk/reports/v1/reference/activities) which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems, track who signs in and when, analyze administrator activity, understand how users create and share content, and more review events in your org.
GWorkspace_ReportsAPI_user_accounts_CL
## Prerequisites
-To integrate with Google Workspace (G Suite) (using Azure Function) make sure you have:
+To integrate with Google Workspace (G Suite) (using Azure Functions) make sure you have:
-- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App is required. [See the documentation to learn more about Azure Functions](/azure/azure-functions).
+- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/).
- **REST API Credentials/permissions**: **GooglePickleString** is required for REST API. [See the documentation to learn more about API](https://developers.google.com/admin-sdk/reports/v1/reference/activities). Please find the instructions to obtain the credentials in the configuration section below. You can check all [requirements and follow the instructions](https://developers.google.com/admin-sdk/reports/v1/quickstart/python) from here as well.
To integrate with Google Workspace (G Suite) (using Azure Function) make sure yo
> [!NOTE]
- > This connector uses Azure Functions to connect to the Google Reports API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details
+ > This connector uses Azure Functions to connect to the Google Reports API to pull its logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details.
>**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App.
Use the following step-by-step instructions to deploy the Google Workspace data
**1. Deploy a Function App**
-> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python) for Azure function development.
+> **NOTE:** You will need to [prepare VS code](/azure/azure-functions/functions-create-first-function-python#prerequisites) for Azure function development.
1. Download the [Azure Function App](https://aka.ms/sentinel-GWorkspaceReportsAPI-functionapp) file. Extract archive to your local development computer. 2. Start VS Code. Choose File in the main menu and select Open Folder.
If you're already signed in, go to the next step.
WorkspaceID WorkspaceKey logAnalyticsUri (optional)
-4. Once all application settings have been entered, click **Save**.
+4. (Optional) Change the default delays if required.
+
+ > **NOTE:** The following default values for ingestion delays have been added for different set of logs from Google Workspace based on Google [documentation](https://support.google.com/a/answer/7061566). These can be modified based on environmental requirements.
+ Fetch Delay - 10 minutes
+ Calendar Fetch Delay - 6 hours
+ Chat Fetch Delay - 1 day
+ User Accounts Fetch Delay - 3 hours
+ Login Fetch Delay - 6 hours
+
+5. Use logAnalyticsUri to override the log analytics API endpoint for dedicated cloud. For example, for public cloud, leave the value empty; for Azure GovUS cloud environment, specify the value in the following format: `https://<CustomerId>.ods.opinsights.azure.us`.
+6. Once all application settings have been entered, click **Save**.
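Several of the connectors above accept an optional logAnalyticsUri setting to redirect ingestion for dedicated clouds, in the format `https://<CustomerId>.ods.opinsights.azure.us` for Azure GovUS. The hypothetical helper below sketches how such an endpoint is formed from the workspace (customer) ID; the public-cloud suffix `ods.opinsights.azure.com` is an assumption based on the standard Log Analytics ingestion host, and the connectors themselves expect the setting to be left empty for public cloud.

```python
# Hypothetical helper, not part of any connector: build the Log Analytics
# ingestion endpoint for a workspace ID in a given cloud.
ODS_SUFFIX = {
    "public": "ods.opinsights.azure.com",  # assumed default public-cloud host
    "govus": "ods.opinsights.azure.us",    # from the connector instructions
}

def log_analytics_uri(customer_id: str, cloud: str = "public") -> str:
    return f"https://{customer_id}.{ODS_SUFFIX[cloud]}"

print(log_analytics_uri("00000000-0000-0000-0000-000000000000", "govus"))
# → https://00000000-0000-0000-0000-000000000000.ods.opinsights.azure.us
```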
sentinel Nxlog Aix Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-aix-audit.md
Title: "NXLog AIX Audit connector for Microsoft Sentinel"
description: "Learn how to install the connector NXLog AIX Audit to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 06/22/2023 # NXLog AIX Audit connector for Microsoft Sentinel
-The NXLog [AIX Audit](https://nxlog.co/documentation/nxlog-user-guide/im_aixaudit.html) data connector uses the AIX Audit subsystem to read events directly from the kernel for capturing audit events on the AIX platform. This REST API connector can efficiently export AIX Audit events to Microsoft Sentinel in real time.
+The [NXLog AIX Audit](https://docs.nxlog.co/refman/current/im/aixaudit.html) data connector uses the AIX Audit subsystem to read events directly from the kernel for capturing audit events on the AIX platform. This REST API connector can efficiently export AIX Audit events to Microsoft Sentinel in real time.
## Connector attributes
The NXLog [AIX Audit](https://nxlog.co/documentation/nxlog-user-guide/im_aixaudi
| | | | **Log Analytics table(s)** | AIX_Audit_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) |
+| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
## Query samples
NXLog_parsed_AIX_Audit_view
> This data connector depends on a parser based on a Kusto Function, [**NXLog_parsed_AIX_Audit_view**](https://aka.ms/sentinel-nxlogaixaudit-parser), which is deployed with the Microsoft Sentinel Solution.
-Follow the step-by-step instructions in the *NXLog User Guide* Integration Topic [Microsoft Microsoft Sentinel](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) to configure this connector.
+Follow the step-by-step instructions in the *NXLog User Guide* Integration Guide [Microsoft Sentinel](https://docs.nxlog.co/userguide/integrate/microsoft-azure-sentinel.html) to configure this connector.
sentinel Nxlog Bsm Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-bsm-macos.md
Title: "NXLog BSM macOS connector for Microsoft Sentinel"
description: "Learn how to install the connector NXLog BSM macOS to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 06/22/2023 # NXLog BSM macOS connector for Microsoft Sentinel
-The NXLog [BSM](https://nxlog.co/documentation/nxlog-user-guide/im_bsm.html) macOS data connector uses Sun's Basic Security Module (BSM) Auditing API to read events directly from the kernel for capturing audit events on the macOS platform. This REST API connector can efficiently export macOS audit events to Azure Sentinel in real-time.
+The [NXLog BSM](https://docs.nxlog.co/refman/current/im/bsm.html) macOS data connector uses Sun's Basic Security Module (BSM) Auditing API to read events directly from the kernel for capturing audit events on the macOS platform. This REST API connector can efficiently export macOS audit events to Microsoft Sentinel in real-time.
## Connector attributes
The NXLog [BSM](https://nxlog.co/documentation/nxlog-user-guide/im_bsm.html) mac
| | | | **Log Analytics table(s)** | BSMmacOS_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) |
+| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
+ ## Query samples
BSMmacOS_CL
## Vendor installation instructions
-Follow the step-by-step instructions in the *NXLog User Guide* Integration Topic [Microsoft Azure Sentinel](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) to configure this connector.
+Follow the step-by-step instructions in the *NXLog User Guide* Integration Topic [Microsoft Sentinel](https://docs.nxlog.co/userguide/integrate/microsoft-azure-sentinel.html) to configure this connector.
sentinel Nxlog Linuxaudit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/nxlog-linuxaudit.md
Title: "NXLog LinuxAudit connector for Microsoft Sentinel"
description: "Learn how to install the connector NXLog LinuxAudit to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 06/22/2023 # NXLog LinuxAudit connector for Microsoft Sentinel
-The NXLog [LinuxAudit](https://nxlog.co/documentation/nxlog-user-guide/im_linuxaudit.html) data connector supports custom audit rules and collects logs without auditd or any other user-space software. IP addresses and group/user ids are resolved to their respective names making [Linux audit](https://nxlog.co/documentation/nxlog-user-guide/linux-audit.html) logs more intelligible to security analysts. This REST API connector can efficiently export Linux security events to Azure Sentinel in real-time.
+The [NXLog LinuxAudit](https://docs.nxlog.co/refman/current/im/linuxaudit.html) data connector supports custom audit rules and collects logs without auditd or any other user-space software. IP addresses and group/user IDs are resolved to their respective names making [Linux audit](https://docs.nxlog.co/userguide/integrate/linux-audit.html) logs more intelligible to security analysts. This REST API connector can efficiently export Linux security events to Microsoft Sentinel in real-time.
## Connector attributes
The NXLog [LinuxAudit](https://nxlog.co/documentation/nxlog-user-guide/im_linuxa
| | | | **Log Analytics table(s)** | LinuxAudit_CL<br/> | | **Data collection rules support** | Not currently supported |
-| **Supported by** | [NXLog](https://nxlog.co/community-forum/t/819-support-tickets) |
+| **Supported by** | [NXLog](https://nxlog.co/support-tickets/add/support-ticket) |
## Query samples
LinuxAudit_CL
## Vendor installation instructions
-Follow the step-by-step instructions in the *NXLog User Guide* Integration Topic [Microsoft Azure Sentinel](https://nxlog.co/documentation/nxlog-user-guide/sentinel.html) to configure this connector.
+Follow the step-by-step instructions in the *NXLog User Guide* Integration Topic [Microsoft Sentinel](https://docs.nxlog.co/userguide/integrate/microsoft-azure-sentinel.html) to configure this connector.
sentinel Qualys Vm Knowledgebase Using Azure Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/qualys-vm-knowledgebase-using-azure-function.md
Title: "Qualys VM KnowledgeBase (using Azure Function) connector for Microsoft S
description: "Learn how to install the connector Qualys VM KnowledgeBase (using Azure Function) to connect your data source to Microsoft Sentinel." Previously updated : 03/25/2023 Last updated : 06/22/2023 # Qualys VM KnowledgeBase (using Azure Function) connector for Microsoft Sentinel
-The [Qualys Vulnerability Management (VM)](https://www.qualys.com/apps/vulnerability-management/) KnowledgeBase (KB) connector provides the capability to ingest the latest vulnerability data from the Qualys KB into Azure Sentinel.
+The [Qualys Vulnerability Management (VM)](https://www.qualys.com/apps/vulnerability-management/) KnowledgeBase (KB) connector provides the capability to ingest the latest vulnerability data from the Qualys KB into Microsoft Sentinel.
- This data can used to correlate and enrich vulnerability detections found by the [Qualys Vulnerability Management (VM)](https://docs.microsoft.com/azure/sentinel/connect-qualys-vm) data connector.
+ This data can be used to correlate and enrich vulnerability detections found by the [Qualys Vulnerability Management (VM)](/azure/sentinel/data-connectors-reference#qualys) data connector.
## Connector attributes
sentinel Rsa Securid Authentication Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/rsa-securid-authentication-manager.md
Title: "RSA® SecurID (Authentication Manager) connector for Microsoft Sentinel"
description: "Learn how to install the connector RSA® SecurID (Authentication Manager) to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 06/22/2023 # RSA® SecurID (Authentication Manager) connector for Microsoft Sentinel
-The [RSA® SecurID Authentication Manager](https://www.securid.com/) data connector provides the capability to ingest [RSA® SecurID Authentication Manager events](https://community.rsa.com/t5/rsa-authentication-manager/log-messages/ta-p/571404) into Microsoft Sentinel. Refer to [RSA® SecurID Authentication Manager documentation](https://community.rsa.com/t5/rsa-authentication-manager/getting-started-with-rsa-authentication-manager/ta-p/569582) for more information.
+The [RSA® SecurID Authentication Manager](https://www.securid.com/) data connector provides the capability to ingest [RSA® SecurID Authentication Manager events](https://community.rsa.com/t5/rsa-authentication-manager/rsa-authentication-manager-log-messages/ta-p/630160) into Microsoft Sentinel. Refer to [RSA® SecurID Authentication Manager documentation](https://community.rsa.com/t5/rsa-authentication-manager/getting-started-with-rsa-authentication-manager/ta-p/569582) for more information.
## Connector attributes
sentinel Sophos Cloud Optix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/sophos-cloud-optix.md
Title: "Sophos Cloud Optix connector for Microsoft Sentinel"
description: "Learn how to install the connector Sophos Cloud Optix to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 06/22/2023 # Sophos Cloud Optix connector for Microsoft Sentinel
-The [Sophos Cloud Optix](https://www.sophos.com/products/cloud-optix.aspx) connector allows you to easily connect your Sophos Cloud Optix logs with Azure Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's cloud security and compliance posture and improves your cloud security operation capabilities.
+The [Sophos Cloud Optix](https://www.sophos.com/products/cloud-optix.aspx) connector allows you to easily connect your Sophos Cloud Optix logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's cloud security and compliance posture and improves your cloud security operation capabilities.
## Connector attributes
Copy the Workspace ID and Primary Key for your workspace.
2. Configure the Sophos Cloud Optix Integration
-In Sophos Cloud Optix go to [Settings->Integrations->Azure Sentinel](https://optix.sophos.com/#/integrations/sentinel) and enter the Workspace ID and Primary Key copied in Step 1.
+In Sophos Cloud Optix go to [Settings->Integrations->Microsoft Sentinel](https://optix.sophos.com/#/integrations/sentinel) and enter the Workspace ID and Primary Key copied in Step 1.
3. Select Alert Levels
-In Alert Levels, select which Sophos Cloud Optix alerts you want to send to Azure Sentinel.
+In Alert Levels, select which Sophos Cloud Optix alerts you want to send to Microsoft Sentinel.
4. Turn on the integration
sentinel Threat Intelligence Upload Indicators Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/threat-intelligence-upload-indicators-api.md
Title: "Threat Intelligence Upload Indicators API (Preview) connector for Micros
description: "Learn how to install the connector Threat Intelligence Upload Indicators API (Preview) to connect your data source to Microsoft Sentinel." Previously updated : 05/31/2023 Last updated : 06/22/2023
ThreatIntelligenceIndicator
| sort by TimeGenerated desc ``` ++
+## Vendor installation instructions
+
+You can connect your threat intelligence data sources to Microsoft Sentinel by either:
++
+- Using an integrated Threat Intelligence Platform (TIP), such as Threat Connect, Palo Alto Networks MineMeld, MISP, and others.
+
+- Calling the Microsoft Sentinel data plane API directly from another application.
+
+Follow these steps to connect your threat intelligence:
+
+1. Get an AAD access token
+
+To send requests to the APIs, you need to acquire an Azure Active Directory access token. You can follow the instructions in [Get Azure AD tokens for users by using MSAL](/azure/databricks/dev-tools/api/latest/aad/app-aad-token#get-an-azure-ad-access-token).
+ - Note: Request the AAD access token with the appropriate scope value.
++
+2. Send indicators by calling the Upload Indicators API. For more information about the API, see the [Upload Indicators API reference](/azure/sentinel/upload-indicators-api).
+
+```http
+
+HTTP method: POST
+
+Endpoint: https://sentinelus.azure-api.net/workspaces/[WorkspaceID]/threatintelligenceindicators:upload?api-version=2022-07-01
+
+WorkspaceID: the workspace that the indicators are uploaded to.
++
+Header Value 1: "Authorization" = "Bearer [AAD Access Token from step 1]"
++
+Header Value 2: "Content-Type" = "application/json"
+
+Body: The body is a JSON object containing an array of indicators in STIX format.
+```
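The POST request described above can be sketched with the Python standard library. This is a hedged sketch, not a definitive client: the workspace ID and token are placeholders you must supply, and the exact body wrapper around the STIX indicator array is an assumption to be checked against the Upload Indicators API reference.

```python
# Hypothetical sketch of building the Upload Indicators API request from
# the endpoint, headers, and body described in the steps above.
import json
import urllib.request

API_VERSION = "2022-07-01"  # api-version from the endpoint above

def build_upload_request(workspace_id: str, access_token: str,
                         indicators: list) -> urllib.request.Request:
    """Build the POST request: bearer auth, JSON body of STIX indicators."""
    url = (
        f"https://sentinelus.azure-api.net/workspaces/{workspace_id}"
        f"/threatintelligenceindicators:upload?api-version={API_VERSION}"
    )
    # Body wrapper keys here are an assumption; the API docs are authoritative.
    body = json.dumps({"value": indicators}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {access_token}",  # header value 1
            "Content-Type": "application/json",         # header value 2
        },
        method="POST",
    )

# Sending is then a single call (requires network access and a valid token):
# with urllib.request.urlopen(build_upload_request(ws_id, token, stix)) as r:
#     print(r.status)
```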
++

## Next steps

For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-threatintelligence-taxii?tab=Overview) in the Azure Marketplace.
sentinel Collect Sap Hana Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/collect-sap-hana-audit-logs.md
description: This article explains how to collect audit logs from your SAP HANA
Previously updated : 03/02/2022 Last updated : 05/24/2023 # Collect SAP HANA audit logs in Microsoft Sentinel
sentinel Reference Kickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-kickstart.md
description: Description of command line options available with kickstart deploy
Previously updated : 03/02/2022 Last updated : 05/24/2023 # Kickstart script reference
sentinel Reference Systemconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-systemconfig.md
Last updated 06/03/2023 - # Systemconfig.ini file reference
sentinel Reference Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/reference-update.md
description: Description of command line options available with update deploymen
Previously updated : 03/02/2022 Last updated : 05/24/2023 # Update script reference
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/sap-solution-log-reference.md
Previously updated : 02/22/2022 Last updated : 05/24/2023 # Microsoft Sentinel solution for SAP® applications data reference
service-fabric How To Managed Cluster Maintenance Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-maintenance-control.md
After the maintenance configuration is created, it has to be attached to the SFM
>Known issues: >* There should be atmost one maintenance config resource assigned to a Service Fabric managed cluster. There is work underway to prevent assignment of more than one maintenance config. Until then, users are expected to not do multiple config assignments for the same cluster. >* Deleting just the maintenance config resource will not disable MaintenanceControl. To disable MaintenanceControl, you have to specifically delete the configAssignment for the cluster first, before deleting the maintenance config resource.
+>* The Azure portal experience for maintenance control with SFMC is still under development, so customers shouldn't rely solely on the portal. There are known issues, such as an SFMC cluster appearing as a virtual machine resource, and not being able to search for or assign an SFMC cluster from the portal.
virtual-desktop Troubleshoot Custom Image Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-custom-image-templates.md
Last updated 04/05/2023
> Custom image templates in Azure Virtual Desktop is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Custom image templates in Azure Virtual Desktop enable you to easily create a custom image that you can use when deploying session host virtual machines (VMs). this article helps troubleshoot some issues you could run into.
+Custom image templates in Azure Virtual Desktop enable you to easily create a custom image that you can use when deploying session host virtual machines (VMs). This article helps troubleshoot some issues you could run into.
## General troubleshooting when creating an image
virtual-machines Agent Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-linux.md
Testing has confirmed that the following systems work with the Azure Linux VM Ag
Other supported systems: -- FreeBSD 10+ (Azure Linux VM Agent v2.0.10+)
+- The Agent works on more systems than those listed in the documentation. However, we don't test or provide support for distributions that aren't on the endorsed list. In particular, FreeBSD isn't endorsed. Customers can try FreeBSD 8 and, if they run into problems, open an issue in our [GitHub repository](https://github.com/Azure/WALinuxAgent); we may be able to help.
The Linux agent depends on these system packages to function properly:
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-windows.md
If your script is on a local server, you might still need to open other firewall
### Tips
+- Output is limited to the last 4,096 bytes.
- The highest failure rate for this extension is due to syntax errors in the script. Verify that the script runs without errors. Put more logging into the script to make it easier to find failures. - Write scripts that are idempotent, so that running them more than once accidentally doesn't cause system changes. - Ensure that the scripts don't require user input when they run.
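The idempotency tip above can be sketched with a sentinel-file guard: record that a one-time change succeeded and skip it on later runs. This is a language-agnostic illustration in Python; the marker-file approach and all names here are hypothetical, not part of the Custom Script Extension.

```python
# Minimal sketch of an idempotent script: check state before changing it,
# so running the script more than once doesn't repeat the system change.
from pathlib import Path

def configure_once(marker: Path) -> str:
    """Perform a one-time change, guarded by a sentinel file."""
    if marker.exists():
        return "already configured"  # rerunning is a safe no-op
    # ... perform the actual one-time system change here ...
    marker.parent.mkdir(parents=True, exist_ok=True)
    marker.touch()  # record success so the next run skips the change
    return "configured"
```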
virtual-machines Image Builder Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-json.md
The image output is a managed image resource.
```json {
- "type":"managedImage",
+ "type":"ManagedImage",
"imageId": "<resource ID>", "location": "<region>", "runOutputName": "<name>",
The image output is a managed image resource.
```bicep {
- type:'managedImage'
+ type:'ManagedImage'
imageId: '<resource ID>' location: '<region>' runOutputName: '<name>'
The image output is a managed image resource.
Distribute properties: -- **type** ΓÇô managedImage
+- **type** ΓÇô ManagedImage
- **imageId** ΓÇô Resource ID of the destination image, expected format: /subscriptions/\<subscriptionId>/resourceGroups/\<destinationResourceGroupName>/providers/Microsoft.Compute/images/\<imageName> - **location** - location of the managed image. - **runOutputName** ΓÇô unique name for identifying the distribution.
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/migrate-to-premium-storage-using-azure-site-recovery.md
Site Recovery will create a VM instance whose type is the same as or similar to
For specific scenarios for migrating virtual machines, see the following resources:
-* [Migrate Azure Virtual Machines between Storage Accounts](https://azure.microsoft.com/blog/2014/10/22/migrate-azure-virtual-machines-between-storage-accounts/)
+* [Migrate Azure Virtual Machines between Storage Accounts](https://azure.microsoft.com/blog/migrate-azure-virtual-machines-between-storage-accounts/)
* [Upload a Linux virtual hard disk](upload-vhd.md) * [Migrating Virtual Machines from Amazon AWS to Microsoft Azure](/shows/it-ops-talk/migrate-your-aws-vms-to-azure-with-azure-migrate)
Also, see the following resources to learn more about Azure Storage and Azure Vi
[12]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-12.PNG [13]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-13.png [14]:../site-recovery/media/site-recovery-vmware-to-azure/v2a-architecture-henry.png
-[15]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-14.png
+[15]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-14.png
virtual-machines Migration Classic Resource Manager Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/migration-classic-resource-manager-deep-dive.md
You can find the classic deployment model and Resource Manager representations o
| Inbound NAT rules |Inbound NAT rules |Input endpoints defined on the VM are converted to inbound network address translation rules under the load balancer during the migration. | | VIP address |Public IP address with DNS name |The virtual IP address becomes a public IP address, and is associated with the load balancer. A virtual IP can only be migrated if there is an input endpoint assigned to it. To retain the IP, you can [convert it to Reserved IP](/previous-versions/azure/virtual-network/virtual-networks-reserved-public-ip#reserve-the-ip-address-of-an-existing-cloud-service) before migration. There will be downtime of about 60 seconds during this change.| | Virtual network |Virtual network |The virtual network is migrated, with all its properties, to the Resource Manager deployment model. A new resource group is created with the name `-migrated`. |
-| Reserved IPs |Public IP address with static allocation method |Reserved IPs associated with the load balancer are migrated, along with the migration of the cloud service or the virtual machine. Unassociated reserved IPs can be migrated using [Move-AzureReservedIP](/powershell/module/servicemanagement/azure.service/move-azurereservedip). |
+| Reserved IPs |Public IP address with static allocation method |Reserved IPs associated with the load balancer are migrated, along with the migration of the cloud service or the virtual machine. Unassociated reserved IPs can be migrated using [Move-AzureReservedIP](/powershell/module/servicemanagement/azure/move-azurereservedip). |
| Public IP address per VM |Public IP address with dynamic allocation method |The public IP address associated with the VM is converted as a public IP address resource, with the allocation method set to dynamic. |
-| NSGs |NSGs |Network security groups associated with a virtual machine or subnet are cloned as part of the migration to the Resource Manager deployment model. The NSG in the classic deployment model is not removed during the migration. However, the management-plane operations for the NSG are blocked when the migration is in progress. Unassociated NSGs can be migrated using [Move-AzureNetworkSecurityGroup](/powershell/module/servicemanagement/azure.service/move-azurenetworksecuritygroup).|
+| NSGs |NSGs |Network security groups associated with a virtual machine or subnet are cloned as part of the migration to the Resource Manager deployment model. The NSG in the classic deployment model is not removed during the migration. However, the management-plane operations for the NSG are blocked when the migration is in progress. Unassociated NSGs can be migrated using [Move-AzureNetworkSecurityGroup](/powershell/module/servicemanagement/azure/move-azurenetworksecuritygroup).|
| DNS servers |DNS servers |DNS servers associated with a virtual network or the VM are migrated as part of the corresponding resource migration, along with all the properties. |
-| UDRs |UDRs |User-defined routes associated with a subnet are cloned as part of the migration to the Resource Manager deployment model. The UDR in the classic deployment model is not removed during the migration. The management-plane operations for the UDR are blocked when the migration is in progress. Unassociated UDRs can be migrated using [Move-AzureRouteTable](/powershell/module/servicemanagement/azure.service/Move-AzureRouteTable). |
+| UDRs |UDRs |User-defined routes associated with a subnet are cloned as part of the migration to the Resource Manager deployment model. The UDR in the classic deployment model is not removed during the migration. The management-plane operations for the UDR are blocked when the migration is in progress. Unassociated UDRs can be migrated using [Move-AzureRouteTable](/powershell/module/servicemanagement/azure/Move-AzureRouteTable). |
| IP forwarding property on a VM's network configuration |IP forwarding property on the NIC |The IP forwarding property on a VM is converted to a property on the network interface during the migration. | | Load balancer with multiple IPs |Load balancer with multiple public IP resources |Every public IP associated with the load balancer is converted to a public IP resource, and associated with the load balancer after migration. | | Internal DNS names on the VM |Internal DNS names on the NIC |During migration, the internal DNS suffixes for the VMs are migrated to a read-only property named ΓÇ£InternalDomainNameSuffixΓÇ¥ on the NIC. The suffix remains unchanged after migration, and VM resolution should continue to work as previously. |
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
Azure offers trusted launch as a seamless way to improve the security of [genera
|: |: |: | | [General Purpose](sizes-general.md) |B-series, DCsv2-series, DCsv3-series, DCdsv3-series, Dv4-series, Dsv4-series, Dsv3-series, Dsv2-series, Dav4-series, Dasv4-series, Ddv4-series, Ddsv4-series, Dv5-series, Dsv5-series, Ddv5-series, Ddsv5-series, Dasv5-series, Dadsv5-series, Dlsv5-series, Dldsv5-series | Dpsv5-series, Dpdsv5-series, Dplsv5-series, Dpldsv5-series | [Compute optimized](sizes-compute.md) |FX-series, Fsv2-series | All sizes supported.
-| [Memory optimized](sizes-memory.md) |Dsv2-series, Esv3-series, Ev4-series, Esv4-series, Edv4-series, Edsv4-series, Eav4-series, Easv4-series, Easv5-series, Eadsv5-series, Ebsv5-series, Edv5-series, Edsv5-series | Ebdsv5-series, Epsv5-series, Epdsv5-series, M-series, Msv2-series and Mdsv2 Medium Memory series, Mv2-series
+| [Memory optimized](sizes-memory.md) |Dsv2-series, Esv3-series, Ev4-series, Esv4-series, Edv4-series, Edsv4-series, Eav4-series, Easv4-series, Easv5-series, Eadsv5-series, Ebsv5-series, Edv5-series, Edsv5-series, Ebdsv5-series | Epsv5-series, Epdsv5-series, M-series, Msv2-series and Mdsv2 Medium Memory series, Mv2-series
| [Storage optimized](sizes-storage.md) |Ls-series, Lsv2-series, Lsv3-series, Lasv3-series | All sizes supported. | [GPU](sizes-gpu.md) |NCv2-series, NCv3-series, NCasT4_v3-series, NVv3-series, NVv4-series, NDv2-series, NC_A100_v4-series, NCadsA10 v4-series, NVadsA10 v5-series | NDasrA100_v4-series, NDm_A100_v4-series, ND-series
-| [High Performance Compute](sizes-hpc.md) |HB-series, HBv2-series, HBv3-series, HC-series | HBv4-series, HX-series
+| [High Performance Compute](sizes-hpc.md) |HB-series, HBv2-series, HBv3-series, HC-series, HBv4-series, HX-series | All sizes supported.
> [!NOTE] > - Installation of the **CUDA & GRID drivers on Secure Boot enabled Windows VMs** does not require any additional steps.
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/migrate-to-premium-storage-using-azure-site-recovery.md
Site Recovery will create a VM instance whose type is the same as or similar to
For specific scenarios for migrating virtual machines, see the following resources:
-* [Migrate Azure Virtual Machines between Storage Accounts](https://azure.microsoft.com/blog/2014/10/22/migrate-azure-virtual-machines-between-storage-accounts/)
+* [Migrate Azure Virtual Machines between Storage Accounts](https://azure.microsoft.com/blog/migrate-azure-virtual-machines-between-storage-accounts/)
* [Create and upload a Windows Server VHD to Azure](upload-generalized-managed.md) * [Migrating Virtual Machines from Amazon AWS to Microsoft Azure](/shows/it-ops-talk/migrate-your-aws-vms-to-azure-with-azure-migrate)
Also, see the following resources to learn more about Azure Storage and Azure Vi
[12]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-12.PNG [13]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-13.png [14]:../site-recovery/media/site-recovery-vmware-to-azure/v2a-architecture-henry.png
-[15]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-14.png
+[15]:./media/migrate-to-premium-storage-using-azure-site-recovery/migrate-to-premium-storage-using-azure-site-recovery-14.png