Updates from: 07/04/2023 01:07:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/whats-new-docs.md
Previously updated : 06/02/2023 Last updated : 07/03/2023
Welcome to what's new in the Microsoft identity platform documentation. This article lists new docs that have been added and those that have had significant updates in the last three months.
+## June 2023
+
+### New articles
+
+- [Configure app multi-instancing](configure-app-multi-instancing.md) - Configuration of multiple instances of the same application within a tenant
+- [Migrate away from using email claims for user identification or authorization](migrate-off-email-claim-authorization.md) - Migration guidance for insecure authorization pattern
+- [Optional claims reference](optional-claims-reference.md) - v1.0 and v2.0 optional claims reference
+
+### Updated articles
+
+- [A web app that calls web APIs: Code configuration](scenario-web-app-call-api-app-configuration.md) - Editorial review of Node.js code snippet
+- [Claims mapping policy type](reference-claims-mapping-policy-type.md) - Editorial review of claims mapping policy type
+- [Configure token lifetime policies (preview)](configure-token-lifetimes.md) - Adding service principal policy commands
+- [Customize SAML token claims](saml-claims-customization.md) - Review of claims mapping policy type
+- [Microsoft identity platform code samples](sample-v2-code.md) - Reworking code samples file to add extra tab
+- [Refresh tokens in the Microsoft identity platform](refresh-tokens.md) - Editorial review of refresh tokens
+- [Tokens and claims overview](security-tokens.md) - Editorial review of security tokens
+- [Tutorial: Sign in users and call Microsoft Graph from an iOS or macOS app](tutorial-v2-ios.md) - Editorial review
+- [What's new for authentication?](reference-breaking-changes.md) - Identity breaking change: omission of unverified emails by default
## May 2023

### New articles
Welcome to what's new in the Microsoft identity platform documentation. This art
- [Web app that signs in users: App registration](scenario-web-app-sign-user-app-registration.md)
- [Web app that signs in users: Code configuration](scenario-web-app-sign-user-app-configuration.md)
- [Web app that signs in users: Sign-in and sign-out](scenario-web-app-sign-user-sign-in.md)
-## March 2023
-
-### New articles
-
-- [Configure a SAML app to receive tokens with claims from an external store (preview)](custom-extension-configure-saml-app.md)
-- [Configure a custom claim provider token issuance event (preview)](custom-extension-get-started.md)
-- [Custom claims provider (preview)](custom-claims-provider-overview.md)
-- [Custom claims providers](custom-claims-provider-reference.md)
-- [Custom authentication extensions (preview)](custom-extension-overview.md)
-- [Troubleshoot your custom claims provider API (preview)](custom-extension-troubleshoot.md)
-- [Understanding application-only access](app-only-access-primer.md)
-
-### Updated articles
-
-- [ADAL to MSAL migration guide for Python](migrate-python-adal-msal.md)
-- [Handle errors and exceptions in MSAL for Python](msal-error-handling-python.md)
-- [How to migrate a JavaScript app from ADAL.js to MSAL.js](msal-compare-msal-js-and-adal-js.md)
-- [Microsoft identity platform access tokens](access-tokens.md)
-- [Microsoft Enterprise SSO plug-in for Apple devices (preview)](apple-sso-plugin.md)
-- [Restrict your Azure AD app to a set of users in an Azure AD tenant](howto-restrict-your-app-to-a-set-of-users.md)
-- [Token cache serialization in MSAL.NET](msal-net-token-cache-serialization.md)
-- [Troubleshoot publisher verification](troubleshoot-publisher-verification.md)
-- [Tutorial: Call the Microsoft Graph API from a Universal Windows Platform (UWP) application](tutorial-v2-windows-uwp.md)
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
This page updates monthly, so revisit it regularly. If you're looking for items
## June 2023
-### Public Preview - Availability of Exchange Hybrid in Azure AD Connect cloud sync
-
-**Type:** New feature
-**Service category:** Directory Management
-**Product capability:** Azure Active Directory Connect Cloud Sync
---
-Exchange hybrid capability allows for the coexistence of Exchange mailboxes both on-premises and in Microsoft 365. Cloud Sync synchronizes a specific set of Exchange-related attributes from Azure AD back into your on-premises directory and to any disconnected forests (no network trust is needed between them). With this capability, existing customers who have this feature enabled in Azure AD Connect sync can now migrate to and use this feature with Azure AD cloud sync. For more information, see: ADD LINK
-
-
### Public Preview - New provisioning connectors in the Azure AD Application Gallery - June 2023

**Type:** New feature
Restricted Management Administrative Units allow you to restrict modification of
-### Public Preview - Real-Time Threat Intelligence Detections
-
-**Type:** New feature
-**Service category:** Identity Protection
-**Product capability:** Identity Security & Protection
-
-To address emerging attacks, Identity Protection now includes Real-Time Threat Intelligence Detections, also referred to as Rapid Response Detections. When emerging attacks occur, Identity Protection dynamically issues new detections in response. These detections use Microsoft's threat intelligence in real time, meaning Identity Protection detects emerging patterns of compromise during sign-in and challenges the user accordingly. For more information, see: ADD LINK
-
-
### General Availability - Report suspicious activity integrated with Identity Protection

**Type:** Changed feature
active-directory Reference Azure Ad Sla Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/reference-azure-ad-sla-performance.md
The SLA attainment is truncated at three places after the decimal. Numbers aren'
| March | 99.568% | 99.998% | 99.999% |
| April | 99.999% | 99.999% | 99.999% |
| May | 99.999% | 99.999% | 99.999% |
-| June | 99.999% | 99.999% | |
+| June | 99.999% | 99.999% | 99.999% |
| July | 99.999% | 99.999% | |
| August | 99.999% | 99.999% | |
| September | 99.999% | 99.998% | |
aks Load Balancer Standard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/load-balancer-standard.md
description: Learn how to use a public load balancer with a Standard SKU to expo
Previously updated : 02/22/2023 Last updated : 06/17/2023 #Customer intent: As a cluster operator or developer, I want to learn how to create a service in AKS that uses an Azure Load Balancer with a Standard SKU.
An outbound rule configures outbound NAT for all virtual machines identified by
While you can use an outbound rule with a single public IP address, outbound rules are great for scaling outbound NAT because they ease the configuration burden. You can use multiple IP addresses to plan for large-scale scenarios, and you can use outbound rules to mitigate patterns that are prone to SNAT exhaustion. Each IP address provided by a frontend provides 64k ephemeral ports for the load balancer to use as SNAT ports.
-When using a *Standard* SKU load balancer with managed outbound public IPs (which are created by default), you can scale the number of managed outbound public IPs using the **`load-balancer-managed-ip-count`** parameter.
+When using a *Standard* SKU load balancer with managed outbound public IPs (which are created by default), you can scale the number of managed outbound public IPs using the **`--load-balancer-managed-outbound-ip-count`** parameter.
Use the following command to update an existing cluster. You can also set this parameter to have multiple managed outbound public IPs.
az aks update \
The above example sets the number of managed outbound public IPs to *2* for the *myAKSCluster* cluster in *myResourceGroup*.
-At cluster creation time, you can also use the **`load-balancer-managed-ip-count`** parameter to set the initial number of managed outbound public IPs by appending the **`--load-balancer-managed-outbound-ip-count`** parameter and setting it to your desired value. The default number of managed outbound public IPs is *1*.
+At cluster creation time, you can also set the initial number of managed outbound public IPs by appending the **`--load-balancer-managed-outbound-ip-count`** parameter and setting it to your desired value. The default number of managed outbound public IPs is *1*.
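+A minimal sketch of setting the count at creation time, assuming the same example resource group and cluster names used above:
+
+```azurecli
+# Create a cluster with two managed outbound public IPs (example names from above)
+az aks create \
+    --resource-group myResourceGroup \
+    --name myAKSCluster \
+    --load-balancer-managed-outbound-ip-count 2 \
+    --generate-ssh-keys
+```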
### Provide your own outbound public IPs or prefixes
aks Use Azure Ad Pod Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-ad-pod-identity.md
metadata:
... ```
+## Disable pod-managed identity on an existing cluster
+
+To disable pod-managed identity on an existing cluster, first remove the pod-managed identities, then disable the feature on the cluster.
+
+```azurecli
+az aks pod-identity delete --name ${POD_IDENTITY_NAME} --namespace ${POD_IDENTITY_NAMESPACE} --resource-group myResourceGroup --cluster-name myAKSCluster
+```
+
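+Optionally, you can confirm that no pod-managed identities remain before disabling the feature. A minimal sketch, assuming the same example names:
+
+```azurecli
+# List any remaining pod-managed identities in the cluster
+az aks pod-identity list --resource-group myResourceGroup --cluster-name myAKSCluster --output table
+```
+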
+```azurecli
+az aks update --resource-group myResourceGroup --cluster-name myAKSCluster --disable-pod-identity
+```
+ ## Clean up To remove an Azure AD pod-managed identity from your cluster, remove the sample application and the pod-managed identity from the cluster. Then remove the identity and the role assignment of cluster identity.
app-service Configure Ssl App Service Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-app-service-certificate.md
+
+ Title: Add and manage App Service certificates
+description: Create an App Service certificate and manage it (such as renew, sync, and delete).
+tags: buy-ssl-certificates
+
+Last updated : 07/28/2023
+# Create and manage an App Service certificate for your web app
+
+This article shows how to create an App Service certificate and manage it, including how to renew, sync, and delete it. After you have an App Service certificate, you can import it into an App Service app. An App Service certificate is a private certificate that's managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options.
+
+If you purchase an App Service certificate from Azure, Azure manages the following tasks:
+
+- Handles the purchase process from GoDaddy.
+- Performs domain verification of the certificate.
+- Maintains the certificate in [Azure Key Vault](../key-vault/general/overview.md).
+- Manages [certificate renewal](#renew-an-app-service-certificate).
+- Synchronizes the certificate automatically with the imported copies in App Service apps.
+
+> [!NOTE]
+> After you upload a certificate to an app, the certificate is stored in a deployment unit that's bound to the App Service plan's resource group, region, and operating system combination, internally called a *webspace*. That way, the certificate is accessible to other apps in the same resource group and region combination. Certificates uploaded or imported to App Service are shared with App Services in the same deployment unit.
+
+## Prerequisites
+
+- [Create an App Service app](./index.yml). The app's [App Service plan](overview-hosting-plans.md) must be in the **Basic**, **Standard**, **Premium**, or **Isolated** tier. See [Scale up an app](manage-scale-up.md#scale-up-your-pricing-tier) to update the tier.
+
+> [!NOTE]
+> Currently, App Service certificates aren't supported in Azure National Clouds.
+
+## Buy and configure an App Service certificate
+
+#### Start certificate purchase
+
+1. Go to the [App Service Certificate creation page](https://portal.azure.com/#create/Microsoft.SSL), and start your purchase for an App Service certificate.
+
+ > [!NOTE]
+    > App Service Certificates purchased from Azure are issued by GoDaddy. For some domains, you must explicitly allow GoDaddy as a certificate issuer by creating a [CAA domain record](https://wikipedia.org/wiki/DNS_Certification_Authority_Authorization) with the value: `0 issue godaddy.com`. If your domain is hosted in Azure DNS, a hedged Azure CLI sketch follows these steps.
+
+ :::image type="content" source="./media/configure-ssl-certificate/purchase-app-service-cert.png" alt-text="Screenshot of 'Create App Service Certificate' pane with purchase options.":::
+
+1. To help you configure the certificate, use the following table. When you're done, select **Review + Create**, then select **Create**.
+
+ | Setting | Description |
+ |-|-|
+ | **Subscription** | The Azure subscription to associate with the certificate. |
+ | **Resource group** | The resource group that will contain the certificate. You can either create a new resource group or select the same resource group as your App Service app. |
+ | **SKU** | Determines the type of certificate to create, either a standard certificate or a [wildcard certificate](https://wikipedia.org/wiki/Wildcard_certificate). |
+ | **Naked Domain Host Name** | Specify the root domain. The issued certificate secures *both* the root domain and the `www` subdomain. In the issued certificate, the **Common Name** field specifies the root domain, and the **Subject Alternative Name** field specifies the `www` domain. To secure any subdomain only, specify the fully qualified domain name for the subdomain, for example, `mysubdomain.contoso.com`.|
+ | **Certificate name** | The friendly name for your App Service certificate. |
+ | **Enable auto renewal** | Select whether to automatically renew the certificate before expiration. Each renewal extends the certificate expiration by one year and the cost is charged to your subscription. |
+
+1. When deployment is complete, select **Go to resource**.
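+If your domain is hosted in an Azure DNS zone, you can create the CAA record mentioned earlier with the Azure CLI. A minimal sketch, assuming a hypothetical zone name and resource group:
+
+```azurecli
+# Allow GoDaddy to issue certificates for the zone (hypothetical names)
+az network dns record-set caa add-record \
+    --resource-group <dns-zone-resource-group> \
+    --zone-name contoso.com \
+    --record-set-name @ \
+    --flags 0 \
+    --tag "issue" \
+    --value "godaddy.com"
+```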
+
+#### Store certificate in Azure Key Vault
+
+[Key Vault](../key-vault/general/overview.md) is an Azure service that helps safeguard cryptographic keys and secrets used by cloud applications and services. For App Service certificates, the storage of choice is Key Vault. After you finish the certificate purchase process, you must complete a few more steps before you start using this certificate.
+
+1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate. On the certificate menu, select **Certificate Configuration** > **Step 1: Store**.
+
+ :::image type="content" source="media/configure-ssl-certificate/configure-key-vault.png" alt-text="Screenshot of 'Certificate Configuration' pane with 'Step 1: Store' selected.":::
+
+1. On the **Key Vault Status** page, select **Select from Key Vault**.
+
+1. If you create a new vault, set up the vault based on the following table, and make sure to use the same subscription and resource group as your App Service app.
+
+ | Setting | Description |
+ |-|-|
+ | **Resource group** | Recommended: The same resource group as your App Service certificate. |
+ | **Key vault name** | A unique name that uses only alphanumeric characters and dashes. |
+ | **Region** | The same location as your App Service app. |
+ | **Pricing tier** | For information, see [Azure Key Vault pricing details](https://azure.microsoft.com/pricing/details/key-vault/). |
+ | **Days to retain deleted vaults** | The number of days after deletion, in which objects remain recoverable (see [Azure Key Vault soft-delete overview](../key-vault/general/soft-delete-overview.md)). Set a value between 7 and 90. |
+    | **Purge protection** | Prevents soft-deleted objects from being manually purged. Enabling this option forces all deleted objects to remain in a soft-deleted state for the entire retention period. |
+
+1. Select **Next** and select **Vault access policy**. Currently, App Service certificates support only Key Vault access policies, not the RBAC model. A hedged Azure CLI sketch for creating such a vault follows these steps.
+1. Select **Review + create**, then select **Create**.
+1. After the key vault is created, don't select **Go to resource**. Instead, wait for the **Select key vault from Azure Key Vault** page to reload.
+1. Select **Select**.
+1. After you select the vault, close the **Key Vault Repository** page. The **Step 1: Store** option should show a green check mark to indicate success. Keep the page open for the next step.
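+If you prefer to create the vault from the command line, the following is a minimal sketch, assuming hypothetical names and the access policy permission model that App Service certificates require:
+
+```azurecli
+# Create a key vault that uses access policies instead of RBAC (hypothetical names)
+az keyvault create \
+    --name <key-vault-name> \
+    --resource-group <group-name> \
+    --location <region> \
+    --retention-days 90 \
+    --enable-rbac-authorization false
+```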
+
+#### Confirm domain ownership
+
+1. From the same **Certificate Configuration** page in the previous section, select **Step 2: Verify**.
+
+ :::image type="content" source="media/configure-ssl-certificate/verify-domain.png" alt-text="Screenshot of 'Certificate Configuration' pane with 'Step 2: Verify' selected.":::
+
+1. Select **App Service Verification**. Because you previously mapped the domain to your web app per the [Prerequisites](#prerequisites), the domain is already verified. To finish this step, just select **Verify**, and then select **Refresh** until the message **Certificate is Domain Verified** appears.
+
+The following domain verification methods are supported:
+
+| Method | Description |
+|--|-|
+| **App Service Verification** | The most convenient option when the domain is already mapped to an App Service app in the same subscription because the App Service app has already verified the domain ownership. Review the last step in [Confirm domain ownership](#confirm-domain-ownership). |
+| **Domain Verification** | Confirm an [App Service domain that you purchased from Azure](manage-custom-dns-buy-domain.md). Azure automatically adds the verification TXT record for you and completes the process. |
+| **Mail Verification** | Confirm the domain by sending an email to the domain administrator. Instructions are provided when you select the option. |
+| **Manual Verification** | Confirm the domain by using either a DNS TXT record or an HTML page, which applies only to **Standard** certificates per the following note. The steps are provided after you select the option. The HTML page option doesn't work for web apps with **HTTPS Only** enabled. |
+
+> [!IMPORTANT]
+> With the **Standard** certificate, you get a certificate for the requested top-level domain *and* the `www` subdomain, for example, `contoso.com` and `www.contoso.com`. However, **App Service Verification** and **Manual Verification** both use HTML page verification, which doesn't support the `www` subdomain when issuing, rekeying, or renewing a certificate. For the **Standard** certificate, use **Domain Verification** or **Mail Verification** to include the `www` subdomain with the requested top-level domain in the certificate.
+
+Once your certificate is domain-verified, [you're ready to import it into an App Service app](configure-ssl-certificate.md#import-an-app-service-certificate).
+
+## Renew an App Service certificate
+
+By default, App Service certificates have a one-year validity period. As the expiration date approaches, you can renew App Service certificates automatically or manually in one-year increments. The renewal process effectively gives you a new App Service certificate whose expiration date is one year after the existing certificate's expiration date.
+
+> [!NOTE]
+> Starting September 23, 2021, App Service certificates require domain verification during a renew or rekey process if you haven't verified the domain in the last 395 days. The new certificate order remains in "pending issuance" mode until you complete the domain verification.
+>
+> Unlike the free App Service managed certificate, domain re-verification for App Service certificates *isn't* automated. Failure to verify domain ownership results in failed renewals. For more information about how to verify your App Service certificate, review [Confirm domain ownership](#confirm-domain-ownership).
+>
+> The renewal process requires that the well-known [service principal for App Service has the required permissions on your key vault](deploy-resource-manager-template.md#deploy-web-app-certificate-from-key-vault). These permissions are set up for you when you import an App Service certificate through the Azure portal. Make sure that you don't remove these permissions from your key vault.
+
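+If a renewal fails because these key vault permissions were removed, you can restore read access for the App Service service principal. A minimal sketch, assuming a hypothetical vault name and the well-known `Microsoft Azure App Service` application ID (an assumption to verify for your cloud):
+
+```azurecli
+# Grant the App Service service principal read access to secrets and certificates (hypothetical vault name)
+az keyvault set-policy \
+    --name <key-vault-name> \
+    --spn abfa0a7c-a6b6-4736-8310-5855508787cd \
+    --secret-permissions get \
+    --certificate-permissions get
+```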
+1. To change the automatic renewal setting for your App Service certificate at any time, on the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
+
+1. On the left menu, select **Auto Renew Settings**.
+
+1. Select **On** or **Off**, and select **Save**.
+
+ If you turn on automatic renewal, certificates can start automatically renewing 32 days before expiration.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of specified certificate's auto renewal settings.](./media/configure-ssl-certificate/auto-renew-app-service-cert.png)
+
+1. To manually renew the certificate instead, select **Manual Renew**. You can request to manually renew your certificate 60 days before expiration.
+
+1. After the renew operation completes, select **Sync**.
+
+ The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
+
+ > [!NOTE]
+ > If you don't select **Sync**, App Service automatically syncs your certificate within 24 hours.
+
+## Rekey an App Service certificate
+
+If you think your certificate's private key is compromised, you can rekey your certificate. This action replaces the certificate with a new certificate issued by the certificate authority.
+
+1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate. From the left menu, select **Rekey and Sync**.
+
+1. To start the process, select **Rekey**. This process can take 1-10 minutes to complete.
+
+ > [!div class="mx-imgBorder"]
+ > ![Screenshot of rekeying an App Service certificate.](./media/configure-ssl-certificate/rekey-app-service-cert.png)
+
+1. You might also be required to [reconfirm domain ownership](#confirm-domain-ownership).
+
+1. After the rekey operation completes, select **Sync**.
+
+ The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
+
+ > [!NOTE]
+ > If you don't select **Sync**, App Service automatically syncs your certificate within 24 hours.
+
+## Export an App Service certificate
+
+Because an App Service certificate is a [Key Vault secret](../key-vault/general/about-keys-secrets-certificates.md), you can export a copy as a PFX file, which you can use for other Azure services or outside of Azure.
+
+> [!IMPORTANT]
+> The exported certificate is an unmanaged artifact. App Service doesn't sync such artifacts when the App Service Certificate is [renewed](#renew-an-app-service-certificate). You must export and install the renewed certificate where necessary.
+
+#### [Azure portal](#tab/portal)
+
+1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
+
+1. On the left menu, select **Export Certificate**.
+
+1. Select **Open Key Vault Secret**.
+
+1. Select the certificate's current version.
+
+1. Select **Download as a certificate**.
+
+#### [Azure CLI](#tab/cli)
+
+Run the following commands in [Azure Cloud Shell](https://shell.azure.com), or run them locally if you [installed Azure CLI](/cli/azure/install-azure-cli). Replace the placeholders with the names that you used when you [bought the App Service certificate](#start-certificate-purchase).
+
+```azurecli-interactive
+secretname=$(az resource show \
+ --resource-group <group-name> \
+ --resource-type "Microsoft.CertificateRegistration/certificateOrders" \
+ --name <app-service-cert-name> \
+ --query "properties.certificates.<app-service-cert-name>.keyVaultSecretName" \
+ --output tsv)
+
+az keyvault secret download \
+ --file appservicecertificate.pfx \
+ --vault-name <key-vault-name> \
+ --name $secretname \
+ --encoding base64
+```
+
+#### [Azure PowerShell](#tab/powershell)
+
+```azurepowershell-interactive
+$ascName = "<app-service-cert-name>"
+$ascResource = Get-AzResource -ResourceType "Microsoft.CertificateRegistration/certificateOrders" -Name $ascName -ResourceGroupName "<group-name>" -ExpandProperties
+$keyVaultSecretName = $ascResource.Properties.certificates[0].$ascName.KeyVaultSecretName
+$CertBase64 = Get-AzKeyVaultSecret -VaultName "<key-vault-name>" -Name $keyVaultSecretName -AsPlainText
+$CertBytes = [Convert]::FromBase64String($CertBase64)
+Set-Content -Path appservicecertificate.pfx -Value $CertBytes -AsByteStream
+```
+---
+The downloaded PFX file is a raw PKCS12 file that contains both the public and private certificates and has an import password that's an empty string. You can locally install the file by leaving the password field empty. You can't [upload the file as-is into App Service](configure-ssl-certificate.md#upload-a-private-certificate) because the file isn't [password protected](configure-ssl-certificate.md#private-certificate-requirements).
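+If you need a password-protected copy for upload scenarios, you can repack the exported file with OpenSSL. A minimal sketch, assuming the file name from the preceding commands and a hypothetical new password:
+
+```bash
+# Unpack the password-less PFX, then repack it with a password (hypothetical names)
+openssl pkcs12 -in appservicecertificate.pfx -out appservicecertificate.pem -nodes -passin pass:
+openssl pkcs12 -export -in appservicecertificate.pem -out appservicecertificate-protected.pfx -passout pass:<new-password>
+```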
+
+## Delete an App Service certificate
+
+If you delete an App Service certificate, the delete operation is irreversible and final. The result is a revoked certificate, and any binding in App Service that uses this certificate becomes invalid.
+
+1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
+
+1. From the left menu, select **Overview** > **Delete**.
+
+1. When the confirmation box opens, enter the certificate name, and select **OK**.
+
+## Frequently asked questions
+
+#### My App Service certificate doesn't have any value in Key Vault
+
+Your App Service certificate most likely isn't domain-verified yet. Until [domain ownership is confirmed](#confirm-domain-ownership), your App Service certificate isn't ready for use. As a key vault secret, it carries an `Initialize` tag, and its value and content type remain empty. When domain ownership is confirmed, the key vault secret shows a value and a content type, and the tag changes to `Ready`.
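+To check the secret's state, you can inspect its tags and content type with the Azure CLI. A minimal sketch, assuming hypothetical vault and secret names:
+
+```azurecli
+# An empty contentType suggests the certificate is still pending domain verification (hypothetical names)
+az keyvault secret show \
+    --vault-name <key-vault-name> \
+    --name <key-vault-secret-name> \
+    --query "{tags: tags, contentType: contentType}"
+```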
+
+#### I can't export my App Service certificate with PowerShell
+
+Your App Service certificate most likely isn't domain-verified yet. Until [domain ownership is confirmed](#confirm-domain-ownership), your App Service certificate isn't ready for use.
+
+#### What changes does the App Service certificate creation process make to my existing Key Vault?
+
+The creation process makes the following changes:
+
+- Adds two access policies in the vault:
+ - **Microsoft.Azure.WebSites** (or `Microsoft Azure App Service`)
+ - **Microsoft certificate reseller CSM Resource Provider** (or `Microsoft.Azure.CertificateRegistration`)
+- Creates a [delete lock](../azure-resource-manager/management/lock-resources.md) named `AppServiceCertificateLock` on the vault to prevent accidental deletion of the key vault.
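+You can confirm the delete lock with the Azure CLI. A minimal sketch, assuming hypothetical resource names:
+
+```azurecli
+# List locks on the key vault; expect one named AppServiceCertificateLock (hypothetical names)
+az lock list \
+    --resource-group <group-name> \
+    --resource-name <key-vault-name> \
+    --resource-type Microsoft.KeyVault/vaults \
+    --output table
+```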
+
+## More resources
+
+* [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md)
+* [Enforce HTTPS](configure-ssl-bindings.md#enforce-https)
+* [Enforce TLS 1.1/1.2](configure-ssl-bindings.md#enforce-tls-versions)
+* [Use a TLS/SSL certificate in your code in Azure App Service](configure-ssl-certificate-in-code.md)
+* [FAQ: App Service Certificates](./faq-configuration-and-management.yml)
app-service Configure Ssl Bindings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-bindings.md
In the <a href="https://portal.azure.com" target="_blank">Azure portal</a>:
1. If your app already has a certificate for the selected custom domain, you can select it in **Certificate**. If not, you must add a certificate using one of the selections in **Source**.

 - **Create App Service Managed Certificate** - Let App Service create a managed certificate for your selected domain. This option is the simplest. For more information, see [Create a free managed certificate](configure-ssl-certificate.md#create-a-free-managed-certificate).
- - **Import App Service Certificate** - In **App Service Certificate**, choose an App Service certificate you've purchased for your selected domain. To purchase an App Service certificate, see [Import an App Service certificate](configure-ssl-certificate.md#buy-and-import-app-service-certificate).
+ - **Import App Service Certificate** - In **App Service Certificate**, choose an [App Service certificate](configure-ssl-app-service-certificate.md) you've purchased for your selected domain.
 - **Upload certificate (.pfx)** - Follow the workflow at [Upload a private certificate](configure-ssl-certificate.md#upload-a-private-certificate) to upload a PFX certificate from your local machine and specify the certificate password.
 - **Import from Key Vault** - Select **Select key vault certificate** and select the certificate in the dialog.
app-service Configure Ssl Certificate In Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate-in-code.md
In your application code, you can access the [public or private certificates you add to App Service](configure-ssl-certificate.md). Your app code may act as a client and access an external service that requires certificate authentication, or it may need to perform cryptographic tasks. This how-to guide shows how to use public or private certificates in your application code.
-This approach to using certificates in your code makes use of the TLS functionality in App Service, which requires your app to be in **Basic** tier or above. If your app is in **Free** or **Shared** tier, you can [include the certificate file in your app repository](#load-certificate-from-file).
+This approach to using certificates in your code makes use of the TLS functionality in App Service, which requires your app to be in **Basic** tier or higher. If your app is in **Free** or **Shared** tier, you can [include the certificate file in your app repository](#load-certificate-from-file).
When you let App Service manage your TLS/SSL certificates, you can maintain the certificates and your application code separately and safeguard your sensitive data.
If you need to load a certificate file that you upload manually, it's better to
> az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings WEBSITE_LOAD_USER_PROFILE=1
> ```
>
-> This approach to using certificates in your code makes use of the TLS functionality in App Service, which requires your app to be in **Basic** tier or above.
+> This approach to using certificates in your code makes use of the TLS functionality in App Service, which requires your app to be in **Basic** tier or higher.
The following C# example loads a public certificate from a relative path in your app:
To see how to load a TLS/SSL certificate from a file in Node.js, PHP, Python, Ja
## Load certificate in Linux/Windows containers
-The `WEBSITE_LOAD_CERTIFICATES` app settings makes the specified certificates accessible to your Windows or Linux custom containers (including built-in Linux containers) as files. The files are found under the following directories:
+The `WEBSITE_LOAD_CERTIFICATES` app setting makes the specified certificates accessible to your Windows or Linux custom containers (including built-in Linux containers) as files. The files are found under the following directories:
| Container platform | Public certificates | Private certificates |
| - | - | - |
The certificate file names are the certificate thumbprints.
> App Service injects the certificate paths into Windows containers as the following environment variables: `WEBSITE_PRIVATE_CERTS_PATH`, `WEBSITE_INTERMEDIATE_CERTS_PATH`, `WEBSITE_PUBLIC_CERTS_PATH`, and `WEBSITE_ROOT_CERTS_PATH`. It's better to reference the certificate path with the environment variables instead of hardcoding the certificate path, in case the certificate paths change in the future.
>
-In addition, [Windows Server Core containers](configure-custom-container.md#supported-parent-images) load the certificates into the certificate store automatically, in **LocalMachine\My**. To load the certificates, follow the same pattern as [Load certificate in Windows apps](#load-certificate-in-windows-apps). For Windows Nano based containers, use the file paths provided above to [Load the certificate directly from file](#load-certificate-from-file).
+In addition, [Windows Server Core containers](configure-custom-container.md#supported-parent-images) load the certificates into the certificate store automatically, in **LocalMachine\My**. To load the certificates, follow the same pattern as [Load certificate in Windows apps](#load-certificate-in-windows-apps). For Windows Nano based containers, use these file paths to [load the certificate directly from file](#load-certificate-from-file).
The following C# code shows how to load a public certificate in a Linux app.
If you manually upload the [public](configure-ssl-certificate.md#upload-a-public
- If you list thumbprints explicitly in `WEBSITE_LOAD_CERTIFICATES`, add the new thumbprint to the app setting, as shown in the sketch after this list.
- If `WEBSITE_LOAD_CERTIFICATES` is set to `*`, restart the app to make the new certificate accessible.
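
A minimal sketch of updating the app setting with the Azure CLI, assuming hypothetical names and thumbprint values:

```azurecli
# Set the thumbprint list to one that includes the renewed certificate (hypothetical values)
az webapp config appsettings set \
    --name <app-name> \
    --resource-group <group-name> \
    --settings WEBSITE_LOAD_CERTIFICATES=<old-thumbprint>,<new-thumbprint>
```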
-If you renew a certificate [in Key Vault](configure-ssl-certificate.md#renew-a-certificate-imported-from-key-vault), such as with an [App Service certificate](configure-ssl-certificate.md#renew-app-service-certificate), the daily sync from Key Vault makes the necessary update automatically when synchronizing your app with the renewed certificate.
+If you renew a certificate [in Key Vault](configure-ssl-certificate.md#renew-a-certificate-imported-from-key-vault), such as with an [App Service certificate](configure-ssl-app-service-certificate.md#renew-an-app-service-certificate), the daily sync from Key Vault makes the necessary update automatically when synchronizing your app with the renewed certificate.
- If `WEBSITE_LOAD_CERTIFICATES` contains the old thumbprint of your renewed certificate, the daily sync updates the old thumbprint to the new thumbprint automatically.
- If `WEBSITE_LOAD_CERTIFICATES` is set to `*`, the daily sync makes the new certificate accessible automatically.
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
description: Create a free certificate, import an App Service certificate, impor
tags: buy-ssl-certificates Previously updated : 07/28/2022 Last updated : 07/28/2023
The following table lists the options for you to add certificates in App Service
|Option|Description|
|-|-|
| Create a free App Service managed certificate | A private certificate that's free of charge and easy to use if you just need to secure your [custom domain](app-service-web-tutorial-custom-domain.md) in App Service. |
-| Purchase an App Service certificate | A private certificate that's managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options. |
+| Import an App Service certificate | A private certificate that's managed by Azure. It combines the simplicity of automated certificate management and the flexibility of renewal and export options. |
| Import a certificate from Key Vault | Useful if you use [Azure Key Vault](../key-vault/index.yml) to manage your [PKCS12 certificates](https://wikipedia.org/wiki/PKCS_12). See [Private certificate requirements](#private-certificate-requirements). |
| Upload a private certificate | If you already have a private certificate from a third-party provider, you can upload it. See [Private certificate requirements](#private-certificate-requirements). |
| Upload a public certificate | Public certificates aren't used to secure custom domains, but you can load them into your code if you need them to access remote resources. |
The following table lists the options for you to add certificates in App Service
## Prerequisites

-- [Create an App Service app](./index.yml).
+- [Create an App Service app](./index.yml). The app's [App Service plan](overview-hosting-plans.md) must be in the **Basic**, **Standard**, **Premium**, or **Isolated** tier. See [Scale up an app](manage-scale-up.md#scale-up-your-pricing-tier) to update the tier.
- For a private certificate, make sure that it satisfies all [requirements from App Service](#private-certificate-requirements).
The following table lists the options for you to add certificates in App Service
## Private certificate requirements
-The [free App Service managed certificate](#create-a-free-managed-certificate) and the [App Service certificate](#buy-and-import-app-service-certificate) already satisfy the requirements of App Service. If you choose to upload or import a private certificate to App Service, your certificate must meet the following requirements:
+The [free App Service managed certificate](#create-a-free-managed-certificate) and the [App Service certificate](configure-ssl-app-service-certificate.md) already satisfy the requirements of App Service. If you choose to upload or import a private certificate to App Service, your certificate must meet the following requirements:
* Exported as a [password-protected PFX file](https://en.wikipedia.org/w/index.php?title=X.509&section=4#Certificate_filename_extensions), encrypted using triple DES.
* Contains private key at least 2048 bits long
To secure a custom domain in a TLS binding, the certificate has more requirement
> [!NOTE] > **Elliptic Curve Cryptography (ECC) certificates** work with App Service but aren't covered by this article. For the exact steps to create ECC certificates, work with your certificate authority. - ## Create a free managed certificate
-The free App Service managed certificate is a turn-key solution for securing your custom DNS name in App Service. Without any action required from you, this TLS/SSL server certificate is fully managed by App Service and is automatically renewed continuously in six-month increments, 45 days before expiration, as long as the prerequisites that you set up stay the same. All the associated bindings are updated with the renewed certificate. You create and bind the certificate to a custom domain, and let App Service do the rest.
+The free App Service managed certificate is a turn-key solution for securing your custom DNS name in App Service. Without any action from you, this TLS/SSL server certificate is fully managed by App Service and is automatically renewed continuously in six-month increments, 45 days before expiration, as long as the prerequisites that you set up stay the same. All the associated bindings are updated with the renewed certificate. You create and bind the certificate to a custom domain, and let App Service do the rest.
> [!IMPORTANT] > Before you create a free managed certificate, make sure you have [met the prerequisites](#prerequisites) for your app.
The free certificate comes with the following limitations:
1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**.
-1. On your app's navigation menu, select **TLS/SSL settings**. On the pane that opens, select **Private Key Certificates (.pfx)** > **Create App Service Managed Certificate**.
+1. On your app's navigation menu, select **Certificates**. In the **Managed certificates** pane, select **Add certificate**.
- ![Screenshot of app menu with "TLS/SSL settings", "Private Key Certificates (.pfx)", and "Create App Service Managed Certificate" selected.](./media/configure-ssl-certificate/create-free-cert.png)
+ :::image type="content" source="media/configure-ssl-certificate/create-free-cert.png" alt-text="Screenshot of app menu with 'Certificates', 'Managed certificates', and 'Add certificate' selected.":::
-1. Select the custom domain for the free certificate, and then select **Create**. You can create only one certificate for each supported custom domain.
+1. Select the custom domain for the free certificate, and then select **Validate**. When validation completes, select **Add**. You can create only one managed certificate for each supported custom domain.
- When the operation completes, the certificate appears in the **Private Key Certificates** list.
+ When the operation completes, the certificate appears in the **Managed certificates** list.
- ![Screenshot of "Private Key Certificates" pane with newly created certificate listed.](./media/configure-ssl-certificate/create-free-cert-finished.png)
+ :::image type="content" source="media/configure-ssl-certificate/create-free-cert-finished.png" alt-text="Screenshot of 'Managed certificates' pane with newly created certificate listed.":::
1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
-## Buy and import App Service certificate
-
-If you purchase an App Service certificate from Azure, Azure manages the following tasks:
-
-- Handles the purchase process from GoDaddy.
-- Performs domain verification of the certificate.
-- Maintains the certificate in [Azure Key Vault](../key-vault/general/overview.md).
-- Manages [certificate renewal](#renew-app-service-certificate).
-- Synchronizes the certificate automatically with the imported copies in App Service apps.
-
-To purchase an App Service certificate, go to [Start certificate order](#start-certificate-purchase).
-
-> [!NOTE]
-> Currently, App Service certificates aren't supported in Azure National Clouds.
-
-If you already have a working App Service certificate, you can complete the following tasks:
-
-- [Import the certificate into App Service](#import-certificate-into-app-service).
-- [Manage the App Service certificate](#manage-app-service-certificates), such as renew, rekey, and export.
-
-### Start certificate purchase
-
-1. Go to the [App Service Certificate creation page](https://portal.azure.com/#create/Microsoft.SSL), and start your purchase for an App Service certificate.
-
- > [!NOTE]
- > In this article, all prices shown are for example purposes only.
- >
- > App Service Certificates purchased from Azure are issued by GoDaddy. For some domains, you must explicitly allow GoDaddy as a certificate issuer by creating a [CAA domain record](https://wikipedia.org/wiki/DNS_Certification_Authority_Authorization) with the value: `0 issue godaddy.com`
-
- :::image type="content" source="./media/configure-ssl-certificate/purchase-app-service-cert.png" alt-text="Screenshot of 'Create App Service Certificate' pane with purchase options.":::
-
-1. To help you configure the certificate, use the following table. When you're done, select **Create**.
-
- | Setting | Description |
- |-|-|
- | **Subscription** | The Azure subscription to associate with the certificate. |
- | **Resource group** | The resource group that will contain the certificate. You can either create a new resource group or select the same resource group as your App Service app. |
- | **SKU** | Determines the type of certificate to create, either a standard certificate or a [wildcard certificate](https://wikipedia.org/wiki/Wildcard_certificate). |
- | **Naked Domain Host Name** | Specify the root domain. The issued certificate secures *both* the root domain and the `www` subdomain. In the issued certificate, the **Common Name** field specifies the root domain, and the **Subject Alternative Name** field specifies the `www` domain. To secure any subdomain only, specify the fully qualified domain name for the subdomain, for example, `mysubdomain.contoso.com`.|
- | **Certificate name** | The friendly name for your App Service certificate. |
- | **Enable auto renewal** | Select whether to automatically renew the certificate before expiration. Each renewal extends the certificate expiration by one year and the cost is charged to your subscription. |
-
-### Store certificate in Azure Key Vault
-
-[Key Vault](../key-vault/general/overview.md) is an Azure service that helps safeguard cryptographic keys and secrets used by cloud applications and services. For App Service certificates, the storage of choice is Key Vault. After you finish the certificate purchase process, you must complete a few more steps before you start using this certificate.
-
-1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate. On the certificate menu, select **Certificate Configuration** > **Step 1: Store**.
-
- ![Screenshot of "Certificate Configuration" pane with "Step 1: Store" selected.](./media/configure-ssl-certificate/configure-key-vault.png)
-
-1. On the **Key Vault Status** page, to create a new vault or choose an existing vault, select **Key Vault Repository**.
-
-1. If you create a new vault, set up the vault based on the following table, and make sure to use the same subscription and resource group as your App Service app. When you're done, select **Create**.
-
- | Setting | Description |
- |-|-|
- | **Name** | A unique name that uses only alphanumeric characters and dashes. |
- | **Resource group** | Recommended: The same resource group as your App Service certificate. |
- | **Location** | The same location as your App Service app. |
- | **Pricing tier** | For information, see [Azure Key Vault pricing details](https://azure.microsoft.com/pricing/details/key-vault/). |
- | **Access policies** | Defines the applications and the allowed access to the vault resources. You can set up these policies later by following the steps at [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md). Currently, App Service Certificate supports only Key Vault access policies, not the RBAC model. |
- | **Virtual Network Access** | Restrict vault access to certain Azure virtual networks. You can set up this restriction later by following the steps at [Configure Azure Key Vault Firewalls and Virtual Networks](../key-vault/general/network-security.md) |
+## Import an App Service certificate
-1. After you select the vault, close the **Key Vault Repository** page. The **Step 1: Store** option should show a green check mark to indicate success. Keep the page open for the next step.
-
-### Confirm domain ownership
-
-1. From the same **Certificate Configuration** page in the previous section, select **Step 2: Verify**.
-
- ![Screenshot of "Certificate Configuration" pane with "Step 2: Verify" selected.](./media/configure-ssl-certificate/verify-domain.png)
-
-1. Select **App Service Verification**. However, because you previously mapped the domain to your web app per the [Prerequisites](#prerequisites), the domain is already verified. To finish this step, just select **Verify**, and then select **Refresh** until the message **Certificate is Domain Verified** appears.
-
-The following domain verification methods are supported:
-
-| Method | Description |
-|--|-|
-| **App Service** | The most convenient option when the domain is already mapped to an App Service app in the same subscription because the App Service app has already verified the domain ownership. Review the last step in [Confirm domain ownership](#confirm-domain-ownership). |
-| **Domain** | Confirm an [App Service domain that you purchased from Azure](manage-custom-dns-buy-domain.md). Azure automatically adds the verification TXT record for you and completes the process. |
-| **Mail** | Confirm the domain by sending an email to the domain administrator. Instructions are provided when you select the option. |
-| **Manual** | Confirm the domain by using either a DNS TXT record or an HTML page, which applies only to **Standard** certificates per the following note. The steps are provided after you select the option. The HTML page option doesn't work for web apps with "HTTPS Only' enabled. |
-
-> [!IMPORTANT]
-> For a **Standard** certificate, the certificate provider gives you a certificate for the requested top-level domain *and* the `www` subdomain, for example, `contoso.com` and `www.contoso.com`. However, starting December 1, 2021, [a restriction is introduced](https://azure.github.io/AppService/2021/11/22/ASC-1130-Change.html) on **App Service** and the **Manual** verification methods. To confirm domain ownership, both use HTML page verification. This method doesn't allow the certificate provider to include the `www` subdomain when issuing, rekeying, or renewing a certificate.
->
-> However, the **Domain** and **Mail** verification methods continue to include the `www` subdomain with the requested top-level domain in the certificate.
-
-### Import certificate into App Service
+To import an App Service certificate, first [buy and configure an App Service certificate](configure-ssl-app-service-certificate.md#buy-and-configure-an-app-service-certificate), then follow the steps here.
1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**.
-1. From your app's navigation menu, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Import App Service Certificate**.
+1. From your app's navigation menu, select **Certificates** > **Bring your own certificates (.pfx)** > **Add certificate**.
- ![Screenshot of app menu with "TLS/SSL settings", "Private Key Certificates (.pfx)", and "Import App Service certificate" selected.](./media/configure-ssl-certificate/import-app-service-cert.png)
+1. In **Source**, select **Import App Service Certificate**.
+1. In **App Service certificate**, select the certificate you just created.
+1. In **Certificate friendly name**, give the certificate a name in your app.
+1. Select **Validate**. When validation succeeds, select **Add**.
-1. Select the certificate that you just purchased, and then select **OK**.
+    :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert.png" alt-text="Screenshot of app management page with 'Certificates', 'Bring your own certificates (.pfx)', and 'Import App Service certificate' selected, and the completed 'Add private key certificate' page with the 'Validate' button.":::
- When the operation completes, the certificate appears in the **Private Key Certificates** list.
+ When the operation completes, the certificate appears in the **Bring your own certificates** list.
- ![Screenshot of "Private Key Certificates" pane with purchased certificate listed.](./media/configure-ssl-certificate/import-app-service-cert-finished.png)
+ :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert-finished.png" alt-text="Screenshot of 'Bring your own certificates (.pfx)' pane with purchased certificate listed.":::
1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
By default, the App Service resource provider doesn't have access to your key va
### Import a certificate from your vault to your app
-1. In the [Azure portal](https://portal.azure.com), on the left menu, select **App Services** > **\<app-name>**.
+1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**.
+
+1. From your app's navigation menu, select **Certificates** > **Bring your own certificates (.pfx)** > **Add certificate**.
+
+1. In **Source**, select **Import from Key Vault**.
-1. From your app's navigation menu, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Import Key Vault Certificate**.
+1. Select **Select key vault certificate**.
- ![Screenshot of "TLS/SSL settings", "Private Key Certificates (.pfx)", and "Import Key Vault Certificate" selected.](./media/configure-ssl-certificate/import-key-vault-cert.png)
+ :::image type="content" source="media/configure-ssl-certificate/import-key-vault-cert.png" alt-text="Screenshot of app management page with 'Certificates', 'Bring your own certificates (.pfx)', and 'Import from Key Vault' selected":::
1. To help you select the certificate, use the following table:

    | Setting | Description |
    |-|-|
    | **Subscription** | The subscription associated with the key vault. |
- | **Key Vault** | The key vault that has the certificate you want to import. |
+ | **Key vault** | The key vault that has the certificate you want to import. |
| **Certificate** | From this list, select a PKCS12 certificate that's in the vault. All PKCS12 certificates in the vault are listed with their thumbprints, but not all are supported in App Service. |
- When the operation completes, the certificate appears in the **Private Key Certificates** list. If the import fails with an error, the certificate doesn't meet the [requirements for App Service](#private-certificate-requirements).
+1. When you finish your selection, select **Select**, then **Validate**, and then **Add**.
- ![Screenshot of "Private Key Certificates" pane with imported certificate listed.](./media/configure-ssl-certificate/import-app-service-cert-finished.png)
+ When the operation completes, the certificate appears in the **Bring your own certificates** list. If the import fails with an error, the certificate doesn't meet the [requirements for App Service](#private-certificate-requirements).
+
+ :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert-finished.png" alt-text="Screenshot of 'Bring your own certificates (.pfx)' pane with imported certificate listed.":::
> [!NOTE]
> If you update your certificate in Key Vault with a new certificate, App Service automatically syncs your certificate within 24 hours.
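
If you prefer the command line, `az webapp config ssl import` performs the same import. A minimal sketch, assuming hypothetical resource names:

```azurecli
# Import a Key Vault certificate into the app (hypothetical names)
az webapp config ssl import \
    --resource-group <group-name> \
    --name <app-name> \
    --key-vault <key-vault-name> \
    --key-vault-certificate-name <certificate-name>
```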
By default, the App Service resource provider doesn't have access to your key va
After you get a certificate from your certificate provider, make the certificate ready for App Service by following the steps in this section.
-### Merge intermediate certificates
+#### Merge intermediate certificates
If your certificate authority gives you multiple certificates in the certificate chain, you must merge the certificates following the same order.
If your certificate authority gives you multiple certificates in the certificate
-----END CERTIFICATE-----
```
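
A minimal sketch of producing the merged file with common command-line tools, assuming hypothetical file names and that the server certificate comes first:

```bash
# Concatenate the server certificate and the intermediates, in order (hypothetical names)
cat mycert.crt intermediate1.crt intermediate2.crt > merged-certificate.crt
```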
-### Export merged private certificate to PFX
+#### Export merged private certificate to PFX
Now, export your merged TLS/SSL certificate with the private key that was used to generate your certificate request. If you generated your certificate request using OpenSSL, then you created a private key file.
Now, export your merged TLS/SSL certificate with the private key that was used t
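
If you generated the certificate request with OpenSSL, the following is a minimal sketch of exporting the merged certificate and private key to a PFX file, assuming hypothetical file names:

```bash
# Combine the private key and merged certificate chain into a password-protected PFX (hypothetical names)
openssl pkcs12 -export -out myserver.pfx -inkey myserver.key -in merged-certificate.crt
```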
1. If you used IIS or _Certreq.exe_ to generate your certificate request, install the certificate to your local computer, and then [export the certificate to a PFX file](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754329(v=ws.11)).
-### Upload certificate to App Service
+#### Upload certificate to App Service
You're now ready to upload the certificate to App Service.

1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**.
-1. From your app's navigation menu, select **TLS/SSL settings** > **Private Key Certificates (.pfx)** > **Upload Certificate**.
+1. From your app's navigation menu, select **Certificates** > **Bring your own certificates (.pfx)** > **Upload Certificate**.
+
+ :::image type="content" source="media/configure-ssl-certificate/upload-private-cert.png" alt-text="Screenshot of 'Certificates', 'Bring your own certificates (.pfx)', 'Upload Certificate' selected.":::
+
+1. To help you upload the .pfx certificate, use the following table:
- ![Screenshot of "TLS/SSL settings", "Private Key Certificates (.pfx)", "Upload Certificate" selected.](./media/configure-ssl-certificate/upload-private-cert.png)
+ | Setting | Description |
+ |-|-|
+ | **PFX certificate file** | Select your .pfx file. |
+ | **Certificate password** | Enter the password that you created when you exported the PFX file. |
+ | **Certificate friendly name** | The certificate name that will be shown in your web app. |
-1. In **PFX Certificate File**, select your PFX file. In **Certificate password**, enter the password that you created when you exported the PFX file. When you're done, select **Upload**.
+1. When you finish your selection, select **Select**, then **Validate**, and then **Add**.
- When the operation completes, the certificate appears in the **Private Key Certificates** list.
+ When the operation completes, the certificate appears in the **Bring your own certificates** list.
- ![Screenshot of "Private Key Certificates" pane with uploaded certificate listed.](./media/configure-ssl-certificate/create-free-cert-finished.png)
+ :::image type="content" source="media/configure-ssl-certificate/import-app-service-cert-finished.png" alt-text="Screenshot of 'Bring your own certificates' pane with uploaded certificate listed.":::
1. To secure a custom domain with this certificate, you still have to create a certificate binding. Follow the steps in [Secure a custom DNS name with a TLS/SSL binding in Azure App Service](configure-ssl-bindings.md).
Public certificates are supported in the *.cer* format.
1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**.
-1. From your app's navigation menu, select **TLS/SSL settings** > **Public Certificates (.cer)** > **Upload Public Key Certificate**.
+1. From your app's navigation menu, select **Certificates** > **Public key certificates (.cer)** > **Add certificate**.
+
+1. To help you upload the .cer certificate, use the following table:
-1. For **Name**, enter the name for the certificate. In **CER Certificate file**, select your CER file. When you're done, select **Upload**.
+ | Setting | Description |
+ |-|-|
+ | **CER certificate file** | Select your .cer file. |
+ | **Certificate friendly name** | The certificate name that will be shown in your web app. |
+
+1. When you're done, select **Add**.
- ![Screenshot of name and public key certificate to upload.](./media/configure-ssl-certificate/upload-public-cert.png)
+ :::image type="content" source="media/configure-ssl-certificate/upload-public-cert.png" alt-text="Screenshot of name and public key certificate to upload.":::
1. After the certificate is uploaded, copy the certificate thumbprint, and then review [Make the certificate accessible](configure-ssl-certificate-in-code.md#make-the-certificate-accessible). ## Renew an expiring certificate
-Before a certificate expires, make sure to add the renewed certificate to App Service, and update any [TLS/SSL bindings](configure-ssl-certificate.md) where the process depends on the certificate type. For example, a [certificate imported from Key Vault](#import-a-certificate-from-key-vault), including an [App Service certificate](#buy-and-import-app-service-certificate), automatically syncs to App Service every 24 hours and updates the TLS/SSL binding when you renew the certificate. For an [uploaded certificate](#upload-a-private-certificate), there's no automatic binding update. Based on your scenario, review the corresponding section:
+Before a certificate expires, make sure to add the renewed certificate to App Service and update any certificate bindings. The process depends on the certificate type. For example, a [certificate imported from Key Vault](#import-a-certificate-from-key-vault), including an [App Service certificate](configure-ssl-app-service-certificate.md), automatically syncs to App Service every 24 hours and updates the TLS/SSL binding when you renew the certificate. For an [uploaded certificate](#upload-a-private-certificate), there's no automatic binding update. Based on your scenario, review the corresponding section:
- [Renew an uploaded certificate](#renew-uploaded-certificate)
-- [Renew an App Service certificate](#renew-app-service-certificate)
+- [Renew an App Service certificate](configure-ssl-app-service-certificate.md#renew-an-app-service-certificate)
- [Renew a certificate imported from Key Vault](#renew-a-certificate-imported-from-key-vault)
-## Renew uploaded certificate
+#### Renew uploaded certificate
When you replace an expiring certificate, the way you update the certificate binding with the new certificate might adversely affect user experience. For example, your inbound IP address might change when you delete a binding, even if that binding is IP-based. This result is especially impactful when you renew a certificate that's already in an IP-based binding. To avoid a change in your app's IP address, and to avoid downtime for your app due to HTTPS errors, follow these steps in the specified sequence: 1. [Upload the new certificate](#upload-a-private-certificate).
-1. Bind the new certificate to the same custom domain without deleting the existing, expiring certificate. For this task, go to your App Service app's TLS/SSL settings pane, and select **Add Binding**.
+1. Go to the **Custom domains** page for your app, select the **...** actions button, and select **Update binding**.
- This action replaces the binding, rather than remove the existing certificate binding.
+1. Select the new certificate and select **Update**.
1. Delete the existing certificate.
-## Renew App Service certificate
-
-By default, App Service certificates have a one-year validity period. Before and nearer to the expiration date, you can automatically or manually renew App Service certificates in one-year increments. The renewal process effectively gives you a new App Service certificate with the expiration date extended to one year from the existing certificate's expiration date.
+#### Renew a certificate imported from Key Vault
> [!NOTE]
-> Starting September 23 2021, if you haven't verified the domain in the last 395 days, App Service certificates require domain verification during a renew or rekey process. The new certificate order remains in "pending issuance" mode during the renew or rekey process until you complete the domain verification.
->
-> Unlike an App Service managed certificate, domain re-verification for App Service certificates *isn't* automated. Failure to verify domain ownership results in failed renewals. For more information about how to verify your App Service certificate, review [Confirm domain ownership](#confirm-domain-ownership).
->
-> The renewal process requires that the well-known [service principal for App Service has the required permissions on your key vault](deploy-resource-manager-template.md#deploy-web-app-certificate-from-key-vault). These permissions are set up for you when you import an App Service certificate through the Azure portal. Make sure that you don't remove these permissions from your key vault.
-
-1. To change the automatic renewal setting for your App Service certificate at any time, on the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
-
-1. On the left menu, select **Auto Renew Settings**.
-
-1. Select **On** or **Off**, and select **Save**.
-
- If you turn on automatic renewal, certificates can start automatically renewing 32 days before expiration.
-
- ![Screenshot of specified certificate's auto renewal settings.](./media/configure-ssl-certificate/auto-renew-app-service-cert.png)
-
-1. To manually renew the certificate instead, select **Manual Renew**. You can request to manually renew your certificate 60 days before expiration.
-
-1. After the renew operation completes, select **Sync**.
-
- The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
-
- > [!NOTE]
- > If you don't select **Sync**, App Service automatically syncs your certificate within 24 hours.
-
-## Renew a certificate imported from Key Vault
+> To renew an App Service certificate, see [Renew an App Service certificate](configure-ssl-app-service-certificate.md#renew-an-app-service-certificate).
To renew a certificate that you imported into App Service from Key Vault, review [Renew your Azure Key Vault certificate](../key-vault/certificates/overview-renew-certificate.md).
-After the certificate renews inside your key vault, App Service automatically syncs the new certificate, and updates any applicable TLS/SSL binding within 24 hours. To sync manually, follow these steps:
-
-1. Go to your app's **TLS/SSL settings** page.
-
-1. Under **Private Key Certificates**, select the imported certificate, and then select **Sync**.
-
-## Manage App Service certificates
-
-This section includes links to tasks that help you manage an [App Service certificate that you purchased](#buy-and-import-app-service-certificate):
-- [Rekey an App Service certificate](#rekey-app-service-certificate)
-- [Export an App Service certificate](#export-app-service-certificate)
-- [Delete an App Service certificate](#delete-app-service-certificate)
-- [Renew an App Service certificate](#renew-app-service-certificate)
-
-### Rekey App Service certificate
-
-If you think your certificate's private key is compromised, you can rekey your certificate. This action rolls the certificate with a new certificate issued from the certificate authority.
-
-1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate. From the left menu, select **Rekey and Sync**.
-
-1. To start the process, select **Rekey**. This process can take 1-10 minutes to complete.
-
- ![Screenshot of rekeying an App Service certificate.](./media/configure-ssl-certificate/rekey-app-service-cert.png)
-
-1. You might also be required to [reconfirm domain ownership](#confirm-domain-ownership).
-
-1. After the rekey operation completes, select **Sync**.
-
- The sync operation automatically updates the hostname bindings for the certificate in App Service without causing any downtime to your apps.
-
- > [!NOTE]
- > If you don't select **Sync**, App Service automatically syncs your certificate within 24 hours.
-
-### Export App Service certificate
-
-Because an App Service certificate is a [Key Vault secret](../key-vault/general/about-keys-secrets-certificates.md), you can export a copy as a PFX file, which you can use for other Azure services or outside of Azure.
-
-> [!IMPORTANT]
-> The exported certificate is an unmanaged artifact. App Service doesn't sync such artifacts when the App Service Certificate is [renewed](#renew-app-service-certificate). You must export and install the renewed certificate where necessary.
-
-#### [Azure portal](#tab/portal)
-
-1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
-
-1. On the left menu, select **Export Certificate**.
-
-1. Select **Open in Key Vault**.
-
-1. Select the certificate's current version.
-
-1. Select **Download as a certificate**.
-
-#### [Azure CLI](#tab/cli)
-
-To export the App Service Certificate as a PFX file, run the following commands in [Azure Cloud Shell](https://shell.azure.com). Or, you can run Cloud Shell locally if you [installed Azure CLI](/cli/azure/install-azure-cli). Replace the placeholders with the names that you used when you [bought the App Service certificate](#start-certificate-purchase).
-
-```azurecli-interactive
-secretname=$(az resource show \
- --resource-group <group-name> \
- --resource-type "Microsoft.CertificateRegistration/certificateOrders" \
- --name <app-service-cert-name> \
- --query "properties.certificates.<app-service-cert-name>.keyVaultSecretName" \
- --output tsv)
-
-az keyvault secret download \
- --file appservicecertificate.pfx \
- --vault-name <key-vault-name> \
- --name $secretname \
- --encoding base64
-```
---
-The downloaded PFX file is a raw PKCS12 file that contains both the public and private certificates and has an import password that's an empty string. You can locally install the file by leaving the password field empty. You can't [upload the file as-is into App Service](#upload-a-private-certificate) because the file isn't [password protected](#private-certificate-requirements).
-
-### Delete App Service certificate
-
-If you delete an App Service certificate, the delete operation is irreversible and final. The result is a revoked certificate, and any binding in App Service that uses this certificate becomes invalid.
-
-To prevent accidental deletion, Azure puts a lock on the App Service certificate. So, to delete the certificate, you must first remove the delete lock on the certificate.
-
-1. On the [App Service Certificates page](https://portal.azure.com/#blade/HubsExtension/Resources/resourceType/Microsoft.CertificateRegistration%2FcertificateOrders), select the certificate.
-
-1. On the left menu, select **Locks**.
-
-1. On your certificate, find the lock with the lock type named **Delete**. To the right side, select **Delete**.
-
- ![Screenshot of deleting the lock on an App Service certificate.](./media/configure-ssl-certificate/delete-lock-app-service-cert.png)
-
-1. Now, you can delete the App Service certificate. From the left menu, select **Overview** > **Delete**.
-
-1. When the confirmation box opens, enter the certificate name, and select **OK**.
+After the certificate renews inside your key vault, App Service automatically syncs the new certificate, and updates any applicable certificate binding within 24 hours. To sync manually, follow these steps:
-## Automate with scripts
+1. Go to your app's **Certificates** page.
-### Azure CLI
+1. Under **Bring your own certificates (.pfx)**, select the **...** details button for the imported key vault certificate, and then select **Sync**.
-[Bind a custom TLS/SSL certificate to a web app](scripts/cli-configure-ssl-certificate.md)
+## Frequently asked questions
-### PowerShell
+- [How can I automate adding a bring-your-own certificate to an app?](#how-can-i-automate-adding-a-bring-your-own-certificate-to-an-app)
+- [Frequently asked questions for App Service certificates](configure-ssl-app-service-certificate.md#frequently-asked-questions)
+#### How can I automate adding a bring-your-own certificate to an app?
-[!code-powershell[main](../../powershell_scripts/app-service/configure-ssl-certificate/configure-ssl-certificate.ps1?highlight=1-3 "Bind a custom TLS/SSL certificate to a web app")]
+- [Azure CLI: Bind a custom TLS/SSL certificate to a web app](scripts/cli-configure-ssl-certificate.md)
+- [Azure PowerShell: Bind a custom TLS/SSL certificate to a web app](scripts/powershell-configure-ssl-certificate.md)
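Beyond the linked scripts, you can also automate the upload from application code. The following is a hedged Node.js sketch, assuming the `@azure/arm-appservice` package's `certificates.createOrUpdate` operation and placeholder resource names; treat it as an illustration, with the CLI and PowerShell scripts above remaining the documented paths:

```javascript
// Hypothetical sketch: upload a password-protected .pfx as an App Service
// certificate resource by using the Azure SDK for JavaScript.
const { DefaultAzureCredential } = require("@azure/identity");
const { WebSiteManagementClient } = require("@azure/arm-appservice");
const fs = require("fs");

async function uploadCertificate() {
    const client = new WebSiteManagementClient(
        new DefaultAzureCredential(), "<subscription-id>");

    // The PFX bytes and password correspond to the exported certificate.
    await client.certificates.createOrUpdate("<resource-group>", "<certificate-name>", {
        location: "<region>",
        pfxBlob: fs.readFileSync("appservicecertificate.pfx"),
        password: "<pfx-password>",
    });
}

uploadCertificate().catch(console.error);
```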
## More resources
app-service Integrate With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/integrate-with-application-gateway.md
With a public domain mapped to the application gateway, you don't need to config
### A valid public certificate
-For security enhancement, it's recommended to bind TLS/SSL certificate for session encryption. To bind TLS/SSL certificate to the application gateway, a valid public certificate with following information is required. With [App Service Certificates](../configure-ssl-certificate.md#start-certificate-purchase), you can buy a TLS/SSL certificate and export it in .pfx format.
+For enhanced security, we recommend binding a TLS/SSL certificate for session encryption. To bind a TLS/SSL certificate to the application gateway, a valid public certificate with the following information is required. With [App Service certificates](../configure-ssl-app-service-certificate.md), you can buy a TLS/SSL certificate and export it in .pfx format.
| Name | Value | Description |
| -- | -- | -- |
app-service Overview Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-manage-costs.md
This article describes how you plan for and manage costs for Azure App Service.
## Understand the full billing model for Azure App Service
-Azure App Service runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that there could be other additional infrastructure costs that might accrue.
+Azure App Service runs on Azure infrastructure that accrues costs when you deploy new resources. It's important to understand that additional infrastructure costs might accrue.
### How you're charged for Azure App Service
When you create or use App Service resources, you're charged for the following m
Other cost resources for App Service are (see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/) for details):

- [App Service domains](manage-custom-dns-buy-domain.md) Your subscription is charged for the domain registration on a yearly basis, if you enable automatic renewal.
-- [App Service certificates](configure-ssl-certificate.md#buy-and-import-app-service-certificate) One-time charge at the time of purchase. If you have multiple subdomains to secure, you can reduce cost by purchasing one wildcard certificate instead of multiple standard certificates.
-- [IP-based SSL binding](configure-ssl-bindings.md) The binding is configured on a certificate at the app level. Costs are accrued for each binding. For **Standard** tier and above, the first IP-based binding is not charged.
+- [App Service certificates](configure-ssl-app-service-certificate.md) One-time charge at the time of purchase. If you have multiple subdomains to secure, you can reduce cost by purchasing one wildcard certificate instead of multiple standard certificates.
+- [IP-based SSL binding](configure-ssl-bindings.md) The binding is configured on a certificate at the app level. Costs are accrued for each binding. For **Standard** tier and higher, the first IP-based binding isn't charged.
At the end of your billing cycle, charges accrue for each VM instance. Your bill or invoice shows a section for all App Service costs. There's a separate line item for each meter.
You can pay for Azure App Service charges with your Azure Prepayment credit. How
An easy way to estimate and optimize your App Service cost beforehand is by using the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
-To use the pricing calculator, click **App Service** in the **Products** tab. Then, scroll down to work with the calculator. The following screenshot is an example and doesn't reflect current pricing.
+To use the pricing calculator, select **App Service** in the **Products** tab. Then, scroll down to work with the calculator. The following screenshot is an example and doesn't reflect current pricing.
![Example showing estimated cost in the Azure Pricing calculator](media/overview-manage-costs/pricing-calculator.png)
When you create an App Service app or an App Service plan, you can see the estim
To create an app and view the estimated price:
-1. On the create page, scroll down to **App Service plan**, and click **Create new**.
-1. Specify a name and click **OK**.
-1. Next to **Sku and size**, click **Change size**.
+1. On the create page, scroll down to **App Service plan**, and select **Create new**.
+1. Specify a name and select **OK**.
+1. Next to **Sku and size**, select **Change size**.
1. Review the estimated price shown in the summary. The following screenshot is an example and doesn't reflect current pricing. ![Review estimated cost for each pricing tier in the portal](media/overview-manage-costs/pricing-estimates.png)
If your Azure subscription has a spending limit, Azure prevents you from spendin
At a basic level, App Service apps are charged by the App Service plan that hosts them. The costs associated with your App Service deployment depend on a few main factors:

- **Pricing tier** Otherwise known as the SKU of the App Service plan. Higher tiers provide more CPU cores, memory, storage, or features, or combinations of them.
-- **Instance count** dedicated tiers (Basic and above) can be scaled out, and each scaled out instance accrues costs.
+- **Instance count** dedicated tiers (Basic and higher) can be scaled out, and each scaled out instance accrues costs.
- **Stamp fee** In the Isolated tier, a flat fee is accrued on your App Service environment, regardless of how many apps or worker instances are hosted. An App Service plan can host more than one app. Depending on your deployment, you could save costs by hosting more apps on one App Service plan (that is, hosting your apps on fewer App Service plans).
To test App Service or your solution while accruing low or minimal cost, you can
### Production workloads
-Production workloads come with the recommendation of the dedicated **Standard** pricing tier or above. While the price goes up for higher tiers, it also gives you more memory and storage and higher-performing hardware, giving you higher app density per compute instance. That translates to lower instance count for the same number of apps, and therefore lower cost. In fact, **Premium V3** (the highest non-**Isolated** tier) is the most cost effective way to serve your app at scale. To add to the savings, you can get deep discounts on [Premium V3 reservations](#azure-reservations).
+For production workloads, we recommend the dedicated **Standard** pricing tier or higher. While the price goes up for higher tiers, it also gives you more memory and storage and higher-performing hardware, giving you higher app density per compute instance. That translates to a lower instance count for the same number of apps, and therefore lower cost. In fact, **Premium V3** (the highest non-**Isolated** tier) is the most cost effective way to serve your app at scale. To add to the savings, you can get deep discounts on [Premium V3 reservations](#azure-reservations).
> [!NOTE] > **Premium V3** supports both Windows containers and Linux containers.
-Once you choose the pricing tier you want, you should minimize the idle instances. In a scale-out deployment, you can waste money on underutilized compute instances. You should [configure autoscaling](../azure-monitor/autoscale/autoscale-get-started.md), available in **Standard** tier and above. By creating scale-out schedules, as well as metric-based scale-out rules, you only pay for the instances you really need at any given time.
+Once you choose the pricing tier you want, minimize idle instances. In a scale-out deployment, you can waste money on underutilized compute instances. You should [configure autoscaling](../azure-monitor/autoscale/autoscale-get-started.md), available in the **Standard** tier and higher. By creating scale-out schedules, as well as metric-based scale-out rules, you pay only for the instances you really need at any given time.
### Azure Reservations
If you plan to utilize a known minimum number of compute instances for one year
- **Windows (or platform agnostic)** Can apply to Windows or Linux instances in your subscription. - **Linux specific** Applies only to Linux instances in your subscription.
-The reserved instance pricing applies to the applicable instances in your subscription, up to the number of instances that you reserve. The reserved instances are a billing matter and are not tied to specific compute instances. If you run fewer instances than you reserve at any point during the reservation period, you still pay for the reserved instances. If you run more instances than you reserve at any point during the reservation period, you pay the normal accrued cost for the additional instances.
+The reserved instance pricing applies to the applicable instances in your subscription, up to the number of instances that you reserve. The reserved instances are a billing matter and aren't tied to specific compute instances. If you run fewer instances than you reserve at any point during the reservation period, you still pay for the reserved instances. If you run more instances than you reserve at any point during the reservation period, you pay the normal accrued cost for the additional instances.
The **Isolated** tier (App Service environment) also supports 1-year and 3-year reservations at reduced pricing. For more information, see [How reservation discounts apply to Azure App Service](../cost-management-billing/reservations/reservation-discount-app-service.md).
app-service Troubleshoot Domain Ssl Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/troubleshoot-domain-ssl-certificates.md
This problem might happen if another app uses the certificate.
#### Symptom
-In the Azure portal, you can't purchase an [Azure App Service certificate](./configure-ssl-certificate.md#buy-and-import-app-service-certificate).
+In the Azure portal, you can't purchase an [Azure App Service certificate](configure-ssl-app-service-certificate.md).
#### Cause and solution
The Key Vault used to store the App Service Certificate is missing access policy
#### Solution 1: Modify the access policies for the key vault
-To modify the access polices for the key vault, follow these steps:
+To modify the access policies for the key vault, follow these steps:
1. Sign in to the Azure portal. Select the Key Vault used by your App Service Certificate, and then go to **Access policies**. 2. If you don't see the two service principals listed, add them. If they're available, verify that the permissions include the recommended secret and certificate permissions.
You're not required to migrate to Azure DNS hosting. If you want to migrate to A
**I would like to purchase my domain from App Service Domain but can I host my domain on GoDaddy instead of Azure DNS?**
-Starting July 24, 2017, Azure hosts App Service domains purchased from the Azure portal on Azure DNS. If you prefer to use a different hosting provider, you must go to their website to obtain a domain hosting solution.
+Starting on July 24, 2017, Azure hosts App Service domains purchased from the Azure portal on Azure DNS. If you prefer to use a different hosting provider, you must go to their website to obtain a domain hosting solution.
**Do I have to pay for privacy protection for my domain?**
When you purchase a domain, you're not charged for five days. During this time,
**Can I use the domain in another Azure App Service app in my subscription?**
-Yes, when you access the Custom Domains and TLS blade in the Azure portal, you see the domains that you purchased. You can configure your app to use any of those domains.
+Yes, when you access the **Custom domains** and **Certificates** pages in the Azure portal, you see the domains that you purchased. You can configure your app to use any of those domains.
**Can I transfer a domain from one subscription to another subscription?**
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
Subnet Size /24 = 256 IP addresses - 5 reserved from the platform = 251 availabl
> [!IMPORTANT] > Although a /24 subnet isn't required per Application Gateway v2 SKU deployment, it is highly recommended. This is to ensure that Application Gateway v2 has sufficient space for autoscaling expansion and maintenance upgrades. You should ensure that the Application Gateway v2 subnet has sufficient address space to accommodate the number of instances required to serve your maximum expected traffic. If you specify the maximum instance count, then the subnet should have capacity for at least that many addresses. For capacity planning around instance count, see [instance count details](understanding-pricing.md#instance-count).
+> [!IMPORTANT]
+> The subnet named "GatewaySubnet" is reserved for VPN gateways. Application Gateway V1 resources that use the "GatewaySubnet" subnet must be moved to a different subnet or migrated to the V2 SKU before September 30, 2023, to avoid control plane failures and platform inconsistencies. To change the subnet of an existing application gateway, see the [steps in the Application Gateway FAQ](application-gateway-faq.yml#can-i-change-the-virtual-network-or-subnet-for-an-existing-application-gateway).
+ > [!TIP] > IP addresses are allocated from the beginning of the defined subnet space for gateway instances. As instances are created and removed due to creation of gateways or scaling events, it can become difficult to understand what the next available address is in the subnet. To be able to determine the next address to use for a future gateway and have a contiguous addressing theme for frontend IPs, consider assigning frontend IP addresses from the upper half of the defined subnet space. For example, if your subnet address space is 10.5.5.0/24, consider setting the private frontend IP configuration of your gateways starting with 10.5.5.254 and then following with 10.5.5.253, 10.5.5.252, 10.5.5.251, and so forth for future gateways.
automation Automation Update Azure Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-update-azure-modules.md
Title: Update Azure PowerShell modules in Azure Automation
description: This article describes how to update common Azure PowerShell modules provided by default in Azure Automation. Previously updated : 05/03/2023 Last updated : 07/03/2023 # Update Azure PowerShell modules in Automation
-> [!Important]
-> If you are facing issues while upgrading to **Az.Accounts version 2.12.2** or upgrading to a newer version with dependencies on **Az.Accounts version 2.12.2**, we recommend you use **Az.Accounts version 2.12.1 or lower** to avoid issues with Az modules that are dependent on Az.Accounts. For more information, see [steps to import module with specific versions](shared-resources/modules.md#import-modules-by-using-powershell).
-
- The most common PowerShell modules are provided by default in each Automation account. See [Default modules](shared-resources/modules.md#default-modules). As the Azure team updates the Azure modules regularly, changes can occur with the included cmdlets. These changes, for example, renaming a parameter or deprecating a cmdlet entirely, can negatively affect your runbooks. > [!NOTE]
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
Consider the following options when choosing an Azure Cache for Redis tier:
- **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss. - **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through Azure portal and CLI. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md). - **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).-- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/), [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/), and [RedisJSON](https://docs.redis.com/latest/modules/redisjson/) (preview). These modules add new data types and functionality to Redis.
+- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redis.com/latest/modules/redisearch/), [RedisBloom](https://docs.redis.com/latest/modules/redisbloom/), [RedisTimeSeries](https://docs.redis.com/latest/modules/redistimeseries/), and [RedisJSON](https://docs.redis.com/latest/modules/redisjson/). These modules add new data types and functionality to Redis.
You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to scale - Basic, Standard, and Premium tiers](cache-how-to-scale.md#how-to-scalebasic-standard-and-premium-tiers).
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
# Use the Azure Maps Indoor Maps module with custom styles (preview)
-The Azure Maps Web SDK includes the *Azure Maps Indoor* module, enabling you to render indoor maps created in Azure Maps Creator services.
+The Azure Maps Web SDK includes an [Indoor Maps] module, enabling you to render indoor maps created in Azure Maps Creator services.
When you create an indoor map using Azure Maps Creator, default styles are applied. Azure Maps Creator now also supports customizing the styles of the different elements of your indoor maps using the [Style Rest API], or the [visual style editor].
To use the globally hosted Azure Content Delivery Network version of the *Azure
``` Inside your source file, import atlas-indoor.min.css:+ ```js import "azure-maps-indoor/dist/atlas-indoor.min.css"; ``` Then add loaders to the module rules portion of the Webpack config:+ ```js module.exports = { module: {
For a live demo of an indoor map with available source code, see [Creator Indoor
Read about the APIs that are related to the *Azure Maps Indoor* module: > [!div class="nextstepaction"]
-> [Drawing package requirements](drawing-requirements.md)
+> [Drawing package requirements]
>[!div class="nextstepaction"]
-> [Creator for indoor maps](creator-indoor-maps.md)
+> [Creator for indoor maps]
Learn more about how to add more data to your map: > [!div class="nextstepaction"]
-> [Indoor Maps dynamic styling](indoor-map-dynamic-styling.md)
+> [Indoor Maps dynamic styling]
> [!div class="nextstepaction"]
-> [Code samples](/samples/browse/?products=azure-maps)
+> [Code samples]
[Azure Content Delivery Network]: #embed-the-indoor-maps-module [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [Azure Maps Creator resource]: how-to-manage-creator.md
+[Indoor Maps]: https://www.npmjs.com/package/azure-maps-indoor
[Azure Maps service geographic scope]: geographic-scope.md [azure-maps-indoor package]: https://www.npmjs.com/package/azure-maps-indoor
+[Code samples]: /samples/browse/?products=azure-maps
[Create custom styles for indoor maps]: how-to-create-custom-styles.md
+[Creator for indoor maps]: creator-indoor-maps.md
[Creator Indoor Maps]: https://samples.azuremaps.com/?sample=creator-indoor-maps
+[Drawing package requirements]: drawing-requirements.md
[dynamic map styling]: indoor-map-dynamic-styling.md
+[Indoor Maps dynamic styling]: indoor-map-dynamic-styling.md
[map configuration API]: /rest/api/maps/v20220901preview/map-configuration [map configuration]: creator-indoor-maps.md#map-configuration [Style Rest API]: /rest/api/maps/v20220901preview/style
+[style-loader]: https://webpack.js.org/loaders/style-loader
[Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Tileset List API]: /rest/api/maps/v2/tileset/list [Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md
-[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor/
-[Webpack]: https://webpack.js.org/
-[style-loader]: https://webpack.js.org/loaders/style-loader/
+[visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
+[Webpack]: https://webpack.js.org
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
# Use the Azure Maps map control
-The Map Control client-side JavaScript library allows you to render maps and embedded Azure Maps functionality into your web or mobile application.
+The Azure Maps Web SDK provides a [Map Control] that enables the customization of interactive maps with your own content and imagery for display in your web or mobile applications. This control is a client-side library that lets you render maps and embed Azure Maps functionality into your web or mobile application by using JavaScript or TypeScript.
This article uses the Azure Maps Web SDK, however the Azure Maps services work with any map control. For a list of third-party map control plug-ins, see [Azure Maps community - Open-source projects].
This article uses the Azure Maps Web SDK, however the Azure Maps services work w
To use the Map Control in a web page, you must have one of the following prerequisites: * An [Azure Maps account]
-* A [subscription key]
-* Obtain your Azure Active Directory (Azure AD) credentials with [authentication options]
+* A [subscription key] or Azure Active Directory (Azure AD) credentials. For more information, see [authentication options].
## Create a new map in a web page
For a list of samples showing how to integrate Azure AD with Azure Maps, see:
[ng-azure-maps]: https://github.com/arnaudleclerc/ng-azure-maps [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Vue Azure Maps]: https://github.com/rickyruiz/vue-azure-maps
+[Map Control]: https://www.npmjs.com/package/azure-maps-control
azure-maps How To Use Services Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-services-module.md
# Use the Azure Maps services module
-The Azure Maps Web SDK provides a *services module*. This module is a helper library that makes it easy to use the Azure Maps REST services in web or Node.js applications by using JavaScript or TypeScript.
+The Azure Maps Web SDK provides a [services module]. This module is a helper library that makes it easy to use the Azure Maps REST services in web or Node.js applications by using JavaScript or TypeScript.
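As a minimal sketch of the pattern this module enables once it's loaded (the subscription key is a placeholder):

```javascript
import * as service from "azure-maps-rest";

// Wrap a subscription key credential in a request pipeline.
const credential = new service.SubscriptionKeyCredential("<Your Azure Maps Key>");
const pipeline = service.MapsURL.newPipeline(credential);

// Create a SearchURL client and search for points of interest.
const searchURL = new service.SearchURL(pipeline);
searchURL.searchPOI(service.Aborter.timeout(10000), "gasoline station", { limit: 5 })
    .then(response => {
        // The geojson helper converts the response to GeoJSON features.
        console.log(response.geojson.getFeatures());
    });
```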
## Use the services module in a webpage
If directly accessing the Azure Maps REST services, change the URL domain to `at
Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [MapsURL](/javascript/api/azure-maps-rest/atlas.service.mapsurl)
+> [MapsURL]
> [!div class="nextstepaction"]
-> [SearchURL](/javascript/api/azure-maps-rest/atlas.service.searchurl)
+> [SearchURL]
> [!div class="nextstepaction"]
-> [RouteURL](/javascript/api/azure-maps-rest/atlas.service.routeurl)
+> [RouteURL]
> [!div class="nextstepaction"]
-> [SubscriptionKeyCredential](/javascript/api/azure-maps-rest/atlas.service.subscriptionkeycredential)
+> [SubscriptionKeyCredential]
> [!div class="nextstepaction"]
-> [TokenCredential](/javascript/api/azure-maps-rest/atlas.service.tokencredential)
+> [TokenCredential]
For more code samples that use the services module, see these articles: > [!div class="nextstepaction"]
-> [Show search results on the map](./map-search-location.md)
+> [Show search results on the map]
> [!div class="nextstepaction"]
-> [Get information from a coordinate](./map-get-information-from-coordinate.md)
+> [Get information from a coordinate]
> [!div class="nextstepaction"]
-> [Show directions from A to B](./map-route.md)
+> [Show directions from A to B]
+
+[MapsURL]: /javascript/api/azure-maps-rest/atlas.service.mapsurl
+[SearchURL]: /javascript/api/azure-maps-rest/atlas.service.searchurl
+[RouteURL]: /javascript/api/azure-maps-rest/atlas.service.routeurl
+[SubscriptionKeyCredential]: /javascript/api/azure-maps-rest/atlas.service.subscriptionkeycredential
+[TokenCredential]: /javascript/api/azure-maps-rest/atlas.service.tokencredential
+[Show search results on the map]: map-search-location.md
+[Get information from a coordinate]: map-get-information-from-coordinate.md
+[Show directions from A to B]: map-route.md
[Authentication with Azure Maps]: azure-maps-authentication.md
+[services module]: https://www.npmjs.com/package/azure-maps-rest
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
Title: How to use the Azure Maps spatial IO module | Microsoft Azure Maps
+ Title: How to use the Azure Maps spatial IO module
+ description: Learn how to use the Spatial IO module provided by the Azure Maps Web SDK. This module provides robust features to make it easy for developers to integrate spatial data with the Azure Maps web SDK.
# How to use the Azure Maps Spatial IO module
-The Azure Maps Web SDK provides the **Spatial IO module**, which integrates spatial data with the Azure Maps web SDK using JavaScript or TypeScript. The robust features in this module allow developers to:
+The Azure Maps Web SDK provides the [Spatial IO module], which integrates spatial data with the Azure Maps web SDK using JavaScript or TypeScript. The robust features in this module allow developers to:
- [Read and write spatial data]. Supported file formats include: KML, KMZ, GPX, GeoRSS, GML, GeoJSON and CSV files containing columns with spatial information. Also supports Well-Known Text (WKT). - Connect to Open Geospatial Consortium (OGC) services and integrate with Azure Maps web SDK, and overlay Web Map Services (WMS) and Web Map Tile Services (WMTS) as layers on the map. For more information, see [Add a map layer from the Open Geospatial Consortium (OGC)].
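For example, a minimal sketch of the read capability listed above, assuming `gpxText` holds spatial data as a string and `datasource` is a `DataSource` already attached to the map:

```javascript
// Parse a string of KML, GPX, GeoRSS, GML, GeoJSON, CSV, or WKT data
// into GeoJSON that a data source can render.
atlas.io.read(gpxText).then(data => {
    if (data) {
        // Add the parsed features to the map's data source.
        datasource.add(data);
    }
});
```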
Refer to the Azure Maps Spatial IO documentation:
> [!div class="nextstepaction"] > [Azure Maps Spatial IO package]
-[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[Read and write spatial data]: spatial-io-read-write-spatial-data.md
[Add a map layer from the Open Geospatial Consortium (OGC)]: spatial-io-add-ogc-map-layer.md [Add a simple data layer]: spatial-io-add-simple-data-layer.md
-[Core IO operations]: spatial-io-core-operations.md
-[Connect to a WFS service]: spatial-io-connect-wfs-service.md
-[azure-maps-spatial-io]: https://www.npmjs.com/package/azure-maps-spatial-io
-[Azure Maps map control]: how-to-use-map-control.md
[Add an OGC map layer]: spatial-io-add-ogc-map-layer.md
+[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
+[Azure Maps map control]: how-to-use-map-control.md
+[Azure Maps Spatial IO package]: /javascript/api/azure-maps-spatial-io
+[azure-maps-spatial-io]: https://www.npmjs.com/package/azure-maps-spatial-io
+[Connect to a WFS service]: spatial-io-connect-wfs-service.md
+[Core IO operations]: spatial-io-core-operations.md
[Leverage core operations]: spatial-io-core-operations.md
+[Read and write spatial data]: spatial-io-read-write-spatial-data.md
+[Spatial IO module]: https://www.npmjs.com/package/azure-maps-spatial-io
+[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
[Supported data format details]: spatial-io-supported-data-format-details.md
-[Azure Maps Spatial IO package]: /javascript/api/azure-maps-spatial-io
+
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
Title: Create a map with Azure Maps | Microsoft Azure Maps
+ Title: Create a map with Azure Maps
+ description: Find out how to add maps to web pages by using the Azure Maps Web SDK. Learn about options for animation, style, the camera, services, and user interactions.
This article shows you ways to create a map and animate a map.
## Loading a map
-To load a map, create a new instance of the [Map class](/javascript/api/azure-maps-control/atlas.map). When initializing the map, pass a DIV element ID to render the map and pass a set of options to use when loading the map. If default authentication information isn't specified on the `atlas` namespace, this information needs to be specified in the map options when loading the map. The map loads several resources asynchronously for performance. As such, after creating the map instance, attach a `ready` or `load` event to the map and then add any additional code that interacts with the map to the event handler. The `ready` event fires as soon as the map has enough resources loaded to be interacted with programmatically. The `load` event fires after the initial map view has finished loading completely.
+To load a map, create a new instance of the [Map class]. When initializing the map, pass a DIV element ID to render the map and pass a set of options to use when loading the map. If default authentication information isn't specified on the `atlas` namespace, this information needs to be specified in the map options when loading the map. The map loads several resources asynchronously for performance. As such, after creating the map instance, attach a `ready` or `load` event to the map and then add more code that interacts with the map to the event handler. The `ready` event fires as soon as the map has enough resources loaded to be interacted with programmatically. The `load` event fires after the initial map view has finished loading completely.
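As a minimal sketch of that flow, assuming a `<div id="myMap"></div>` element on the page and a placeholder subscription key:

```javascript
// Create a map instance that renders into the "myMap" DIV, passing
// authentication in the map options.
var map = new atlas.Map('myMap', {
    center: [-122.33, 47.6],
    zoom: 12,
    authOptions: {
        authType: 'subscriptionKey',
        subscriptionKey: '<Your Azure Maps Key>'
    }
});

// Wait until the map resources are ready before interacting with the map.
map.events.add('ready', function () {
    // Add sources, layers, and event handlers here.
});
```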
You can also load multiple maps on the same page. For sample code that demonstrates this, see [Multiple Maps] in the [Azure Maps Samples]. For the source code for this sample, see [Multiple Maps source code].
renderWorldCopies: false
When creating a map, there are several different types of options that can be passed in to customize how the map functions:

-- [CameraOptions](/javascript/api/azure-maps-control/atlas.cameraoptions) and [CameraBoundOptions](/javascript/api/azure-maps-control/atlas.cameraboundsoptions) are used to specify the area the map should display.
-- [ServiceOptions](/javascript/api/azure-maps-control/atlas.serviceoptions) are used to specify how the map should interact with services that power the map.
-- [StyleOptions](/javascript/api/azure-maps-control/atlas.styleoptions) are used to specify the map should be styled and rendered.
-- [UserInteractionOptions](/javascript/api/azure-maps-control/atlas.userinteractionoptions) are used to specify how the map should reach when the user is interacting with the map.
+- [CameraOptions] and [CameraBoundOptions] are used to specify the area the map should display.
+- [ServiceOptions] are used to specify how the map should interact with services that power the map.
+- [StyleOptions] are used to specify how the map should be styled and rendered.
+- [UserInteractionOptions] are used to specify how the map should react when the user is interacting with the map.
These options can also be updated after the map has been loaded using the `setCamera`, `setServiceOptions`, `setStyle`, and `setUserInteraction` functions.
map.setCamera({
}); ```
-Map properties, such as center and zoom level, are part of the [CameraOptions](/javascript/api/azure-maps-control/atlas.cameraoptions) properties.
+Map properties, such as center and zoom level, are part of the [CameraOptions] properties.
<! <iframe height='500' scrolling='no' title='Create a map via CameraOptions' src='//codepen.io/azuremaps/embed/qxKBMN/?height=543&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/qxKBMN/'>Create a map via `CameraOptions` </a>by Azure Location Based Services (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
Map properties, such as center and zoom level, are part of the [CameraOptions](/
### Set the camera bounds
-A bounding box can be used to update the map camera. If the bounding box was calculated from point data, it's often useful to also specify a pixel padding value in the camera options to account for the icon size. This helps ensure that points don't fall off the edge of the map viewport.
+A bounding box can be used to update the map camera. If the bounding box was calculated from point data, it's often useful to specify a pixel padding value in the camera options to account for the icon size. This pixel padding helps ensure that points don't fall off the edge of the map viewport.
```javascript map.setCamera({
map.setCamera({
}); ```
-In the following code, a [Map object](/javascript/api/azure-maps-control/atlas.map) is constructed via `new atlas.Map()`. Map properties such as `CameraBoundsOptions` can be defined via [setCamera](/javascript/api/azure-maps-control/atlas.map) function of the Map class. Bounds and padding properties are set using `setCamera`.
+In the following code, a [Map object] is constructed via `new atlas.Map()`. Map properties such as `CameraBoundsOptions` can be defined via the [setCamera] function of the Map class. Bounds and padding properties are set using `setCamera`.
<!- <iframe height='500' scrolling='no' title='Create a map via CameraBoundsOptions' src='//codepen.io/azuremaps/embed/ZrRbPg/?height=543&theme-id=0&default-tab=js,result&embed-version=2&editable=true' frameborder='no' loading="lazy" allowtransparency='true' allowfullscreen='true'>See the Pen <a href='https://codepen.io/azuremaps/pen/ZrRbPg/'>Create a map via `CameraBoundsOptions` </a>by Azure Maps (<a href='https://codepen.io/azuremaps'>@azuremaps</a>) on <a href='https://codepen.io'>CodePen</a>.
In the following code, a [Map object](/javascript/api/azure-maps-control/atlas.m
### Animate map view
-When setting the camera options of the map, [animation options](/javascript/api/azure-maps-control/atlas.animationoptions) can also be set. These options specify the type of animation and duration it should take to move the camera.
+When setting the camera options of the map, [animation options] can also be set. These options specify the type of animation and duration it should take to move the camera.
```javascript map.setCamera({
map.setCamera({
}); ```
-In the following code, the first code block creates a map and sets the enter and zoom map styles. In the second code block, a click event handler is created for the animate button. When this button is selected, the `setCamera` function is called with some random values for the [CameraOptions](/javascript/api/azure-maps-control/atlas.cameraoptions) and [AnimationOptions](/javascript/api/azure-maps-control/atlas.animationoptions).
+In the following code, the first code block creates a map and sets the center and zoom map styles. In the second code block, a click event handler is created for the animate button. When this button is selected, the `setCamera` function is called with some random values for the [CameraOptions] and [AnimationOptions].
```html <!DOCTYPE html>
In the following code, the first code block creates a map and sets the enter and
Sometimes it's useful to be able to modify HTTP requests made by the map control. For example: -- Add additional headers to tile requests. This is often done for password protected services.
+- Add more headers to tile requests for password protected services.
- Modify URLs to run requests through a proxy service.
-The [service options](/javascript/api/azure-maps-control/atlas.serviceoptions) of the map has a `transformRequest` that can be used to modify all requests made by the map before they're made. The `transformRequest` option is a function that takes in two parameters; a string URL, and a resource type string that indicates what the request is used for. This function must return a [RequestParameters](/javascript/api/azure-maps-control/atlas.requestparameters) result.
+The [service options] of the map include a `transformRequest` option that can be used to modify all requests made by the map before they're made. The `transformRequest` option is a function that takes in two parameters: a string URL, and a resource type string that indicates what the request is used for. This function must return a [RequestParameters] result.
```JavaScript transformRequest: (url: string, resourceType: string) => RequestParameters
The resource types most relevant to content you add to the map are listed in the
| Image | A request for an image for use with either a SymbolLayer or ImageLayer. | | Source | A request for source information, such as a TileJSON request. Some requests from the base map styles also use this resource type when loading source information. | | Tile | A request from a tile layer (raster or vector). |
-| WFS | A request from a `WfsClient` in the [Spatial IO module](spatial-io-connect-wfs-service.md) to an OGC Web Feature Service. |
-| WebMapService | A request from the `OgcMapLayer` in the [Spatial IO module](spatial-io-add-ogc-map-layer.md) to a WMS or WMTS service. |
+| WFS | A request from a `WfsClient` in the Spatial IO module to an OGC Web Feature Service. For more information, see [Connect to a WFS service]. |
+| WebMapService | A request from the `OgcMapLayer` in the Spatial IO module to a WMS or WMTS service. For more information, see [Add a map layer from the Open Geospatial Consortium (OGC)]. |
-Here are some resource types that are passed through the request transform and are related to the base map styles: StyleDefinitions, Style, SpriteImage, SpriteJSON, Glyphs, Attribution. You'll normally want to ignore these and simply return the `url` value.
+Some resource types related to the base map styles also pass through the request transform: StyleDefinitions, Style, SpriteImage, SpriteJSON, Glyphs, and Attribution. You typically ignore these and return the `url` value unchanged.
-The following example shows how to use this to modify all requests to the size `https://example.com` by adding a username and password as headers to the request.
+The following example shows how to modify all requests to the site `https://example.com` by adding a username and password as headers to the request.
```JavaScript var map = new atlas.Map('myMap', {
var map = new atlas.Map('myMap', {
Learn more about the classes and methods used in this article: > [!div class="nextstepaction"]
-> [Map](/javascript/api/azure-maps-control/atlas.map)
+> [Map]
> [!div class="nextstepaction"]
-> [CameraOptions](/javascript/api/azure-maps-control/atlas.cameraoptions)
+> [CameraOptions]
> [!div class="nextstepaction"]
-> [AnimationOptions](/javascript/api/azure-maps-control/atlas.animationoptions)
+> [AnimationOptions]
See code examples to add functionality to your app: > [!div class="nextstepaction"]
-> [Change style of the map](choose-map-style.md)
+> [Change style of the map]
> [!div class="nextstepaction"]
-> [Add controls to the map](map-add-controls.md)
+> [Add controls to the map]
> [!div class="nextstepaction"]
-> [Code samples](/samples/browse/?products=azure-maps)
+> [Code samples]
+[Add a map layer from the Open Geospatial Consortium (OGC)]: spatial-io-add-ogc-map-layer.md
+[Add controls to the map]: map-add-controls.md
+[animation options]: /javascript/api/azure-maps-control/atlas.animationoptions
+[AnimationOptions]: /javascript/api/azure-maps-control/atlas.animationoptions
[Azure Maps Samples]: https://samples.azuremaps.com
-[Multiple Maps]: https://samples.azuremaps.com/map/multiple-maps
+[CameraBoundOptions]: /javascript/api/azure-maps-control/atlas.cameraboundsoptions
+[CameraOptions]: /javascript/api/azure-maps-control/atlas.cameraoptions
+[Change style of the map]: choose-map-style.md
+[Code samples]: /samples/browse/?products=azure-maps
+[Connect to a WFS service]: spatial-io-connect-wfs-service.md
+[Map class]: /javascript/api/azure-maps-control/atlas.map
+[Map object]: /javascript/api/azure-maps-control/atlas.map
+[Map]: /javascript/api/azure-maps-control/atlas.map
[Multiple Maps source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Map/Multiple%20Maps/Multiple%20Maps.html
+[Multiple Maps]: https://samples.azuremaps.com/map/multiple-maps
+[RequestParameters]: /javascript/api/azure-maps-control/atlas.requestparameters
+[service options]: /javascript/api/azure-maps-control/atlas.serviceoptions
+[ServiceOptions]: /javascript/api/azure-maps-control/atlas.serviceoptions
+[setCamera]: /javascript/api/azure-maps-control/atlas.map
+[StyleOptions]: /javascript/api/azure-maps-control/atlas.styleoptions
+[UserInteractionOptions]: /javascript/api/azure-maps-control/atlas.userinteractionoptions
azure-maps Set Drawing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md
Title: Drawing tools module | Microsoft Azure Maps
+ Title: Drawing tools module
+ description: In this article, you'll learn how to set drawing options data using the Microsoft Azure Maps Web SDK
# Use the drawing tools module
-The Azure Maps Web SDK provides a *drawing tools module*. This module makes it easy to draw and edit shapes on the map using an input device such as a mouse or touch screen. The core class of this module is the [drawing manager](/javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager#setoptions-drawingmanageroptions-). The drawing manager provides all the capabilities needed to draw and edit shapes on the map. It can be used directly, and it's integrated with a custom toolbar UI. You can also use the built-in [drawing toolbar](/javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar) class.
+The Azure Maps Web SDK provides a [drawing tools module]. This module makes it easy to draw and edit shapes on the map using an input device such as a mouse or touch screen. The core class of this module is the [drawing manager](/javascript/api/azure-maps-drawing-tools/atlas.drawing.drawingmanager#setoptions-drawingmanageroptions-). The drawing manager provides all the capabilities needed to draw and edit shapes on the map. It can be used directly, and it's integrated with a custom toolbar UI. You can also use the built-in [drawing toolbar](/javascript/api/azure-maps-drawing-tools/atlas.control.drawingtoolbar) class.
## Loading the drawing tools module in a webpage
The Azure Maps Web SDK provides a *drawing tools module*. This module makes it e
```

Inside your source file, import atlas-drawing.min.css:

```js
import "azure-maps-drawing-tools/dist/atlas-drawing.min.css";
```

Then add loaders to the module rules portion of the Webpack config:

```js
module.exports = {
  module: {
    rules: [
      {
        test: /\.css$/i,
        //A sketch of the loader chain, assuming the style-loader and
        //css-loader packages linked at the end of this article.
        use: ["style-loader", "css-loader"]
      }
    ]
  }
};
```
The following image is an example of drawing mode of the `DrawingManager`. Selec
The drawing manager supports three different ways of interacting with the map to draw shapes.
-* `click` - Coordinates are added when the mouse or touch is clicked.
-* `freehand ` - Coordinates are added when the mouse or touch is dragged on the map.
-* `hybrid` - Coordinates are added when the mouse or touch is clicked or dragged.
+- `click` - Coordinates are added when the mouse or touch is clicked.
+- `freehand` - Coordinates are added when the mouse or touch is dragged on the map.
+- `hybrid` - Coordinates are added when the mouse or touch is clicked or dragged.
-The following code enables the polygon drawing mode and sets the type of drawing interaction that the drawing manager should adhere to `freehand`.
+The following code enables polygon drawing mode and sets the drawing interaction type of the drawing manager to `freehand`.
```javascript
//Create an instance of the drawing manager and set the drawing mode.
//A minimal sketch of the options described above; `map` is the atlas.Map instance.
drawingManager = new atlas.drawing.DrawingManager(map, {
    mode: 'draw-polygon',
    interactionType: 'freehand'
});
```
Learn more about the classes and methods used in this article:
[Webpack]: https://webpack.js.org/
[style-loader]: https://webpack.js.org/loaders/style-loader/
[Drawing manager options source code]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/main/Samples/Drawing%20Tools%20Module/Drawing%20manager%20options/Drawing%20manager%20options.html
+[drawing tools module]: https://www.npmjs.com/package/azure-maps-drawing-tools
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommend updating to the latest version at all times, or opt in
| Release Date | Release notes | Windows | Linux |
|:|:|:|:|
| June 2023| **Linux** <ul><li>Add the forwarder/collector's identifier (hostname)</li><li>Link OpenSSL dynamically</li><li>Support Arc-Enabled Servers proxy configuration file</li><li>**Fixes**<ul><li>Allow uploads soon after AMA startup</li><li>Run LocalSink GC on a dedicated thread to avoid threadpool scheduling issues</li><li>Fix upgrade restart of disabled services</li><li>Handle Linux Hardening where sudo on root is blocked</li><li>CEF processing fixes for non-strictly RFC compliant devices</li><li>ASA tenant can fail to startup due to config-cache directory permissions</li><li>Fix auth proxy in AMA</li></ul></li></ul>| |1.27.0|
-| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription become invalid; will resubscribe.</li><li>AMA: Fixed issue with Large Event sending too large data. Also affecting Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](https://learn.microsoft.com/azure/azure-monitor/agents/agents-overview#linux-hardening-standards)</li><li>Include Ubuntu 22.04 (jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode </li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue for 3P</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma separated custom log paths with fluent</li></ul></li><ul> | 1.16.0 | 1.26.2 |
+| May 2023 | **Windows** <ul><li>Enable Large Event support for all regions.</li><li>Update to TroubleShooter 1.4.0.</li><li>Fixed issue when Event Log subscription becomes invalid; will resubscribe.</li><li>AMA: Fixed issue with Large Event sending data that was too large, which also affected Custom Log.</li></ul> **Linux** <ul><li>Support for CIS and SELinux [hardening](https://learn.microsoft.com/azure/azure-monitor/agents/agents-overview#linux-hardening-standards)</li><li>Include Ubuntu 22.04 (jammy) in azure-mdsd package publishing</li><li>Move storage SDK patch to build container</li><li>Add system telegraf counters to AMA</li><li>Drop msgpack and syslog data if not configured in active configuration</li><li>Limit the events sent to Public ingestion pipeline</li><li>**Fixes** <ul><li>Fix mdsd crash in init when in persistent mode</li><li>Remove FdClosers from ProtocolListeners to avoid a race condition</li><li>Fix sed regex special character escaping issue in rpm macro for Centos 7.3.Maipo</li><li>Fix latency and future timestamp issue for 3P</li><li>Install AMA syslog configs only if customer is opted in for syslog in DCR</li><li>Fix heartbeat time check</li><li>Skip unnecessary cleanup in fatal signal handler</li><li>Fix case where fast-forwarding may cause intervals to be skipped</li><li>Fix comma-separated custom log paths with fluent</li></ul></li></ul> | 1.16.0.0 | 1.26.2 |
| Apr 2023 | **Windows** <ul><li>AMA: Enable Large Event support based on Region.</li><li>AMA: Upgrade to FluentBit version 2.0.9</li><li>Update Troubleshooter to 1.3.1</li><li>Update ME version to 2.2023.331.1521</li><li>Updating package version for AzSecPack 4.26 release</li></ul>|1.15.0.0| Coming soon|
| Mar 2023 | **Windows** <ul><li>Text file collection improvements to handle high rate of logging and for continuous tailing in case of longer lines</li><li>VM Insights fixes for collecting metrics from non-English OS</li></ul> | 1.14.0.0 | Coming soon |
| Feb 2023 | <ul><li>**Linux (hotfix)** Resolved potential data loss due to "Bad file descriptor" errors seen in the mdsd error log with previous version. Please upgrade to hotfix version</li><li>**Windows** Reliability improvements in fluentbit buffering to handle larger text files</li></ul> | 1.13.1.0 | 1.25.2<sup>Hotfix</sup> |
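To check which agent version is installed on a given machine, one option is to read the extension's `typeHandlerVersion` with the Azure CLI. This is an editorial sketch with placeholder names (`myVM`, `myResourceGroup`); use `AzureMonitorWindowsAgent` as the extension name on Windows machines.

```azurecli
# Sketch: read the installed Azure Monitor agent version (placeholder names).
az vm extension show \
  --resource-group myResourceGroup \
  --vm-name myVM \
  --name AzureMonitorLinuxAgent \
  --query typeHandlerVersion \
  --output tsv
```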
azure-monitor Integrate Keda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/integrate-keda.md
This article walks you through the steps to integrate KEDA into your AKS cluster
## Set up a workload identity
-1. Start by setting up some environment variables. Change the values to suit your AKS cluster.
+1. Start by setting up some environment variables. Change the values to suit your AKS cluster.
```bash
export RESOURCE_GROUP="rg-keda-integration"
This article walks you through the steps to integrate KEDA into your AKS cluster
export FEDERATED_IDENTITY_CREDENTIAL_NAME="kedaFedIdentity"
export SERVICE_ACCOUNT_NAMESPACE="keda"
export SERVICE_ACCOUNT_NAME="keda-operator"
+ export AKS_CLUSTER_NAME="aks-cluster-name"
```
- + `SERVICE_ACCOUNT_NAME` - KEDA must use the service account that was used to create federated credentials.
+ + `SERVICE_ACCOUNT_NAME` - KEDA must use the service account that was used to create the federated credentials. This can be any user-defined name.
+ + `AKS_CLUSTER_NAME`- The name of the AKS cluster where you want to deploy KEDA.
+ `SERVICE_ACCOUNT_NAMESPACE` - Both KEDA and the service account must be in the same namespace.
+ `USER_ASSIGNED_IDENTITY_NAME` is the name of the Azure Active Directory identity that's created for KEDA.
+ `FEDERATED_IDENTITY_CREDENTIAL_NAME` is the name of the credential that's created for KEDA to use to authenticate with Azure.
This article walks you through the steps to integrate KEDA into your AKS cluster
1. Store the OIDC issuer URL in an environment variable to be used later.
```bash
- export AKS_OIDC_ISSUER="$(az aks show -n $CLUSTER_NAME -g $RESOURCE_GROUP --query "oidcIssuerProfile.issuerUrl" -otsv)"
+ export AKS_OIDC_ISSUER="$(az aks show -n $AKS_CLUSTER_NAME -g $RESOURCE_GROUP --query "oidcIssuerProfile.issuerUrl" -otsv)"
```
1. Create a user-assigned identity for KEDA. This identity is used by KEDA to authenticate with Azure Monitor.
```azurecli
- az identity create --name $USER_ASSIGNED_IDENTITY_NAME --resource-group $RESOURCE_GROUP --location $LOCATION --subscription $SUBSCRIPTION
+ az identity create --name $USER_ASSIGNED_IDENTITY_NAME --resource-group $RESOURCE_GROUP --location $LOCATION --subscription $SUBSCRIPTION
```
The output will be similar to the following:
This article walks you through the steps to integrate KEDA into your AKS cluster
1. Store the `clientId` and `tenantId` in environment variables to use later.
```bash
- export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group $RESOURCE_GROUP --name $USER_ASSIGNED_IDENTITY_NAME --query 'clientId' -otsv)"
- export TENANT_ID="$(az identity show --resource-group $RESOURCE_GROUP --name $USER_ASSIGNED_IDENTITY_NAME --query 'tenantId' -otsv)"
+ export USER_ASSIGNED_CLIENT_ID="$(az identity show --resource-group $RESOURCE_GROUP --name $USER_ASSIGNED_IDENTITY_NAME --query 'clientId' -otsv)"
+ export TENANT_ID="$(az identity show --resource-group $RESOURCE_GROUP --name $USER_ASSIGNED_IDENTITY_NAME --query 'tenantId' -otsv)"
```
-1. Assign the *Monitoring Data Reader* role to the identity for your Azure Monitor workspace. This role allows the identity to read metrics from your workspace.
+1. Assign the *Monitoring Data Reader* role to the identity for your Azure Monitor workspace. This role allows the identity to read metrics from your workspace. Replace the *Azure Monitor Workspace resource group* and *Azure Monitor Workspace name* with the resource group and name of the Azure Monitor workspace which is configured to collect metrics from the AKS cluster.
```azurecli
az role assignment create \
  --assignee $USER_ASSIGNED_CLIENT_ID \
  --role "Monitoring Data Reader" \
- --scope /subscriptions/$SUBSCRIPTION/resourceGroups/<Azure Monitor Workspace resource group>/providers/microsoft.monitor/accounts/ <Azure monitor workspace name>
+ --scope /subscriptions/$SUBSCRIPTION/resourceGroups/<Azure Monitor Workspace resource group>/providers/microsoft.monitor/accounts/<Azure monitor workspace name>
```
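If you don't have the workspace's full resource ID for the `--scope` parameter handy, one way to look it up is with `az resource show`; this is an editorial sketch, not a step from the original walkthrough, and it uses the same placeholder names as the command above.

```azurecli
# Sketch: resolve the Azure Monitor workspace resource ID (placeholder names).
az resource show \
  --resource-group "<Azure Monitor Workspace resource group>" \
  --name "<Azure monitor workspace name>" \
  --resource-type "microsoft.monitor/accounts" \
  --query id \
  --output tsv
```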
-1. Create the KEDA namespace, then create Kubernetes service account. This service account is used by KEDA to authenticate with Azure.
+1. Create the KEDA namespace, then create the Kubernetes service account. This service account is used by KEDA to authenticate with Azure.
```azurecli
- az aks get-credentials -n $CLUSTER_NAME -g $RESOURCE_GROUP
+ az aks get-credentials -n $AKS_CLUSTER_NAME -g $RESOURCE_GROUP
kubectl create namespace keda
This article walks you through the steps to integrate KEDA into your AKS cluster
1. Check your service account by running:
```bash
- kubectl describe serviceaccount workload-identity-sa -n keda
+ kubectl describe serviceaccount $SERVICE_ACCOUNT_NAME -n keda
```
1. Establish a federated credential between the service account and the user-assigned identity. The federated credential allows the service account to use the user-assigned identity to authenticate with Azure.
```azurecli
- az identity federated-credential create --name $FEDERATED_IDENTITY_CREDENTIAL_NAME --identity-name $USER_ASSIGNED_IDENTITY_NAME --resource-group $RESOURCE_GROUP --issuer $AKS_OIDC_ISSUER --subject system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME --audience api://AzureADTokenExchange
+ az identity federated-credential create --name $FEDERATED_IDENTITY_CREDENTIAL_NAME --identity-name $USER_ASSIGNED_IDENTITY_NAME --resource-group $RESOURCE_GROUP --issuer $AKS_OIDC_ISSUER --subject system:serviceaccount:$SERVICE_ACCOUNT_NAMESPACE:$SERVICE_ACCOUNT_NAME --audience api://AzureADTokenExchange
```
> [!Note]
This article walks you through the steps to integrate KEDA into your AKS cluster
KEDA can be deployed using YAML manifests, Helm charts, or Operator Hub. This article uses Helm charts. For more information on deploying KEDA, see [Deploying KEDA](https://keda.sh/docs/2.10/deploy/).
-Deploy KEDA using the following command.
+Add the KEDA Helm repository:
+
+```bash
+helm repo add kedacore https://kedacore.github.io/charts
+helm repo update
+```
+
+Deploy KEDA using the following command:
```bash
# The --set values below are a sketch of the workload-identity settings created
# earlier; verify the value names against the KEDA Helm chart for your version.
helm install keda kedacore/keda --namespace keda \
  --set podIdentity.azureWorkload.enabled=true \
  --set podIdentity.azureWorkload.clientId=$USER_ASSIGNED_CLIENT_ID \
  --set podIdentity.azureWorkload.tenantId=$TENANT_ID \
  --set serviceAccount.create=false \
  --set serviceAccount.name=$SERVICE_ACCOUNT_NAME
```
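Once the install completes, a quick sanity check (an editorial addition, not part of the original walkthrough) is to confirm the KEDA operator pods are running in the `keda` namespace:

```bash
kubectl get pods -n keda
```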
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
Previously updated : 06/27/2023 Last updated : 07/03/2023
> [!NOTE] > This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
-Date list was last updated: 06/27/2023.
+Date list was last updated: 07/03/2023.
Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface).
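For example, a metric from the tables below can be retrieved with `az monitor metrics list`; this is a hedged sketch where the resource ID and the chosen metric name are illustrative placeholders.

```azurecli
# Sketch: query five-minute averages of a platform metric (placeholder resource ID).
az monitor metrics list \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/<provider>/<resource-type>/<name>" \
  --metric "IfInOctets" \
  --interval PT5M \
  --aggregation Average
```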
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|AclMatchedPackets |Yes |Acl Matched Packets |Count |Average |Count of the number of packets matching the current ACL entry. |FabricId, RegionName, AclSetName, AclEntrySequenceId, AclSetType |
-|BgpPeerStatus |Yes |BGP Peer Status |Unspecified |Minimum |Operational state of the BGP peer. State is represented in numerical form. Idle : 1, Connect : 2, Active : 3, Opensent : 4, Openconfirm : 5, Established : 6 |FabricId, RegionName, IpAddress |
-|ComponentOperStatus |Yes |Component Operational State |Unspecified |Minimum |The current operational status of the component. |FabricId, RegionName, ComponentName |
-|CpuUtilizationMax |Yes |Cpu Utilization Max |Percent |Average |Max cpu utilization. The maximum value of the percentage measure of the statistic over the time interval. |FabricId, RegionName, ComponentName |
-|CpuUtilizationMin |Yes |Cpu Utilization Min |Percent |Average |Min cpu utilization. The minimum value of the percentage measure of the statistic over the time interval. |FabricId, RegionName, ComponentName |
-|FanSpeed |Yes |Fan Speed |Unspecified |Average |Current fan speed. |FabricId, RegionName, ComponentName |
-|IfEthInCrcErrors |Yes |Ethernet Interface In CRC Errors |Count |Average |The total number of frames received that had a length (excluding framing bits, but including FCS octets) of between 64 and 1518 octets, inclusive, but had either a bad Frame Check Sequence (FCS) with an integral number of octets (FCS Error) or a bad FCS with a non-integral number of octets (Alignment Error) |FabricId, RegionName, InterfaceName |
-|IfEthInFragmentFrames |Yes |Ethernet Interface In Fragment Frames |Count |Average |The total number of frames received that were less than 64 octets in length (excluding framing bits but including FCS octets) and had either a bad Frame Check Sequence (FCS) with an integral number of octets (FCS Error) or a bad FCS with a non-integral number of octets (Alignment Error). |FabricId, RegionName, InterfaceName |
-|IfEthInJabberFrames |Yes |Ethernet Interface In Jabber Frames |Count |Average |Number of jabber frames received on the interface. Jabber frames are typically defined as oversize frames which also have a bad CRC. |FabricId, RegionName, InterfaceName |
-|IfEthInMacControlFrames |Yes |Ethernet Interface In MAC Control Frames |Count |Average |MAC layer control frames received on the interface |FabricId, RegionName, InterfaceName |
-|IfEthInMacPauseFrames |Yes |Ethernet Interface In MAC Pause Frames |Count |Average |MAC layer PAUSE frames received on the interface |FabricId, RegionName, InterfaceName |
-|IfEthInMaxsizeExceeded |Yes |Ethernet Interface In Maxsize Exceeded |Count |Average |The total number frames received that are well-formed dropped due to exceeding the maximum frame size on the interface. |FabricId, RegionName, InterfaceName |
-|IfEthInOversizeFrames |Yes |Ethernet Interface In Oversize Frames |Count |Average |The total number of frames received that were longer than 1518 octets (excluding framing bits, but including FCS octets) and were otherwise well formed. |FabricId, RegionName, InterfaceName |
-|IfEthOutMacControlFrames |Yes |Ethernet Interface Out MAC Control Frames |Count |Average |MAC layer control frames sent on the interface. |FabricId, RegionName, InterfaceName |
-|IfEthOutMacPauseFrames |Yes |Ethernet Interface Out MAC Pause Frames |Count |Average |MAC layer PAUSE frames sent on the interface. |FabricId, RegionName, InterfaceName |
-|IfInBroadcastPkts |Yes |Interface In Broadcast Pkts |Count |Average |The number of packets, delivered by this sub-layer to a higher (sub-)layer, that were addressed to a broadcast address at this sub-layer. |FabricId, RegionName, InterfaceName |
-|IfInDiscards |Yes |Interface In Discards |Count |Average |The number of inbound packets that were chosen to be discarded even though no errors had been detected to prevent their being deliverable to a higher-layer protocol. |FabricId, RegionName, InterfaceName |
-|IfInErrors |Yes |Interface In Errors |Count |Average |For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. |FabricId, RegionName, InterfaceName |
-|IfInFcsErrors |Yes |Interface In FCS Errors |Count |Average |Number of received packets which had errors in the frame check sequence (FCS), i.e., framing errors. |FabricId, RegionName, InterfaceName |
-|IfInMulticastPkts |Yes |Interface In Multicast Pkts |Count |Average |The number of packets, delivered by this sub-layer to a higher (sub-)layer, that were addressed to a multicast address at this sub-layer. For a MAC-layer protocol, this includes both Group and Functional addresses. |FabricId, RegionName, InterfaceName |
-|IfInOctets |Yes |Interface In Octets |Count |Average |The total number of octets received on the interface, including framing characters. |FabricId, RegionName, InterfaceName |
-|IfInPkts |Yes |Interface In Pkts |Count |Average |The total number of packets received on the interface, including all unicast, multicast, broadcast and bad packets etc. |FabricId, RegionName, InterfaceName |
-|IfInUnicastPkts |Yes |Interface In Unicast Pkts |Count |Average |The number of packets, delivered by this sub-layer to a higher (sub-)layer, that were not addressed to a multicast or broadcast address at this sub-layer. |FabricId, RegionName, InterfaceName |
-|IfOutBroadcastPkts |Yes |Interface Out Broadcast Pkts |Count |Average |The total number of packets that higher-level protocols requested be transmitted, and that were addressed to a broadcast address at this sub-layer, including those that were discarded or not sent. |FabricId, RegionName, InterfaceName |
-|IfOutDiscards |Yes |Interface Out Discards |Count |Average |The number of outbound packets that were chosen to be discarded even though no errors had been detected to prevent their being transmitted. |FabricId, RegionName, InterfaceName |
-|IfOutErrors |Yes |Interface Out Errors |Count |Average |For packet-oriented interfaces, the number of outbound packets that could not be transmitted because of errors. |FabricId, RegionName, InterfaceName |
-|IfOutMulticastPkts |Yes |Interface Out Multicast Pkts |Count |Average |The total number of packets that higher-level protocols requested be transmitted, and that were addressed to a multicast address at this sub-layer, including those that were discarded or not sent. For a MAC-layer protocol, this includes both Group and Functional addresses. |FabricId, RegionName, InterfaceName |
-|IfOutOctets |Yes |Interface Out Octets |Count |Average |The total number of octets transmitted out of the interface, including framing characters. |FabricId, RegionName, InterfaceName |
-|IfOutPkts |Yes |Interface Out Pkts |Count |Average |The total number of packets transmitted out of the interface, including all unicast, multicast, broadcast, and bad packets etc. |FabricId, RegionName, InterfaceName |
-|IfOutUnicastPkts |Yes |Interface Out Unicast Pkts |Count |Average |The total number of packets that higher-level requested be transmitted, and that were not addressed to a multicast or broadcast address at this sub-layer, including those that were discarded or not sent. |FabricId, RegionName, InterfaceName |
-|InterfaceOperStatus |Yes |Interface Operational State |Unspecified |Minimum |The current operational state of the interface. State is represented in numerical form. Up: 0, Down: 1, Lower_layer_down: 2, Testing: 3, Unknown: 4, Dormant: 5, Not_present: 6. |FabricId, RegionName, InterfaceName |
-|LacpErrors |Yes |Lacp Errors |Count |Average |Number of LACPDU illegal packet errors. |FabricId, RegionName, InterfaceName |
-|LacpInPkts |Yes |Lacp In Pkts |Count |Average |Number of LACPDUs received. |FabricId, RegionName, InterfaceName |
-|LacpOutPkts |Yes |Lacp Out Pkts |Count |Average |Number of LACPDUs transmitted. |FabricId, RegionName, InterfaceName |
-|LacpRxErrors |Yes |Lacp Rx Errors |Count |Average |Number of LACPDU receive packet errors. |FabricId, RegionName, InterfaceName |
-|LacpTxErrors |Yes |Lacp Tx Errors |Count |Average |Number of LACPDU transmit packet errors. |FabricId, RegionName, InterfaceName |
-|LacpUnknownErrors |Yes |Lacp Unknown Errors |Count |Average |Number of LACPDU unknown packet errors. |FabricId, RegionName, InterfaceName |
-|LldpFrameIn |Yes |Lldp Frame In |Count |Average |The number of lldp frames received. |FabricId, RegionName, InterfaceName |
-|LldpFrameOut |Yes |Lldp Frame Out |Count |Average |The number of frames transmitted out. |FabricId, RegionName, InterfaceName |
-|LldpTlvUnknown |Yes |Lldp Tlv Unknown |Count |Average |The number of frames received with unknown TLV. |FabricId, RegionName, InterfaceName |
-|MemoryAvailable |Yes |Memory Available |Bytes |Average |The available memory physically installed, or logically allocated to the component. |FabricId, RegionName, ComponentName |
-|MemoryUtilized |Yes |Memory Utilized |Bytes |Average |The memory currently in use by processes running on the component, not considering reserved memory that is not available for use. |FabricId, RegionName, ComponentName |
-|PowerSupplyCapacity |Yes |Power Supply Maximum Power Capacity |Unspecified |Average |Maximum power capacity of the power supply (watts). |FabricId, RegionName, ComponentName |
-|PowerSupplyInputCurrent |Yes |Power Supply Input Current |Unspecified |Average |The input current draw of the power supply (amps). |FabricId, RegionName, ComponentName |
-|PowerSupplyInputVoltage |Yes |Power Supply Input Voltage |Unspecified |Average |Input voltage to the power supply (volts). |FabricId, RegionName, ComponentName |
-|PowerSupplyOutputCurrent |Yes |Power Supply Output Current |Unspecified |Average |The output current supplied by the power supply (amps) |FabricId, RegionName, ComponentName |
-|PowerSupplyOutputPower |Yes |Power Supply Output Power |Unspecified |Average |Output power supplied by the power supply (watts) |FabricId, RegionName, ComponentName |
-|PowerSupplyOutputVoltage |Yes |Power Supply Output Voltage |Unspecified |Average |Output voltage supplied by the power supply (volts). |FabricId, RegionName, ComponentName |
-|TemperatureMax |Yes |Temperature Max |Unspecified |Average |Max temperature in degrees Celsius of the component. The maximum value of the statistic over the sampling period. |FabricId, RegionName, ComponentName |
+|AclMatchedPackets |Yes |Acl Matched Packets |Count |Average |Count of the number of packets matching the current ACL entry. |FabricId, AclSetName, AclEntrySequenceId, AclSetType |
+|BgpPeerStatus |Yes |BGP Peer Status |Unspecified |Minimum |Operational state of the BGP peer. State is represented in numerical form. Idle : 1, Connect : 2, Active : 3, Opensent : 4, Openconfirm : 5, Established : 6 |FabricId, IpAddress |
+|ComponentOperStatus |Yes |Component Operational State |Unspecified |Minimum |The current operational status of the component. |FabricId, ComponentName |
+|CpuUtilizationMax |Yes |Cpu Utilization Max |Percent |Average |Max cpu utilization. The maximum value of the percentage measure of the statistic over the time interval. |FabricId, ComponentName |
+|CpuUtilizationMin |Yes |Cpu Utilization Min |Percent |Average |Min cpu utilization. The minimum value of the percentage measure of the statistic over the time interval. |FabricId, ComponentName |
+|FanSpeed |Yes |Fan Speed |Unspecified |Average |Current fan speed. |FabricId, ComponentName |
+|IfEthInCrcErrors |Yes |Ethernet Interface In CRC Errors |Count |Average |The total number of frames received that had a length (excluding framing bits, but including FCS octets) of between 64 and 1518 octets, inclusive, but had either a bad Frame Check Sequence (FCS) with an integral number of octets (FCS Error) or a bad FCS with a non-integral number of octets (Alignment Error) |FabricId, InterfaceName |
+|IfEthInFragmentFrames |Yes |Ethernet Interface In Fragment Frames |Count |Average |The total number of frames received that were less than 64 octets in length (excluding framing bits but including FCS octets) and had either a bad Frame Check Sequence (FCS) with an integral number of octets (FCS Error) or a bad FCS with a non-integral number of octets (Alignment Error). |FabricId, InterfaceName |
+|IfEthInJabberFrames |Yes |Ethernet Interface In Jabber Frames |Count |Average |Number of jabber frames received on the interface. Jabber frames are typically defined as oversize frames which also have a bad CRC. |FabricId, InterfaceName |
+|IfEthInMacControlFrames |Yes |Ethernet Interface In MAC Control Frames |Count |Average |MAC layer control frames received on the interface |FabricId, InterfaceName |
+|IfEthInMacPauseFrames |Yes |Ethernet Interface In MAC Pause Frames |Count |Average |MAC layer PAUSE frames received on the interface |FabricId, InterfaceName |
+|IfEthInMaxsizeExceeded |Yes |Ethernet Interface In Maxsize Exceeded |Count |Average |The total number of well-formed frames received that were dropped due to exceeding the maximum frame size on the interface. |FabricId, InterfaceName |
+|IfEthInOversizeFrames |Yes |Ethernet Interface In Oversize Frames |Count |Average |The total number of frames received that were longer than 1518 octets (excluding framing bits, but including FCS octets) and were otherwise well formed. |FabricId, InterfaceName |
+|IfEthOutMacControlFrames |Yes |Ethernet Interface Out MAC Control Frames |Count |Average |MAC layer control frames sent on the interface. |FabricId, InterfaceName |
+|IfEthOutMacPauseFrames |Yes |Ethernet Interface Out MAC Pause Frames |Count |Average |MAC layer PAUSE frames sent on the interface. |FabricId, InterfaceName |
+|IfInBroadcastPkts |Yes |Interface In Broadcast Pkts |Count |Average |The number of packets, delivered by this sub-layer to a higher (sub-)layer, that were addressed to a broadcast address at this sub-layer. |FabricId, InterfaceName |
+|IfInDiscards |Yes |Interface In Discards |Count |Average |The number of inbound packets that were chosen to be discarded even though no errors had been detected to prevent their being deliverable to a higher-layer protocol. |FabricId, InterfaceName |
+|IfInErrors |Yes |Interface In Errors |Count |Average |For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being deliverable to a higher-layer protocol. |FabricId, InterfaceName |
+|IfInFcsErrors |Yes |Interface In FCS Errors |Count |Average |Number of received packets which had errors in the frame check sequence (FCS), i.e., framing errors. |FabricId, InterfaceName |
+|IfInMulticastPkts |Yes |Interface In Multicast Pkts |Count |Average |The number of packets, delivered by this sub-layer to a higher (sub-)layer, that were addressed to a multicast address at this sub-layer. For a MAC-layer protocol, this includes both Group and Functional addresses. |FabricId, InterfaceName |
+|IfInOctets |Yes |Interface In Octets |Count |Average |The total number of octets received on the interface, including framing characters. |FabricId, InterfaceName |
+|IfInPkts |Yes |Interface In Pkts |Count |Average |The total number of packets received on the interface, including all unicast, multicast, broadcast and bad packets etc. |FabricId, InterfaceName |
+|IfInUnicastPkts |Yes |Interface In Unicast Pkts |Count |Average |The number of packets, delivered by this sub-layer to a higher (sub-)layer, that were not addressed to a multicast or broadcast address at this sub-layer. |FabricId, InterfaceName |
+|IfOutBroadcastPkts |Yes |Interface Out Broadcast Pkts |Count |Average |The total number of packets that higher-level protocols requested be transmitted, and that were addressed to a broadcast address at this sub-layer, including those that were discarded or not sent. |FabricId, InterfaceName |
+|IfOutDiscards |Yes |Interface Out Discards |Count |Average |The number of outbound packets that were chosen to be discarded even though no errors had been detected to prevent their being transmitted. |FabricId, InterfaceName |
+|IfOutErrors |Yes |Interface Out Errors |Count |Average |For packet-oriented interfaces, the number of outbound packets that could not be transmitted because of errors. |FabricId, InterfaceName |
+|IfOutMulticastPkts |Yes |Interface Out Multicast Pkts |Count |Average |The total number of packets that higher-level protocols requested be transmitted, and that were addressed to a multicast address at this sub-layer, including those that were discarded or not sent. For a MAC-layer protocol, this includes both Group and Functional addresses. |FabricId, InterfaceName |
+|IfOutOctets |Yes |Interface Out Octets |Count |Average |The total number of octets transmitted out of the interface, including framing characters. |FabricId, InterfaceName |
+|IfOutPkts |Yes |Interface Out Pkts |Count |Average |The total number of packets transmitted out of the interface, including all unicast, multicast, broadcast, and bad packets etc. |FabricId, InterfaceName |
+|IfOutUnicastPkts |Yes |Interface Out Unicast Pkts |Count |Average |The total number of packets that higher-level protocols requested be transmitted, and that were not addressed to a multicast or broadcast address at this sub-layer, including those that were discarded or not sent. |FabricId, InterfaceName |
+|InterfaceOperStatus |Yes |Interface Operational State |Unspecified |Minimum |The current operational state of the interface. State is represented in numerical form. Up: 0, Down: 1, Lower_layer_down: 2, Testing: 3, Unknown: 4, Dormant: 5, Not_present: 6. |FabricId, InterfaceName |
+|LacpErrors |Yes |Lacp Errors |Count |Average |Number of LACPDU illegal packet errors. |FabricId, InterfaceName |
+|LacpInPkts |Yes |Lacp In Pkts |Count |Average |Number of LACPDUs received. |FabricId, InterfaceName |
+|LacpOutPkts |Yes |Lacp Out Pkts |Count |Average |Number of LACPDUs transmitted. |FabricId, InterfaceName |
+|LacpRxErrors |Yes |Lacp Rx Errors |Count |Average |Number of LACPDU receive packet errors. |FabricId, InterfaceName |
+|LacpTxErrors |Yes |Lacp Tx Errors |Count |Average |Number of LACPDU transmit packet errors. |FabricId, InterfaceName |
+|LacpUnknownErrors |Yes |Lacp Unknown Errors |Count |Average |Number of LACPDU unknown packet errors. |FabricId, InterfaceName |
+|LldpFrameIn |Yes |Lldp Frame In |Count |Average |The number of lldp frames received. |FabricId, InterfaceName |
+|LldpFrameOut |Yes |Lldp Frame Out |Count |Average |The number of frames transmitted out. |FabricId, InterfaceName |
+|LldpTlvUnknown |Yes |Lldp Tlv Unknown |Count |Average |The number of frames received with unknown TLV. |FabricId, InterfaceName |
+|MemoryAvailable |Yes |Memory Available |Bytes |Average |The available memory physically installed, or logically allocated to the component. |FabricId, ComponentName |
+|MemoryUtilized |Yes |Memory Utilized |Bytes |Average |The memory currently in use by processes running on the component, not considering reserved memory that is not available for use. |FabricId, ComponentName |
+|PowerSupplyCapacity |Yes |Power Supply Maximum Power Capacity |Unspecified |Average |Maximum power capacity of the power supply (watts). |FabricId, ComponentName |
+|PowerSupplyInputCurrent |Yes |Power Supply Input Current |Unspecified |Average |The input current draw of the power supply (amps). |FabricId, ComponentName |
+|PowerSupplyInputVoltage |Yes |Power Supply Input Voltage |Unspecified |Average |Input voltage to the power supply (volts). |FabricId, ComponentName |
+|PowerSupplyOutputCurrent |Yes |Power Supply Output Current |Unspecified |Average |The output current supplied by the power supply (amps) |FabricId, ComponentName |
+|PowerSupplyOutputPower |Yes |Power Supply Output Power |Unspecified |Average |Output power supplied by the power supply (watts) |FabricId, ComponentName |
+|PowerSupplyOutputVoltage |Yes |Power Supply Output Voltage |Unspecified |Average |Output voltage supplied by the power supply (volts). |FabricId, ComponentName |
+|TemperatureMax |Yes |Temperature Max |Unspecified |Average |Max temperature in degrees Celsius of the component. The maximum value of the statistic over the sampling period. |FabricId, ComponentName |
## Microsoft.Maps/accounts <!-- Data source : arm-->
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
-|HostBootTimeSeconds |No |Host Boot Seconds |Seconds |Average |Unix time of last boot |Host |
+|HostBootTimeSeconds |No |Host Boot Seconds (Preview) |Seconds |Average |Unix time of last boot |Host |
|HostDiskReadCompleted |No |Host Disk Reads Completed |Count |Average |Disk reads completed by node |Device, Host |
-|HostDiskReadSeconds |No |Host Disk Read Seconds |Seconds |Average |Disk read time by node |Device, Host |
+|HostDiskReadSeconds |No |Host Disk Read Seconds (Preview) |Seconds |Average |Disk read time by node |Device, Host |
|HostDiskWriteCompleted |No |Total Number of Writes Completed |Count |Average |Disk writes completed by node |Device, Host |
-|HostDiskWriteSeconds |No |Host Disk Write In Seconds |Seconds |Average |Disk write time by node |Device, Host |
-|HostDmiInfo |No |Host DMI Info |Unspecified |Count |Host Desktop Management Interface (DMI) environment information |BiosDate, BiosRelease, BiosVendor, BiosVersion, BoardAssetTag, BoardName, BoardVendor, BoardVersion, ChassisAssetTag, ChassisVendor, ChassisVersion, Host, ProductFamily, ProductName, ProductSku, ProductUuid, ProductVersion, SystemVendor |
+|HostDiskWriteSeconds |No |Host Disk Write Seconds (Preview) |Seconds |Average |Disk write time by node |Device, Host |
+|HostDmiInfo |No |Host DMI Info (Preview) |Unspecified |Count |Host Desktop Management Interface (DMI) environment information |BiosDate, BiosRelease, BiosVendor, BiosVersion, BoardAssetTag, BoardName, BoardVendor, BoardVersion, ChassisAssetTag, ChassisVendor, ChassisVersion, Host, ProductFamily, ProductName, ProductSku, ProductUuid, ProductVersion, SystemVendor |
|HostEntropyAvailableBits |No |Host Entropy Available Bits (Preview) |Count |Average |Available bits in node entropy |Host |
|HostFilesystemAvailBytes |No |Host Filesystem Available Bytes |Count |Average |Available filesystem size by node |Device, FSType, Host, Mountpoint |
|HostFilesystemDeviceError |No |Host Filesystem Device Errors |Count |Average |Indicates if there was a problem getting information for the filesystem |Device, FSType, Host, Mountpoint |
This latest update adds a new column and reorders the metrics to be alphabetical
|HostFilesystemSizeBytes |No |Host Filesystem Size In Bytes |Count |Average |Filesystem size by node |Device, FSType, Host, Mountpoint |
|HostHwmonTempCelsius |No |Host Hardware Monitor Temp |Count |Average |Hardware monitor for temperature (celsius) |Chip, Host, Sensor |
|HostHwmonTempMax |No |Host Hardware Monitor Temp Max |Count |Average |Hardware monitor for maximum temperature (celsius) |Chip, Host, Sensor |
-|HostLoad1 |No |Average Load In 1 Minute |Count |Average |1 minute load average |Host |
-|HostLoad15 |No |Average Load In 15 Minutes |Count |Average |15 minute load average |Host |
-|HostLoad5 |No |Average load in 5 minutes |Count |Average |5 minute load average |Host |
+|HostLoad1 |No |Average Load In 1 Minute (Preview) |Count |Average |1 minute load average |Host |
+|HostLoad15 |No |Average Load In 15 Minutes (Preview) |Count |Average |15 minute load average |Host |
+|HostLoad5 |No |Average load in 5 minutes (Preview) |Count |Average |5 minute load average |Host |
|HostMemAvailBytes |No |Host Memory Available Bytes |Count |Average |Available memory in bytes by node |Host |
|HostMemHWCorruptedBytes |No |Total Amount of Memory In Corrupted Pages |Count |Average |Corrupted bytes in hardware by node |Host |
|HostMemTotalBytes |No |Host Memory Total Bytes |Bytes |Average |Total bytes of memory by node |Host |
-|HostSpecificCPUUtilization |No |Host Specific CPU Utilization |Seconds |Average |A counter metric that counts the number of seconds the CPU has been running in a particular mode |Cpu, Host, Mode |
+|HostSpecificCPUUtilization |No |Host Specific CPU Utilization (Preview) |Seconds |Average |A counter metric that counts the number of seconds the CPU has been running in a particular mode |Cpu, Host, Mode |
|IdracPowerCapacityWatts |No |IDRAC Power Capacity Watts |Unspecified |Average |Power Capacity |Host, PSU |
|IdracPowerInputWatts |No |IDRAC Power Input Watts |Unspecified |Average |Power Input |Host, PSU |
|IdracPowerOn |No |IDRAC Power On |Unspecified |Count |IDRAC Power On Status |Host |
This latest update adds a new column and reorders the metrics to be alphabetical
|NcTotalCpusPerNuma |No |Total CPUs Available to Nexus per NUMA |Count |Average |Total number of CPUs available to Nexus per NUMA |Hostname, NUMA Node |
|NcTotalWorkloadCpusAllocatedPerNuma |No |CPUs per NUMA Allocated for Nexus Kubernetes |Count |Average |Total number of CPUs per NUMA allocated for Nexus Kubernetes and Tenant Workloads |Hostname, NUMA Node |
|NcTotalWorkloadCpusAvailablePerNuma |No |CPUs per NUMA Available for Nexus Kubernetes |Count |Average |Total number of CPUs per NUMA available to Nexus Kubernetes and Tenant Workloads |Hostname, NUMA Node |
-|NodeBondingActive |No |Node Bonding Active |Count |Average |Number of active interfaces per bonding interface |Master |
-|NodeMemHugePagesFree |No |Node Memory Huge Pages Free |Bytes |Average |NUMA hugepages free by node |Host, Node |
+|NodeBondingActive |No |Node Bonding Active (Preview) |Count |Average |Number of active interfaces per bonding interface |Master |
+|NodeMemHugePagesFree |No |Node Memory Huge Pages Free (Preview) |Bytes |Average |NUMA hugepages free by node |Host, Node |
|NodeMemHugePagesTotal |No |Node Memory Huge Pages Total |Bytes |Average |NUMA huge pages total by node |Host, Node |
|NodeMemNumaFree |No |Node Memory NUMA (Free Memory) |Bytes |Average |NUMA memory free |Name, Host |
|NodeMemNumaShem |No |Node Memory NUMA (Shared Memory) |Bytes |Average |NUMA shared memory |Host, Node |
|NodeMemNumaUsed |No |Node Memory NUMA (Used Memory) |Bytes |Average |NUMA memory used |Host, Node |
|NodeNetworkCarrierChanges |No |Node Network Carrier Changes |Count |Average |Node network carrier changes |Device, Host |
-|NodeNetworkMtuBytes |No |Node Network Maximum Transmission Unit Bytes |Bytes |Average |Node network Maximum Transmission Unit (mtu_bytes) value of /sys/class/net/<iface> |Device, Host |
+|NodeNetworkMtuBytes |No |Node Network Maximum Transmission Unit Bytes |Bytes |Average |Node network Maximum Transmission Unit (mtu_bytes) value of /sys/class/net/\<iface\> |Device, Host |
|NodeNetworkReceiveMulticastTotal |No |Node Network Received Multicast Total |Bytes |Average |Network device statistic receive_multicast |Device, Host |
|NodeNetworkReceivePackets |No |Node Network Received Packets |Count |Average |Network device statistic receive_packets |Device, Host |
-|NodeNetworkSpeedBytes |No |Node Network Speed Bytes |Bytes |Average |speed_bytes value of /sys/class/net/<iface> |Device, Host |
+|NodeNetworkSpeedBytes |No |Node Network Speed Bytes |Bytes |Average |speed_bytes value of /sys/class/net/\<iface\> |Device, Host |
|NodeNetworkTransmitPackets |No |Node Network Transmited Packets |Count |Average |Network device statistic transmit_packets |Device, Host |
|NodeNetworkUp |No |Node Network Up |Count |Count |Value is 1 if operstate is 'up', 0 otherwise. |Device, Host |
-|NodeNvmeInfo |No |Node NVMe Info |Count |Count |Non-numeric data from /sys/class/nvme/<device>, value is always 1. Provides firmware, model, state and serial for a device |Device, State |
+|NodeNvmeInfo |No |Node NVMe Info (Preview) |Count |Count |Non-numeric data from /sys/class/nvme/\<device\>, value is always 1. Provides firmware, model, state and serial for a device |Device, State |
|NodeOsInfo |No |Node OS Info |Count |Count |Node OS information |Host, Name, Version |
|NodeTimexMaxErrorSeconds |No |Node Timex Max Error Seconds |Seconds |Average |Maximum time error between the local system and reference clock |Host |
|NodeTimexOffsetSeconds |No |Node Timex Offset Seconds |Seconds |Average |Time offset in between the local system and reference clock |Host |
This latest update adds a new column and reorders the metrics to be alphabetical
|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
||||||||
|ApiserverAuditRequestsRejectedTotal |No |API Server Audit Requests Rejected Total |Count |Average |Counter of API server requests rejected due to an error in the audit logging backend |Component, Pod Name |
-|ApiserverClientCertificateExpirationSecondsSum |No |API Server Client Certificate Expiration Seconds Sum |Seconds |Average |Sum of API server client certificate expiration (seconds) |Component, Pod Name |
+|ApiserverClientCertificateExpirationSecondsSum |No |API Server Client Certificate Expiration Seconds Sum (Preview) |Seconds |Average |Sum of API server client certificate expiration (seconds) |Component, Pod Name |
|ApiserverStorageDataKeyGenerationFailuresTotal |No |API Server Storage Data Key Generation Failures Total |Count |Average |Total number of operations that failed Data Encryption Key (DEK) generation |Component, Pod Name |
-|ApiserverTlsHandshakeErrorsTotal |No |API Server TLS Handshake Errors Total |Count |Average |Number of requests dropped with 'TLS handshake' error |Component, Pod Name |
-|ContainerFsIoTimeSecondsTotal |No |Container FS I/O Time Seconds Total |Seconds |Average |Time taken for container Input/Output (I/O) operations |Device, Host |
+|ApiserverTlsHandshakeErrorsTotal |No |API Server TLS Handshake Errors Total (Preview) |Count |Average |Number of requests dropped with 'TLS handshake' error |Component, Pod Name |
+|ContainerFsIoTimeSecondsTotal |No |Container FS I/O Time Seconds Total (Preview) |Seconds |Average |Time taken for container Input/Output (I/O) operations |Device, Host |
|ContainerMemoryFailcnt |No |Container Memory Fail Count |Count |Average |Number of times a container's memory usage limit is hit |Container, Host, Namespace, Pod |
|ContainerMemoryUsageBytes |No |Container Memory Usage Bytes |Bytes |Average |Current memory usage, including all memory regardless of when it was accessed |Container, Host, Namespace, Pod |
-|ContainerNetworkReceiveErrorsTotal |No |Container Network Receive Errors Total |Count |Average |Number of errors encountered while receiving bytes over the network |Interface, Namespace, Pod |
-|ContainerNetworkTransmitErrorsTotal |No |Container Network Transmit Errors Total |Count |Average |Count of errors that happened while transmitting |Interface, Namespace, Pod |
+|ContainerNetworkReceiveErrorsTotal |No |Container Network Receive Errors Total (Preview) |Count |Average |Number of errors encountered while receiving bytes over the network |Interface, Namespace, Pod |
+|ContainerNetworkTransmitErrorsTotal |No |Container Network Transmit Errors Total (Preview) |Count |Average |Count of errors that happened while transmitting |Interface, Namespace, Pod |
|ContainerScrapeError |No |Container Scrape Error |Unspecified |Average |Indicates whether there was an error while getting container metrics |Host |
|ContainerTasksState |No |Container Tasks State |Count |Average |Number of tasks or processes in a given state (sleeping, running, stopped, uninterruptible, or waiting) in a container |Container, Host, Namespace, Pod, State |
|ControllerRuntimeReconcileErrorsTotal |No |Controller Reconcile Errors Total |Count |Average |Total number of reconciliation errors per controller |Controller, Namespace, Pod Name |
|ControllerRuntimeReconcileTotal |No |Controller Reconciliations Total |Count |Average |Total number of reconciliations per controller |Controller, Namespace, Pod Name |
|CorednsDnsRequestsTotal |No |CoreDNS Requests Total |Count |Average |Total number of DNS requests |Family, Pod Name, Proto, Server, Type |
|CorednsDnsResponsesTotal |No |CoreDNS Responses Total |Count |Average |Total number of DNS responses |Pod Name, Server, Rcode |
-|CorednsForwardHealthcheckBrokenTotal |No |CoreDNS Forward Healthcheck Broken Total |Count |Average |Total number of times all upstreams are unhealthy |Pod Name, Namespace |
-|CorednsForwardMaxConcurrentRejectsTotal |No |CoreDNS Forward Max Concurrent Rejects Total |Count |Average |Total number of rejected queries because concurrent queries were at the maximum limit |Pod Name, Namespace |
+|CorednsForwardHealthcheckBrokenTotal |No |CoreDNS Forward Healthcheck Broken Total (Preview) |Count |Average |Total number of times all upstreams are unhealthy |Pod Name, Namespace |
+|CorednsForwardMaxConcurrentRejectsTotal |No |CoreDNS Forward Max Concurrent Rejects Total (Preview) |Count |Average |Total number of rejected queries because concurrent queries were at the maximum limit |Pod Name, Namespace |
|CorednsHealthRequestFailuresTotal |No |CoreDNS Health Request Failures Total |Count |Average |The number of times the self health check failed |Pod Name |
|CorednsPanicsTotal |No |CoreDNS Panics Total |Count |Average |Total number of panics |Pod Name |
|CorednsReloadFailedTotal |No |CoreDNS Reload Failed Total |Count |Average |Total number of failed reload attempts |Pod Name, Namespace |
This latest update adds a new column and reorders the metrics to be alphabetical
|EtcdServerProposalsAppliedTotal |No |Etcd Server Proposals Applied Total |Count |Average |The total number of consensus proposals applied |Component, Pod Name, Tier |
|EtcdServerProposalsCommittedTotal |No |Etcd Server Proposals Committed Total |Count |Average |The total number of consensus proposals committed |Component, Pod Name, Tier |
|EtcdServerProposalsFailedTotal |No |Etcd Server Proposals Failed Total |Count |Average |The total number of failed proposals |Component, Pod Name, Tier |
-|EtcdServerSlowApplyTotal |No |Etcd Server Slow Apply Total |Count |Average |The total number of slow apply requests |Pod Name, Tier |
+|EtcdServerSlowApplyTotal |No |Etcd Server Slow Apply Total (Preview) |Count |Average |The total number of slow apply requests |Pod Name, Tier |
|FelixActiveLocalEndpoints |No |Felix Active Local Endpoints |Count |Average |Number of active endpoints on this host |Host |
|FelixClusterNumHostEndpoints |No |Felix Cluster Num Host Endpoints |Count |Average |Total number of host endpoints cluster-wide |Host |
|FelixClusterNumHosts |No |Felix Cluster Number of Hosts |Count |Average |Total number of Calico hosts in the cluster |Host |
This latest update adds a new column and reorders the metrics to be alphabetical
|KubeNodeStatusCondition |No |Node Status Condition |Count |Average |The condition of a node |Condition, Node, Status |
|KubePodContainerResourceLimits |No |Container Resources Limits |Count |Average |The container's resources limits |Container, Namespace, Node, Pod, Resource, Unit |
|KubePodContainerResourceRequests |No |Container Resources Requests |Count |Average |The container's resources requested |Container, Namespace, Node, Pod, Resource, Unit |
-|KubePodContainerStateStarted |No |Container State Started |Count |Average |Unix timestamp start time of a container |Container, Namespace, Pod |
+|KubePodContainerStateStarted |No |Container State Started (Preview) |Count |Average |Unix timestamp start time of a container |Container, Namespace, Pod |
|KubePodContainerStatusLastTerminatedReason |No |Container Status Last Terminated Reason |Count |Average |The reason of a container's last terminated status |Container, Namespace, Pod, Reason |
|KubePodContainerStatusReady |No |Container Status Ready |Count |Average |Describes whether the container's readiness check succeeded |Container, Namespace, Pod |
|KubePodContainerStatusRestartsTotal |No |Container Restarts |Count |Average |The number of container restarts |Container, Namespace, Pod |
This latest update adds a new column and reorders the metrics to be alphabetical
|KubePodContainerStatusTerminatedReason |No |Container Status Terminated Reason |Count |Average |The number and reason of containers with a status of 'terminated' |Container, Namespace, Pod, Reason |
|KubePodContainerStatusWaiting |No |Container Status Waiting |Count |Average |The number of containers with a status of 'waiting' |Container, Namespace, Pod |
|KubePodContainerStatusWaitingReason |No |Container Status Waiting Reason |Count |Average |The number and reason of containers with a status of 'waiting' |Container, Namespace, Pod, Reason |
-|KubePodDeletionTimestamp |No |Pod Deletion Timestamp |Count |Average |The timestamp of the pod's deletion |Namespace, Pod |
+|KubePodDeletionTimestamp |No |Pod Deletion Timestamp (Preview) |Count |Average |The timestamp of the pod's deletion |Namespace, Pod |
|KubePodInitContainerStatusReady |No |Pod Init Container Ready |Count |Average |The number of ready pod init containers |Namespace, Container, Pod |
|KubePodInitContainerStatusRestartsTotal |No |Pod Init Container Restarts |Count |Average |The number of pod init containers restarts |Namespace, Container, Pod |
|KubePodInitContainerStatusRunning |No |Pod Init Container Running |Count |Average |The number of running pod init containers |Namespace, Container, Pod |
This latest update adds a new column and reorders the metrics to be alphabetical
|KubevirtVirtOperatorReady |No |Kubevirt Virt Operator Ready |Unspecified |Average |Indication for a virt operator being ready |Pod Name |
|KubevirtVmiMemoryActualBalloonBytes |No |Kubevirt VMI Memory Actual BalloonBytes |Bytes |Average |Current balloon size (in bytes) |Name, Node |
|KubevirtVmiMemoryAvailableBytes |No |Kubevirt VMI Memory Available Bytes |Bytes |Average |Amount of usable memory as seen by the domain. This value may not be accurate if a balloon driver is in use or if the guest OS does not initialize all assigned pages |Name, Node |
-|KubevirtVmiMemoryDomainBytesTotal |No |Kubevirt VMI Memory Domain Bytes Total |Bytes |Average |The amount of memory (in bytes) allocated to the domain. The memory value in domain XML file |Node |
+|KubevirtVmiMemoryDomainBytesTotal |No |Kubevirt VMI Memory Domain Bytes Total (Preview) |Bytes |Average |The amount of memory (in bytes) allocated to the domain. The memory value in domain XML file |Node |
|KubevirtVmiMemorySwapInTrafficBytesTotal |No |Kubevirt VMI Memory Swap In Traffic Bytes Total |Bytes |Average |The total amount of data read from swap space of the guest (in bytes) |Name, Node |
|KubevirtVmiMemorySwapOutTrafficBytesTotal |No |Kubevirt VMI Memory Swap Out Traffic Bytes Total |Bytes |Average |The total amount of memory written out to swap space of the guest (in bytes) |Name, Node |
|KubevirtVmiMemoryUnusedBytes |No |Kubevirt VMI Memory Unused Bytes |Bytes |Average |The amount of memory left completely unused by the system. Memory that is available but used for reclaimable caches should NOT be reported as free |Name, Node |
This latest update adds a new column and reorders the metrics to be alphabetical
|KubevirtVmiPhaseCount |No |Kubevirt VMI Phase Count |Count |Average |Sum of VirtualMachineInstances (VMIs) per phase and node |Node, Phase, Workload |
|KubevirtVmiStorageIopsReadTotal |No |Kubevirt VMI Storage IOPS Read Total |Count |Average |Total number of Input/Output (I/O) read operations |Drive, Name, Node |
|KubevirtVmiStorageIopsWriteTotal |No |Kubevirt VMI Storage IOPS Write Total |Count |Average |Total number of Input/Output (I/O) write operations |Drive, Name, Node |
-|KubevirtVmiStorageReadTimesMsTotal |No |Kubevirt VMI Storage Read Times Total |Milliseconds |Average |Total time in milliseconds (ms) spent on read operations |Drive, Name, Node |
-|KubevirtVmiStorageWriteTimesMsTotal |No |Kubevirt VMI Storage Write Times Total |Milliseconds |Average |Total time in milliseconds (ms) spent on write operations |Drive, Name, Node |
-|NcVmiCpuAffinity |No |CPU Pinning Map |Count |Average |Pinning map of virtual CPUs (vCPUs) to CPUs |CPU, NUMA Node, VMI Namespace, VMI Node, VMI Name |
+|KubevirtVmiStorageReadTimesMsTotal |No |Kubevirt VMI Storage Read Times Total (Preview) |Milliseconds |Average |Total time in milliseconds (ms) spent on read operations |Drive, Name, Node |
+|KubevirtVmiStorageWriteTimesMsTotal |No |Kubevirt VMI Storage Write Times Total (Preview) |Milliseconds |Average |Total time in milliseconds (ms) spent on write operations |Drive, Name, Node |
+|NcVmiCpuAffinity |No |CPU Pinning Map (Preview) |Count |Average |Pinning map of virtual CPUs (vCPUs) to CPUs |CPU, NUMA Node, VMI Namespace, VMI Node, VMI Name |
+|TyphaClientLatencySecsCount |No |Typha Client Latency Secs |Count |Average |Per-client latency; that is, how far behind the current state each client is. |Pod Name |
|TyphaConnectionsAccepted |No |Typha Connections Accepted |Count |Average |Total number of connections accepted over time |Pod Name |
|TyphaConnectionsDropped |No |Typha Connections Dropped |Count |Average |Total number of connections dropped due to rebalancing |Pod Name |
|TyphaPingLatencyCount |No |Typha Ping Latency |Count |Average |Round-trip ping/pong latency to client. Typha's protocol includes a regular ping/pong keepalive to verify that the connection is still up |Pod Name |
This latest update adds a new column and reorders the metrics to be alphabetical
|PurefaHostSpaceBytes |No |Nexus Storage Host Space Bytes |Bytes |Average |Storage array host space in bytes |Dimension, Host |
|PurefaHostSpaceDatareductionRatio |No |Nexus Storage Host Space Datareduction Ratio |Percent |Average |Storage array host volumes data reduction ratio |Host |
|PurefaHostSpaceSizeBytes |No |Nexus Storage Host Space Size Bytes |Bytes |Average |Storage array host volumes size |Host |
-|PurefaInfo |No |Nexus Storage Info |Unspecified |Average |Storage array system information |Array Name |
+|PurefaInfo |No |Nexus Storage Info (Preview) |Unspecified |Average |Storage array system information |Array Name |
|PurefaVolumePerformanceIOPS |No |Nexus Storage Volume Performance IOPS |Count |Average |Storage array volume IOPS |Dimension, Volume |
|PurefaVolumePerformanceLatencyUsec |No |Nexus Storage Volume Performance Latency (Microseconds) |MilliSeconds |Average |Storage array volume latency in microseconds |Dimension, Volume |
|PurefaVolumePerformanceThroughputBytes |No |Nexus Storage Volume Performance Throughput Bytes |Bytes |Average |Storage array volume throughput |Dimension, Volume |
This latest update adds a new column and reorders the metrics to be alphabetical
|cache_used_percent |Yes |Cache used percentage |Percent |Maximum |Cache used percentage. Applies only to data warehouses. |No Dimensions |
|connection_failed |Yes |Failed Connections : System Errors |Count |Total |Failed Connections |Error |
|connection_failed_user_error |Yes |Failed Connections : User Errors |Count |Total |Failed Connections : User Errors |Error |
-|connection_successful |Yes |Successful Connections |Count |Total |Successful Connections |SslProtocol |
+|connection_successful |Yes |Successful Connections |Count |Total |Successful Connections |SslProtocol, ConnectionPolicyResult |
|cpu_limit |Yes |CPU limit |Count |Average |CPU limit. Applies to vCore-based databases. |No Dimensions | |cpu_percent |Yes |CPU percentage |Percent |Average |CPU percentage |No Dimensions | |cpu_used |Yes |CPU used |Count |Average |CPU used. Applies to vCore-based databases. |No Dimensions |
This latest update adds a new column and reorders the metrics to be alphabetical
- [Export metrics to storage, Event Hub, or Log Analytics](../essentials/platform-logs-overview.md)
-<!--Gen Date: Wed Jun 28 2023 02:42:02 GMT+0800 (China Standard Time)-->
+<!--Gen Date: Mon Jul 03 2023 13:34:26 GMT+0800 (China Standard Time)-->
azure-monitor Resource Logs Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/resource-logs-categories.md
Title: Supported categories for Azure Monitor resource logs
description: Understand the supported services and event schemas for Azure Monitor resource logs. Previously updated : 06/27/2023 Last updated : 07/03/2023
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| |||| |AgentHealthStatus |AgentHealthStatus |No |
-|AutoscaleEvaluationPooled |Autoscale logs for pooled host pools - private preview |Yes |
+|AutoscaleEvaluationPooled |Autoscale logs for pooled host pools - private preview [Microsoft internal only] |Yes |
|Checkpoint |Checkpoint |No | |Connection |Connection |No | |ConnectionGraphicsData |Connection Graphics Data Logs Preview |Yes |
If you think something is missing, you can open a GitHub comment at the bottom o
|Category|Category Display Name|Costs To Export| ||||
+|DefenderSecurity |Security - Defender |Yes |
|SecurityCritical |Security - Critical |Yes | |SecurityDebug |Security - Debug |Yes | |SecurityError |Security - Error |Yes |
If you think something is missing, you can open a GitHub comment at the bottom o
|||| |HybridConnectionsEvent |HybridConnections Events |No | |HybridConnectionsLogs |HybridConnectionsLogs |Yes |
+|VNetAndIPFilteringLogs |VNet/IP Filtering Connection Logs |Yes |
## Microsoft.Search/searchServices <!-- Data source : naam-->
If you think something is missing, you can open a GitHub comment at the bottom o
* [Analyze logs from Azure storage with Log Analytics](./resource-logs.md#send-to-log-analytics-workspace)
-<!--Gen Date: Wed Jun 28 2023 02:42:02 GMT+0800 (China Standard Time)-->
+<!--Gen Date: Mon Jul 03 2023 13:34:26 GMT+0800 (China Standard Time)-->
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
This section provides references to SAP on Azure solutions.
### SAP tech community and blog posts
-* [Azure NetApp Files – SAP HANA backup in seconds](https://blog.netapp.com/azure-netapp-files-sap-hana-backup-in-seconds/)
-* [Azure NetApp Files – Restore your HANA database from a snapshot backup](https://blog.netapp.com/azure-netapp-files-backup-sap-hana)
-* [Azure NetApp Files – SAP HANA offloading backup with Cloud Sync](https://blog.netapp.com/azure-netapp-files-sap-hana)
-* [Speed up your SAP HANA system copies using Azure NetApp Files](https://blog.netapp.com/sap-hana-faster-using-azure-netapp-files/)
-* [Cloud Volumes ONTAP and Azure NetApp Files: SAP HANA system migration made easy](https://blog.netapp.com/cloud-volumes-ontap-and-azure-netapp-files-sap-hana-system-migration-made-easy/)
* [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 1](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2078737) * [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 2](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2117130) * [Architectural Decisions to maximize ANF investment in HANA N+M Scale-Out Architecture - Part 3](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/architectural-decisions-to-maximize-anf-investment-in-hana-n-m/ba-p/2215948)
azure-netapp-files Network Attached Storage Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-concept.md
+
+ Title: Understand NAS concepts in Azure NetApp Files
+description: This article covers important information about NAS volumes when using Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 06/26/2023++
+# Understand NAS concepts in Azure NetApp Files
+
+Network Attached Storage (NAS) is a way for a centralized storage system to present data to multiple networked clients across a WAN or LAN.
++
+Datasets in a NAS environment can be structured (data in a well-defined format, such as databases) or unstructured (data not stored in a structured database format, such as images, media files, logs, home directories, etc.). Regardless of the structure, the data is served through a standard conversation between a NAS client and the Azure NetApp Files NAS services. The conversation happens following these basic steps:
+
+1. A client requests access to a NAS share in Azure NetApp Files using either SMB or NFS.
+1. Access controls can be as basic as a client hostname/IP address or more complex, such as username authentication and share-level permissions.
+1. Azure NetApp Files receives this request and checks the access controls to verify if the client is allowed to access the NAS share.
+1. Once the share-level access has been verified successfully, the client attempts to populate the NAS share's contents via a basic read/listing.
+1. Azure NetApp Files then checks file-level permissions. If the user attempting access to the share does not have the proper permissions, access is denied, even if the share-level permissions allowed access.
+1. Once this process is complete, file and folder access controls take over in the same way you'd expect for any Linux or Windows client.
+1. Azure NetApp Files configuration handles share permission controls. File and folder permissions are always controlled from the NAS clients accessing the shares by the NAS administrator.
+
+## NAS use cases
+
+NAS is a common protocol across many industries, including oil & gas, high performance computing, media and entertainment, EDA, financial services, healthcare, genomics, manufacturing, higher education, and many others. Workloads can vary from simple file shares and home directories to applications with thousands of cores pushing operations to a single share, as well as more modernized application stacks, such as Kubernetes and container deployments.
++
+## Next steps
+* [Understand NAS protocols](network-attached-storage-protocols.md)
+* [Azure NetApp Files NFS FAQ](faq-nfs.md)
+* [Azure NetApp Files SMB FAQ](faq-smb.md)
azure-netapp-files Network Attached Storage Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/network-attached-storage-protocols.md
+
+ Title: Understand NAS protocols in Azure NetApp Files
+description: Learn how SMB and NFS operate in Azure NetApp Files.
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 06/26/2023+++
+# Understand NAS protocols in Azure NetApp Files
+
+NAS protocols are how conversations happen between clients and servers. NFS and SMB are the NAS protocols used in Azure NetApp Files. Each offers its own distinct methods for communication, but at their root, they operate mostly in the same way.
+
+* Both serve a single dataset to many disparate network-attached clients.
+* Both can leverage encrypted authentication methods for sharing data.
+* Both can be gated with share and file permissions.
+* Both can encrypt data in-flight.
+* Both can use multiple connections to help parallelize performance.
+
+## Network File System (NFS)
+
+NFS is primarily used with Linux/UNIX-based clients such as Red Hat, SUSE, Ubuntu, AIX, Solaris, and Apple OS. Azure NetApp Files supports any NFS client that operates within the Request for Comments (RFC) standards. Windows can also use NFS for access, but it does not operate within the RFC standards.
+
+RFC standards for NFS protocols can be found here:
+
+* [RFC-1813: NFSv3](https://www.ietf.org/rfc/rfc1813.txt)
+* [RFC 8881: NFSv4.1](https://www.rfc-editor.org/rfc/rfc8881)
+* [RFC 7862: NFSv4.2](https://datatracker.ietf.org/doc/html/rfc7862)
+
+### NFSv3
+
+NFSv3 is a basic offering of the protocol and has the following key attributes:
+* NFSv3 is stateless, meaning that the NFS server does not keep track of the states of connections (including locks).
+* Locking is handled outside of the NFS protocol, using Network Lock Manager (NLM). Because locks are not integrated into the protocol, stale locks can sometimes occur.
+* Since NFSv3 is stateless, performance with NFSv3 can be substantially better in some workloads (particularly workloads with high metadata operations such as OPEN, CLOSE, SETATTR, GETATTR), as there is less general work that needs to be done to process requests on the server and client.
+* NFSv3 uses a basic file permission model where only the owner of the file, a group and everyone else can be assigned a combination of read/write/execute permissions.
+* NFSv3 can use NFSv4.x ACLs, but an NFSv4.x management client would be required to configure and manage the ACLs. Azure NetApp Files does not support the use of nonstandard POSIX draft ACLs.
+* NFSv3 also requires use of other ancillary protocols for regular operations such as port discovery, mounting, locking, status monitoring and quotas. Each ancillary protocol uses a unique network port, which means NFSv3 operations require more exposure through firewalls with well-known port numbers.
+* Azure NetApp Files uses the following port numbers for NFSv3 operations. It's not possible to change these port numbers:
+ * Portmapper (111)
+ * Mount (635)
+ * NFS (2049)
+ * NLM (4045)
+ * NSM (4046)
+ * Rquota (4049)
+* NFSv3 can use security enhancements such as Kerberos, but Kerberos only affects the NFS portion of the packets; ancillary protocols (such as NLM, portmapper, mount) are not included in the Kerberos conversation.
+ * Azure NetApp Files only supports NFSv4.1 Kerberos encryption
+* NFSv3 uses numeric IDs for its user and group authentication. Usernames and group names are not required for communication or permissions, which can make spoofing a user easier, but configuration and management are simpler.
+* NFSv3 can use LDAP for user and group lookups.
+
+### NFSv4.x
+
+NFSv4.x refers to all NFS versions/minor versions that are under NFSv4. This includes NFSv4.0, NFSv4.1 and NFSv4.2. Azure NetApp Files currently only supports NFSv4.1.
+
+NFSv4.x has the following characteristics:
+
+* NFSv4.x is a stateful protocol, which means that the client and server keep track of the states of the NFS connections, including lock states. The NFS mount uses a concept known as a "state ID" to keep track of the connections.
+* Locking is integrated into the NFS protocol and does not require ancillary locking protocols to keep track of NFS locks. Instead, locks are granted on a lease basis and expire after a certain period of time if the client/server connection is lost, thus returning the lock to the system for use with other NFS clients.
+* The statefulness of NFSv4.x does contain some drawbacks, such as potential disruptions during network outages or storage failovers, and performance overhead in certain workload types (such as high metadata workloads).
+* NFSv4.x provides many significant advantages over NFSv3, including:
+ * Better locking concepts (lease-based locking)
+ * Better security (fewer firewall ports needed, standard integration with Kerberos, granular access controls)
+ * More features
+ * Compound NFS operations (multiple commands in a single packet request to reduce network chatter)
+ * TCP-only
+* NFSv4.x can use a more robust file permission model that is similar to Windows NTFS permissions. These granular ACLs can be applied to users or groups and allow for permissions to be set on a wider range of operations than basic read/write/execute operations. NFSv4.x can also use the standard POSIX mode bits that NFSv3 employs.
+* Since NFSv4.x does not use ancillary protocols, Kerberos is applied to the entire NFS conversation when in use.
+* NFSv4.x uses a combination of user/group names and domain strings to verify user and group information. The client and server must agree on the domain strings for proper user and group authentication to occur. If the domain strings do not match, then the NFS user or group gets squashed to the specified user in the /etc/idmapd.conf file on the NFS client (for example, nobody). A sample idmapd.conf appears after this list.
+* While NFSv4.x does default to using domain strings, it is possible to configure the client and server to fall back on the classic numeric IDs seen in NFSv3 when AUTH_SYS is in use.
+* Because NFSv4.x has such deep integration with user and group name strings and because the server and clients must agree on these users/groups, using a name service server for user authentication such as LDAP is recommended on NFS clients and servers.
+
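+As a minimal sketch of the domain-string configuration described in this list, the relevant portion of `/etc/idmapd.conf` might look like the following; the `contoso.com` domain and the `nobody` fallback user and group are illustrative assumptions, and the `Domain` value must match on both the client and the server:
+
+```
+[General]
+# Must match the NFSv4 ID mapping domain used by the server.
+Domain = contoso.com
+
+[Mapping]
+# Unmapped identities are squashed to this user and group.
+Nobody-User = nobody
+Nobody-Group = nobody
+```
+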
+For frequently asked questions regarding NFS in Azure NetApp Files, see the [Azure NetApp Files NFS FAQ](faq-nfs.md).
+
+## Server Message Block (SMB)
+
+SMB is primarily used with Windows clients for NAS functionality. However, it can also be used on other operating systems, such as Apple macOS and Linux distributions such as Red Hat. On Linux, this deployment is generally accomplished using an application called Samba. Azure NetApp Files has official support for SMB using Windows and macOS. SMB/Samba on Linux operating systems can work with Azure NetApp Files, but there is no official support.
+
+Azure NetApp Files supports only SMB 2.1 and SMB 3.1 versions.
+
+SMB has the following characteristics:
+
+* SMB is a stateful protocol: the clients and server maintain a "state" for SMB share connections for better security and locking.
+* Locking in SMB is considered mandatory. Once a file is locked, no other client can write to that file until the lock is released.
+* SMBv2.x and later leverage compound calls to perform operations.
+* SMB supports full Kerberos integration. With the way Windows clients are configured, Kerberos is often in use without end users ever knowing.
+* When Kerberos is unable to be used for authentication, Windows NT LAN Manager (NTLM) may be used as a fallback. If NTLM is disabled in the Active Directory environment, then authentication requests that cannot use Kerberos fail.
+* SMBv3.0 and later versions support [end-to-end encryption](azure-netapp-files-create-volumes-smb.md) for SMB shares.
+* SMBv3.x supports [multichannel](../storage/files/storage-files-smb-multichannel-performance.md) for performance gains in certain workloads.
+* SMB uses user and group names (via SID translation) for authentication. User and group information is provided by an Active Directory domain controller.
+* SMB in Azure NetApp Files uses standard Windows New Technology File System (NTFS) [ACLs](/windows/win32/secauthz/access-control-lists) for file and folder permissions.
+
+For frequently asked questions regarding SMB in Azure NetApp Files, see the [Azure NetApp Files SMB FAQ](faq-smb.md).
+
+## Next steps
+* [Azure NetApp Files NFS FAQ](faq-nfs.md)
+* [Azure NetApp Files SMB FAQ](faq-smb.md)
azure-resource-manager Azure Services Resource Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/azure-services-resource-providers.md
The resources providers that are marked with **- registered** are registered by
| Microsoft.Cache | [Azure Cache for Redis](../../azure-cache-for-redis/index.yml) | | Microsoft.Capacity | core | | Microsoft.Cdn | [Content Delivery Network](../../cdn/index.yml) |
-| Microsoft.CertificateRegistration | [App Service Certificates](../../app-service/configure-ssl-certificate.md#import-certificate-into-app-service) |
+| Microsoft.CertificateRegistration | [App Service Certificates](../../app-service/configure-ssl-app-service-certificate.md) |
| Microsoft.ChangeAnalysis | [Azure Monitor](../../azure-monitor/index.yml) | | Microsoft.ClassicCompute | Classic deployment model virtual machine | | Microsoft.ClassicInfrastructureMigrate | Classic deployment model migration |
azure-resource-manager Deployment Tutorial Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deployment-tutorial-pipeline.md
Title: Continuous integration with Azure Pipelines description: Learn how to continuously build, test, and deploy Azure Resource Manager templates (ARM templates). Previously updated : 05/22/2023 Last updated : 06/30/2023
In the [previous tutorial](./deployment-tutorial-linked-template.md), you deploy
Azure DevOps provides developer services to support teams to plan work, collaborate on code development, and build and deploy applications. Developers can work in the cloud using Azure DevOps Services. Azure DevOps provides an integrated set of features that you can access through your web browser or IDE client. Azure Pipelines is one of these features. Azure Pipelines is a fully featured continuous integration (CI) and continuous delivery (CD) service. It works with your preferred Git provider and can deploy to most major cloud services. Then you can automate the build, testing, and deployment of your code to Microsoft Azure, Google Cloud Platform, or Amazon Web Services. > [!NOTE]
-> Pick a project name. When you go through the tutorial, replace any of the **AzureRmPipeline** with your project name.
+> Pick a project name. When you go through the tutorial, replace any instance of **ARMPipelineProj** with your project name.
> This project name is used to generate resource names. One of the resources is a storage account. Storage account names must be between 3 and 24 characters in length and use numbers and lower-case letters only. The name must be unique. In the template, the storage account name is the project name with **store** appended, and the project name must be between 3 and 11 characters. So the project name must meet the storage account name requirements and have fewer than 11 characters. This tutorial covers the following tasks:
If you don't have a GitHub account, see [Prerequisites](#prerequisites).
![Azure Resource Manager Azure DevOps Azure Pipelines create GitHub repository](./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-github-repository.png) 1. Select **New**, a green button.
-1. In **Repository name**, enter a repository name. For example, **AzureRmPipeline-repo**. Remember to replace any of **AzureRmPipeline** with your project name. You can select either **Public** or **private** for going through this tutorial. And then select **Create repository**.
+1. In **Repository name**, enter a repository name. For example, **ARMPipeline-repo**. Remember to replace any instance of **ARMPipeline** with your project name. You can select either **Public** or **Private** for this tutorial. Then select **Create repository**.
1. Write down the URL. The repository URL is the following format - `https://github.com/[YourAccountName]/[YourRepositoryName]`. This repository is referred to as a *remote repository*. Each of the developers of the same project can clone their own *local repository*, and merge the changes to the remote repository.
So far, you have created a GitHub repository, and uploaded the templates to the
A DevOps organization is needed before you can proceed to the next procedure. If you don't have one, see [Prerequisites](#prerequisites).
-1. Sign in to [Azure DevOps](https://dev.azure.com).
-1. Select a DevOps organization from the left.
+1. Sign in to [Azure DevOps](https://go.microsoft.com/fwlink/?LinkId=307137).
+1. Select a DevOps organization from the left, and then select **New project**. If you don't have any projects, the create project page is opened automatically.
+1. Enter the following values:
![Azure Resource Manager Azure DevOps Azure Pipelines create Azure DevOps project](./media/deployment-tutorial-pipeline/azure-resource-manager-devops-pipelines-create-devops-project.png)
-1. Select **New project**. If you don't have any projects, the create project page is opened automatically.
-1. Enter the following values:
-
- * **Project name**: enter a project name. You can use the project name you picked at the very beginning of the tutorial.
- * **Version control**: Select **Git**. You might need to expand **Advanced** to see **Version control**.
+ * **Project name**: Enter a project name. You can use the project name you picked at the very beginning of the tutorial.
+ * **Visibility**: Select **Private**.
Use the default value for the other properties. 1. Select **Create**.
Create a service connection that is used to deploy projects to Azure.
1. Select **Project settings** from the bottom of the left menu. 1. Select **Service connections** under **Pipelines**. 1. Select **Create Service connection**, select **Azure Resource Manager**, and then select **Next**.
-1. Select **Service principal**, and then select **Next**.
+1. Select **Service principal (automatic)**, and then select **Next**.
1. Enter the following values: * **Scope level**: select **Subscription**. * **Subscription**: select your subscription. * **Resource Group**: Leave it blank.
- * **Connection name**: enter a connection name. For example, **AzureRmPipeline-conn**. Write down this name, you need the name when you create your pipeline.
+ * **Connection name**: Enter a connection name. For example, **ARMPipeline-conn**. Write down this name; you need it when you create your pipeline.
* **Grant access permission to all pipelines**. (selected) 1. Select **Save**.
To create a pipeline with a step to deploy a template:
* **Azure Resource Manager connection**: Select the service connection name that you created earlier. * **Subscription**: Specify the target subscription ID. * **Action**: Select **Create Or Update Resource Group**. This action does two things: it creates a resource group if a new resource group name is provided, and it deploys the template specified.
- * **Resource group**: Enter a new resource group name. For example, **AzureRmPipeline-rg**.
+ * **Resource group**: Enter a new resource group name. For example, **ARMPipeline-rg**.
* **Location**: Select a location for the resource group, for example, **Central US**. * **Template location**: Select **URL of the file**, which means the task looks for the template file by using the URL. Because _relativePath_ is used in the main template and _relativePath_ is only supported on URI-based deployments, you must use URL here. * **Template link**: Enter the URL that you got at the end of the [Prepare a GitHub repository](#prepare-a-github-repository) section. It starts with `https://raw.githubusercontent.com`.
azure-video-indexer Considerations When Use At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/considerations-when-use-at-scale.md
Title: Things to consider when using Azure Video Indexer at scale - Azure description: This topic explains what things to consider when using Azure Video Indexer at scale. Previously updated : 11/13/2020 Last updated : 07/03/2023
To see an example of how to upload videos using URL, check out [this example](up
## Automatic Scaling of Media Reserved Units
-Starting August 1st 2021, Azure Video Indexer enabled [Reserved Units](/azure/media-services/latest/concept-media-reserved-units)(MRUs) auto scaling by [Azure Media Services](/azure/media-services/latest/media-services-overview) (AMS), as a result you do not need to manage them through Azure Video Indexer. That will allow price optimization, e.g. price reduction in many cases, based on your business needs as it is being auto scaled.
+Starting August 1, 2021, Azure Video Indexer enabled [Reserved Units](/azure/media-services/latest/concept-media-reserved-units) (MRUs) auto scaling by [Azure Media Services](/azure/media-services/latest/media-services-overview) (AMS); as a result, you do not need to manage them through Azure Video Indexer. That allows price optimization (for example, price reduction in many cases) based on your business needs as it is auto scaled.
## Respect throttling
-Azure Video Indexer is built to deal with indexing at scale, and when you want to get the most out of it you should also be aware of the system's capabilities and design your integration accordingly. You don't want to send an upload request for a batch of videos just to discover that some of the movies didn't upload and you are receiving an HTTP 429 response code (too many requests). It can happen if the number of requests exceeds our API request limit of 10 requests per second or 60 requests per minute. Azure Video Indexer adds a `retry-after` header in the HTTP response, the header specifies when you should attempt your next retry. Make sure you respect it before trying your next request.
+Azure Video Indexer is built to deal with indexing at scale, and when you want to get the most out of it you should also be aware of the system's capabilities and design your integration accordingly. You don't want to send an upload request for a batch of videos just to discover that some of the movies didn't upload and you are receiving an HTTP 429 response code (too many requests). There is an API request limit of 120 requests per minute.
+ Azure Video Indexer adds a `retry-after` header in the HTTP response; the header specifies when you should attempt your next retry. Make sure you respect it before trying your next request.
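+
+A minimal sketch of honoring that header with a hypothetical `uploadWithRetry` helper; the 10-second fallback is an assumption, not a service default:
+
+```js
+// Retry a request when the service throttles with HTTP 429,
+// waiting for the interval given in the retry-after header.
+async function uploadWithRetry(url, options, maxRetries = 5) {
+  for (let attempt = 0; attempt <= maxRetries; attempt++) {
+    const response = await fetch(url, options);
+    if (response.status !== 429) {
+      return response;
+    }
+    // retry-after is expressed in seconds; fall back to 10 seconds if absent.
+    const waitSeconds = Number(response.headers.get('retry-after')) || 10;
+    await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000));
+  }
+  throw new Error('Request still throttled after all retries');
+}
+```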
:::image type="content" source="./media/considerations-when-use-at-scale/respect-throttling.jpg" alt-text="Design your integration well, respect throttling":::
You might be asking, what video quality do you need for indexing your videos?
In many cases, indexing performance has almost no difference between HD (720P) videos and 4K videos. Eventually, you'll get almost the same insights with the same confidence. The higher the quality of the movie you upload, the larger the file size, and this leads to more computing power and time needed to upload the video.
-For example, for the face detection feature, a higher resolution can help with the scenario where there are many small but contextually important faces. However, this will come with a quadratic increase in runtime and an increased risk of false positives.
+For example, for the face detection feature, a higher resolution can help with the scenario where there are many small but contextually important faces. However, this comes with a quadratic increase in runtime and an increased risk of false positives.
Therefore, we recommend that you verify you get the right results for your use case and that you first test it locally. Upload the same video in 720P and in 4K and compare the insights you get.
azure-video-indexer Face Redaction With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/face-redaction-with-api.md
Title: Redact faces with Azure Video Indexer API description: This article shows how to use Azure Video Indexer face redaction feature using API. Previously updated : 06/26/2023 Last updated : 07/03/2023 # Redact faces with Azure Video Indexer API
Face service access is limited based on eligibility and usage criteria in order
## Redactor terminology and hierarchy
-The Face Redactor in Video Indexer relies on the output of the existing Video Indexer Face Detection results provided in our Video Standard and Advanced Analysis presets. In order to redact a video, you must first upload a video to Video Indexer and perform an analysis using the **standard** or **Advanced** video presets. This can be done using the [Azure Video Indexer website](https://www.videoindexer.ai/media/library) or [API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). You can then use the Redactor API to reference this video using the `videoId` and we create a new video with the redacted faces.
+The Face Redactor in Video Indexer relies on the output of the existing Video Indexer Face Detection results provided in our Video Standard and Advanced Analysis presets. To redact a video, you must first upload a video to Video Indexer and perform an analysis using the **Standard** or **Advanced** video presets. This can be done using the [Azure Video Indexer website](https://www.videoindexer.ai/media/library) or [API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Upload-Video). You can then use the Redactor API to reference this video using the `videoId`, and we create a new video with the redacted faces. Video analysis and face redaction are separate billable jobs. See our [pricing page](https://azure.microsoft.com/pricing/details/video-indexer/) for more information.
## Blurring kinds
This will redirect to the mp4 stored on the Azure Storage Account.
|Can I play back the redacted video using the Video Indexer [website](https://www.videoindexer.ai/)?|Yes, the redacted video is visible in Video Indexer like any other indexed video; however, it doesn't contain any insights. | |How do I delete a redacted video? |You can use the [Delete Video](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) API and provide the `Videoid` of the redacted video. | |Do I need to pass Facial Identification gating to use Redactor? |Unless you're a US Police Department, no; even when you're gated, we continue to offer Face Detection. We don't offer Face Identification when gated. You can, however, redact all faces in a video with just the Face Detection. |
+|Will the Face Redaction overwrite my original video? |No, the Redaction job will create a new video output file. |
|Not all faces are properly redacted. What can I do? |Redaction relies on the initial Face Detection and tracking output of the Analysis pipeline. While we detect all faces most of the time, there can be circumstances where a face isn't detected. This can have several causes, such as the face angle, the number of frames the face was present, and the quality of the source video. See our [Face insights](face-detection.md) documentation for more information. | |Can I redact objects other than faces? |No, currently we only have face redaction. If you need other objects redacted, provide feedback to our product in the [Azure User Voice](https://feedback.azure.com/d365community/forum/8952b9e3-e03b-ec11-8c62-00224825aadf) channel. | |How long is a SAS URL valid to download the redacted video? |<!--The SAS URL is valid for xxxx. -->To download the redacted video after the SAS URL has expired, you need to call the initial Job status URL. It's best to keep these `Jobstatus` URLs in a database in your backend for future reference. |
azure-video-indexer Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/release-notes.md
Title: Azure Video Indexer release notes | Microsoft Docs
description: To stay up-to-date with the most recent developments, this article provides you with the latest updates on Azure Video Indexer. Previously updated : 05/24/2023 Last updated : 07/03/2023
To stay up-to-date with the most recent Azure Video Indexer developments, this a
## July 2023
+### Redact faces with Azure Video Indexer API
+ You can now redact faces with the Azure Video Indexer API. For more information, see [Redact faces with Azure Video Indexer API](face-redaction-with-api.md).
+### Upload a video API request limit increase
+
+The upload video API request limit was increased from 60 to 120 requests per minute.
+ ## June 2023 ### FAQ - following the Azure Media Services retirement announcement
azure-video-indexer Video Indexer Use Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/video-indexer-use-apis.md
Title: Use the Azure Video Indexer API description: This article describes how to get started with Azure Video Indexer API. Previously updated : 08/14/2022 Last updated : 07/03/2023
When you're uploading videos by using the API, you have the following options:
* Upload your video from a URL (preferred); a sketch of this option appears after this list. * Send the video file as a byte array in the request body. * Use an existing Azure Media Services asset by providing the [asset ID](/azure/media-services/latest/assets-concept). This option is supported in paid accounts only.
-* There is an API request limit of 10 requests per second or 60 requests per minute.
+* There is an API request limit of 120 requests per minute.
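+
+As a sketch of the URL-based option, assuming placeholder values for the location, account ID, access token, and video URL; the query parameters follow the public Upload-Video API:
+
+```js
+// Placeholder values; substitute your own account details.
+const location = "trial";
+const accountId = "<ACCOUNT_ID>";
+const accessToken = "<ACCESS_TOKEN>";
+const videoUrl = encodeURIComponent("https://example.com/video.mp4");
+
+const uploadUrl = `https://api.videoindexer.ai/${location}/Accounts/${accountId}` +
+  `/Videos?name=sample&videoUrl=${videoUrl}&accessToken=${accessToken}`;
+
+// The upload request carries no body when a video URL is supplied.
+const response = await fetch(uploadUrl, { method: "POST" });
+const video = await response.json(); // contains the new video's id
+```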
### Getting JSON output
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
The following items are needed to ensure you're set up to begin the onboarding p
- Verify that your vCenter Server version is 6.7 or higher. - A resource pool with minimum-free capacity of 16 GB of RAM, 4 vCPUs. - A datastore with minimum 100 GB of free disk space that is available through the resource pool. -- On the vCenter Server, allow inbound connections on TCP port 443, so that the Arc resource bridge and VMware cluster extension can communicate with the vCenter server.
+- On the vCenter Server, allow inbound connections on TCP port 443, so that the Arc resource bridge and VMware cluster extension can communicate with the vCenter server.
+- Validate regional support before starting the onboarding. Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For more information, see [Azure Arc-enabled VMware vSphere](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/overview).
+- The firewall and proxy URLs must be allowlisted to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. For the list, see [Azure Arc resource bridge (preview) network requirements](../azure-arc/resource-bridge/network-requirements.md).
> [!NOTE] > Only the default port of 443 is supported. If you use a different port, Appliance VM creation will fail.
Use the following steps to uninstall extensions from the portal.
>**Steps 2-5** must be performed for all the VMs that have VM extensions installed. 1. Log in to your Azure VMware Solution private cloud.
-1. Select **Virtual machines** in **Private cloud**, found in the left navigation under "Arc-enabled VMware resources".
+1. Select **Virtual machines** in **Private cloud**, found in the left navigation under "vCenter Server Inventory Page".
1. Search and select the virtual machine where you have **Guest management** enabled. 1. Select **Extensions**. 1. Select the extensions and select **Uninstall**.
At this point, all of your Arc-enabled VMware vSphere resources have been remove
## Delete Arc resources from vCenter Server
-For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Once that step is done, Arc won't work on the Azure VMware Solution SDDC. When you delete Arc resources from vCenter, it won't affect the Azure VMware Solution private cloud for the customer.
+For the final step, you'll need to delete the resource bridge VM and the VM template that were created during the onboarding process. Log in to vCenter and delete the resource bridge VM and the VM template from inside the arc-folder. Once that step is done, Arc won't work on the Azure VMware Solution SDDC. When you delete Arc resources from vCenter, it won't affect the Azure VMware Solution private cloud for the customer.
## Preview FAQ
Use the following tips as a self-help guide.
## Appendices
-Appendix 1 shows proxy URLs required by the Azure Arc-enabled private cloud. The URLs will get pre-fixed when the script runs and can be run from the jumpbox VM to ping them.
--
-| **Azure Arc Service** | **URL** |
-| :-- | :-- |
-| Microsoft container registry | `https://mcr.microsoft.com` |
-| Azure Arc Identity service | `https://*.his.arc.azure.com` |
-| Azure Arc configuration service | `https://*.dp.kubernetesconfiguration.azure.com` |
-| Cluster connect | `https://*.servicebus.windows.net` |
-| Guest Notification service | `https://guestnotificationservice.azure.com` |
-| Resource bridge (appliance) Dataplate service | `https://*.dp.prod.appliances.azure.com` |
-| Resource bridge (appliance) container image download | `https://ecpacr.azurecr.io` |
-| Resource bridge (appliance) image download | `https://.blob.core.windows.net https://*.dl.delivery.mp.microsoft.com https://*.do.dsp.mp.microsoft.com` |
-| Azure Resource Manager | `https://management.azure.com` |
-| Azure Active Directory | `https://login.mirosoftonline.com` |
-
+Appendix 1 shows proxy URLs required by the Azure Arc-enabled private cloud. The URLs are prefixed when the script runs, and you can ping them from the jumpbox VM. The firewall and proxy URLs must be allowlisted to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. For the list, see:
+[Azure Arc resource bridge (preview) network requirements](../azure-arc/resource-bridge/network-requirements.md)
**Additional URL resources**
cognitive-services Call Analyze Image 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/how-to/call-analyze-image-40.md
You can specify which features you want to use by setting the URL query paramete
|||--| |`features`|`read` | Reads the visible text in the image and outputs it as structured JSON data.| |`features`|`caption` | Describes the image content with a complete sentence in supported languages.|
-|`features`|`denseCaption` | Generates detailed captions for up to 10 prominent image regions. |
+|`features`|`denseCaptions` | Generates detailed captions for up to 10 prominent image regions. |
|`features`|`smartCrops` | Finds the rectangle coordinates that would crop the image to a desired aspect ratio while preserving the area of interest.| |`features`|`objects` | Detects various objects within an image, including the approximate location. The Objects argument is only available in English.| |`features`|`tags` | Tags the image with a detailed list of words related to the image content.|
You can specify which features you want to use by setting the URL query paramete
A populated URL might look like this:
-`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=tags,read,caption,denseCaption,smartCrops,objects,people`
+`https://<endpoint>/computervision/imageanalysis:analyze?api-version=2023-02-01-preview&features=tags,read,caption,denseCaptions,smartCrops,objects,people`
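+
+A minimal sketch of calling this endpoint from JavaScript, assuming the endpoint and key are supplied through hypothetical `VISION_ENDPOINT` and `VISION_KEY` environment variables and the image is passed by URL:
+
+```js
+const endpoint = process.env.VISION_ENDPOINT; // e.g. https://<resource>.cognitiveservices.azure.com
+const key = process.env.VISION_KEY;
+
+const analyzeUrl = `${endpoint}/computervision/imageanalysis:analyze` +
+  `?api-version=2023-02-01-preview&features=caption,denseCaptions,tags`;
+
+const response = await fetch(analyzeUrl, {
+  method: "POST",
+  headers: {
+    "Ocp-Apim-Subscription-Key": key,
+    "Content-Type": "application/json"
+  },
+  // For URL-based analysis, the request body carries the image URL.
+  body: JSON.stringify({ url: "https://example.com/sample.jpg" })
+});
+console.log(await response.json());
+```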
cognitive-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/use-your-data-quickstart.md
Title: 'Use your own data with Azure OpenAI service'
+ Title: 'Use your own data with Azure OpenAI Service'
description: Use this article to import and use your data in Azure OpenAI.
If you want to clean up and remove an OpenAI or Azure Cognitive Search resource,
## Next steps - Learn more about [using your data in Azure OpenAI Service](./concepts/use-your-data.md)-- [Chat app sample code on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main).
+- [Chat app sample code on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main).
communication-services Lobby Admit And Reject https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/lobby-admit-and-reject.md
- Title: Admit and reject users from Teams meeting lobby-
-description: Use Azure Communication Services SDKs to admit or reject users from Teams meeting lobby.
----- Previously updated : 03/14/2023----
-# Manage Teams meeting lobby
-
-APIs lobby admit and reject on `Call` or `TeamsCall` class allow users to admit and reject participants from Teams meeting lobby.
-
-In this article, you will learn how to admit and reject participants from Microsoft Teams meetings lobby by using Azure Communication Service calling SDKs.
-
-## Prerequisites
--- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).-- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).-- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)-
-User ends up in the lobby depending on Microsoft Teams configuration. The controls are described here:
-[Learn more about Teams configuration ](../../concepts/interop/guest/teams-administration.md)
-
-Microsoft 365 or Azure Communication Services users can admit or reject users from lobby, if they are connected to Teams meeting and have Organizer, Co-organizer, or Presenter meeting role.
-[Learn more about meeting roles](https://support.microsoft.com/office/roles-in-a-teams-meeting-c16fa7d0-1666-4dde-8686-0a0bfe16e019)
-
-To update or check current meeting join & lobby policies in Teams admin center:
-[Learn more about Teams policies](/microsoftteams/settings-policies-reference#automatically-admit-people)
--
-### Get remote participant properties
-
-The first thing is to get the `Call` or `TeamsCall` object of admitter: [Learn how to join Teams meeting](./teams-interoperability.md)
-
-To know who is in the lobby, you could check the state of a remote participant. The `remoteParticipant` with `InLobby` state indicates that remote participant is in lobby.
-To get the `remoteParticipants` collection:
-
-```js
-let remoteParticipants = call.remoteParticipants; // [remoteParticipant, remoteParticipant....]
-```
-
-To get the state of a remote participant:
-
-```js
-const state = remoteParticipant.state;
-```
-
-You could check remote participant state in subscription method:
-[Learn more about events and subscription ](./events.md)
-
-```js
-// Subscribe to a call obj.
-// Listen for property changes and collection updates.
-subscribeToCall = (call) => {
- try {
- // Inspect the call's current remote participants and subscribe to them.
- call.remoteParticipants.forEach(remoteParticipant => {
- subscribeToRemoteParticipant(remoteParticipant);
- })
- // Subscribe to the call's 'remoteParticipantsUpdated' event to be
- // notified when new participants are added to the call or removed from the call.
- call.on('remoteParticipantsUpdated', e => {
- // Subscribe to new remote participants that are added to the call.
- e.added.forEach(remoteParticipant => {
- subscribeToRemoteParticipant(remoteParticipant)
- });
- // Unsubscribe from participants that are removed from the call
- e.removed.forEach(remoteParticipant => {
- console.log('Remote participant removed from the call.');
- })
- });
- } catch (error) {
- console.error(error);
- }
-}
-
-// Subscribe to a remote participant obj.
-// Listen for property changes and collection updates.
-subscribeToRemoteParticipant = (remoteParticipant) => {
- try {
- // Inspect the initial remoteParticipant.state value.
- console.log(`Remote participant state: ${remoteParticipant.state}`);
- if(remoteParticipant.state === 'InLobby'){
- console.log(`${remoteParticipant._displayName} is in the lobby`);
- }
- // Subscribe to remoteParticipant's 'stateChanged' event for value changes.
- remoteParticipant.on('stateChanged', () => {
- console.log(`Remote participant state changed: ${remoteParticipant.state}`);
- if(remoteParticipant.state === 'InLobby'){
- console.log(`${remoteParticipant._displayName} is in the lobby`);
- }
- else if(remoteParticipant.state === 'Connected'){
- console.log(`${remoteParticipant._displayName} is in the meeting`);
- }
- });
- } catch (error) {
- console.error(error);
- }
-}
-```
-
-Before admit or reject `remoteParticipant` with `InLobby` state, you could get the identifier for a remote participant:
-
-```js
-const identifier = remoteParticipant.identifier;
-```
-
-The `identifier` can be one of the following `CommunicationIdentifier` types:
--- `{ communicationUserId: '<COMMUNICATION_SERVICES_USER_ID'> }`: Object representing the Azure Communication Services user.-- `{ phoneNumber: '<PHONE_NUMBER>' }`: Object representing the phone number in E.164 format.-- `{ microsoftTeamsUserId: '<MICROSOFT_TEAMS_USER_ID>', isAnonymous?: boolean; cloud?: "public" | "dod" | "gcch" }`: Object representing the Teams user.-- `{ id: string }`: object representing identifier that doesn't fit any of the other identifier types-
-### Start lobby operations
-
-To admit, reject or admit all users from the lobby, you can use the `admit`, `rejectParticipant` and `admitAll` asynchronous APIs:
-
-You can admit specific user to the Teams meeting from lobby by calling the method `admit` on the object `TeamsCall` or `Call`. The method accepts identifiers `MicrosoftTeamsUserIdentifier`, `CommunicationUserIdentifier`, `PhoneNumberIdentifier` or `UnknownIdentifier` as input.
-
-```js
-await call.admit(identifier);
-```
-
-You can also reject specific user to the Teams meeting from lobby by calling the method `rejectParticipant` on the object `TeamsCall` or `Call`. The method accepts identifiers `MicrosoftTeamsUserIdentifier`, `CommunicationUserIdentifier`, `PhoneNumberIdentifier` or `UnknownIdentifier` as input.
-
-```js
-await call.rejectParticipant(identifier);
-```
-
-You can also admit all users in the lobby by calling the method `admitAll` on the object `TeamsCall` or `Call`.
-
-```js
-await call.admitAll();
-```
-
-## Next steps
-- [Learn how to manage calls](./manage-calls.md)-- [Learn how to manage Teams calls](../cte-calling-sdk/manage-calls.md)-- [Learn how to join Teams meeting](./teams-interoperability.md)-- [Learn how to manage video](./manage-video.md)
communication-services Lobby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/lobby.md
+
+ Title: Teams meeting lobby
+
+description: Use Azure Communication Services SDKs to admit or reject users from Teams meeting lobby.
+++++ Last updated : 06/15/2023++++
+# Manage Teams meeting lobby
+
+In this article, you learn how to implement the Teams meeting lobby capability by using Azure Communication Services calling SDKs. This capability allows users to admit and reject participants from the Teams meeting lobby, receive the lobby join notification, and get the list of lobby participants.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/access-tokens.md).
+- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
+
+Users end up in the lobby depending on the Microsoft Teams configuration. The controls are described here:
+[Learn more about Teams configuration](../../concepts/interop/guest/teams-administration.md)
+
+Microsoft 365 or Azure Communication Services users can admit or reject users from the lobby if they're connected to the Teams meeting and have the Organizer, Co-organizer, or Presenter meeting role.
+[Learn more about meeting roles](https://support.microsoft.com/office/roles-in-a-teams-meeting-c16fa7d0-1666-4dde-8686-0a0bfe16e019)
+
+To update or check current meeting join & lobby policies in Teams admin center:
+[Learn more about Teams policies](/microsoftteams/settings-policies-reference#automatically-admit-people)
+
+**The following APIs are supported for both Communication Services and Microsoft 365 users**
+
+|APIs| Organizer | Co-Organizer | Presenter | Attendee |
+|-|--|--|--|--|
+| admit | ✔️ | ✔️ | ✔️ | |
+| reject | ✔️ | ✔️ | ✔️ | |
+| admitAll | ✔️ | ✔️ | ✔️ | |
+| getParticipants | ✔️ | ✔️ | ✔️ | ✔️ |
+| lobbyParticipantsUpdated | ✔️ | ✔️ | ✔️ | ✔️ |
+++
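+The table lists the operations without code. As a minimal sketch, assuming the `admit`, `rejectParticipant`, and `admitAll` methods remain exposed on the `Call` or `TeamsCall` object as in the earlier version of this article, and that lobby participants surface with the `InLobby` state:
+
+```js
+// Find the remote participants currently waiting in the lobby.
+const inLobby = call.remoteParticipants.filter(p => p.state === 'InLobby');
+
+if (inLobby.length > 0) {
+  // Admit one specific participant from the lobby...
+  await call.admit(inLobby[0].identifier);
+  // ...or reject a specific participant:
+  // await call.rejectParticipant(inLobby[0].identifier);
+  // ...or admit everyone who is waiting.
+  await call.admitAll();
+}
+```
+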
+## Next steps
+- [Learn how to manage calls](./manage-calls.md)
+- [Learn how to manage Teams calls](../cte-calling-sdk/manage-calls.md)
+- [Learn how to join Teams meeting](./teams-interoperability.md)
+- [Learn how to manage video](./manage-video.md)
cosmos-db Samples Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/samples-nodejs.md
You also need the [JavaScript SDK](sdk-nodejs.md).
The [DatabaseManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts) file shows how to perform the CRUD operations on the database. To learn about the Azure Cosmos DB databases before running the following samples, see [Working with databases, containers, and items](../resource-model.md) conceptual article.
-| Task | API reference |
-| -- | - |
-| [Create a database if it doesn't exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L12-L14) | [Databases.createIfNotExists](/javascript/api/@azure/cosmos/databases#createifnotexists-databaserequest--requestoptions-) |
-| [List databases for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L16-L18) | [Databases.readAll](/javascript/api/@azure/cosmos/databases#readall-feedoptions-) |
-| [Read a database by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L20-L29) | [Database.read](/javascript/api/@azure/cosmos/database#read-requestoptions-) |
-| [Delete a database](https://github.com/Azure/azure-cosmos-js/blob/master/samples/DatabaseManagement.ts#L31-L32) | [Database.delete](/javascript/api/@azure/cosmos/database#delete-requestoptions-) |
+| Task | API reference |
+| - | - |
+| [Create a database if it doesn't exist](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/DatabaseManagement.ts#LL26C3-L27C63) | [Databases.createIfNotExists](/javascript/api/@azure/cosmos/databases#createifnotexists-databaserequest--requestoptions-) |
+| [List databases for an account](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/DatabaseManagement.ts#L30-L31) | [Databases.readAll](/javascript/api/@azure/cosmos/databases#readall-feedoptions-) |
+| [Read a database by ID](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/DatabaseManagement.ts#L34) | [Database.read](/javascript/api/@azure/cosmos/database#read-requestoptions-) |
+| [Delete a database](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/DatabaseManagement.ts#LL46C18-L46C18) | [Database.delete](/javascript/api/@azure/cosmos/database#delete-requestoptions-) |
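+
+As a minimal sketch of the first task in the table above, using the `@azure/cosmos` package with placeholder endpoint and key values:
+
+```js
+import { CosmosClient } from "@azure/cosmos";
+
+// Placeholder connection values for a Cosmos DB for NoSQL account.
+const client = new CosmosClient({
+  endpoint: "https://<your-account>.documents.azure.com:443/",
+  key: "<your-key>"
+});
+
+// Create the database only if it doesn't already exist.
+const { database } = await client.databases.createIfNotExists({ id: "sampleDatabase" });
+console.log(`Database ready: ${database.id}`);
+```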
## Container examples The [ContainerManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts) file shows how to perform the CRUD operations on the container. To learn about the Azure Cosmos DB collections before running the following samples, see [Working with databases, containers, and items](../resource-model.md) conceptual article.
-| Task | API reference |
-| - | - |
-| [Create a container if it doesn't exist](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L14-L15) | [Containers.createIfNotExists](/javascript/api/@azure/cosmos/containers#createifnotexists-containerrequest--requestoptions-) |
-| [List containers for an account](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L17-L21) | [Containers.readAll](/javascript/api/@azure/cosmos/containers#readall-feedoptions-) |
-| [Read a container definition](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L23-L26) | [Container.read](/javascript/api/@azure/cosmos/container#read-requestoptions-) |
-| [Delete a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ContainerManagement.ts#L28-L30) | [Container.delete](/javascript/api/@azure/cosmos/container#delete-requestoptions-) |
+| Task | API reference |
+| -- | - |
+| [Create a container if it doesn't exist](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ContainerManagement.ts#L27) | [Containers.createIfNotExists](/javascript/api/@azure/cosmos/containers#createifnotexists-containerrequest--requestoptions-) |
+| [List containers for an account](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ContainerManagement.ts#L30-L32) | [Containers.readAll](/javascript/api/@azure/cosmos/containers#readall-feedoptions-) |
+| [Read a container definition](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ContainerManagement.ts#L36-L37) | [Container.read](/javascript/api/@azure/cosmos/container#read-requestoptions-) |
+| [Delete a container](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ContainerManagement.ts#L42-L43) | [Container.delete](/javascript/api/@azure/cosmos/container#delete-requestoptions-) |
## Item examples The [ItemManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts) file shows how to perform the CRUD operations on the item. To learn about the Azure Cosmos DB documents before running the following samples, see [Working with databases, containers, and items](../resource-model.md) conceptual article.
-| Task | API reference |
-| -- | - |
-| [Create items](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L18-L21) | [Items.create](/javascript/api/@azure/cosmos/items#create-t--requestoptions-) |
-| [Read all items in a container](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L23-L28) | [Items.readAll](/javascript/api/@azure/cosmos/items#readall-feedoptions-) |
-| [Read an item by ID](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L30-L33) | [Item.read](/javascript/api/@azure/cosmos/item#read-requestoptions-) |
-| [Read item only if item has changed](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L45-L56) | [Item.read](/javascript/api/%40azure/cosmos/item) - [RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
-| [Query for documents](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L58-L79) | [Items.query](/javascript/api/%40azure/cosmos/items) |
-| [Replace an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L81-L96) | [Item.replace](/javascript/api/%40azure/cosmos/item) |
-| [Replace item with conditional ETag check](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L98-L135) | [Item.replace](/javascript/api/%40azure/cosmos/item) - [RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
-| [Delete an item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ItemManagement.ts#L137-L140) | [Item.delete](/javascript/api/%40azure/cosmos/item) |
+| Task | API reference |
+| -- | - |
+| [Create items](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ItemManagement.ts#L33-L34) | [Items.create](/javascript/api/@azure/cosmos/items#create-t--requestoptions-) |
+| [Read all items in a container](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ItemManagement.ts#L37) | [Items.readAll](/javascript/api/@azure/cosmos/items#readall-feedoptions-) |
+| [Read an item by ID](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ItemManagement.ts#L46-L49) | [Item.read](/javascript/api/@azure/cosmos/item#read-requestoptions-) |
+| [Read item only if item has changed](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ItemManagement.ts#L51-L74) | [Item.read](/javascript/api/%40azure/cosmos/item) - [RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
+| [Query for documents](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ItemManagement.ts#L76-L97) | [Items.query](/javascript/api/%40azure/cosmos/items) |
+| [Replace an item](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ItemManagement.ts#L100-L118) | [Item.replace](/javascript/api/%40azure/cosmos/item) |
+| [Replace item with conditional ETag check](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ItemManagement.ts#L127-L128) | [Item.replace](/javascript/api/%40azure/cosmos/item) - [RequestOptions.accessCondition](/javascript/api/%40azure/cosmos/requestoptions#accesscondition) |
+| [Delete an item](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ItemManagement.ts#L234-L235) | [Item.delete](/javascript/api/%40azure/cosmos/item) |
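
As a hedged sketch of the same item CRUD flow, the following assumes a hypothetical database `mydb`, a container `mycontainer` partitioned on `/category`, and a connection string in the `COSMOS_CONNECTION_STRING` environment variable:

```typescript
import { CosmosClient } from "@azure/cosmos";

const container = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!)
  .database("mydb")
  .container("mycontainer");

async function itemCrud() {
  // Create an item; "id" plus the partition key value identify it later
  await container.items.create({ id: "1", category: "gear", name: "Surfboard" });

  // Read it back by id and partition key value
  const { resource: item } = await container.item("1", "gear").read();

  // Query with a parameterized SQL query
  const { resources } = await container.items
    .query({
      query: "SELECT * FROM c WHERE c.category = @cat",
      parameters: [{ name: "@cat", value: "gear" }],
    })
    .fetchAll();
  console.log(`Found ${resources.length} item(s)`);

  // Replace the item, then delete it
  await container.item("1", "gear").replace({ ...item, name: "Longboard" });
  await container.item("1", "gear").delete();
}

itemCrud().catch(console.error);
```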
## Indexing examples
-The [IndexManagement](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see [indexing policies](../index-policy.md), [indexing types](../index-overview.md#types-of-indexes), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
+The [IndexManagement](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/IndexManagement.ts) file shows how to manage indexing. To learn about indexing in Azure Cosmos DB before running the following samples, see the [indexing policies](../index-policy.md), [indexing types](../index-overview.md#types-of-indexes), and [indexing paths](../index-policy.md#include-exclude-paths) conceptual articles.
-| Task | API reference |
-| | |
-| [Manually index a specific item](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L52-L75) | [RequestOptions.indexingDirective: 'include'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) |
-| [Manually exclude a specific item from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L17-L29) | [RequestOptions.indexingDirective: 'exclude'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) |
-| [Exclude a path from the index](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L142-L167) | [IndexingPolicy.ExcludedPath](/javascript/api/%40azure/cosmos/indexingpolicy#excludedpaths) |
-| [Create a range index on a string path](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L87-L112) | [IndexKind.Range](/javascript/api/%40azure/cosmos/indexkind), [IndexingPolicy](/javascript/api/%40azure/cosmos/indexingpolicy), [Items.query](/javascript/api/%40azure/cosmos/items) |
-| [Create a container with default indexPolicy, then update the container online](https://github.com/Azure/azure-cosmos-js/blob/master/samples/IndexManagement.ts#L13-L15) | [Containers.create](/javascript/api/%40azure/cosmos/containers) |
+| Task | API reference |
+| --- | --- |
+| [Manually index a specific item](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/IndexManagement.ts#L71-L106) | [RequestOptions.indexingDirective: 'include'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) |
+| [Manually exclude a specific item from the index](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/IndexManagement.ts#L33-L69) | [RequestOptions.indexingDirective: 'exclude'](/javascript/api/%40azure/cosmos/requestoptions#indexingdirective) |
+| [Exclude a path from the index](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/IndexManagement.ts#L165-L237) | [IndexingPolicy.ExcludedPath](/javascript/api/%40azure/cosmos/indexingpolicy#excludedpaths) |
+| [Create a range index on a string path](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/IndexManagement.ts#L108-L163) | [IndexKind.Range](/javascript/api/%40azure/cosmos/indexkind), [IndexingPolicy](/javascript/api/%40azure/cosmos/indexingpolicy), [Items.query](/javascript/api/%40azure/cosmos/items) |
+| [Create a container with default indexPolicy, then update the container online](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/IndexManagement.ts#L27-L31) | [Containers.create](/javascript/api/%40azure/cosmos/containers) |
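
To illustrate the "exclude a path" task, here's a minimal sketch that creates a container whose indexing policy indexes everything except a subtree. The database name `mydb`, the container name, and the `/metadata/*` path are assumptions for the example:

```typescript
import { CosmosClient } from "@azure/cosmos";

const database = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!).database("mydb");

async function createWithIndexingPolicy() {
  const { container } = await database.containers.create({
    id: "indexed-items",
    partitionKey: { paths: ["/category"] },
    indexingPolicy: {
      automatic: true,
      indexingMode: "consistent",
      includedPaths: [{ path: "/*" }],          // index everything by default...
      excludedPaths: [{ path: "/metadata/*" }], // ...except this subtree
    },
  });
  console.log(`Created container ${container.id}`);
}

createWithIndexingPolicy().catch(console.error);
```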
## Server-side programming examples
-The [index.ts](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) file of the [ServerSideScripts](https://github.com/Azure/azure-cosmos-js/tree/master/samples/ServerSideScripts) project shows how to perform the following tasks. To learn about Server-side programming in Azure Cosmos DB before running the following samples, see [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md) conceptual article.
+The [ServerSideScripts.ts](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ServerSideScripts.ts) file shows how to perform the following tasks. To learn about server-side programming in Azure Cosmos DB before running the following samples, see the [Stored procedures, triggers, and user-defined functions](stored-procedures-triggers-udfs.md) conceptual article.
-| Task | API reference |
-| | |
-| [Create a stored procedure](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/upsert.js) | [StoredProcedures.create](/javascript/api/%40azure/cosmos/storedprocedures) |
-| [Execute a stored procedure](https://github.com/Azure/azure-cosmos-js/blob/master/samples/ServerSideScripts/index.ts) | [StoredProcedure.execute](/javascript/api/%40azure/cosmos/storedprocedure) |
+| Task | API reference |
+| --- | --- |
+| [Create a stored procedure](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ServerSideScripts.ts#L117-L118) | [StoredProcedures.create](/javascript/api/%40azure/cosmos/storedprocedures) |
+| [Execute a stored procedure](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/ServerSideScripts.ts#L120-L121) | [StoredProcedure.execute](/javascript/api/%40azure/cosmos/storedprocedure) |
+| [Bulk update with stored procedure](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/BulkUpdateWithSproc.ts#L70-L101) | [StoredProcedure.execute](/javascript/api/%40azure/cosmos/storedprocedure) |
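
Here's a minimal sketch of registering and executing a stored procedure, assuming the same hypothetical `mydb`/`mycontainer` names as above and a trivial procedure body written for the example:

```typescript
import { CosmosClient } from "@azure/cosmos";

const container = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!)
  .database("mydb")
  .container("mycontainer");

// A trivial stored procedure body that just echoes a greeting
const spDef = {
  id: "helloWorld",
  body: `function helloWorld() { getContext().getResponse().setBody("Hello, World"); }`,
};

async function run() {
  // Register the stored procedure on the container...
  await container.scripts.storedProcedures.create(spDef);

  // ...then execute it; the first argument is the partition key value
  const { resource } = await container.scripts.storedProcedure("helloWorld").execute("gear");
  console.log(resource); // "Hello, World"
}

run().catch(console.error);
```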
For more information about server-side programming, see [Azure Cosmos DB server-side programming: Stored procedures, database triggers, and UDFs](stored-procedures-triggers-udfs.md).
+## Azure Identity (AAD) authentication examples
+
+The [AADAuth.ts](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/AADAuth.ts) file shows how to perform the following tasks.
+
+| Task | API reference |
+| --- | --- |
+| [Create a credential object from @azure/identity](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/AADAuth.ts#L23-L28) | [API](/javascript/api/@azure/identity/usernamepasswordcredential#constructors) |
+| [Pass credentials to the client object with the key aadCredentials](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/AADAuth.ts#L29-L38) | [API](/javascript/api/@azure/cosmos/cosmosclientoptions#@azure-cosmos-cosmosclientoptions-aadcredentials) |
+| [Execute the Cosmos client with AAD credentials](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/AADAuth.ts#L40-L52) | [API](/javascript/api/@azure/cosmos/databases#readall-feedoptions-) |
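
As a hedged sketch of the flow above, the following passes an `@azure/identity` credential to the client through the `aadCredentials` option. `DefaultAzureCredential` is used here instead of the sample's username/password credential, and the account endpoint is a placeholder:

```typescript
import { CosmosClient } from "@azure/cosmos";
import { DefaultAzureCredential } from "@azure/identity";

// DefaultAzureCredential resolves environment, managed identity, or Azure CLI credentials
const client = new CosmosClient({
  endpoint: "https://<your-account>.documents.azure.com:443/", // placeholder endpoint
  aadCredentials: new DefaultAzureCredential(),
});

async function listDatabases() {
  // Any data-plane call now authenticates with Azure AD instead of account keys
  const { resources } = await client.databases.readAll().fetchAll();
  console.log(resources.map((db) => db.id));
}

listDatabases().catch(console.error);
```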
+
+## Miscellaneous samples
+
+The following curated samples illustrate common scenarios.
+
+| Task | API reference |
+| --- | --- |
+| [Alter query throughput](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/AlterQueryThroughput.ts#L40-L43) | [API](/javascript/api/@azure/cosmos/offer#@azure-cosmos-offer-replace) |
+| [Get query throughput](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/QueryThroughput.ts) | [API](/javascript/api/@azure/cosmos/queryiterator#@azure-cosmos-queryiterator-hasmoreresults) |
+| [Use SAS tokens to grant scoped access to Azure Cosmos DB resources](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/cosmosdb/cosmos/samples-dev/SasTokenAuth.ts) | [API](/javascript/api/@azure/cosmos#@azure-cosmos-createauthorizationsastoken) |
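
As a rough sketch of the throughput-alteration scenario, the following reads the account's offers and replaces one with a new manual throughput value. Which offer belongs to which container, and the `500` RU/s value, are assumptions for the example:

```typescript
import { CosmosClient } from "@azure/cosmos";

const client = new CosmosClient(process.env.COSMOS_CONNECTION_STRING!);

async function raiseThroughput() {
  // Read the current offers (throughput settings) for the account...
  const { resources: offers } = await client.offers.readAll().fetchAll();
  const offer = offers[0]; // assume the first offer is the container of interest

  // ...then replace the offer with a new manual throughput value
  if (offer?.content) {
    offer.content.offerThroughput = 500;
    await client.offer(offer.id).replace(offer);
  }
}

raiseThroughput().catch(console.error);
```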
+
## Next steps

Trying to do capacity planning for a migration to Azure Cosmos DB? You can use information about your existing database cluster for capacity planning.
cosmos-db Tune Connection Configurations Net Sdk V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tune-connection-configurations-net-sdk-v3.md
+
+ Title: Connection configurations for Azure Cosmos DB .NET SDK v3
+description: Learn how to tune connection configurations to improve Azure Cosmos DB database performance for .NET SDK v3
+++
+ms.devlang: csharp
+ Last updated : 06/27/2023+++
+# Tune connection configurations for Azure Cosmos DB .NET SDK v3
+
+> [!IMPORTANT]
+> The information in this article applies to Azure Cosmos DB .NET SDK v3 only. For more information, see the [Azure Cosmos DB SQL SDK connectivity modes](sdk-connection-modes.md) article, the Azure Cosmos DB .NET SDK v3 [release notes](sdk-dotnet-v3.md), the [NuGet repository](https://www.nuget.org/packages/Microsoft.Azure.Cosmos), and the Azure Cosmos DB .NET SDK v3 [troubleshooting guide](troubleshoot-dotnet-sdk.md). If you're currently using a version older than v3, see the [Migrate to Azure Cosmos DB .NET SDK v3](migrate-dotnet-v3.md) guide for help upgrading to v3.
+
+Azure Cosmos DB is a fast and flexible distributed database that scales seamlessly with guaranteed latency and throughput. You don't have to make major architecture changes or write complex code to scale your database with Azure Cosmos DB. Scaling up and down is as easy as making a single API call or SDK method call. However, because Azure Cosmos DB is accessed via network calls, there are connection configurations you can tune to achieve peak performance when using the Azure Cosmos DB .NET SDK v3.
+
+## Connection configuration
+
+> [!NOTE]
+> In Azure Cosmos DB .NET SDK v3, *Direct mode* is the best choice in most cases to improve database performance with most workloads.
+
+To learn more about different connectivity options, see the [connectivity modes](sdk-connection-modes.md) article.
+
+## Direct connection mode
+
+The .NET SDK's default connection mode is Direct. In Direct mode, requests are made by using the TCP protocol. Internally, Direct mode uses a special architecture to dynamically manage network resources and get the best performance. The client-side architecture employed in Direct mode enables predictable network utilization and multiplexed access to Azure Cosmos DB replicas. To learn more about this architecture, see [Direct mode connection architecture](sdk-connection-modes.md#direct-mode).
+
+You configure the connection mode when you create the `CosmosClient` instance in `CosmosClientOptions`.
+
+```csharp
+string connectionString = "<your-account-connection-string>";
+CosmosClient client = new CosmosClient(connectionString,
+    new CosmosClientOptions
+    {
+        ConnectionMode = ConnectionMode.Gateway // ConnectionMode.Direct is the default
+    });
+```
+
+### Customizing direct connection mode
+
+Direct mode can be customized through the *CosmosClientOptions* passed to the *CosmosClient* constructor. We recommend that you avoid modifying these options unless you understand the tradeoffs and the change is necessary.
+
+| Configuration option | Default | Recommended | Details |
+| --- | :---: | :---: | --- |
+| EnableTcpConnectionEndpointRediscovery | true | true | The flag that enables detection of connections being closed by the server. |
+| IdleTcpConnectionTimeout | By default, idle connections are kept open indefinitely. | 20m-24h | The amount of idle time after which unused connections are closed. Recommended values are between 20 minutes and 24 hours. |
+| MaxRequestsPerTcpConnection | 30 | 30 | The number of requests allowed simultaneously over a single TCP connection. When more requests are in flight simultaneously, the direct/TCP client opens extra connections. Don't set this value lower than four requests per connection or higher than 50-100 requests per connection. Applications with a high degree of parallelism per connection, with large requests or responses, or with tight latency requirements might get better performance with 8-16 requests per connection. |
+| MaxTcpConnectionsPerEndpoint | 65535 | 65535 | The maximum number of TCP connections that may be opened to each Azure Cosmos DB back end. Together with MaxRequestsPerTcpConnection, this setting limits the number of requests that are simultaneously sent to a single back end (MaxRequestsPerTcpConnection x MaxTcpConnectionsPerEndpoint). The value must be greater than or equal to 16. |
+| OpenTcpConnectionTimeout | 5 seconds | >= 5 seconds | The amount of time allowed for trying to establish a connection. When the time elapses, the attempt is canceled and an error is returned. Longer timeouts delay retries and failures. |
+| PortReuseMode | PortReuseMode.ReuseUnicastPort | PortReuseMode.ReuseUnicastPort | The client port reuse policy used by the transport stack. |
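
As a hedged sketch (not a tuning recommendation), here's how these direct-mode options map onto `CosmosClientOptions`; the values shown mirror the table, and the 20-hour idle timeout is just one value inside the recommended range:

```csharp
using System;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<your-account-connection-string>",
    new CosmosClientOptions
    {
        ConnectionMode = ConnectionMode.Direct,
        EnableTcpConnectionEndpointRediscovery = true,      // detect server-side connection closes
        IdleTcpConnectionTimeout = TimeSpan.FromHours(20),  // close connections idle for 20 hours
        MaxRequestsPerTcpConnection = 30,                   // concurrent requests per connection
        MaxTcpConnectionsPerEndpoint = 65535,               // connection cap per back end
        OpenTcpConnectionTimeout = TimeSpan.FromSeconds(5), // connection-establishment timeout
        PortReuseMode = PortReuseMode.ReuseUnicastPort      // client port reuse policy
    });
```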
+
+> [!NOTE]
+> See also [Networking performance tips for direct connection mode](performance-tips-dotnet-sdk-v3.md?tabs=trace-net-core#networking).
+
+### Customizing gateway connection mode
+
+Gateway mode can be customized through the *CosmosClientOptions* passed to the *CosmosClient* constructor. We recommend that you avoid modifying these options unless you understand the tradeoffs and the change is necessary.
+
+| Configuration option | Default | Recommended | Details |
+| --- | :---: | :---: | --- |
+| GatewayModeMaxConnectionLimit | 50 | 50 | The maximum number of concurrent connections allowed for the target service endpoint in the Azure Cosmos DB service. |
+| WebProxy | null | null | The proxy information used for web requests. |
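
And a matching gateway-mode sketch; the proxy address is a hypothetical placeholder:

```csharp
using System.Net;
using Microsoft.Azure.Cosmos;

CosmosClient client = new CosmosClient("<your-account-connection-string>",
    new CosmosClientOptions
    {
        ConnectionMode = ConnectionMode.Gateway,
        GatewayModeMaxConnectionLimit = 50,                      // concurrent connections to the gateway endpoint
        WebProxy = new WebProxy("http://proxy.contoso.com:8080") // hypothetical proxy address
    });
```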
+
+> [!NOTE]
+> See also [Best practices when using Gateway mode for Azure Cosmos DB .NET SDK v3](best-practice-dotnet.md#best-practices-when-using-gateway-mode).
+
+## Next steps
+
+To learn more about performance tips for the .NET SDK, see [Performance tips for Azure Cosmos DB .NET SDK v3](performance-tips-dotnet-sdk-v3.md).
+
+* If all you know is the number of vCores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md)
+* If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
data-factory Tutorial Managed Virtual Network On Premise Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-managed-virtual-network-on-premise-sql-server.md
data factory from the resources list.
:::image type="content" source="./media/tutorial-managed-virtual-network/private-endpoint-6.png" alt-text="Screenshot that shows the private endpoint settings.":::
+> [!Note]
+> When deploying your SQL Server on a virtual machine within a virtual network, modify the FQDN by inserting **privatelink** into it. Otherwise, it conflicts with other records in the DNS settings. For example, modify the SQL Server's FQDN from **sqlserver.westus.cloudapp.azure.net** to **sqlserver.privatelink.westus.cloudapp.azure.net**.
+
8. Create the private endpoint.

## Create a linked service and test the connection
data factory from the resources list.
:::image type="content" source="./media/tutorial-managed-virtual-network/linked-service-3.png" alt-text="Screenshot that shows the SQL server linked service creation page.":::
-> [!Note]
-> If you have more than one SQL Server and need to define multiple load balancer rules and IP table records with different ports, make sure you explicitly add the port name after the FQDN when you edit Linked Service. The NAT VM will handle the port translation. If it's not explicitly specified, the connection will always time-out.
+ > [!Note]
+ > If you have more than one SQL Server and need to define multiple load balancer rules and IP table records with different ports, make sure you explicitly add the port number after the FQDN when you edit the linked service. The NAT VM handles the port translation. If the port isn't explicitly specified, the connection always times out.
## Troubleshooting
data-manager-for-agri Concepts Byol And Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-byol-and-credentials.md
Azure Data Manager for Agriculture supports a range of data ingress connectors to centralize your fragmented accounts. These connections require the customer to populate their credentials in a Bring Your Own License (BYOL) model, so that the data manager may retrieve data on behalf of the customer. -
-> [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
## Prerequisites
Use one of the following methods to enable it:
:::image type="content" source="./media/concepts-byol-and-credentials/enable-system-via-ui.png" alt-text="Screenshot showing usage of UI to enable key.":::
-2. Via Azure Resource Manager client
+2. Via Azure CLI
- ```cmd
- armclient patch /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AgFoodPlatform/farmBeats/{ADMA_instance_name}?api-version=2023-06-01-preview "{identity: { type: 'systemAssigned' }}
+ ```azurecli
+ az rest --method patch --url "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.AgFoodPlatform/farmBeats/{ADMA_instance_name}?api-version=2023-06-01-preview" --body '{"identity": {"type": "SystemAssigned"}}'
```

### Step 4: Access policy
data-manager-for-agri Concepts Hierarchy Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-hierarchy-model.md
# Hierarchy model to organize agriculture related data
-> [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
To generate actionable insights, data related to growers, farms, and fields should be organized in a well-defined manner. Firms operating in the agriculture industry often perform longitudinal studies and need high-quality data to generate insights. Data Manager for Agriculture organizes agronomic data in the following manner.
data-manager-for-agri Concepts Ingest Satellite Imagery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-ingest-satellite-imagery.md
Using satellite data in Data Manager for Agriculture involves following steps:
:::image type="content" source="./media/satellite-flow.png" alt-text="Diagram showing satellite data ingestion flow.":::
-> [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
## Satellite sources supported by Azure Data Manager for Agriculture

In our public preview, we support ingesting data from the Sentinel-2 constellation.
data-manager-for-agri Concepts Isv Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/concepts-isv-solutions.md
In this article, you learn how Azure Data Manager for Agriculture provides a framework for customers to use solutions built by ISV partners.
-> [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
## Overview
data-manager-for-agri How To Create Azure Support Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/how-to-create-azure-support-request.md
+
+ Title: How to create an Azure support request for Azure Data Manager for Agriculture resource
+description: Customers who need assistance can use the Azure portal to find self-service solutions and to create and manage support requests for Azure Data Manager for Agriculture resource.
++++ Last updated : 06/27/2023+++
+# Create an Azure Data Manager for Agriculture support request
+
+Azure enables you to create and manage support requests, also known as support tickets. You can create and manage requests in the [Azure portal](https://portal.azure.com), which is covered in the [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) article. You can also create and manage requests programmatically, using the [Azure support ticket REST API](/rest/api/support), or by using [Azure CLI](/cli/azure/azure-cli-support-request).
+
+## Steps for creating a support request
+
+Here are the steps to create a support request in the context of the Azure Data Manager for Agriculture resource you're currently working with:
+
+1. From the resource menu, in the **Support + troubleshooting** section, select **New Support Request**.
+
+ :::image type="content" source="media/how-to-create-azure-support-request.png" alt-text="Screenshot of the New Support Request option in the Azure Data Manager for Agriculture resource pane.":::
+
+2. Follow the prompts to provide us with information about the problem you're having. When you start the support request process from a resource, some options are preselected for you.
+
+We recommend that you explore the [Create an Azure support request](/azure/azure-portal/supportability/how-to-create-azure-support-request) article to learn more about managing support tickets.
+
+## Next steps
+
+- [How to manage an Azure support request](/azure/azure-portal/supportability/how-to-manage-azure-support-request)
+- [Create Azure support shortcut](https://azure.microsoft.com/support/create-ticket)
+- [Understanding more about Azure portal](/azure/azure-portal)
data-manager-for-agri Overview Azure Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/overview-azure-data-manager-for-agriculture.md
# What is Azure Data Manager for Agriculture Preview?
-> [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). See Azure Data Manager for Agriculture specific terms of use [**here**](supplemental-terms-azure-data-manager-for-agriculture.md).
->
-> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
Adopting sustainable agriculture practices is crucial for the future of our planet. Azure Data Manager for Agriculture is built to help the industry accelerate its sustainability and agriculture practices using digital solutions. In addition, Azure Data Manager for Agriculture helps to facilitate a more sustainable future and a more productive agriculture industry by empowering organizations to:

* Drive innovation through insight.
data-manager-for-agri Quickstart Install Data Manager For Agriculture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/quickstart-install-data-manager-for-agriculture.md
Use this document to get started with the steps to install Data Manager for Agriculture. Make sure that your Azure subscription ID is in our allowlist. Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Azure Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
-> [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see the [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). See Azure Data Manager for Agriculture specific terms of use [**here**](supplemental-terms-azure-data-manager-for-agriculture.md).
## 1: Register resource provider
Enter Data manager for agriculture in the marketplace search bar. Then select 'c
Provide the required details for creating an Azure Data Manager for Agriculture instance and resource group in a selected region. Provide the following details:
-* **Subscription Id** : Choose the allow listed subscription Id for your tenant
+* **Subscription ID** : Choose the allow listed subscription ID for your tenant
* **Resource Group**: Choose an existing resource group or create a new one
* **Instance Name**: Give the Data Manager for Agriculture instance a name
* **Region**: Choose the region where you want the instance deployed
The response should look like:
} ```
-With working **API endpoint (instanceUri)** and **access_token**, you now can start making requests to our service APIs. If there are any queries in setting up the environment write to us at madma@microsoft.com.
+With a working **API endpoint (instanceUri)** and **access_token**, you can now start making requests to our service APIs. If you have any questions about setting up the environment, [raise a support request](./how-to-create-azure-support-request.md) to get help.
## Next steps-
-* See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md).
-* Understand our APIs [here](/rest/api/data-manager-for-agri).
+* See the Hierarchy Model and learn how to create and organize your agriculture data [here](./concepts-hierarchy-model.md)
+* Understand our REST APIs [here](/rest/api/data-manager-for-agri)
+* [How to create an Azure support request](./how-to-create-azure-support-request.md)
data-manager-for-agri Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-manager-for-agri/release-notes.md
Azure Data Manager for Agriculture Preview is updated on an ongoing basis. To st
We provide information on the latest releases, bug fixes, and deprecated functionality for Azure Data Manager for Agriculture Preview monthly.
-> [!NOTE]
-> Microsoft Azure Data Manager for Agriculture is currently in preview. For legal terms that apply to features that are in beta, in preview, or otherwise not yet released into general availability, see [**Supplemental Terms of Use for Microsoft Azure Previews**](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). See Azure Data Manager for Agriculture specific terms of use [**here**](supplemental-terms-azure-data-manager-for-agriculture.md).
->
-> Microsoft Azure Data Manager for Agriculture requires registration and is available to only approved customers and partners during the preview period. To request access to Microsoft Data Manager for Agriculture during the preview period, use this [**form**](https://aka.ms/agridatamanager).
## June 2023
databox Data Box Disk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-overview.md
If you want to import data to Azure Blob storage and Azure Files, you can use Az
## Use cases
-Use Data Box Disk to transfer TBs of data in scenarios with no to limited network connectivity. The data movement can be one-time, periodic, or an initial bulk data transfer followed by periodic transfers.
+Use Data Box Disk to transfer TBs of data in scenarios with limited network connectivity. The data movement can be one-time, periodic, or an initial bulk data transfer followed by periodic transfers.
- **One-time migration** - when a large amount of on-premises data is moved to Azure. For example, moving data from offline tapes to archival data in Azure cool storage.
- **Incremental transfer** - when an initial bulk transfer is done using Data Box Disk (seed), followed by incremental transfers over the network. For example, Commvault and Data Box Disk are used to move backup copies to Azure. This migration is followed by copying incremental data over the network to Azure Storage.
For information on pricing, go to [Pricing page](https://azure.microsoft.com/pri
- Review the [Data Box Disk requirements](data-box-disk-system-requirements.md). - Understand the [Data Box Disk limits](data-box-disk-limits.md).-- Quickly deploy [Azure Data Box Disk](data-box-disk-quickstart-portal.md) in Azure portal.
+- Quickly deploy [Azure Data Box Disk](data-box-disk-quickstart-portal.md) in Azure portal.
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
After the Microsoft Defender for Endpoint agent is installed on your machine, as
1. Go to **Start** and type `cmd`.
1. Right-select **Command Prompt** and select **Run as administrator**.
-
+ :::image type="content" source="media/alert-validation/command-prompt.png" alt-text="Screenshot showing where to select Run as Administrator." lightbox="media/alert-validation/command-prompt.png":::

1. At the prompt, copy and run the following command: `powershell.exe -NoExit -ExecutionPolicy Bypass -WindowStyle Hidden $ErrorActionPreference = 'silentlycontinue';(New-Object System.Net.WebClient).DownloadFile('http://127.0.0.1/1.exe', 'C:\\test-MDATP-test\\invoice.exe');Start-Process 'C:\\test-MDATP-test\\invoice.exe'`
After the Microsoft Defender for Endpoint agent is installed on your machine, as
Alternatively, you can use the [EICAR](https://www.eicar.org/download/eicar.com.txt) test string to perform this test: Create a text file, paste the EICAR line, and save the file as an executable file to your machine's local drive.

> [!NOTE]
-> When reviewing test alerts for Windows, make sure that you have Defender for Endpoint running with Real-Time protection enabled. Learn how to [validate this configuration](https://learn.microsoft.com/microsoft-365/security/defender-endpoint/configure-real-time-protection-microsoft-defender-antivirus?view=o365-worldwide).
+> When reviewing test alerts for Windows, make sure that you have Defender for Endpoint running with Real-Time protection enabled. Learn how to [validate this configuration](/microsoft-365/security/defender-endpoint/configure-real-time-protection-microsoft-defender-antivirus).
## Simulate alerts on your Azure VMs (Linux) <a name="validate-linux"></a>

After the Microsoft Defender for Endpoint agent is installed on your machine, as part of Defender for Servers integration, follow these steps from the machine that you want to be the attacked resource of the alert:
-1. Open a Terminal window, copy and run the following command:
+1. Open a Terminal window, copy and run the following command:
   [`curl -o ~/Downloads/eicar.com.txt https://www.eicar.org/download/eicar.com.txt`](https://www.eicar.org/download/eicar.com.txt)
1. The Terminal window closes automatically. If successful, a new alert should appear in the Defender for Cloud Alerts blade in 10 minutes.

> [!NOTE]
-> When reviewing test alerts for Linux, make sure that you have Defender for Endpoint running with Real-Time protection enabled. Learn how to [validate this configuration](https://learn.microsoft.com/microsoft-365/security/defender-endpoint/configure-real-time-protection-microsoft-defender-antivirus?view=o365-worldwide).
+> When reviewing test alerts for Linux, make sure that you have Defender for Endpoint running with Real-Time protection enabled. Learn how to [validate this configuration](/microsoft-365/security/defender-endpoint/configure-real-time-protection-microsoft-defender-antivirus).
## Simulate alerts on Kubernetes <a name="validate-kubernetes"></a>
You can simulate alerts for resources running on [App Service](/azure/app-servic
1. Navigate to a storage account that has Azure Defender for Storage enabled.
1. Select the **Containers** tab in the sidebar.
-
+ :::image type="content" source="media/alert-validation/storage-atp-navigate-container.png" alt-text="Screenshot showing where to navigate to select a container." lightbox="media/alert-validation/storage-atp-navigate-container.png":::

1. Navigate to an existing container or create a new one.
1. Upload a file to that container. Avoid uploading any file that may contain sensitive data.
-
+ :::image type="content" source="media/alert-validation/storage-atp-upload-image.png" alt-text="Screenshot showing where to upload a file to the container." lightbox="media/alert-validation/storage-atp-upload-image.png":::

1. Right-select the uploaded file and select **Generate SAS**.
You can simulate alerts for resources running on [App Service](/azure/app-servic
1. Open the Tor browser, which you can [download here](https://www.torproject.org/download/).
1. In the Tor browser, navigate to the SAS URL. You should now be able to see and download the file that was uploaded.

## Testing AppServices alerts

**To simulate an app services EICAR alert:**
-1. Find the HTTP endpoint of the website either by going into Azure portal blade for the App Services website or using the custom DNS entry associated with this website. (The default URL endpoint for Azure App Services website has the suffix `https://XXXXXXX.azurewebsites.net`). The website should be an existing website and not one that was created prior to the alert simulation.
+1. Find the HTTP endpoint of the website either by going into Azure portal blade for the App Services website or using the custom DNS entry associated with this website. (The default URL endpoint for Azure App Services website has the suffix `https://XXXXXXX.azurewebsites.net`). The website should be an existing website and not one that was created prior to the alert simulation.
1. Browse to the website URL and add the following fixed suffix: `/This_Will_Generate_ASC_Alert`. The URL should look like this: `https://XXXXXXX.azurewebsites.net/This_Will_Generate_ASC_Alert`. It might take some time for the alert to be generated (~1.5 hours).

## Validate Azure Key Vault Threat Detection
-1. If you donΓÇÖt have a Key Vault created yet, make sure to [create one](https://learn.microsoft.com/azure/key-vault/general/quick-create-portal).
+1. If you don't have a Key Vault created yet, make sure to [create one](/azure/key-vault/general/quick-create-portal).
1. After you finish creating the Key Vault and the secret, go to a VM that has Internet access and [download the TOR Browser](https://www.torproject.org/download/).
1. Install the TOR Browser on your VM.
1. Once you finish the installation, open your regular browser, sign in to the Azure portal, and access the Key Vault page. Select the highlighted URL and copy the address.
You can simulate alerts for resources running on [App Service](/azure/app-servic
This article introduced you to the alerts validation process. Now that you're familiar with this validation, explore the following articles: -- [Validating Azure Key Vault threat detection in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/azure-security-center/validating-azure-key-vault-threat-detection-in-azure-security/ba-p/1220336)
+- [Validating Azure Key Vault threat detection in Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/validating-azure-key-vault-threat-detection-in-microsoft/ba-p/1220336)
- [Managing and responding to security alerts in Microsoft Defender for Cloud](managing-and-responding-alerts.md) - Learn how to manage alerts, and respond to security incidents in Defender for Cloud. - [Understanding security alerts in Microsoft Defender for Cloud](./alerts-overview.md)
defender-for-cloud Concept Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md
Agentless discovery for Kubernetes provides API-based discovery of information a
### How does Agentless Discovery for Kubernetes work?
-The discovery process is based on snapshots taken at intervals:
+The discovery process is based on snapshots taken at intervals:
:::image type="content" source="media/concept-agentless-containers/diagram-permissions-architecture.png" alt-text="Diagram of the permissions architecture." lightbox="media/concept-agentless-containers/diagram-permissions-architecture.png":::
Agentless information in Defender CSPM is updated through a snapshot mechanism.
## Agentless Container registry vulnerability assessment > [!NOTE]
-> This feature supports scanning of images in the Azure Container Registry (ACR) only. If you want to find vulnerabilities stored in other container registries, you can import the images into ACR, after which the imported images are scanned by the built-in vulnerability assessment solution. Learn how to [import container images to a container registry](https://learn.microsoft.com/azure/container-registry/container-registry-import-images?tabs=azure-cli).
+> This feature supports scanning of images in the Azure Container Registry (ACR) only. If you want to find vulnerabilities stored in other container registries, you can import the images into ACR, after which the imported images are scanned by the built-in vulnerability assessment solution. Learn how to [import container images to a container registry](/azure/container-registry/container-registry-import-images).
- Container registry vulnerability assessment scans images in your Azure Container Registry (ACR) to provide recommendations for improving your posture by remediating vulnerabilities.
Container registry vulnerability assessment scans container images stored in you
It currently takes 3 days to remove findings for a deleted image. We're working on providing quicker deletion for removed images.

## Next steps
+
- Learn about [support and prerequisites for agentless containers posture](support-agentless-containers-posture.md)
- Learn how to [enable agentless containers](how-to-enable-agentless-containers.md)
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Learn more about:
## Run-time protection for Kubernetes nodes and clusters
-Defender for Containers provides real-time threat protection for [supported containerized environments](support-matrix-defender-for-containers.md) and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
+Defender for Containers provides real-time threat protection for [supported containerized environments](support-matrix-defender-for-containers.md) and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers.
Threat protection at the cluster level is provided by the Defender agent and analysis of the Kubernetes audit logs. This means that security alerts are only triggered for actions and deployments that occur after you've enabled Defender for Containers on your subscription.
You can view security alerts by selecting the Security alerts tile at the top of
:::image type="content" source="media/managing-and-responding-alerts/overview-page-alerts-links.png" alt-text="Screenshot showing how to get to the security alerts page from Microsoft Defender for Cloud's overview page." lightbox="media/managing-and-responding-alerts/overview-page-alerts-links.png":::
-The security alerts page opens.
+The security alerts page opens.
:::image type="content" source="media/defender-for-containers/view-containers-alerts.png" alt-text="Screenshot showing you where to view the list of alerts." lightbox="media/defender-for-containers/view-containers-alerts.png"::: Security alerts for runtime workload in the clusters can be recognized by the `K8S.NODE_` prefix of the alert type. For a full list of the cluster level alerts, see the [reference table of alerts](alerts-reference.md#alerts-k8scluster).
-Defender for Containers also includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload.
+Defender for Containers also includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload.
-Defender for Cloud monitors the attack surface of multicloud Kubernetes deployments based on the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft.
+Defender for Cloud monitors the attack surface of multicloud Kubernetes deployments based on the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/cybersecurity/center-for-threat-informed-defense/) in close partnership with Microsoft.
## Learn More
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Learn more about [vulnerability assessment for Azure SQL servers on machines](de
|-|:-|
|Release state:|General availability (GA)|
|Pricing:|**Microsoft Defender for SQL servers on machines** is billed as shown on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/)|
-|Protected SQL versions:|SQL Server version: 2012, 2014, 2016, 2017, 2019, 2022 <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>- On-premises SQL servers on Windows machines without Azure Arc<br>|
+|Protected SQL versions:|SQL Server version: 2012 R2, 2014, 2016, 2017, 2019, 2022 <br>- [SQL on Azure virtual machines](/azure/azure-sql/virtual-machines/windows/sql-server-on-azure-vm-iaas-what-is-overview)<br>- [SQL Server on Azure Arc-enabled servers](/sql/sql-server/azure-arc/overview)<br>- On-premises SQL servers on Windows machines without Azure Arc<br>|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure China 21Vianet **(Advanced Threat Protection Only)**|

## Set up Microsoft Defender for SQL servers on machines
defender-for-cloud Export To Splunk Or Qradar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-splunk-or-qradar.md
To configure the Azure resources for QRadar and Splunk in the Azure portal:
1. In the Azure search box, search for "policy" and go to **Policy**.
1. In the Policy menu, select **Definitions**.
-1. Search for "deploy export" and select the **Deploy export to Event Hub for Azure Security Center data** built-in policy.
+1. Search for "deploy export" and select the **Deploy export to Event Hub for Microsoft Defender for Cloud data** built-in policy.
1. Select **Assign**.
1. Define the basic policy options:
    1. In Scope, select the **...** to select the scope to apply the policy to.
To configure the Azure resources for QRadar and Splunk in the Azure portal:
1. Search for the Azure AD application you created before and select it.
1. Select **Close**.
-To continue setting up export of alerts, [install the built-in connectors](export-to-siem.md#step-2-connect-the-event-hub-to-your-preferred-solution-using-the-built-in-connectors) for the SIEM you're using.
+To continue setting up export of alerts, [install the built-in connectors](export-to-siem.md#step-2-connect-the-event-hub-to-your-preferred-solution-using-the-built-in-connectors) for the SIEM you're using.
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Security DevOps uses the following Open Source tools:
- Open the [Microsoft Security DevOps GitHub action](https://github.com/marketplace/actions/security-devops-action) in a new window. -- Ensure that [Workflow permissions are set to Read and Write](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository) on the GitHub repository.
+- Ensure that [Workflow permissions are set to Read and Write](https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/enabling-features-for-your-repository/managing-github-actions-settings-for-a-repository#setting-the-permissions-of-the-github_token-for-your-repository) on the GitHub repository.
## Configure the Microsoft Security DevOps GitHub action
Security DevOps uses the following Open Source tools:
:::image type="content" source="media/msdo-github-action/devops.png" alt-text="Screenshot that shows you where to enter a name for your new workflow.":::
-1. Copy and paste the following [sample action workflow](https://github.com/microsoft/security-devops-action/blob/main/.github/workflows/sample-workflow-windows-latest.yml) into the Edit new file tab.
+1. Copy and paste the following [sample action workflow](https://github.com/microsoft/security-devops-action/blob/main/.github/workflows/sample-workflow.yml) into the Edit new file tab.
   ```yml
   name: MSDO windows-latest
Security DevOps uses the following Open Source tools:
          name: alerts
          path: ${{ steps.msdo.outputs.sarifFile }}
   ```
-
- For details on various input options, see [action.yml](https://github.com/microsoft/security-devops-action/blob/main/action.yml)
-1. Select **Start commit**
+ For details on various input options, see [action.yml](https://github.com/microsoft/security-devops-action/blob/main/action.yml)
+
+1. Select **Start commit**
:::image type="content" source="media/msdo-github-action/start-commit.png" alt-text="Screenshot showing you where to select start commit.":::
-1. Select **Commit new file**.
+1. Select **Commit new file**.
:::image type="content" source="media/msdo-github-action/commit-new.png" alt-text="Screenshot showing you how to commit a new file.":::
Security DevOps uses the following Open Source tools:
1. Sign in to [GitHub](https://www.github.com).
-1. Navigate to **Security** > **Code scanning alerts** > **Tool**.
+1. Navigate to **Security** > **Code scanning alerts** > **Tool**.
1. From the dropdown menu, select **Filter by tool**.
Code scanning findings will be filtered by specific MSDO tools in GitHub. These
- Learn how to [deploy apps from GitHub to Azure](/azure/developer/github/deploy-to-azure). ## Next steps+ Learn more about [Defender for DevOps](defender-for-devops-introduction.md). Learn how to [connect your GitHub](quickstart-onboard-github.md) to Defender for Cloud.
defender-for-cloud How To Enable Agentless Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-enable-agentless-containers.md
We do not support or charge stopped clusters. To get the value of agentless capa
We suggest that you unlock the locked resource group/subscription/cluster, make the relevant requests manually, and then re-lock the resource group/subscription/cluster by doing the following:
-1. Enable the feature flag manually via CLI by using [Trusted Access](https://learn.microsoft.com/azure/aks/trusted-access-feature).
+1. Enable the feature flag manually via CLI by using [Trusted Access](/azure/aks/trusted-access-feature).
``` CLI
defender-for-cloud Plan Defender For Servers Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers-scale.md
This article is the *sixth* and final article in the Defender for Servers planni
When you enable a Defender for Cloud subscription, this process occurs: 1. The *microsoft.security* resource provider is automatically registered on the subscription.
-1. At the same time, the Cloud Security Benchmark initiative that's responsible for creating security recommendations and calculating the security score is assigned to the subscription.
+1. At the same time, the Cloud Security Benchmark initiative that's responsible for creating security recommendations and calculating the secure score is assigned to the subscription.
1. After you enable Defender for Cloud on the subscription, you turn on Defender for Servers Plan 1 or Defender for Servers Plan 2, and then you enable auto provisioning. In the next sections, review considerations for specific steps as you scale your deployment:
defender-for-cloud Powershell Sample Vulnerability Assessment Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-azure-sql.md
Last updated 05/30/2023
This PowerShell script enables the express configuration of [vulnerability assessments](sql-azure-vulnerability-assessment-overview.md) on an Azure SQL Server.
-If vulnerability assessment has already been configured using the classic configuration, this script migrates it to the express configuration and copy all of the pre-existing baseline definitions.
+If vulnerability assessment has already been configured using the classic configuration, this script migrates it to the express configuration and copies all of the pre-existing baseline definitions.
Your scan history isn't copied over to the new configuration. Your scan history remains accessible on the storage account that was previously used by the classic configuration.

## Prerequisites
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
In this example:
|Metric|Formula and example| |-|-|
-|**Security control's current score**|<br>![Equation for calculating a security control's score.](media/secure-score-security-controls/secure-score-equation-single-control.png)<br><br>Each individual security control contributes towards the Security Score. Each resource affected by a recommendation within the control, contributes towards the control's current score. The current score for each control is a measure of the status of the resources *within* the control.<br>![Tooltips showing the values used when calculating the security control's current score](media/secure-score-security-controls/security-control-scoring-tooltips.png)<br>In this example, the max score of 6 would be divided by 78 because that's the sum of the healthy and unhealthy resources.<br>6 / 78 = 0.0769<br>Multiplying that by the number of healthy resources (4) results in the current score:<br>0.0769 * 4 = **0.31**<br><br>|
+|**Security control's current score**|<br>![Equation for calculating a security control's score.](media/secure-score-security-controls/secure-score-equation-single-control.png)<br><br>Each individual security control contributes towards the secure score. Each resource affected by a recommendation within the control, contributes towards the control's current score. The current score for each control is a measure of the status of the resources *within* the control.<br>![Tooltips showing the values used when calculating the security control's current score](media/secure-score-security-controls/security-control-scoring-tooltips.png)<br>In this example, the max score of 6 would be divided by 78 because that's the sum of the healthy and unhealthy resources.<br>6 / 78 = 0.0769<br>Multiplying that by the number of healthy resources (4) results in the current score:<br>0.0769 * 4 = **0.31**<br><br>|
|**Secure score**<br>Single subscription, or connector|<br>![Equation for calculating a subscription's secure score](media/secure-score-security-controls/secure-score-equation-single-sub.png)<br><br>![Single subscription secure score with all controls enabled](media/secure-score-security-controls/secure-score-example-single-sub.png)<br>In this example, there's a single subscription, or connector with all security controls available (a potential maximum score of 60 points). The score shows 28 points out of a possible 60 and the remaining 32 points are reflected in the "Potential score increase" figures of the security controls.<br>![List of controls and the potential score increase](media/secure-score-security-controls/secure-score-example-single-sub-recs.png) <br> This equation is the same equation for a connector with just the word subscription being replaced by the word connector. | |**Secure score**<br>Multiple subscriptions, and connectors|<br>![Equation for calculating the secure score for multiple subscriptions.](media/secure-score-security-controls/secure-score-equation-multiple-subs.png)<br><br>The combined score for multiple subscriptions and connectors includes a *weight* for each subscription, and connector. The relative weights for your subscriptions, and connectors are determined by Defender for Cloud based on factors such as the number of resources.<br>The current score for each subscription, a dn connector is calculated in the same way as for a single subscription, or connector, but then the weight is applied as shown in the equation.<br>When you view multiple subscriptions and connectors, the secure score evaluates all resources within all enabled policies and groups their combined impact on each security control's maximum score.<br>![Secure score for multiple subscriptions with all controls enabled](media/secure-score-security-controls/secure-score-example-multiple-subs.png)<br>The combined score is **not** an average; rather it's the evaluated posture of the status of all resources across all subscriptions, and connectors.<br><br>Here too, if you go to the recommendations page and add up the potential points available, you'll find that it's the difference between the current score (22) and the maximum score available (58).|
defender-for-cloud Tutorial Enable Containers Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-containers-azure.md
If you would prefer to [assign a custom workspace](/azure/defender-for-cloud/def
## Deploy the Defender profile in Azure
-You can enable the Defender for Containers plan and deploy all of the relevant components in different ways. We walk you through the steps to accomplish this using the Azure portal. Learn how to [deploy the Defender profile](/azure/defender-for-cloud/includes/defender-for-containers-enable-plan-aks.md#deploy-the-defender-profile) with REST API, Azure CLI or with a Resource Manager template.
+You can enable the Defender for Containers plan and deploy all of the relevant components in different ways. We walk you through the steps to accomplish this using the Azure portal. Learn how to [deploy the Defender profile](defender-for-containers-enable.md#deploy-the-defender-profile) with the REST API, the Azure CLI, or a Resource Manager template.
**To deploy the Defender profile in Azure:**
defender-for-cloud Tutorial Enable Servers Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/tutorial-enable-servers-plan.md
Last updated 06/29/2023
# Protect servers with Defender for Servers
-Defender for Servers in Microsoft Defender for Cloud brings threat detection and advanced defenses to your Windows and Linux EC2 instances. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
+Defender for Servers in Microsoft Defender for Cloud brings threat detection and advanced defenses to your Windows and Linux machines that run in Azure, Amazon Web Services (AWS), Google Cloud Platform (GCP), and on-premises environments. This plan includes the integrated license for Microsoft Defender for Endpoint, security baselines and OS level assessments, vulnerability assessment scanning, adaptive application controls (AAC), file integrity monitoring (FIM), and more.
Microsoft Defender for Servers includes an automatic, native integration with Microsoft Defender for Endpoint. Learn more in [Protect your endpoints with Defender for Cloud's integrated EDR solution: Microsoft Defender for Endpoint](integration-defender-for-endpoint.md). With this integration enabled, you have access to the vulnerability findings from **Microsoft threat and vulnerability management**.
-Defender for Servers offers two plan options with that offer different levels of protection and their own cost. You can learn more about Defender for Clouds pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
+Defender for Servers offers two plans that provide different levels of protection, each with its own cost. You can learn more about Defender for Cloud's pricing on [the pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
## Prerequisites
You can enable the Defender for Servers plan on your Azure subscription, AWS acc
## Select a Defender for Servers plan
-When you enable the Defender for Servers plan, you're then given the option to select which plan you want to enable. There are two plans you can choose from that offer different levels of protections for your resources.
+When you enable the Defender for Servers plan, you're then given the option to select which plan you want to enable. There are two plans you can choose from that offer different levels of protection for your resources.
You can compare what's included in [each plan](plan-defender-for-servers-select-plan.md#plan-features).
You can compare what's included in [each plan](plan-defender-for-servers-select-
1. Select **Change plans**.
- :::image type="content" source="media/tutorial-enable-servers-plan/servers-change-plan.png" alt-text="Screnshot that shows you where on the environment settings page to select change plans." lightbox="media/tutorial-enable-servers-plan/servers-change-plan.png":::
+ :::image type="content" source="media/tutorial-enable-servers-plan/servers-change-plan.png" alt-text="Screenshot that shows you where on the environment settings page to select change plans." lightbox="media/tutorial-enable-servers-plan/servers-change-plan.png":::
1. In the popup window, select **Plan 2** or **Plan 1**.
There are three components that can be enabled and configured to provide extra p
| Component | Description | Learn more |
|:--:|:--:|:--:|
-| [Log Analytics agent/Azure Monitor agent](plan-defender-for-servers-agents.md) | Collects security-related configurations and event logs from the machine and stores the |data in your Log Analytics workspace for analysis. | [Learn more](../azure-monitor/agents/log-analytics-agent.md) about the Log Analytics agent. |
+| [Log Analytics agent/Azure Monitor agent](plan-defender-for-servers-agents.md) | Collects security-related configurations and event logs from the machine and stores the data in your Log Analytics workspace for analysis. | [Learn more](../azure-monitor/agents/log-analytics-agent.md) about the Log Analytics agent. |
| Vulnerability assessment for machines | Enables vulnerability assessment on your Azure and hybrid machines. | [Learn more](monitoring-components.md) about how Defender for Cloud collects data. |
| [Agentless scanning for machines](concept-agentless-data-collection.md) | Scans your machines for installed software and vulnerabilities without relying on agents or impacting machine performance. | [Learn more](concept-agentless-data-collection.md) about agentless scanning for machines. |
After enabling the Log Analytics agent/Azure Monitor agent, you'll be presented
1. In the Auto provisioning configuration window, select one of the following two agent types:

    - **Log Analytics Agent (Default)** - Collects security-related configurations and event logs from the machine and stores the data in your Log Analytics workspace for analysis.
-
+ - **Azure Monitor Agent (Preview)** - Collects security-related configurations and event logs from the machine and stores the data in your Log Analytics workspace for analysis.

    :::image type="content" source="media/tutorial-enable-servers-plan/auto-provisioning-screen.png" alt-text="Screenshot of the auto provisioning configuration screen with the available options to select." lightbox="media/tutorial-enable-servers-plan/auto-provisioning-screen.png":::
Defender for Cloud has the ability to scan your Azure machines for installed sof
## Next steps

[Overview of Microsoft Defender for Servers](defender-for-servers-introduction.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
With this release, the recommendation `Container registry images should have vul
Customers with both the Defender for Containers plan and the Defender CSPM plan should [disable the Qualys recommendation](tutorial-security-policy.md#disable-a-security-recommendation) to avoid multiple reports for the same images with a potential impact on the secure score. If you're currently using the sub-assessment API, Azure Resource Graph, or continuous export, you should also update your requests to the new schema used by the MDVM recommendation prior to disabling the Qualys recommendation and using MDVM results instead.
-If you are also using our public preview offering for Windows containers vulnerability assessment powered by Qualys, and you would like to continue using it, you need to [disable Linux findings](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure#disable-specific-findings) using disable rules rather than disable the registry recommendation.
+If you are also using our public preview offering for Windows containers vulnerability assessment powered by Qualys, and you would like to continue using it, you need to [disable Linux findings](defender-for-containers-vulnerability-assessment-azure.md#disable-specific-findings) using disable rules rather than disabling the registry recommendation.
Learn more about [Agentless Containers Posture in Defender CSPM](concept-agentless-containers.md).
The recommendation `Running container images should have vulnerability findings
|--|--|--|
| Running container images should have vulnerability findings resolved (powered by Microsoft Defender Vulnerability Management) | Container image vulnerability assessment scans your registry for commonly known vulnerabilities (CVEs) and provides a detailed vulnerability report for each image. This recommendation provides visibility to vulnerable images currently running in your Kubernetes clusters. Remediating vulnerabilities in container images that are currently running is key to improving your security posture, significantly reducing the attack surface for your containerized workloads. | c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5 |
- Customers with both Defender for the Containers plan and Defender CSPM plan should [disable the Qualys running containers recommendation](https://learn.microsoft.com/azure/defender-for-cloud/tutorial-security-policy#disable-a-security-recommendation), to avoid multiple reports for the same images with potential impact on the secure score.
+ Customers with both the Defender for Containers plan and the Defender CSPM plan should [disable the Qualys running containers recommendation](tutorial-security-policy.md#disable-a-security-recommendation) to avoid multiple reports for the same images with a potential impact on the secure score.
If you're currently using the sub-assessment API, Azure Resource Graph, or continuous export, you should also update your requests to the new schema used by the MDVM recommendation prior to disabling the Qualys recommendation and using MDVM results instead.
-If you are also using our public preview offering for Windows containers vulnerability assessment powered by Qualys, and you would like to continue using it, you need to [disable Linux findings](https://learn.microsoft.com/azure/defender-for-cloud/defender-for-containers-vulnerability-assessment-azure#disable-specific-findings) using disable rules rather than disable the runtime recommendation.
+If you are also using our public preview offering for Windows containers vulnerability assessment powered by Qualys, and you would like to continue using it, you need to [disable Linux findings](defender-for-containers-vulnerability-assessment-azure.md#disable-specific-findings) using disable rules rather than disabling the runtime recommendation.
Learn more about [Agentless Containers Posture in Defender CSPM](concept-agentless-containers.md).
defender-for-iot Dell Poweredge R350 E1800 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/dell-poweredge-r350-e1800.md
The following image shows a view of the Dell PowerEdge R350 back panel:
|-||-|
|2| 450-AMJH | Dual, Hot-Plug, Power Supply, 700W MM HLAC (200-220Vac) Titanium, Redundant (1+1), by LiteOn, NAF|
-## Optional Expansion Modules
+## Optional Storage Controllers
+Multi-disk RAID arrays combine multiple physical drives into one logical drive for increased redundancy and performance. The optional modules below have been tested in our lab for compatibility and sustained performance:
+|Quantity|PN|Description|
+|-||-|
+|1| 405-ABBT | PERC H755 Controller Card (RAID10) |
+
+## Optional port expansion
Optional modules for additional monitoring ports can be installed:

|Location |Type |Specifications |
To install the Dell PowerEdge R350 appliance, you'll need:
- A BIOS configuration XML
-### Configure the Dell BIOS
-
- An integrated iDRAC manages the Dell appliance with Lifecycle Controller (LC). The LC is embedded in every Dell PowerEdge server and provides functionality that helps you deploy, update, monitor, and maintain your Dell PowerEdge appliances.
+### Set up the BIOS and RAID array
-To establish the communication between the Dell appliance and the management computer, you need to define the iDRAC IP address and the management computer's IP address on the same subnet.
+This procedure describes how to configure the BIOS for an unconfigured sensor appliance.
+If any of the following steps are missing from the BIOS, make sure that the hardware matches the specifications above.
-When the connection is established, the BIOS is configurable.
+Dell iDRAC is system management software designed to give administrators remote control of Dell hardware. It allows administrators to monitor system performance, configure settings, and troubleshoot hardware issues from a web browser. It can also be used to update the system BIOS and firmware. The BIOS can be set up locally or remotely. To set up the BIOS remotely from a management computer, you need to define the iDRAC IP address and the management computer's IP address on the same subnet.
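As a quick sanity check of that addressing requirement, the following minimal Python sketch (with hypothetical addresses) verifies that the iDRAC IP and the management computer's IP fall within the same subnet:

```python
import ipaddress

# Hypothetical addresses: both the iDRAC interface and the management
# computer must sit inside the same subnet for remote BIOS setup.
subnet = ipaddress.ip_network("10.0.10.0/24")
idrac_ip = ipaddress.ip_address("10.0.10.20")
management_ip = ipaddress.ip_address("10.0.10.21")

print(idrac_ip in subnet and management_ip in subnet)  # True
```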
**To configure the iDRAC IP address**:
defender-for-iot Hpe Proliant Dl20 Plus Enterprise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl20-plus-enterprise.md
The following image shows a sample of the HPE ProLiant DL20 back panel:
|1| 512485-B21 | HPE iLO Adv 1 Server License 1 year support|
|1| P46114-B21 | HPE DL20 Gen10+ 2x8 LP FIO Riser Kit|
-## Optional Storage Arrays
+## Optional Storage Controllers
+Multi-disk RAID arrays combine multiple physical drives into one logical drive for increased redundancy and performance. The optional modules below have been tested in our lab for compatibility and sustained performance:
|Quantity|PN|Description|
|-||-|
+|1| 869079-B21 | HPE Smart Array E208i-a SR G10 LH Ctrlr (RAID10) |
|1| P26325-B21 | Broadcom MegaRAID MR216i-a x16 Lanes without Cache NVMe/SAS 12G Controller (RAID5)<br><br>**Note**: This RAID controller occupies the PCIe expansion slot and doesn't allow networking port expansion |
-## Port expansion
-
+## Optional port expansion
Optional modules for port expansion include:

|Location |Type|Specifications|
Installation includes:
> [!NOTE]
> Installation procedures are only relevant if you need to re-install software on a pre-configured device, or if you buy your own hardware and configure the appliance yourself.
->
+>
### Enable remote access and update the password
Use the following procedure to set up network options and update the default pas
1. Change the default password and select **F10: Save**.
-### Configure the HPE BIOS
+### Set up the BIOS and RAID array
+
+This procedure describes how to configure the BIOS for an unconfigured sensor appliance.
+If any of the following steps are missing from the BIOS, make sure that the hardware matches the specifications above.
+
+HPE iLO is system management software designed to give administrators remote control of HPE hardware. It allows administrators to monitor system performance, configure settings, and troubleshoot hardware issues from a web browser. It can also be used to update the system BIOS and firmware. The BIOS can be set up locally or remotely. To set up the BIOS remotely from a management computer, you need to define the iLO IP address and the management computer's IP address on the same subnet.
-This procedure describes how to update the HPE BIOS configuration for your OT deployment.
**To configure the HPE BIOS**:
defender-for-iot Hpe Proliant Dl360 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/hpe-proliant-dl360.md
The following image describes the hardware elements on the HPE ProLiant DL360 ba
|**512485-B21** | HPE iLO Adv 1-Server License 1 Year Support |1|
|**874543-B21** | HPE 1U Gen10 SFF Easy Install Rail Kit |1|
-## Port expansion
+## Optional Storage Controllers
+Multi-disk RAID arrays combine multiple physical drives into one logical drive for increased redundancy and performance. The optional modules below have been tested in our lab for compatibility and sustained performance:
+|Quantity|PN|Description|
+|-||-|
+|1| 804331-B21 | HPE Smart Array P408i-a SR Gen10 Controller (RAID10) |
++
+## Optional port expansion
Optional modules for port expansion include:

|Location |Type|Specifications|
Use the following procedure to set up network options and update the default pas
1. Change the default password and select **F10: Save**.
-### Configure the HPE BIOS
+### Set up the BIOS and RAID array
+
+This procedure describes how to configure the BIOS for an unconfigured sensor appliance.
+If any of the following steps are missing from the BIOS, make sure that the hardware matches the specifications above.
+
+HPE iLO is system management software designed to give administrators remote control of HPE hardware. It allows administrators to monitor system performance, configure settings, and troubleshoot hardware issues from a web browser. It can also be used to update the system BIOS and firmware. The BIOS can be set up locally or remotely. To set up the BIOS remotely from a management computer, you need to define the iLO IP address and the management computer's IP address on the same subnet.
-This procedure describes how to update the HPE BIOS configuration for your OT sensor deployment.
**To configure the HPE BIOS**:

> [!IMPORTANT]
defender-for-iot Install Software Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-ot-sensor.md
If you're using pre-configured appliances, skip this step and continue directly
[!INCLUDE [caution do not use manual configurations](../includes/caution-manual-configurations.md)]
+> [!IMPORTANT]
+> Removing packages from your sensor without Microsoft approval can cause unexpected results. All packages installed on the sensor are required for correct sensor functionality.
+
## Prerequisites

Before installing Microsoft Defender for IoT, make sure that you have:
In Defender for IoT on the Azure portal, select **Getting started** > **Sensor**
This procedure describes how to install OT monitoring software on an OT network sensor.

> [!NOTE]
+> If your appliance has a RAID storage array, make sure to configure it before you continue installation.<br>
> Towards the end of this process you will be presented with the usernames and passwords for your device. Make sure to copy these down as these passwords will not be presented again.

**To install your software**:
This procedure describes how to install OT monitoring software on an OT network
Your physical media must have a minimum of 4-GB storage.

- **Virtual mount** – use iLO for HPE appliances, or iDRAC for Dell appliances to boot the ISO file.
+
1. When the installation boots, you're first prompted to select the hardware profile you want to use. For example:

    :::image type="content" source="../media/tutorial-install-components/sensor-architecture.png" alt-text="Screenshot of the sensor's hardware profile options." lightbox="../media/tutorial-install-components/sensor-architecture.png":::
digital-twins Concepts Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-history.md
description: Understand the data history feature for Azure Digital Twins. Previously updated : 03/08/2023 Last updated : 06/29/2023
Later, your Azure Digital Twins instance must have the following permission on t
These permissions can be assigned using the Azure CLI or Azure portal.
+If you'd like to restrict network access to the resources involved in data history (your Azure Digital Twins instance, event hub, or Azure Data Explorer cluster), you should set those restrictions *after* setting up the data history connection. For more information about this process, see [Restrict network access to data history resources](how-to-create-data-history-connection.md#restrict-network-access-to-data-history-resources).
+
## Data types and schemas

Data history historizes three types of events from your Azure Digital Twins instance into Azure Data Explorer: relationship lifecycle events, twin lifecycle events, and twin property updates (which can optionally include twin property deletions). Each of these event types is stored in its own table inside the Azure Data Explorer database, meaning data history keeps three tables total. You can specify custom names for the tables when you set up the data history connection.
digital-twins How To Create Data History Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-create-data-history-connection.md
description: See how to set up a data history connection for historizing Azure Digital Twins updates into Azure Data Explorer. Previously updated : 03/28/2023 Last updated : 06/29/2023
As part of the [data history connection setup](#set-up-data-history-connection)
For more information about Event Hubs and their capabilities, see the [Event Hubs documentation](../event-hubs/event-hubs-about.md).
+>[!NOTE]
+>While setting up data history, local authorization must be *enabled* on the event hub. If you ultimately want to have local authorization disabled on your event hub, disable the authorization after setting up the connection. You'll also need to adjust some permissions, described in [Restrict network access to data history resources](#restrict-network-access-to-data-history-resources) later in this article.
+
# [CLI](#tab/cli)

Use the following CLI commands to create the required resources. The commands use several local variables (`$location`, `$resourcegroup`, `$eventhubnamespace`, and `$eventhub`) that were created earlier in [Set up local variables for CLI session](#set-up-local-variables-for-cli-session).
After setting up the data history connection, you can optionally remove the role
>[!NOTE]
>Once the connection is set up, the default settings on your Azure Data Explorer cluster will result in an ingestion latency of approximately 10 minutes or less. You can reduce this latency by enabling [streaming ingestion](/azure/data-explorer/ingest-data-streaming) (less than 10 seconds of latency) or an [ingestion batching policy](/azure/data-explorer/kusto/management/batchingpolicy). For more information about Azure Data Explorer ingestion latency, see [End-to-end ingestion latency](concepts-data-history.md#end-to-end-ingestion-latency).
+### Restrict network access to data history resources
+
+If you'd like to restrict network access to the resources involved in data history (your Azure Digital Twins instance, event hub, or Azure Data Explorer cluster), you should set those restrictions *after* setting up the data history connection. This includes disabling local access for your resources, among other measures to reduce network access.
+
+To make sure your data history resources can communicate with each other, you should also modify the data connection for the Azure Data Explorer database to use a system-assigned managed identity.
+
+Follow the order of steps below to make sure your data history connection is set up properly when your resources need reduced network access.
+1. Make sure local authorization is *enabled* on your data history resources (your Azure Digital Twins instance, event hub, and Azure Data Explorer cluster).
+1. [Create the data history connection](#set-up-data-history-connection)
+1. Update the data connection for the Azure Data Explorer database to use a system-assigned managed identity. In the Azure portal, you can do this by navigating to the Azure Data Explorer cluster and using **Databases** in the menu to navigate to the data history database. In the database menu, select **Data connections**. In the table entry for your data history connection, you should see the option to **Assign managed identity**, where you can choose **System-assigned**.
+ :::image type="content" source="media/how-to-create-data-history-connection/database-managed-identity.png" alt-text="Screenshot of the option to assign a managed identity to a data connection in the Azure portal." lightbox="media/how-to-create-data-history-connection/database-managed-identity.png":::
+1. Now, you can disable local authorization or set other network restrictions for your desired resources, by changing the access settings on your Azure Digital Twins instance, event hub, or Azure Data Explorer cluster.
### Troubleshoot connection setup

Here are a few common errors you might encounter when setting up a data history connection, and how to resolve them.
load-balancer Load Balancer Tcp Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-reset.md
By carefully examining the entire end to end scenario, you can determine the ben
## Configurable TCP idle timeout
-Azure Load Balancer has the following idle timeout range:
-- 4 minutes to 100 minutes for Outbound Rules-- 4 minutes to 30 minutes for Load Balancer rules and Inbound NAT rules
+Azure Load Balancer has an idle timeout range of 4 minutes to 100 minutes for Load Balancer rules, Outbound Rules, and Inbound NAT rules.
By default, it's set to 4 minutes. If a period of inactivity is longer than the timeout value, there's no guarantee that the TCP or HTTP session is maintained between the client and your cloud service.
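One common way to keep a long-lived connection from tripping the idle timeout is to send TCP keepalives at an interval shorter than the configured timeout. The following minimal Python sketch shows the idea; the host, port, and probe timings are illustrative assumptions, and the `TCP_KEEP*` socket options are platform-specific (Linux shown here).

```python
import socket

# Open a TCP connection and enable keepalive probes so the flow doesn't sit
# idle past the load balancer's idle timeout. example.com:443 is a placeholder.
sock = socket.create_connection(("example.com", 443))
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Probe cadence (Linux-only options): start probing after 120 seconds of
# idleness, which is well under the 4-minute default idle timeout.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)  # idle seconds before the first probe
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)  # seconds between probes
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)     # failed probes before the OS drops the socket
```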
load-testing Concept Load Test App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/concept-load-test-app-service.md
Title: Load test Azure App Service apps
+ Title: Load testing for Azure App Service
-description: 'Learn how to use Azure Load Testing with Azure App Service apps. Run load tests, use environment variables, and gain insights with server metrics and diagnostics.'
+description: 'Learn how to use Azure Load Testing with apps hosted on Azure App Service. Run load tests, use environment variables, and gain insights with server metrics and diagnostics.'
Previously updated : 03/23/2023 Last updated : 06/30/2023
-# Load test Azure App Service apps with Azure Load Testing
+# Load testing for Azure App Service
-This article shows how to use Azure Load Testing with applications hosted on Azure App Service. You learn how to run a load test to validate your application's performance. Use environment variables to make your load test more configurable. This feature allows you to reuse your load test across different deployment slots. During and after the test, you can get detailed insights by using server-side metrics and App Service diagnostics, which helps you to identify and troubleshoot any potential issues.
+Azure App Service is a fully managed service that enables you to build, deploy, and scale web applications and APIs in the cloud. This article provides an overview of the key capabilities of Azure Load Testing that are relevant for applications hosted on Azure App Service.
-[Azure App Service](/azure/app-service/overview) is a fully managed HTTP-based service that enables you to build, deploy, and scale web applications and APIs in the cloud. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and Linux-based environments.
+With Azure Load Testing, you can simulate real-world, large-scale traffic to your application and services. Even though [Azure App Service](/azure/app-service/overview) is a fully managed service that can scale automatically, load testing can offer significant benefits in terms of reliability, performance, and cost optimization:
-## Why load test Azure App Service web apps?
+- Ensure that all application components, not only the web application, can handle the expected load.
+- Verify that the application meets your performance and stability requirements.
+- Use application resource metrics and diagnostics to identify performance bottlenecks across the entire application.
+- Avoid over-allocation of computing resources and reduce cost inefficiencies.
+- Detect performance regressions early by integrating load testing in your CI/CD pipeline and specifying test fail criteria.
-Even though Azure App Service is a fully managed service for running applications, load testing can offer significant benefits in terms of reliability, performance, and cost optimization:
+## Create a load test
-- Validate that your application and all dependent application components can handle your expected load-- Verify that your application meets your performance requirements, such as maximum response time or latency-- Identify performance bottlenecks within your application-- Do more with less: right-size your computing resources-- Ensure that new releases don't introduce performance regressions
+You can create a load test to simulate traffic to your application on Azure App Service. Azure Load Testing provides two options to create a load test:
-Often, applications consist of multiple application components besides the app service application. For example, the application might use a database or other data storage solution, invoke dependent serverless functions, or use a caching solution for improving performance. Each of these application components contributes to the availability and performance of your overall application. By running a load test, you can validate that the entire application can support the expected user load without failures. Also, you can verify that the requests meet your performance and availability requirements.
-
-The application implementation and algorithms might affect application performance and stability under load. For example, storing data in memory might lead to excessive memory consumption and application stability issues. You can use load testing to perform a *soak test* and simulate sustained user load over a longer period of time, to identify such problems in the application implementation.
+- Create a URL-based quick test
+- Use an Apache JMeter script (JMX file)
-Each application component has different options for allocating computing resources and scalability settings. For example, an app service always runs in an [App Service plan](/azure/app-service/overview-hosting-plans). An App Service plan defines a set of compute resources for a web app to run. Optionally, you can choose to enable [autoscaling](/azure/azure-monitor/autoscale/autoscale-overview) to automatically add more resources, based on specific metrics. With load testing, you can ensure that you add the right resources to match the characteristics of your application. For example, if your application is memory-intensive, you might choose compute instances that have more memory. Also, by [monitoring application metrics](./how-to-monitor-server-side-metrics.md) during the load test, you can also optimize costs by allocating the right type and amount of computing resources.
+After you create and run a load test, you can [monitor the resource metrics](#monitor-application-metrics) for the web application and all dependent Azure components to identify performance and scalability issues.
-By integrating load testing in your CI/CD pipeline and by [adding fail criteria to your load test](./how-to-define-test-criteria.md), you can quickly identify performance regressions introduced by application changes. For example, adding an external service call in the application might result in the overall response time to surpass your maximum response time requirement.
+### Create a URL-based quick test
-## Create a load test for an app on Azure App Service
+You can use the quick test experience to create a load test for a specific endpoint URL, directly from within the Azure portal. For example, use the App Service web app *default domain* to perform a load test of the web application home page.
-Azure Load Testing enables you to create load tests for your application in two ways:
+When you create a URL-based test, you specify the endpoint and basic load test configuration settings, such as the number of [virtual users](./concept-load-testing-concepts.md#virtual-users), test duration, and [ramp-up time](./concept-load-testing-concepts.md#ramp-up-time).
-- Create a URL-based quick test-- Use an existing Apache JMeter script (JMX file)-
-Use the quick test experience to create a load test for a specific endpoint URL, directly from within the Azure portal. For example, use the App Service web app *default domain* to perform a load test of the web application home page. You can specify basic load test configuration settings, such as the number of [virtual users](./concept-load-testing-concepts.md#virtual-users), test duration, and [ramp-up time](./concept-load-testing-concepts.md#ramp-up-time). Azure Load Testing then generates the corresponding JMeter test script, and runs it against your endpoint. You can modify the test script and configuration settings at any time. Get started by [creating a URL-based load test](./quickstart-create-and-run-load-test.md).
+The following screenshot shows how to create a URL-based load test in the Azure portal.
:::image type="content" source="./media/concept-load-test-app-service/create-quick-test-app-service.png" alt-text="Screenshot that shows the Create quick test in the Azure portal." lightbox="./media/concept-load-test-app-service/create-quick-test-app-service.png":::
-Alternately, create a new load test by uploading an existing JMeter script. Use this approach to load test multiple pages or endpoints in a single test, to test authenticated endpoints, use parameters in the test script, or to use more advanced load patterns. Azure Load Testing provides high-fidelity support of JMeter to enable you to reuse existing load test scripts. Learn how to [create a load test by using an existing JMeter script](./how-to-create-and-run-load-test-with-jmeter-script.md).
-
-If you're just getting started with Azure Load Testing, you might [create a quick test](./quickstart-create-and-run-load-test.md) first, and then further modify and extend the test script that Azure Load Testing generated.
+Get started by [creating a URL-based load test](./quickstart-create-and-run-load-test.md).
-After you create and run your load test, Azure Load Testing provides a dashboard with test run statistics, such as [response time](./concept-load-testing-concepts.md#response-time), error percentage and [throughput](./concept-load-testing-concepts.md#requests-per-second-rps).
+### Create a load test by uploading a JMeter script
-## Use test fail criteria
+Azure Load Testing provides high-fidelity support of JMeter. You can create a new load test by uploading an Apache JMeter script. You might use this approach in the following scenarios:
-The Azure Load Testing dashboard provides insights about a specific load test run and how the application responds to simulated load. To verify that your application can meet your performance and availability requirements, specify *load test fail criteria*.
+- Test multiple pages or endpoints in a single test
+- Test authenticated endpoints
+- Pass parameters to the load test, such as environment variables or secrets
+- Test non-HTTP based endpoints, such as database connections
+- Configure more advanced load patterns
+- Reuse existing JMeter scripts
-Test fail criteria enable you to configure conditions for load test *client-side metrics*. If a load test run doesn't meet these conditions, the test is considered to fail. For example, specify that the average response time of requests, or that the percentage of failed requests is above a given threshold. You can add fail criteria to your load test at any time, regardless if it's a quick test or if you uploaded a JMeter script.
+Get started by [creating a load test by uploading a JMeter script](./how-to-create-and-run-load-test-with-jmeter-script.md).
-
-When you run load tests as part of your CI/CD pipeline, you can use test fail criteria to quickly identify performance regressions with an application build.
-
-Learn how to [configure test fail criteria](./how-to-define-test-criteria.md) for your load test.
+If you previously created a [URL-based test](#create-a-url-based-quick-test), Azure Load Testing generates a JMeter test script. You can download this generated test script, modify or extend it, and then reupload the script.
## Monitor application metrics
-During a load test, Azure Load Testing collects [metrics](./concept-load-testing-concepts.md#metrics) about the test execution. The client-side metrics provide information about the test run, from a test-engine perspective. For example, the end-to-end response time, requests per second, or error percentage. These metrics give an overall indication whether the application can support the simulated user load.
-
-To get insights into the performance and stability of the application and its components, Azure Load Testing enables you to monitor application metrics, also referred to as *server-side metrics*. Monitoring application metrics help identify performance bottlenecks in your application, or indicate which components have too many or too few compute resources allocated.
+During a load test, Azure Load Testing collects [metrics](./concept-load-testing-concepts.md#metrics) about the test run:
-For applications hosted on Azure App Service, use App Service diagnostics to get extra insights into the performance and health of the application.
+- Client-side metrics: test engine metrics, such as the end-to-end response time, number of requests per second, or the error percentage. These metrics give an overall indication of whether the application can support the simulated user load.
-### Server-side metrics in Azure Load Testing
+- Server-side metrics: resource metrics of the Azure application components, such as the CPU percentage of the App Service plan, HTTP response codes, or database resource usage.
-Azure Load Testing lets you monitor server-side metrics for your Azure app components when you run a load test. You can then visualize and analyze these metrics in the Azure Load Testing dashboard. Learn more about the [Azure resource types that Azure Load Testing supports](./resource-supported-azure-resource-types.md).
+Use the Azure Load Testing dashboard to analyze the test run metrics and identify performance bottlenecks in your application, or find out if you over-provisioned some compute resources. For example, you could evaluate if the service plan instances are right-sized for your workload.
-
-In the load test configuration, select the list of Azure resources for your application components. When you add an Azure resource to your load test, Azure Load Testing automatically selects default resource metrics to monitor while running the load test. For example, when you add an App Service plan, Azure Load Testing monitors average CPU percentage and average memory percentage. You can add or remove resource metrics for your load test.
- Learn more about how to [monitor server-side metrics in Azure Load Testing](./how-to-monitor-server-side-metrics.md).
-### App Service diagnostics
-
-When the application you're load testing is hosted on Azure App Service, you can get extra insights by using [App Service diagnostics](/azure/app-service/overview-diagnostics). App Service diagnostics is an intelligent and interactive way to help troubleshoot your app, with no configuration required. When you run into issues with your app, App Service diagnostics can help you resolve the issue easily and quickly.
-
-When you add an App Service application component to your load test configuration, the load testing dashboard provides a direct link to the App Service diagnostics dashboard for your App service resource.
+For applications that are hosted on Azure App Service, you can use [App Service diagnostics](/azure/app-service/overview-diagnostics) to get extra insights into the performance and health of the application. When you add an App Service application component to your load test configuration, the load testing dashboard provides a direct link to the App Service diagnostics dashboard for your App Service resource.
:::image type="content" source="media/concept-load-test-app-service/test-result-app-service-diagnostics.png" alt-text="Screenshot that shows the 'App Service' section on the load testing dashboard in the Azure portal." lightbox="media/concept-load-test-app-service/test-result-app-service-diagnostics.png":::
-App Service diagnostics enables you to view in-depth information and dashboard about the performance, resource usage, and stability of your app service. In the screenshot, you notice that there are concerns about the CPU usage, app performance, and failed requests.
-
+## Set criteria for test failures
-> [!NOTE]
-> It can take up to 45 minutes for the insights data to be available in App Service diagnostics.
+Test fail criteria enable you to configure conditions for load test client-side metrics. If a load test run doesn't meet these conditions, the test is considered to fail. For example, you can specify that the average response time of requests, or the percentage of failed requests, must not exceed a given threshold. You can add fail criteria to your load test at any time, regardless of whether it's a quick test or a test that uses an uploaded JMeter script.
-## Parameterize your test for deployment slots
-
-[Azure App Service deployment slots](/azure/app-service/deploy-staging-slots) enable you to set up staging environments for your application. Each deployment slot has a separate URL. You can easily reuse your load testing script across multiple slots by using environment variables in the load test configuration.
-When you create a quick test, Azure Load Testing generates a generic JMeter script and uses environment variables to pass the target URL to the script.
+When you run load tests as part of your CI/CD pipeline, you can use test fail criteria to identify performance regressions with an application build.
+Learn how to [configure test fail criteria](./how-to-define-test-criteria.md) for your load test.
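As an illustration of the kind of conditions fail criteria express, the following Python sketch evaluates two client-side metrics against thresholds. The metric names and threshold values are hypothetical, and this isn't the service's configuration syntax; it only sketches the pass/fail logic that fail criteria encode.

```python
def evaluate_fail_criteria(avg_response_time_ms: float, error_percentage: float) -> bool:
    """Return True when the test run passes every criterion."""
    criteria = [
        avg_response_time_ms <= 500,  # average response time must stay at or under 500 ms
        error_percentage <= 5,        # no more than 5% of requests may fail
    ]
    return all(criteria)

# A run with a 320 ms average response time and 1.2% errors passes both checks.
print(evaluate_fail_criteria(avg_response_time_ms=320.0, error_percentage=1.2))  # True
```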
-To use environment variables for passing the deployment slot URL to your JMeter test script, perform the following steps:
+## Use parameters to test across deployment slots
-1. Add an environment variable in the load test configuration.
+When you configure a load test, you can specify parameters to pass environment variables or secrets to the load test script. For example, to avoid storing the application endpoint URL in the test script, you can use an environment variable. By using parameters, you can make your test script configurable.
-1. Reference the environment variable in your test script by using the `System.getenv` function.
+With [Azure App Service deployment slots](/azure/app-service/deploy-staging-slots), you can set up staging environments for your application. Each deployment slot has a separate URL. To reuse your test script across multiple deployment slots, use a parameter for the application endpoint.
- ```xml
- <elementProp name="domain" elementType="Argument">
- <stringProp name="Argument.name">domain</stringProp>
- <stringProp name="Argument.value">${__BeanShell( System.getenv("domain") )}</stringProp>
- <stringProp name="Argument.metadata">=</stringProp>
- </elementProp>
- ```
-Learn how to [parameterize a load test by using environment variables](./how-to-parameterize-load-tests.md).
+Learn how to [use parameters to pass environment variables to a load test](./how-to-parameterize-load-tests.md).
You can also use environment variables to pass other configuration settings to the JMeter test script. For example, you might pass the number of virtual users, or the file name of a [CSV input file](./how-to-read-csv-data.md) to the test script.
machine-learning How To Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-private-link.md
Use one of the following methods to add a private endpoint to an existing worksp
> [!WARNING]
>
-> If you have any existing compute targets associated with this workspace, and they are not behind the same virtual network tha the private endpoint is created in, they will not work.
+> If you have any existing compute targets associated with this workspace, and they are not behind the same virtual network that the private endpoint is created in, they will not work.
# [Azure CLI](#tab/cli)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
Compute clusters can run jobs securely in a [virtual network environment](how-to
## Limitations
-* Some of the scenarios listed in this document are marked as __preview__. Preview functionality is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-* Compute clusters can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. This preview isn't available if you're using a private endpoint-enabled workspace.
+* Compute clusters can be created in a different region than your workspace. This functionality is only available for __compute clusters__, not compute instances.
> [!WARNING]
> When using a compute cluster in a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
-* We currently support only creation (and not updating) of clusters through [ARM templates](/azure/templates/microsoft.machinelearningservices/workspaces/computes). For updating compute, we recommend using the SDK, Azure CLI or UX for now.
* Azure Machine Learning Compute has default limits, such as the number of cores that can be allocated. For more information, see [Manage and request quotas for Azure resources](how-to-manage-quotas.md).

* Azure allows you to place _locks_ on resources, so that they can't be deleted or are read only. __Do not apply resource locks to the resource group that contains your workspace__. Applying a lock to the resource group that contains your workspace will prevent scaling operations for Azure Machine Learning compute clusters. For more information on locking resources, see [Lock resources to prevent unexpected changes](../azure-resource-manager/management/lock-resources.md).
machine-learning How To Image Processing Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-image-processing-batch.md
For testing our endpoint, we are going to use a sample of 1000 images from the o
| file | class | probabilities | label |
|--|--|--|--|
- | n02088094_Afghan_hound.JPEG | 161 | 0.994745 | Afghan hound |
- | n02088238_basset | 162 | 0.999397 | basset |
+ | n02088094_Afghan_hound.JPEG | 161 | 0.994745 | Afghan hound |
+ | n02088238_basset | 162 | 0.999397 | basset |
| n02088364_beagle.JPEG | 165 | 0.366914 | bluetick |
- | n02088466_bloodhound.JPEG | 164 | 0.926464 | bloodhound |
+ | n02088466_bloodhound.JPEG | 164 | 0.926464 | bloodhound |
| ... | ... | ... | ... |
On those cases, we may want to perform inference on the entire batch of data. Th
__code/score-by-batch/batch_driver.py__
- :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/code/score-by-file/batch_driver.py" :::
+ :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/imagenet-classifier/code/score-by-batch/batch_driver.py" :::
> [!TIP]
> * Notice that this script is constructing a tensor dataset from the mini-batch sent by the batch deployment. This dataset is preprocessed to obtain the expected tensors for the model using the `map` operation with the function `decode_img`.
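For reference, here's a minimal sketch of that dataset-construction pattern. The image size, batch size, and file names are illustrative assumptions, not the values used by the sample's `batch_driver.py`:

```python
import tensorflow as tf

def decode_img(file_path: tf.Tensor) -> tf.Tensor:
    """Read one image file and convert it into the tensor the model expects."""
    img = tf.io.read_file(file_path)
    img = tf.image.decode_jpeg(img, channels=3)
    img = tf.image.resize(img, [224, 224])  # hypothetical model input size
    return img / 255.0                      # scale pixel values to [0, 1]

# `mini_batch` stands in for the list of file paths the deployment hands the driver.
mini_batch = ["n02088094_Afghan_hound.JPEG", "n02088238_basset.JPEG"]
dataset = (
    tf.data.Dataset.from_tensor_slices(mini_batch)
    .map(decode_img)
    .batch(32)  # hypothetical mini-batch size for inference
)
```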
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
Azure Machine Learning provides preview support for managed virtual network (VNe
When you enable managed virtual network isolation, a managed VNet is created for the workspace. Managed compute resources (compute clusters and compute instances) for the workspace automatically use this managed VNet. The managed VNet can use private endpoints for Azure resources that are used by your workspace, such as Azure Storage, Azure Key Vault, and Azure Container Registry.
-The following diagram shows a managed virtual network uses private endpoints to communicate with the storage, key vault, and container registry used by the workspace.
+The following diagram shows how a managed virtual network uses private endpoints to communicate with the storage, key vault, and container registry used by the workspace.
:::image type="content" source="./media/how-to-managed-network/managed-virtual-network-architecture.png" alt-text="Diagram of managed virtual network isolation.":::
There are two different configuration modes for outbound traffic from the manage
| Outbound mode | Description | Scenarios |
| -- | -- | -- |
| Allow internet outbound | Allow all internet outbound traffic from the managed VNet. | Recommended if you need access to machine learning artifacts on the Internet, such as python packages or pretrained models. |
-| Allow only approved outbound | Outbound traffic is allowed by specifying service tags. | Recommended if you want to minimize the risk of data exfiltration but you need to prepare all required machine learning artifacts in your private locations. |
+| Allow only approved outbound | Outbound traffic is allowed by specifying service tags. | Recommended if you want to minimize the risk of data exfiltration but you will need to prepare all required machine learning artifacts in your private locations. |
The managed virtual network is preconfigured with [required default rules](#list-of-required-rules). It's also configured for private endpoint connections to your workspace default storage, container registry and key vault if they're configured as private. After choosing the isolation mode, you only need to consider other outbound requirements you may need to add.
Before following the steps in this article, make sure you have the following pre
```python
from azure.ai.ml import MLClient
- from azure.ai.ml.entities import Workspace, ManagedNetwork
- from azure.ai.ml.constants._workspace import IsolationMode
+ from azure.ai.ml.entities import (
+ Workspace,
+ ManagedNetwork,
+ IsolationMode,
+ ServiceTagDestination,
+ PrivateEndpointDestination
+ )
from azure.identity import DefaultAzureCredential
- from azure.ai.ml.entities import ServiceTagDestination, PrivateEndpointDestination
# Replace with the values for your Azure subscription and resource group.
subscription_id = "<SUBSCRIPTION_ID>"
resource_group = "<RESOURCE_GROUP>"
+
+ # get a handle to the subscription
+ ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group)
```

# [Azure portal](#tab/portal)
managed_network:
  outbound_rules:
  - name: added-perule
    destination:
- service_resource_id: /subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}
+ service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>
      spark_enabled: true
      subresource_target: blob
    type: private_endpoint
You can configure a managed VNet using either the `az ml workspace create` or `a
  outbound_rules:
  - name: added-perule
    destination:
- service_resource_id: /subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}
+ service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>
      spark_enabled: true
      subresource_target: blob
    type: private_endpoint
To configure a managed VNet that allows internet outbound communications, use th
The following example creates a new workspace named `myworkspace`, with an outbound rule named `myrule` that adds a private endpoint for an Azure Blob store:

```python
- # get a handle to the subscription
- ml_client = MLClient(DefaultAzureCredential(), subscription_id, resource_group)
-
# Basic managed network configuration
network = ManagedNetwork(IsolationMode.ALLOW_INTERNET_OUTBOUND)
To configure a managed VNet that allows internet outbound communications, use th
# Example private endpoint outbound to a blob
rule_name = "myrule"
- service_resource_id = "/subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}"
+ service_resource_id = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>"
subresource_target = "blob"
- spark_enabled = true
+ spark_enabled = True
# Add the outbound
ws.managed_network.outbound_rules = [PrivateEndpointDestination(
To configure a managed VNet that allows internet outbound communications, use th
# Example private endpoint outbound to a blob
rule_name = "myrule"
- service_resource_id = "/subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}"
+ service_resource_id = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>"
subresource_target = "blob"
- spark_enabled = true
+ spark_enabled = True
# Add the outbound
ws.managed_network.outbound_rules = [PrivateEndpointDestination(
managed_network:
    type: service_tag
  - name: added-perule
    destination:
- service_resource_id: /subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}
+ service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>
      spark_enabled: true
      subresource_target: blob
    type: private_endpoint
You can configure a managed VNet using either the `az ml workspace create` or `a
    type: service_tag
  - name: added-perule
    destination:
- service_resource_id: /subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}
+ service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>
      spark_enabled: true
      subresource_target: blob
    type: private_endpoint
To configure a managed VNet that allows only approved outbound communications, u
ws.managed_network.outbound_rules = []

# Example private endpoint outbound to a blob
rule_name = "myrule"
- service_resource_id = "/subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}"
+ service_resource_id = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>"
subresource_target = "blob"
- spark_enabled = true
+ spark_enabled = True
ws.managed_network.outbound_rules.append(
    PrivateEndpointDestination(
        name=rule_name,
To configure a managed VNet that allows only approved outbound communications, u
ws.managed_network.outbound_rules = []

# Example private endpoint outbound to a blob
rule_name = "myrule"
- service_resource_id = "/subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}"
+ service_resource_id = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>"
subresource_target = "blob"
- spark_enabled = true
+ spark_enabled = True
ws.managed_network.outbound_rules.append(
    PrivateEndpointDestination(
        name=rule_name,
To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the manag
  outbound_rules:
  - name: added-perule
    destination:
- service_resource_id: /subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}
+ service_resource_id: /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>
spark_enabled: true subresource_target: blob type: private_endpoint
To enable the [serverless spark jobs](how-to-submit-spark-jobs.md) for the manag
# Example private endpoint outbound to a blob rule_name = "myrule"
- service_resource_id = "/subscriptions/{subscription ID}/resourceGroups/{resource group name}/providers/Microsoft.Storage/storageAccounts/{storage account name}"
+ service_resource_id = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Storage/storageAccounts/<STORAGE_ACCOUNT_NAME>"
subresource_target = "blob"
- spark_enabled = true
+ spark_enabled = True
# Add the outbound ws.managed_network.outbound_rules = [PrivateEndpointDestination(
rule_name = "<some-rule-name>"
# Get a rule by name rule = ml_client._workspace_outbound_rules.get(resource_group, ws_name, rule_name)
-print(rule._to_dict())
# List rules for a workspace rule_list = ml_client._workspace_outbound_rules.list(resource_group, ws_name)
-print([r._to_dict() for r in rule_list])
# Delete a rule from a workspace ml_client._workspace_outbound_rules.begin_remove(resource_group, ws_name, rule_name).result()
ml_client._workspace_outbound_rules.begin_remove(resource_group, ws_name, rule_n
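Reflowed into runnable form, the rule-management calls above amount to the following sketch; `_workspace_outbound_rules` is a private attribute of `MLClient` in this preview, so treat the surface as subject to change:

```python
# Sketch: get, list, and delete outbound rules on a workspace's managed network.
# Assumes ml_client, resource_group, and ws_name are already defined.
rule_name = "<some-rule-name>"

# Get a rule by name
rule = ml_client._workspace_outbound_rules.get(resource_group, ws_name, rule_name)
print(rule)

# List rules for a workspace
rule_list = ml_client._workspace_outbound_rules.list(resource_group, ws_name)
for rule in rule_list:
    print(rule)

# Delete a rule from a workspace
ml_client._workspace_outbound_rules.begin_remove(
    resource_group, ws_name, rule_name
).result()
```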
> [!TIP] > These rules are automatically added to the managed VNet.
-__Outbound__ rules:
+__Private endpoints__:
+* When the isolation mode for the managed network is `Allow internet outbound`, private endpoint outbound rules will be automatically created as required rules from the managed network for the workspace and associated resources __with public network access disabled__ (Key Vault, Storage Account, Container Registry, Azure ML Workspace).
+* When the isolation mode for the managed network is `Allow only approved outbound`, private endpoint outbound rules will be automatically created as required rules from the managed network for the workspace and associated resources __regardless of public network access mode for those resources__ (Key Vault, Storage Account, Container Registry, Azure ML Workspace).
+
+__Outbound__ service tag rules:
* `AzureActiveDirectory` * `AzureMachineLearning`
__Outbound__ rules:
* `MicrosoftContainerRegistry` * `AzureMonitor`
-__Inbound__ rules:
+__Inbound__ service tag rules:
* `AzureMachineLearning` ## List of recommended outbound rules
Currently, we don't have any recommended outbound rules.
* Once you enable managed virtual network isolation of your workspace, you can't disable it. * Managed virtual network uses private endpoint connection to access your private resources. You can't have a private endpoint and a service endpoint at the same time for your Azure resources, such as a storage account. We recommend using private endpoints in all scenarios.
+* The managed network will be deleted and cleaned up when the workspace is deleted.
## Next steps
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
In this article you learn how to secure the following training compute resources
## Limitations
-* __Compute clusters__ can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. When using a different region for the cluster, the following limitations apply:
+* Compute cluster/instance deployment in virtual network isn't supported with Azure Lighthouse.
+
+* __Port 445__ must be open for _private_ network communications between your compute instances and the default storage account during training. For example, if your computes are in one VNet and the storage account is in another, don't block port 445 to the storage account VNet.
+
+## Compute cluster in a different VNet/region from workspace
+
+> [!IMPORTANT]
+> You can't create a *compute instance* in a different region/VNet, only a *compute cluster*.
+
+To create a compute cluster in an Azure Virtual Network in a different region than your workspace virtual network, you have a couple of options to enable communication between the two VNets.
+
+* Use [VNet Peering](/azure/virtual-network/virtual-network-peering-overview).
+* Add a private endpoint for your workspace in the virtual network that will contain the compute cluster.
+
+> [!IMPORTANT]
+> Regardless of the method selected, you must also create the VNet for the compute cluster; Azure Machine Learning will not create it for you.
+>
+> You must also allow the default storage account, Azure Container Registry, and Azure Key Vault to access the VNet for the compute cluster. There are multiple ways to accomplish this. For example, you can create a private endpoint for each resource in the VNet for the compute cluster, or you can use VNet peering to allow the workspace VNet to access the compute cluster VNet.
+
+### Scenario: VNet peering
+
+1. Configure your workspace to use an Azure Virtual Network. For more information, see [Secure your workspace resources](how-to-secure-workspace-vnet.md).
+1. Create a second Azure Virtual Network that will be used for your compute clusters. It can be in a different Azure region than the one used for your workspace.
+1. Configure [VNet Peering](/azure/virtual-network/virtual-network-peering-overview) between the two VNets.
+
+ > [!TIP]
+ > Wait until the VNet Peering status is **Connected** before continuing.
- * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
- * You may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
+1. Modify the `privatelink.api.azureml.ms` DNS zone to add a link to the VNet for the compute cluster. This zone is created by your Azure Machine Learning workspace when it uses a private endpoint to participate in a VNet.
- Guidance such as using NSG rules, user-defined routes, and input/output requirements, apply as normal when using a different region than the workspace.
+ 1. Add a new __virtual network link__ to the DNS zone. You can do this in multiple ways (a Python sketch follows this procedure):
+
+ * From the Azure portal, navigate to the DNS zone and select **Virtual network links**. Then select **+ Add** and select the VNet that you created for your compute clusters.
+ * From the Azure CLI, use the `az network private-dns link vnet create` command. For more information, see [az network private-dns link vnet create](/cli/azure/network/private-dns/link/vnet#az-network-private-dns-link-vnet-create).
+ * From Azure PowerShell, use the `New-AzPrivateDnsVirtualNetworkLink` command. For more information, see [New-AzPrivateDnsVirtualNetworkLink](/powershell/module/az.privatedns/new-azprivatednsvirtualnetworklink).
+
+1. Repeat the previous step and sub-steps for the `privatelink.notebooks.azure.net` DNS zone.
+
+1. Configure the following Azure resources to allow access from both VNets.
+
+ * The default storage account for the workspace.
+ * The Azure Container Registry for the workspace.
+ * The Azure Key Vault for the workspace.
+
+ > [!TIP]
+ > There are multiple ways that you might configure these services to allow access to the VNets. For example, you might create a private endpoint for each resource in both VNets. Or you might configure the resources to allow access from both VNets.
+
+1. Create a compute cluster as you normally would when using a VNet, but select the VNet that you created for the compute cluster. If the VNet is in a different region, select that region when creating the compute cluster.
> [!WARNING]
- > If you are using a __private endpoint-enabled workspace__, creating the cluster in a different region is __not supported__.
+ > When setting the region, if it's a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster and when running jobs on it.
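For the DNS zone link steps above, a minimal Python sketch using the `azure-mgmt-privatedns` package (an assumed dependency; IDs, resource groups, and the link name are placeholders):

```python
# Sketch: link the compute-cluster VNet to the workspace's private DNS zones.
# Assumes azure-mgmt-privatedns; all IDs, names, and groups are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.privatedns import PrivateDnsManagementClient
from azure.mgmt.privatedns.models import SubResource, VirtualNetworkLink

client = PrivateDnsManagementClient(DefaultAzureCredential(), "<SUBSCRIPTION_ID>")

vnet_id = (
    "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<COMPUTE_RG>"
    "/providers/Microsoft.Network/virtualNetworks/<COMPUTE_VNET_NAME>"
)

for zone in ("privatelink.api.azureml.ms", "privatelink.notebooks.azure.net"):
    client.virtual_network_links.begin_create_or_update(
        resource_group_name="<WORKSPACE_RG>",
        private_zone_name=zone,
        virtual_network_link_name="compute-vnet-link",
        parameters=VirtualNetworkLink(
            location="global",  # virtual network links always use 'global'
            virtual_network=SubResource(id=vnet_id),
            registration_enabled=False,
        ),
    ).result()
```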
-* Compute cluster/instance deployment in virtual network isn't supported with Azure Lighthouse.
+### Scenario: Private endpoint
-* __Port 445__ must be open for _private_ network communications between your compute instances and the default storage account during training. For example, if your computes are in one VNet and the storage account is in another, don't block port 445 to the storage account VNet.
+1. Configure your workspace to use an Azure Virtual Network. For more information, see [Secure your workspace resources](how-to-secure-workspace-vnet.md).
+1. Create a second Azure Virtual Network that will be used for your compute clusters. It can be in a different Azure region than the one used for your workspace.
+1. Create a new private endpoint for your workspace in the VNet that will contain the compute cluster.
+
+ * To add a new private endpoint using the __Azure portal__, select your workspace and then select __Networking__. Select __Private endpoint connections__, __+ Private endpoint__ and use the fields to create a new private endpoint.
+
+ * When selecting the __Region__, select the same region as your virtual network.
+ * When selecting __Resource type__, use __Microsoft.MachineLearningServices/workspaces__.
+ * Set the __Resource__ to your workspace name.
+ * Set the __Virtual network__ and __Subnet__ to the VNet and subnet that you created for your compute clusters.
+
+ Finally, select __Create__ to create the private endpoint.
+
+ * To add a new private endpoint using the Azure CLI, use the `az network private-endpoint create` command. For an example of using this command, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md#add-a-private-endpoint-to-a-workspace).
+
+1. Create a compute cluster as you normally would when using a VNet, but select the VNet that you created for the compute cluster. If the VNet is in a different region, select that region when creating the compute cluster.
+
+ > [!WARNING]
+ > When setting the region, if it's a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster and when running jobs on it.
## Compute instance/cluster with no public IP
az ml compute create --name cpu-cluster --resource-group rg --workspace-name ws
az ml compute create --name myci --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type ComputeInstance --set enable_node_public_ip=False ```
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
> [!IMPORTANT] > The following code snippet assumes that `ml_client` points to an Azure Machine Learning workspace that uses a private endpoint to participate in a VNet. For more information on using `ml_client`, see the tutorial [Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md).
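As a minimal sketch of such a snippet (assuming the `azure-ai-ml` v2 package; the `enable_node_public_ip` flag may require a recent SDK version, and all names are placeholders):

```python
# Sketch: compute instance in a VNet with no public IP on the node.
# Assumes ml_client is an authenticated azure-ai-ml MLClient.
from azure.ai.ml.entities import ComputeInstance, NetworkSettings

ci = ComputeInstance(
    name="myci",
    size="STANDARD_DS3_V2",
    network_settings=NetworkSettings(vnet_name="yourvnet", subnet="yoursubnet"),
    enable_node_public_ip=False,
)
ml_client.compute.begin_create_or_update(ci).result()
```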
az ml compute create --name cpu-cluster --resource-group rg --workspace-name ws
az ml compute create --name myci --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type ComputeInstance ```
-# [Python](#tab/python)
+# [Python SDK](#tab/python)
> [!IMPORTANT] > The following code snippet assumes that `ml_client` points to an Azure Machine Learning workspace that uses a private endpoint to participate in a VNet. For more information on using `ml_client`, see the tutorial [Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md).
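Again as a sketch under the same assumptions, this time keeping the default public IP on the node:

```python
# Sketch: compute instance deployed into a VNet, default public IP retained.
# Assumes ml_client is an authenticated azure-ai-ml MLClient.
from azure.ai.ml.entities import ComputeInstance, NetworkSettings

ci = ComputeInstance(
    name="myci",
    size="STANDARD_DS3_V2",
    network_settings=NetworkSettings(vnet_name="yourvnet", subnet="yoursubnet"),
)
ml_client.compute.begin_create_or_update(ci).result()
```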
machine-learning How To Use Pipeline Component https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-component.md
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-When developing a complex machine learning pipeline, it's common to have sub-pipelines that use multi-step to perform tasks such as data preprocessing and model training. These sub-pipelines can be developed and tested standalone. Pipeline component groups multi-step as a component that can be used as a single step to create complex pipelines. Which will help you share your work and better collaborate with team members.
+When developing a complex machine learning pipeline, it's common to have multi-step sub-pipelines that perform tasks such as data preprocessing and model training. These sub-pipelines can be developed and tested standalone. A pipeline component groups multiple steps into a component that can be used as a single step to create complex pipelines, which helps you share your work and collaborate better with team members.
By using a pipeline component, the author can focus on developing sub-tasks and easily integrate them with the entire pipeline job. Furthermore, a pipeline component has a well-defined interface in terms of inputs and outputs, which means that the user of the pipeline component doesn't need to know the implementation details of the component.
In this article, you'll learn how to use pipeline component in Azure Machine Lea
## The difference between pipeline job and pipeline component
-In general, pipeline component is similar to pipeline job. They're both consist of a group of jobs/components.
+In general, pipeline components are similar to pipeline jobs because they both contain a group of jobs/components.
-Here are some main differences you need aware when defining pipeline component:
+Here are some main differences you need to be aware of when defining pipeline components:
- A pipeline component only defines the interface of inputs/outputs, which means that when you define a pipeline component you need to explicitly define the type of each input/output instead of directly assigning values to them. - A pipeline component can't have runtime settings: you can't hard-code a compute target or data node in the pipeline component. Instead, you need to promote them to pipeline-level inputs and assign values during runtime.
machine-learning Reference Yaml Mltable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-mltable.md
This article contains information relating to the `MLTable` YAML schema only. Fo
|Read Transformation | Description | Parameters | |||| |`read_delimited` | Adds a transformation step to read delimited text file(s) provided in `paths`. | `infer_column_types`: Boolean to infer column data types. Defaults to True. Type inference requires that the current compute can access the data source. Currently, type inference will only pull the first 200 rows.<br><br>`encoding`: Specify the file encoding. Supported encodings: `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom` and `windows1252`. Default encoding: `utf8`.<br><br>`header`: user can choose one of the following options: `no_header`, `from_first_file`, `all_files_different_headers`, `all_files_same_headers`. Defaults to `all_files_same_headers`.<br><br>`delimiter`: The separator used to split columns.<br><br>`empty_as_string`: Specify if empty field values should load as empty strings. The default (False) will read empty field values as nulls. Passing this setting as *True* will read empty field values as empty strings. If the values are converted to numeric or datetime, then this setting has no effect, as empty values will be converted to nulls.<br><Br>`include_path_column`: Boolean to keep path information as column in the table. Defaults to False. This setting is useful when reading multiple files, and you want to know from which file a specific record originated. Additionally, you can keep useful information in the file path.<br><br>`support_multi_line`: By default (`support_multi_line=False`), all line breaks, including line breaks in quoted field values, will be interpreted as a record break. This approach to data reading increases speed, and it offers optimization for parallel execution on multiple CPU cores. However, it may result in silent production of more records with misaligned field values. Set this value to True when the delimited files are known to contain quoted line breaks. |
-| `read_parquet` | Adds a transformation step to read Parquet formatted file(s) provided in `paths`. | `include_path_column`: Boolean to keep path information as a table column. Defaults to False. This setting helps when you read multiple files, and you want to know from which file a specific record originated. Additionally, you can keep useful information in the file path. |
+| `read_parquet` | Adds a transformation step to read Parquet formatted file(s) provided in `paths`. | `include_path_column`: Boolean to keep path information as a table column. Defaults to False. This setting helps when you read multiple files, and you want to know from which file a specific record originated. Additionally, you can keep useful information in the file path.<br><br>**NOTE:** MLTable only supports reading parquet files that have columns consisting of primitive types. Columns containing arrays are **not** supported. |
| `read_delta_lake` | Adds a transformation step to read a Delta Lake folder provided in `paths`. You can read the data at a particular timestamp or version. | `timestamp_as_of`: String. Timestamp to be specified for time-travel on the specific Delta Lake data. To read data at a specific point in time, the datetime string should have a [RFC-3339/ISO-8601 format](https://wikipedia.org/wiki/ISO_8601). (for example: "2022-10-01T00:00:00Z", "2022-10-01T00:00:00+08:00", "2022-10-01T01:30:00-08:00")<br><br>`version_as_of`: Integer. Version to be specified for time-travel on the specific Delta Lake data.<br><br>**One value of `timestamp_as_of` or `version_as_of` must be provided.** | `read_json_lines` | Adds a transformation step to read the json file(s) provided in `paths`. | `include_path_column`: Boolean to keep path information as column in the MLTable. Defaults to False. This setting becomes useful to read multiple files, and you want to know from which file a particular record originated. Additionally, you can keep useful information in file path.<br><br>`invalid_lines`: How to handle lines that have invalid JSON. Supported values: `error` and `drop`. Defaults to `error`.<br><br>`encoding`: Specify the file encoding. Supported encodings are `utf8`, `iso88591`, `latin1`, `ascii`, `utf16`, `utf32`, `utf8bom` and `windows1252`. Default is `utf8`.
transformations:
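The same transformations are exposed by the `mltable` Python package; a sketch (assuming the package is installed; the path is a placeholder):

```python
# Sketch: build a table from delimited files, mirroring the read_delimited
# options described above, then materialize it as a pandas DataFrame.
import mltable

paths = [{"file": "https://<ACCOUNT>.blob.core.windows.net/<CONTAINER>/data.csv"}]
tbl = mltable.from_delimited_files(
    paths,
    delimiter=",",
    header="all_files_same_headers",
    infer_column_types=True,   # type inference reads up to the first 200 rows
    include_path_column=True,  # keep the source file path as a column
    empty_as_string=False,     # empty fields load as nulls
)
df = tbl.to_pandas_dataframe()
```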
## Next steps - [Install and use the CLI (v2)](how-to-configure-cli.md)-- [Working with tables in Azure Machine Learning](how-to-mltable.md)
+- [Working with tables in Azure Machine Learning](how-to-mltable.md)
machine-learning Reference Yaml Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-workspace.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| `customer_managed_key.key_uri` | string | The key URI of the customer-managed key to encrypt data at rest. The URI format is `https://<keyvault-dns-name>/keys/<key-name>/<key-version>`. | | | | `image_build_compute` | string | Name of the compute target to use for building environment Docker images when the container registry is behind a VNet. For more information, see [Secure workspace resources behind VNets](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr). | | | | `public_network_access` | string | Whether public endpoint access is allowed if the workspace will be using Private Link. For more information, see [Enable public access when behind VNets](how-to-configure-private-link.md#enable-public-access). | `enabled`, `disabled` | `disabled` |
+| `managed_network` | object | Azure Machine Learning Workspace managed network isolation. For more information, see [Workspace managed network isolation](how-to-managed-network.md). | | |
## Remarks
Examples are available in the [examples GitHub repository](https://github.com/Az
:::code language="yaml" source="~/azureml-examples-main/cli/resources/workspace/hbi.yml":::
+## YAML: managed network with allow internet outbound
++
+## YAML: managed network with allow only approved outbound
+++ ## Next steps - [Install and use the CLI (v2)](how-to-configure-cli.md)
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
Compute clusters can run jobs securely in a [virtual network environment](../how
## Limitations
-* Some of the scenarios listed in this document are marked as __preview__. Preview functionality is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-
-* Compute clusters can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. This preview is not available if you are using a private endpoint-enabled workspace.
-
- > [!WARNING]
- > When using a compute cluster in a different region than your workspace or datastores, you may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
+* Compute clusters can be created in a different region and VNet than your workspace. However, this functionality is only available using the SDK v2, CLI v2, or studio. For more information, see the [v2 version of secure training environments](../how-to-secure-training-vnet.md?view=azureml-api-2&preserve-view=true#compute-cluster-in-a-different-vnetregion-from-workspace).
* We currently support only creation (and not updating) of clusters through [ARM templates](/azure/templates/microsoft.machinelearningservices/workspaces/computes). For updating compute, we recommend using the SDK, Azure CLI or UX for now.
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md
In this article you learn how to secure the following training compute resources
### Azure Machine Learning compute cluster/instance
-* __Compute clusters__ can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. When using a different region for the cluster, the following limitations apply:
-
- * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md).
- * You may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
-
- Guidance such as using NSG rules, user-defined routes, and input/output requirements, apply as normal when using a different region than the workspace.
-
- > [!WARNING]
- > If you are using a __private endpoint-enabled workspace__, creating the cluster in a different region is __not supported__.
+* __Compute clusters__ can be created in a different region and VNet than your workspace. However, this functionality is only available using the SDK v2, CLI v2, or studio. For more information, see the [v2 version of secure training environments](../how-to-secure-training-vnet.md?view=azureml-api-2&preserve-view=true#compute-cluster-in-a-different-vnetregion-from-workspace).
* Compute cluster/instance deployment in virtual network isn't supported with Azure Lighthouse.
migrate How To Set Up Appliance Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-set-up-appliance-vmware.md
Before you deploy the OVA file, verify that the file is secure:
1. On the server on which you downloaded the file, open a Command Prompt window by using the **Run as administrator** option. 1. Run the following command to generate the hash for the OVA file:
- ```bash
+ ```
C:\>CertUtil -HashFile <file_location> <hashing_algorithm> ``` For example:
- ```bash
+ ```
C:\>CertUtil -HashFile C:\Users\Administrator\Desktop\MicrosoftAzureMigration.ova SHA256 ```
mysql Concepts Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-limitations.md
The MySQL service doesn't allow direct access to the underlying file system. Som
The following are unsupported: - DBA role: Restricted. Alternatively, you can use the administrator user (created during the new server creation), which allows you to perform most DDL and DML statements.-- Restricted privileges: [SUPER privilege](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_super) and [FILE privilege](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_file) are restricted.
+- The following [static privileges](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#privileges-provided-static) are restricted:
+ - [SUPER privilege](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_super)
+ - [FILE privilege](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_file)
+ - [CREATE TABLESPACE](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_create-tablespace)
+ - [SHUTDOWN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_shutdown)
+- [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting BACKUP_ADMIN privilege isn't supported for taking backups using any [utility tools](../migrate/how-to-decide-on-right-migration-tools.md). Refer to the [Supported](./concepts-limitations.md#supported-1) section for the list of supported [dynamic privileges](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#privileges-provided-dynamic).
- DEFINER: Requires super privileges to create and is restricted. If importing data using a backup, manually remove the `CREATE DEFINER` commands or use the `--skip-definer` command when performing a mysqldump. - System databases: The [mysql system database](https://dev.mysql.com/doc/refman/5.7/en/system-schema.html) is read-only and used to support various PaaS functionalities. You can't make changes to the `mysql` system database. - `SELECT ... INTO OUTFILE`: Not supported in the service.-- [BACKUP_ADMIN](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#priv_backup-admin) privilege: Granting BACKUP_ADMIN privilege isn't supported for taking backups using any [utility tools](../migrate/how-to-decide-on-right-migration-tools.md). Refer [Supported](././concepts-limitations.md#supported-1) section for list of supported [dynamic privileges](https://dev.mysql.com/doc/refman/8.0/en/privileges-provided.html#privileges-provided-dynamic).+ ### Supported
operator-nexus Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-storage.md
+
+ Title: Azure Operator Nexus storage appliance
+description: Overview of storage appliance resources for Azure Operator Nexus.
++++ Last updated : 06/29/2023+++
+# Azure Operator Nexus storage appliance
+
+Operator Nexus is built on basic constructs like compute servers, storage appliances, and network fabric devices. These storage appliances, also referred to as Nexus storage appliances, provide the persistent storage in the rack. In each Nexus storage appliance, multiple storage devices are aggregated to provide a single storage pool. This storage pool is then carved out into multiple volumes, which are presented to the compute servers as block storage devices. The compute servers can use these block storage devices as persistent storage for their workloads. Each Nexus cluster is provisioned with a single storage appliance that is shared across all the tenant workloads.
+
+The storage appliance within an Operator Nexus instance is represented as an Azure resource, and operators (end users) can view its attributes like any other Azure resource.
+
+## Key capabilities offered in Azure Operator Nexus Storage software stack
+
+## Kubernetes storage classes
+
+The Nexus Software Kubernetes stack offers two types of storage, selectable using the Kubernetes StorageClass mechanism.
+
+#### **StorageClass: "nexus-volume"**
+
+The default storage mechanism, known as "nexus-volume," is the preferred choice for most users. It provides the highest levels of performance and availability. However, it's important to note that volumes can't be simultaneously shared across multiple worker nodes. These volumes can be accessed and managed using the Azure API and Portal through the Volume Resource.
+
+#### **StorageClass: "nexus-shared"**
+
+In situations where a "shared filesystem" is required, the "nexus-shared" storage class is available. This storage class enables multiple pods to concurrently access and share the same volume, providing a shared storage solution. While the performance and availability of "nexus-shared" are sufficient for most applications, it's recommended that workloads with heavy IO (input/output) requirements utilize the "nexus-volume" option mentioned earlier for optimal performance.
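As an illustration of choosing between the two classes, a PersistentVolumeClaim created with the Kubernetes Python client might look like this sketch (namespace, claim name, and size are placeholders):

```python
# Sketch: request a shared volume by selecting the "nexus-shared" storage class.
# Uses the kubernetes Python client; swap in "nexus-volume" (with a
# single-node access mode such as ReadWriteOnce) for the default class.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-pvc", namespace="default"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],  # shared across pods/nodes
        storage_class_name="nexus-shared",
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim("default", pvc)
```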
+
+## Storage appliance status
+
+Multiple properties reflect the operational state of the storage appliance, including:
+
+- Status
+- Provisioning state
+- Capacity total / used
+- Remote Vendor Management
+
+The _`Status`_ field indicates the state as derived from the storage appliance. The state can be Available, Error, or Provisioning.
+
+The _`Provisioning State`_ field provides the current provisioning state of the storage appliance. The provisioning state can be Succeeded, Failed, or InProgress.
+
+The _`Capacity`_ field provides the total and used capacity of the storage appliance.
+
+The _`Remote Vendor Management`_ field indicates whether the remote vendor management is enabled or disabled for the storage appliance.
+
+## Storage appliance operations
+- **List Storage Appliances**: List storage appliances in the provided resource group or subscription.
+- **Show Storage Appliance**: Get the properties of the provided storage appliance.
+- **Update Storage Appliance**: Update the properties or tags of the provided storage appliance.
+- **Enable/Disable Remote Vendor Management for Storage Appliance**: Enable or disable remote vendor management for the provided storage appliance.
+
+> [!NOTE]
+> Customers can't explicitly create or delete storage appliances directly. These resources are created only as part of the Cluster lifecycle. The implementation blocks create or delete requests from any user, and only allows internal, application-driven creates and deletes.
operator-nexus Reference Near Edge Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-near-edge-storage.md
+
+ Title: Azure Operator Nexus Storage Appliance Overview
+description: Storage Appliance SKUs and resources available in Azure Operator Nexus Near-edge.
++++ Last updated : 06/29/2023+++
+# Near-edge Nexus storage appliance
+
+The architecture of Azure Operator Nexus revolves around core components such as compute servers, storage appliances, and network fabric devices. A single storage appliance, referred to as the "Nexus Storage Appliance," is attached to each near-edge Nexus instance. These appliances play a vital role as the dedicated, persistent storage solution for the tenant workloads hosted within the Nexus instance.
+
+Within each Nexus storage appliance, multiple storage devices are grouped together to form a unified storage pool. This pool is then divided into multiple volumes, which are then presented to the compute servers and tenant workloads as persistent volumes.
+
+## SKUs available
+
+This table lists the SKUs available for the storage appliance in the Near-edge Nexus offering:
+
+| SKU | Description |
+| -- | - |
+| Pure x70r3-91 | Storage appliance model x70r3-91 provided by Pure Storage |
+
+## Storage connectivity
+
+This diagram shows the connectivity model followed by the storage appliance in the Near-edge offering:
++
+## Storage limits
+
+This table lists the characteristics for the storage appliance:
+
+| Property | Specification/Description |
+| -- | -|
+| Raw storage capacity | 91 TB |
+| Usable capacity | 50 TB |
+| Maximum number of IO operations supported per second <br>(with 80/20 R/W ratio) | 250K+ (4K) <br>150K+ (16K) |
+| Number of IO operations supported per volume per second | 50K+ |
+| Maximum IO latency supported | 10 ms |
+| Nominal failover time supported | 10 s |
postgresql Quickstart Create Server Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-python-sdk.md
Last updated 04/24/2023
In this quickstart, you'll learn how to use the [Azure libraries (SDK) for Python](/azure/developer/python/sdk/azure-sdk-overview?view=azure-python&preserve-view=true) to create an Azure Database for PostgreSQL - Flexible Server.
-Flexible server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use Python SDK to provision a PostgreSQL Flexible Server, multiple servers or multiple databases on a server.
+Flexible Server is a managed service that you use to run, manage, and scale highly available PostgreSQL databases in the cloud. You can use the Python SDK to provision a PostgreSQL Flexible Server, multiple servers, or multiple databases on a server.
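A minimal sketch of such a provisioning call (assuming the `azure-mgmt-rdbms` package; region, SKU, and credentials are placeholders):

```python
# Sketch: create a PostgreSQL flexible server with the Python management SDK.
# Assumes azure-mgmt-rdbms and azure-identity; all values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.rdbms.postgresql_flexibleservers import PostgreSQLManagementClient
from azure.mgmt.rdbms.postgresql_flexibleservers.models import Server, Sku, Storage

client = PostgreSQLManagementClient(DefaultAzureCredential(), "<SUBSCRIPTION_ID>")

server = client.servers.begin_create(
    resource_group_name="<RESOURCE_GROUP>",
    server_name="<SERVER_NAME>",
    parameters=Server(
        location="eastus",
        sku=Sku(name="Standard_D2s_v3", tier="GeneralPurpose"),
        administrator_login="<ADMIN_USER>",
        administrator_login_password="<ADMIN_PASSWORD>",
        version="14",
        storage=Storage(storage_size_gb=128),
        create_mode="Create",
    ),
).result()
print(server.fully_qualified_domain_name)
```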
## Prerequisites
sap Sap Hana High Availability Netapp Files Red Hat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md
Azure NetApp Files is available in several [Azure regions](https://azure.microso
For information about the availability of Azure NetApp Files by Azure region, see [Azure NetApp Files Availability by Azure Region](https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all).
+### Important considerations
+
+As you are creating your Azure NetApp Files volumes for SAP HANA Scale-up systems, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
+
+### Sizing of HANA database on Azure NetApp Files
+
+The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
+
+While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
+ ### Deploy Azure NetApp Files resources The following instructions assume that you've already deployed your [Azure virtual network](../../virtual-network/virtual-networks-overview.md). The Azure NetApp Files resources and VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same Azure virtual network or in peered Azure virtual networks.
The following instructions assume that you've already deployed your [Azure virtu
- Volume hanadb2-log-mnt00001 (nfs://10.32.2.4:/hanadb2-log-mnt00001) - Volume hanadb2-shared-mnt00001 (nfs://10.32.2.4:/hanadb2-shared-mnt00001)
-### Important considerations
-
-As you are creating your Azure NetApp Files volumes for SAP HANA Scale-up systems, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
-
-### Sizing of HANA database on Azure NetApp Files
-
-The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
-
-While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
- > [!NOTE] > All commands to mount /hana/shared in this article are presented for NFSv4.1 /hana/shared volumes. > If you deployed the /hana/shared volumes as NFSv3 volumes, don't forget to adjust the mount commands for /hana/shared for NFSv3.
sap Sap Hana Scale Out Standby Netapp Files Rhel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-rhel.md
Azure NetApp Files is available in several [Azure regions](https://azure.microso
For information about the availability of Azure NetApp Files by Azure region, see [Azure NetApp Files Availability by Azure Region][anf-avail-matrix].
+### Important considerations
+
+As you're creating your Azure NetApp Files volumes for the SAP HANA scale-out with standby nodes scenario, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
+
+### Sizing for HANA database on Azure NetApp Files
+
+The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
+
+While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
+ ### Deploy Azure NetApp Files resources The following instructions assume that you've already deployed your [Azure virtual network](../../virtual-network/virtual-networks-overview.md). The Azure NetApp Files resources and VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same Azure virtual network or in peered Azure virtual networks. + 1. Create a NetApp account in your selected Azure region by following the instructions in [Create a NetApp account](../../azure-netapp-files/azure-netapp-files-create-netapp-account.md). 2. Set up an Azure NetApp Files capacity pool by following the instructions in [Set up an Azure NetApp Files capacity pool](../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md).
The following instructions assume that you've already deployed your [Azure virtu
* volume **HN1**-log-mnt00002 (nfs://10.9.0.4/**HN1**-log-mnt00002) * volume **HN1**-shared (nfs://10.9.0.4/**HN1**-shared)
- In this example, we used a separate Azure NetApp Files volume for each HANA data and log volume. For a more cost-optimized configuration on smaller or non-productive systems, it's possible to place all data mounts on a single volume and all logs mounts on a different single volume.
-
-### Important considerations
-
-As you're creating your Azure NetApp Files volumes for SAP HANA scale-out with stand by nodes scenario, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
-
-### Sizing for HANA database on Azure NetApp Files
-
-The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
-
-While designing the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
+ In this example, we used a separate Azure NetApp Files volume for each HANA data and log volume. For a more cost-optimized configuration on smaller or non-productive systems, it's possible to place all data mounts on a single volume and all logs mounts on a different single volume.
## Deploy Linux virtual machines via the Azure portal
sap Sap Hana Scale Out Standby Netapp Files Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-scale-out-standby-netapp-files-suse.md
Azure NetApp Files is available in several [Azure regions](https://azure.microso
For information about the availability of Azure NetApp Files by Azure region, see [Azure NetApp Files Availability by Azure Region][anf-avail-matrix].
+### Important considerations
+
+As you're creating your Azure NetApp Files volumes for the SAP NetWeaver on SUSE High Availability architecture, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
+
+### Sizing for HANA database on Azure NetApp Files
+
+The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
+
+As you design the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
+ ### Deploy Azure NetApp Files resources The following instructions assume that you've already deployed your [Azure virtual network](../../virtual-network/virtual-networks-overview.md). The Azure NetApp Files resources and VMs, where the Azure NetApp Files resources will be mounted, must be deployed in the same Azure virtual network or in peered Azure virtual networks. + 1. Create a NetApp account in your selected Azure region by following the instructions in [Create a NetApp account](../../azure-netapp-files/azure-netapp-files-create-netapp-account.md). 2. Set up an Azure NetApp Files capacity pool by following the instructions in [Set up an Azure NetApp Files capacity pool](../../azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md).
The following instructions assume that you've already deployed your [Azure virtu
In this example, we used a separate Azure NetApp Files volume for each HANA data and log volume. For a more cost-optimized configuration on smaller or non-productive systems, it's possible to place all data mounts and all logs mounts on a single volume.
-### Important considerations
-
-As you're creating your Azure NetApp Files for SAP NetWeaver on SUSE High Availability architecture, be aware of the important considerations documented in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#important-considerations).
-
-### Sizing for HANA database on Azure NetApp Files
-
-The throughput of an Azure NetApp Files volume is a function of the volume size and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
-
-As you design the infrastructure for SAP HANA on Azure with Azure NetApp Files, be aware of the recommendations in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md#sizing-for-hana-database-on-azure-netapp-files).
## Deploy Linux virtual machines via the Azure portal
security Key Management Choose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/key-management-choose.md
Here is a list of the key management solutions we commonly see being utilized ba
**Azure Managed HSM**: A FIPS 140-2 Level 3 validated single-tenant HSM offering that gives customers full control of an HSM for encryption-at-rest, Keyless SSL/TLS offload, and custom applications. Azure Managed HSM is the only key management solution offering confidential keys. Customers receive a pool of three HSM partitions (together acting as one logical, highly available HSM appliance) fronted by a service that exposes crypto functionality through the Key Vault API. Microsoft handles the provisioning, patching, maintenance, and hardware failover of the HSMs, but doesn't have access to the keys themselves, because the service executes within Azure's Confidential Compute Infrastructure. Azure Managed HSM is integrated with the Azure SQL, Azure Storage, and Azure Information Protection PaaS services and offers support for Keyless TLS with F5 and Nginx. For more information, see [What is Azure Key Vault Managed HSM?](../../key-vault/managed-hsm/overview.md)
-**Azure Dedicated HSM**: A FIPS 140-2 Level 3 validated single-tenant bare metal HSM offering that lets customers lease a general-purpose HSM appliance that resides in Microsoft datacenters. The customer has complete ownership over the HSM device and is responsible for patching and updating the firmware when required. Microsoft has no permissions on the device or access to the key material, and Azure Dedicated HSM is not integrated with any Azure PaaS offerings. Customers can interact with the HSM using the PKCS#11, JCE/JCA, and KSP/CNG APIs. This offering is most useful for legacy lift-and-shift workloads, PKI, SSL Offloading and Keyless TLS (supported integrations include F5, Nginx, Apache, Palo Alto, IBM GW and more), OpenSSL applications, Oracle TDE, and Azure SQL TDE IaaS. For more information, see [What is Azure Key Vault Managed HSM?](../../dedicated-hsm/overview.md)
+**Azure Dedicated HSM**: A FIPS 140-2 Level 3 validated single-tenant bare metal HSM offering that lets customers lease a general-purpose HSM appliance that resides in Microsoft datacenters. The customer has complete ownership over the HSM device and is responsible for patching and updating the firmware when required. Microsoft has no permissions on the device or access to the key material, and Azure Dedicated HSM is not integrated with any Azure PaaS offerings. Customers can interact with the HSM using the PKCS#11, JCE/JCA, and KSP/CNG APIs. This offering is most useful for legacy lift-and-shift workloads, PKI, SSL Offloading and Keyless TLS (supported integrations include F5, Nginx, Apache, Palo Alto, IBM GW and more), OpenSSL applications, Oracle TDE, and Azure SQL TDE IaaS. For more information, see [What is Azure Dedicated HSM?](../../dedicated-hsm/overview.md)
**Azure Payment HSM**: A FIPS 140-2 Level 3, PCI HSM v3, validated single-tenant bare metal HSM offering that lets customers lease a payment HSM appliance in Microsoft datacenters for payments operations, including payment processing, payment credential issuing, securing keys and authentication data, and sensitive data protection. The service is PCI DSS, PCI 3DS, and PCI PIN compliant. Azure Payment HSM offers single-tenant HSMs for customers to have complete administrative control and exclusive access to the HSM. Once the HSM is allocated to a customer, Microsoft has no access to customer data. Likewise, when the HSM is no longer required, customer data is zeroized and erased as soon as the HSM is released, to ensure complete privacy and security is maintained. For more information, see [About Azure Payment HSM](../../payment-hsm/overview.md).
spring-apps Diagnostic Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/diagnostic-services.md
To get started, enable one of these services to receive the data. To learn about
* **Archive to a storage account** * **Stream to an event hub** * **Send to Log Analytics**
+ * **Send to partner solution**
1. Choose which log category and metric category you want to monitor, and then specify the retention time (in days). The retention time applies only to the storage account. 1. Select **Save**.
spring-apps How To Deploy In Azure Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-in-azure-virtual-network.md
**This article applies to:** ✔️ Java ✔️ C#
-**This article applies to:** ✔️ Basic/Standard ✔️ Enterprise
+**This article applies to:** ❌ Basic ✔️ Standard ✔️ Enterprise
This tutorial explains how to deploy an Azure Spring Apps instance in your virtual network. This deployment is sometimes called VNet injection.
storage Storage Blob Use Access Tier Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-use-access-tier-dotnet.md
+
+ Title: Set or change a blob's access tier with .NET
+
+description: Learn how to set or change a blob's access tier in your Azure Storage account using the .NET client library.
++++++ Last updated : 07/03/2023+
+ms.devlang: csharp
+++
+# Set or change a block blob's access tier with .NET
+
+This article shows how to set or change the access tier for a block blob using the [Azure Storage client library for .NET](/dotnet/api/overview/azure/storage).
+
+## Prerequisites
+
+To work with the code examples in this article, make sure you have:
+
+- An authorized client object to connect to Blob Storage data resources. To learn more, see [Create and manage client objects that interact with data resources](storage-blob-client-management.md).
+- Permissions to perform an operation to set the blob's access tier. To learn more, see the authorization guidance for the following REST API operation:
+ - [Set Blob Tier](/rest/api/storageservices/set-blob-tier#authorization)
+- The package **Azure.Storage.Blobs** installed to your project directory. To learn more about setting up your project, see [Get Started with Azure Storage and .NET](storage-blob-dotnet-get-started.md#set-up-your-project).
++
+> [!NOTE]
+> To set the access tier to `Cold` using .NET, you must use a minimum [client library](/dotnet/api/azure.storage.blobs) version of 12.15.0.
+
+## Set a blob's access tier during upload
+
+You can set a blob's access tier on upload by using the [BlobUploadOptions](/dotnet/api/azure.storage.blobs.models.blobuploadoptions) class. The following code example shows how to set the access tier when uploading a blob:
++
+To learn more about uploading a blob with .NET, see [Upload a blob with .NET](storage-blob-upload.md).
+
+## Change the access tier for an existing block blob
+
+You can change the access tier of an existing block blob by using one of the following functions:
+
+- [SetAccessTier](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.setaccesstier)
+- [SetAccessTierAsync](/dotnet/api/azure.storage.blobs.specialized.blobbaseclient.setaccesstierasync)
+
+The following code example shows how to change the access tier for an existing blob to `Cool`:
++
+If you are rehydrating an archived blob, you can optionally set the `rehydratePriority` parameter to `High` or `Standard`.
+
+## Copy a blob to a different access tier
+
+You can change the access tier of an existing block blob by specifying an access tier as part of a copy operation. To change the access tier during a copy operation, use the [BlobCopyFromUriOptions](/dotnet/api/azure.storage.blobs.models.blobcopyfromurioptions) class and specify the [AccessTier](/dotnet/api/azure.storage.blobs.models.blobcopyfromurioptions.accesstier#azure-storage-blobs-models-blobcopyfromurioptions-accesstier) property. If you're rehydrating a blob from the archive tier using a copy operation, you can optionally set the [RehydratePriority](/dotnet/api/azure.storage.blobs.models.blobcopyfromurioptions.rehydratepriority#azure-storage-blobs-models-blobcopyfromurioptions-rehydratepriority) property to `High` or `Standard`.
+
+The following code example shows how to rehydrate an archived blob to the `Hot` tier using a copy operation:
++
+## Resources
+
+To learn more about setting access tiers using the Azure Blob Storage client library for .NET, see the following resources.
+
+### REST API operations
+
+The Azure SDK for .NET contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar .NET paradigms. The client library methods for setting access tiers use the following REST API operation:
+
+- [Set Blob Tier](/rest/api/storageservices/set-blob-tier) (REST API)
++
+### Code samples
+
+- [View code samples from this article (GitHub)](https://github.com/Azure-Samples/AzureStorageSnippets/blob/master/blobs/howto/dotnet/BlobDevGuideBlobs/AccessTiers.cs)
+
+### See also
+
+- [Access tiers best practices](access-tiers-best-practices.md)
+- [Blob rehydration from the Archive tier](archive-rehydrate-overview.md)
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
Title: Quickstart for installing Azure Container Storage Preview for use with Azure Kubernetes Service (AKS) description: Learn how to install Azure Container Storage Preview for use with Azure Kubernetes Service. Create an AKS cluster, label the node pool, and install the Azure Container Storage extension. -+ Previously updated : 06/29/2023 Last updated : 07/03/2023 -
storage Container Storage Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-faq.md
Title: Frequently asked questions (FAQ) for Azure Container Storage description: Get answers to Azure Container Storage frequently asked questions. - Previously updated : 06/22/2023+ Last updated : 07/03/2023 -
storage Container Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md
Title: Introduction to Azure Container Storage Preview description: An overview of Azure Container Storage Preview, a service built natively for containers that enables customers to create and manage volumes for running production-scale stateful container applications. -+ Previously updated : 06/09/2023 Last updated : 07/03/2023 -
storage Remove Container Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/remove-container-storage.md
Title: How to remove Azure Container Storage description: Remove Azure Container Storage by deleting the extension instance for Azure Kubernetes Service (AKS). Optionally delete the AKS cluster or entire resource group to clean up resources. - Previously updated : 06/22/2023+ Last updated : 07/03/2023 -
storage Use Container Storage With Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-elastic-san.md
Title: Use Azure Container Storage Preview with Azure Elastic SAN Preview description: Configure Azure Container Storage Preview for use with Azure Elastic SAN Preview. Create a storage pool, select a storage class, create a persistent volume claim, and attach the persistent volume to a pod. -+ Previously updated : 06/29/2023 Last updated : 07/03/2023 -
storage Use Container Storage With Local Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-local-disk.md
Title: Use Azure Container Storage Preview with Ephemeral Disk description: Configure Azure Container Storage Preview for use with Ephemeral Disk (NVMe). Create a storage pool, select a storage class, create a persistent volume claim, and attach the persistent volume to a pod. -+ Previously updated : 06/29/2023 Last updated : 07/03/2023 -
storage Use Container Storage With Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-managed-disks.md
Title: Use Azure Container Storage Preview with Azure managed disks description: Configure Azure Container Storage Preview for use with Azure managed disks. Create a storage pool, select a storage class, create a persistent volume claim, and attach the persistent volume to a pod. -+ Previously updated : 06/29/2023 Last updated : 07/03/2023 -
update-center Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md
To review the logs related to all actions performed by the extension, check for
* `WindowsUpdateExtension.log`: Contains details related to the patch actions, such as the patches assessed and installed on the machine, and any issues encountered in the process. * `CommandExecution.log`: There is a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains details about the wrapper. For Auto-Patching, the log has details on whether the specific patch operation was invoked.
+## Unable to change the patch orchestration option to manual updates from automatic updates
+
+### Issue
+
+The Azure machine's patch orchestration option is set to AutomaticByOS (Windows automatic updates), and you're unable to change it to Manual Updates by using **Change update settings**.
+
+### Resolution
+
+If you don't want any patch installation to be orchestrated by Azure, or you aren't using custom patching solutions, change the patch orchestration option to **Customer Managed Schedules (Preview)** and don't associate a schedule or maintenance configuration with the machine. This ensures that no patching is performed on the machine until you explicitly change the option. For more information, see [User scenarios](prerequsite-for-schedule-patching.md#user-scenarios).
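+
+As a rough sketch, the same change can be made with Azure CLI. The resource names here are placeholders, and the patch-settings property path is an assumption based on the VM patch settings API rather than a command taken from this article:
+
+```azurecli
+# Sketch: switch a Windows VM to platform orchestration that waits for a
+# customer managed schedule; with no schedule attached, no patching occurs.
+az vm update \
+  --resource-group myResourceGroup \
+  --name myVM \
+  --set osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform \
+  --set osProfile.windowsConfiguration.patchSettings.automaticByPlatformSettings.bypassPlatformSafetyChecksOnUserSchedule=true
+```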
++
## Machine shows as "Not assessed" and shows an HRESULT exception

### Issue
virtual-desktop Whats New Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md
description: Learn about new and updated articles to the Azure Virtual Desktop d
Previously updated : 05/26/2023 Last updated : 07/03/2023 # What's new in documentation for Azure Virtual Desktop We update documentation for Azure Virtual Desktop on a regular basis. In this article we highlight articles for new features and where there have been important updates to existing articles.
+## June 2023
+
+In June 2023, we published the following changes:
+
+- Updated [Use Azure Virtual Desktop Insights](insights.md) to use the Azure Monitor Agent.
+- Updated [Supported features for Microsoft Teams on Azure Virtual Desktop](teams-supported-features.md) to include simulcast, mirror my video, manage breakout rooms, and the call health panel.
+- New article to [Assign RBAC roles to the Azure Virtual Desktop service principal](service-principal-assign-roles.md).
+- Added Intune to [Administrative template for Azure Virtual Desktop](administrative-template.md).
+- Updated [Configure single sign-on using Azure AD Authentication](configure-single-sign-on.md) to include how to use an Active Directory domain admin account with single sign-on, and highlight the need to create a Kerberos server object.
+
## May 2023

In May 2023, we published the following changes:
virtual-machine-scale-sets Spot Priority Mix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/spot-priority-mix.md
Previously updated : 03/09/2023 Last updated : 07/01/2023
Azure allows you to have the flexibility of running a mix of uninterruptible sta
- Provide reassurance that all your VMs won't be taken away simultaneously due to evictions before the infrastructure has time to react and recover the evicted capacity - Simplify the scale-out and scale-in of compute workloads that require both Spot and standard VMs by letting Azure orchestrate the creation and deletion of VMs
+## Limitations
+Spot Priority Mix isn't supported when `singlePlacementGroup` is enabled on the scale set.
+
## Configure your mix

You can configure a custom percentage distribution across Spot and standard VMs. The platform automatically orchestrates each scale-out and scale-in operation to achieve the desired distribution by selecting an appropriate number of VMs to create or delete. You can also optionally configure the number of base standard VMs you would like to maintain in the Virtual Machine Scale Set during any scale operation.
-### [Template](#tab/template-1)
+The eviction policy of your Spot VMs follows what is set for the Spot VMs in your scale set. *Deallocate* is the default behavior, wherein evicted Spot VMs move to a stop-deallocated state. Alternatively, the Spot eviction policy can be set to *Delete*, wherein the VM and its underlying disks are deleted.
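+
+For example, in an ARM template the priority and eviction policy sit on the scale set's VM profile. The following is a minimal, illustrative fragment, not a complete profile (a `maxPrice` of -1 caps the Spot price at the on-demand rate):
+
+```json
+"virtualMachineProfile": {
+    "priority": "Spot",
+    "evictionPolicy": "Deallocate",
+    "billingProfile": {
+        "maxPrice": -1
+    }
+}
+```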
+
+### [ARM template](#tab/template-1)
-You can set your Spot Priority Mix by using a template to add the following properties to a scale set with Flexible orchestration using a Spot priority VM profile:
+You can set your Spot Priority Mix by using an ARM template to add the following properties to a scale set with Flexible orchestration using a Spot priority VM profile:
```json "priorityMixPolicy": {
You can set your Spot Priority Mix by using a template to add the following prop
**Parameters:**
+
- `baseRegularPriorityCount` – Specifies a base number of VMs that are standard, *Regular* priority; if the scale set capacity is at or below this number, all VMs are *Regular* priority.
- `regularPriorityPercentageAboveBase` – Specifies the percentage split of *Regular* and *Spot* priority VMs that are used when the scale set capacity is above the *baseRegularPriorityCount*.
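Putting the two properties together, a complete `priorityMixPolicy` block with a base of 10 standard VMs and a 50 percent standard share above that base would look like the following (the values are illustrative):

```json
"priorityMixPolicy": {
    "baseRegularPriorityCount": 10,
    "regularPriorityPercentageAboveBase": 50
}
```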
You can set your Spot Priority Mix in the Scaling tab of the Virtual Machine Sca
1. In the search bar, search for and select **Virtual Machine Scale Sets**. 1. Select **Create** on the **Virtual Machine Scale Sets** page. 1. In the **Basics** tab, fill out the required fields, select **Flexible** as the **Orchestration** mode, and select the checkbox for **Run with Azure Spot discount**.
-1. Fill out the **Disks** and **Networking** tabs.
-1. In the **Spot** tab, select the check-box next to *Scale with VMs and Spot VMs* option under the **Scale with VMs and discounted Spot VMs** section.
-1. Fill out the **Base VM (uninterruptible) count** and **Instance distribution** fields to configure your priorities.
+1. In the **Spot** tab, select the checkbox next to the *Scale with VMs and Spot VMs* option under the **Scale with VMs and discounted Spot VMs** section.
+1. Fill out the **Base VM (uninterruptible) count** and **Instance distribution** fields to configure your percentage split between Spot and Standard VMs.
1. Continue through the Virtual Machine Scale Set creation process. ### [Azure CLI](#tab/cli-1)
az vmss create -n myScaleSet \
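For example, a minimal sketch of a command that creates a Flexible scale set with a 10-VM standard base and a 50/50 split above it (the resource names and image are placeholders; the mix flags mirror the `az vmss update` example later in this article):

```azurecli
# Sketch: Flexible scale set with a Spot Priority Mix.
az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --orchestration-mode Flexible \
  --image Ubuntu2204 \
  --priority Spot \
  --eviction-policy Deallocate \
  --instance-count 20 \
  --regular-priority-count 10 \
  --regular-priority-percentage 50
```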
### [Azure PowerShell](#tab/powershell-1)
-You can set your Spot Priority Mix using Azure PowerShell by setting the `Priority` flag to `Spot` and including the `BaseRegularPriorityCount` and `RegularPriorityPercentage` flags.
+You can set your Spot Priority Mix using Azure PowerShell by setting the `Priority` parameter to `Spot` and including the `BaseRegularPriorityCount` and `RegularPriorityPercentage` parameters.
```azurepowershell $vmssConfig = New-AzVmssConfig `
New-AzVmss `
+## Updating your Spot Priority Mix
+Should your ideal percentage split of Spot and standard VMs change, you can update your Spot Priority Mix after your scale set has been deployed. Updating your Spot Priority Mix applies to all scale set actions *after* the change is made; existing VMs remain as they are.
+
+### [Portal](#tab/portal-2)
+You can update your existing Spot Priority Mix from the Configuration tab of the Virtual Machine Scale Set resource page in the Azure portal. The following steps show how. Note: in the portal, you can only update the Spot Priority Mix for scale sets that already have Spot Priority Mix enabled.
+
+1. Navigate to the specific virtual machine scale set that you're adjusting the Spot Priority Mix on.
+1. In the left side bar, scroll down to and select **Configuration**.
+1. Your current Spot Priority Mix should be visible. Here you can change the **Base VM (uninterruptible) count** and **Instance distribution** of Spot and Standard VMs.
+1. Update your Spot Mix as needed.
+1. Select **Save** to apply your changes.
+
+### [Azure CLI](#tab/cli-2)
+
+You can update your Spot Priority Mix using Azure CLI by updating the `regular-priority-count` and `regular-priority-percentage` parameters.
+
+```azurecli
+az vmss update --resource-group myResourceGroup \
+ --name myScaleSet \
+ --regular-priority-count 10 \
+ --regular-priority-percentage 80
+```
+
+### [Azure PowerShell](#tab/powershell-2)
+
+You can update your Spot Priority Mix using Azure PowerShell by updating the `BaseRegularPriorityCount` and `RegularPriorityPercentage` parameters.
+
+```azurepowershell
+$vmss = Get-AzVmss `
+ -ResourceGroupName "myResourceGroup" `
+ -VMScaleSetName "myScaleSet"
+
+Update-AzVmss `
+ -ResourceGroupName "myResourceGroup" `
+ -VirtualMachineScaleSet $vmss `
+ -VMScaleSetName "myScaleSet" `
+ -BaseRegularPriorityCount 10 `
+ -RegularPriorityPercentage 80
+
+```
+++

## Examples

The following examples have scenario assumptions, a table of actions, and a walk-through of results to help you understand how Spot Priority Mix configuration works. Some important terminology to notice before referring to these examples:
+
- **sku.capacity** is the total number of VMs in the Virtual Machine Scale Set
- **Base (standard) VMs** are the number of standard non-Spot VMs, akin to a minimum VM number

### Scenario 1

The following scenario assumptions apply to this example:
-- **sku.capacity** is variable, as the autoscaler will add or remove VMs from the scale set
+- **sku.capacity** is variable, as Autoscale will add or remove VMs from the scale set
- **Base (standard) VMs:** 10
- **Extra standard VMs:** 0
- **Spot priority VMs:** 0
- **regularPriorityPercentageAboveBase:** 50%
- **Eviction policy:** Delete
-| Action | sku.capacity | Base (standard) VMs | Extra standard VMs | Spot priority VMs |
-||||||
-| Create | 10 | 10 | 0 | 0 |
-| Scale out | 20 | 10 | 5 | 5 |
-| Scale out | 30 | 10 | 10 | 10 |
-| Scale out | 40 | 10 | 15 | 15 |
-| Scale out | 41 | 10 | 15 | 16 |
-| Scale out | 42 | 10 | 16 | 16 |
-| Evict-Delete (all Spot instances) | 26 | 10 | 16 | 0 |
-| Scale out | 30 | 10 | 16 | 4 |
-| Scale out | 42 | 10 | 16 | 16 |
-| Scale out | 44 | 10 | 17 | 17 |
+| Action | sku.capacity | Base (standard) VMs | Extra standard VMs | Spot priority VMs |
+|--|--|--|--|--|
+| Create | 10 | 10 | 0 | 0 |
+| Scale out | 20 | 10 | 5 | 5 |
+| Scale out | 30 | 10 | 10 | 10 |
+| Scale out | 40 | 10 | 15 | 15 |
+| Scale out | 41 | 10 | 15 | 16 |
+| Scale out | 42 | 10 | 16 | 16 |
+| Scale in - Evict-Delete (all Spot instances) | 26 | 10 | 16 | 0 |
+| Scale out | 30 | 10 | 16 | 4 |
+| Scale out | 42 | 10 | 16 | 16 |
+| Scale out | 44 | 10 | 17 | 17 |
Example walk-through:

1. You start out with a Virtual Machine Scale Set with 10 VMs.
   - The `sku.capacity` is variable and doesn't set a starting number of VMs. The Base VMs are set at 10, so your total starting VMs are just the 10 Base (standard) VMs.
1. You then scale out 5 times, with 50% standard VMs and 50% Spot VMs above the base.
   - Note, because there's a 50/50 split, in the fourth scale-out there's one more Spot VM than standard VM. Once it's scaled out again (fifth scale-out), the 50/50 balance is restored with another standard VM. For example, at `sku.capacity` 42, the 32 VMs above the 10-VM base split into 16 standard and 16 Spot VMs.
-1. You then scale in your scale set with the eviction policy being delete, which deletes all the Spot VMs.
+1. You then scale in your scale set with the eviction policy being *evict-delete*, which deletes all the Spot VMs.
1. With the scale-out operations mentioned in this scenario, you restore the 50/50 balance in your scale set by only creating Spot VMs.
1. By the last scale-out, your scale set is already balanced, so one of each type of VM is created.
The following scenario assumptions apply to this example:
- **regularPriorityPercentageAboveBase:** 25%
- **Eviction policy:** Deallocate
-| Action | sku.capacity | Base (standard) VMs | Extra standard VMs | Spot priority VMs |
-||||||
-| Create | 20 | 10 | 2 | 8 |
-| Scale out | 50 | 10 | 10 | 30 |
-| Scale out | 110 | 10 | 25 | 75 |
-| Evict-Delete (10 instances) | 100 | 10 | 25 | 75 (65 running VMs, 10 Stop-Deallocated VMs) |
-| Scale out | 120 | 10 | 27 | 83 (73 running VMs, 10 Stop-Deallocated VMs) |
+| Action | sku.capacity | Base (standard) VMs | Extra standard VMs | Spot priority VMs |
+|--|--|--|--|--|
+| Create | 20 | 10 | 2 | 8 |
+| Scale out | 50 | 10 | 10 | 30 |
+| Scale out | 110 | 10 | 25 | 75 |
+| Scale in: Stop-Deallocate (10 instances) | 100 | 10 | 25 | 75 (65 running VMs, 10 Stop-Deallocated VMs) |
+| Scale out | 120 | 10 | 27 | 83 (73 running VMs, 10 Stop-Deallocated VMs) |
+
Example walk-through:

1. With the initial creation of the Virtual Machine Scale Set and Spot Priority Mix, you have 20 VMs.
   - 10 of those VMs are the Base (standard) VMs, plus 2 extra standard VMs and 8 Spot priority VMs for your 25% *regularPriorityPercentageAboveBase*.
   - Another way to look at this ratio is that above the base you have 1 standard VM for every 4 Spot VMs in the scale set.
-1. You then scale-out twice to create 90 more VMs; 23 standard VMs and 67 Spot VMs.
+1. You then scale out twice to create 90 more VMs: 23 standard VMs and 67 Spot VMs.
1. When you scale in by 10 VMs, 10 Spot VMs are *stop-deallocated*, creating an imbalance in your scale set.
-1. Your next scale-out operation creates another 2 standard VMs and 8 Spot VMs, bringing you closer to your 25% above base ratio.
+1. Your next scale out operation creates another 2 standard VMs and 8 Spot VMs, bringing you closer to your 25% above base ratio.
## Troubleshooting
-If Spot Priority Mix is not available to you, be sure to configure the `priorityMixPolicy` to specify a *Spot* priority in the `virtualMachineProfile`. Without that configuration, you will not be able to access this Spot feature.
+If Spot Priority Mix isn't available to you, be sure to configure the `priorityMixPolicy` to specify a *Spot* priority in the `virtualMachineProfile`. Without enabling the `priorityMixPolicy` setting, you won't be able to access this Spot feature.
+
+## FAQs
+### Q: I changed the Spot Priority Mix settings. Why aren't my existing VMs changing?
+Spot Priority Mix applies to scale actions on the scale set. Changing the percentage split of Spot and standard VMs won't rebalance the existing scale set. You'll see the actual percentage split change as you scale the scale set.
+
+### Q: Is Spot Priority Mix enabled for Uniform orchestration mode?
+Spot Priority Mix is only available on Virtual Machine Scale Sets with Flexible orchestration mode.
+
+### Q: Which regions is Spot Priority Mix enabled in?
+Spot VMs, and therefore Spot Priority Mix, are available in all global Azure regions.
## Next steps
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
Depending on how you delete a VM, it may only delete the VM resource, not the ne
1. Open the [portal](https://portal.azure.com). 1. Select **+ Create a resource**. 1. On the **Create a resource** page, under **Virtual machines**, select **Create**.
-1. Make your choices on the **Basics**, then select **Next : Disks >**. The **Disks** tab will open.
+1. Make your choices on the **Basics** tab, then select **Next : Disks >** to open the **Disks** tab.
1. Under **Disk options**, by default the OS disk is set to **Delete with VM**. If you don't want to delete the OS disk, clear the checkbox. If you're using an existing OS disk, the default is to detach the OS disk when the VM is deleted. :::image type="content" source="media/delete/delete-disk.png" alt-text="Screenshot checkbox to choose to have the disk deleted when the VM is deleted."::: 1. Under **Data disks**, you can either attach an existing data disk or create a new disk and attach it to the VM.
- - If you choose **Create and attach a new disk**, the **Create a new disk** page will open and you can select whether to delete the disk when you delete the VM.
+ - If you choose **Create and attach a new disk**, the **Create a new disk** page opens and you can select whether to delete the disk when you delete the VM.
:::image type="content" source="media/delete/delete-data-disk.png" alt-text="Screenshot showing a checkbox to choose to delete the data disk when the VM is deleted.":::
- - If you choose to **Attach an existing disk**, you'll be able to choose the disk, LUN, and whether you want to delete the data disk when you delete the VM.
+ - If you choose to **Attach an existing disk**, you can choose the disk, LUN, and whether you want to delete the data disk when you delete the VM.
:::image type="content" source="media/delete/delete-existing-data-disk.png" alt-text="Screenshot showing the checkbox to choose to delete the data disk when the VM is deleted.":::
-1. When you're done adding your disk information, select **Next : Networking >**. The **Networking** tab will open.
+1. When you're done adding your disk information, select **Next : Networking >** to open the **Networking** tab.
1. Towards the bottom of the page, select **Delete public IP and NIC when VM is deleted**. :::image type="content" source="media/delete/delete-networking.png" alt-text="Screenshot showing the checkbox to choose to delete the public IP and NIC when the VM is deleted.":::
-1. When you're done making selections, select **Review + create**. The **Review + create** page will open.
+1. When you're done making selections, select **Review + create**.
1. You can verify which resources you have chosen to delete when you delete the VM.
1. When you're satisfied with your selections and validation passes, select **Create** to deploy the VM.
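With Azure CLI, the equivalent delete behavior can be set when the VM is created. A minimal sketch, with placeholder names and only some of the available delete-option flags shown:

```azurecli
# Sketch: create a VM whose OS disk and NIC are deleted along with the VM.
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --os-disk-delete-option Delete \
  --nic-delete-option Delete
```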
New-AzVm `
### [REST](#tab/rest2)
-This example shows how to set the data disk and NIC to be deleted when the VM is deleted.
+This example shows how to set the data disk and NIC to be deleted when the VM is deleted. Note that the API version specified in the `api-version` parameter must be `2021-03-01` or newer to configure the delete option.
```rest PUT
https://management.azure.com/subscriptions/subid/resourceGroups/rg1/providers/Mi
{ "id": "/subscriptions/.../Microsoft.Network/networkInterfaces/myNIC", "properties": {
- "primary": true,
- "deleteOption": "Delete"
+ "primary": true,
+ "deleteOption": "Delete"
} } ]
The following example sets the delete option to `detach` so you can reuse the di
az resource update --resource-group myResourceGroup --name myVM --resource-type virtualMachines --namespace Microsoft.Compute --set properties.storageProfile.osDisk.deleteOption=detach ```
+### [PowerShell](#tab/powershell3)
+
+The following example updates the VM to delete the OS disk, all data disks, and all NICs when the VM is deleted.
+
+```azurepowershell
+$vmConfig = Get-AzVM -ResourceGroupName myResourceGroup -Name myVM
+$vmConfig.StorageProfile.OsDisk.DeleteOption = 'Delete'
+$vmConfig.StorageProfile.DataDisks | ForEach-Object { $_.DeleteOption = 'Delete' }
+$vmConfig.NetworkProfile.NetworkInterfaces | ForEach-Object { $_.DeleteOption = 'Delete' }
+$vmConfig | Update-AzVM
+```
+ ### [REST](#tab/rest3)
-The following example updates the VM to delete the NIC, OS disk, and data disk when the VM is deleted.
+The following example updates the VM to delete the NIC, OS disk, and data disk when the VM is deleted. Note that the API version specified in the `api-version` parameter must be `2021-03-01` or newer to configure the delete option.
```rest PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/virtualMachines/testvm?api-version=2021-07-01
PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegr
## Force Delete for VMs
-Force delete allows you to forcefully delete your virtual machine, reducing delete latency and immediately freeing up attached resources. For VMs that don't require graceful shutdown, Force Delete will delete the VM as fast as possible while relieving the logical resources from the VM, bypassing the graceful shutdown and some of the cleanup operations. Force Delete won't immediately free the MAC address associated with a VM, as this is a physical resource that may take up to 10 min to free. If you need to immediately re-use the MAC address on a new VM, Force Delete isn't recommended. Force delete should only be used when you aren't intending to re-use virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and REST API.
+Force delete allows you to forcefully delete your virtual machine, reducing delete latency and immediately freeing up attached resources. For VMs that don't require graceful shutdown, Force Delete will delete the VM as fast as possible while relieving the logical resources from the VM, bypassing the graceful shutdown and some of the cleanup operations. Force Delete won't immediately free the MAC address associated with a VM, as this is a physical resource that may take up to 10 min to free. If you need to immediately reuse the MAC address on a new VM, Force Delete isn't recommended. Force delete should only be used when you aren't intending to reuse virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and REST API.
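For example, a minimal Azure CLI sketch (resource names are placeholders; `--yes` skips the confirmation prompt):

```azurecli
# Sketch: force delete a VM, bypassing graceful shutdown and some cleanup.
az vm delete \
  --resource-group myResourceGroup \
  --name myVM \
  --force-deletion true \
  --yes
```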
### [Portal](#tab/portal4)
You can use the Azure REST API to apply force delete to your virtual machines. U
## Force Delete for scale sets
-Force delete allows you to forcefully delete your **Uniform** Virtual Machine Scale Set, reducing delete latency and immediately freeing up attached resources. Force Delete won't immediately free the MAC address associated with a VM, as this is a physical resource that may take up to 10 min to free. If you need to immediately re-use the MAC address on a new VM, Force Delete isn't recommended. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and REST API.
+Force delete allows you to forcefully delete your **Uniform** Virtual Machine Scale Set, reducing delete latency and immediately freeing up attached resources. Force Delete won't immediately free the MAC address associated with a VM, as this is a physical resource that may take up to 10 min to free. If you need to immediately reuse the MAC address on a new VM, Force Delete isn't recommended. Force delete should only be used when you aren't intending to reuse virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and REST API.
### [Portal](#tab/portal5)
Use the `--force-deletion` parameter for [`az vmss delete`](/cli/azure/vmss#az-v
az vmss delete \ --resource-group myResourceGroup \ --name myVMSS \
- --force-deletion
+ --force-deletion true
``` ### [PowerShell](#tab/powershell5)