Updates from: 11/28/2023 02:08:55
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md
zone_pivot_groups: b2c-policy-type + # Enrich tokens with claims from external sources using API connectors [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] Azure Active Directory B2C (Azure AD B2C) enables identity developers to integrate an interaction with a RESTful API into their user flow using [API connectors](api-connectors-overview.md). It enables developers to dynamically retrieve data from external identity sources. At the end of this walkthrough, you'll be able to create an Azure AD B2C user flow that interacts with APIs to enrich tokens with information from external sources.
Additionally, these claims are typically sent in all requests for this step:
> [!IMPORTANT] > If a claim does not have a value at the time the API endpoint is called, the claim will not be sent to the API. Your API should be designed to explicitly check and handle the case in which a claim is not in the request.+ ## Expected response types from the web API at this step When the web API receives an HTTP request from Microsoft Entra ID during a user flow, it can return a "continuation response." ### Continuation response
In a continuation response, the API can return additional claims. A claim return
The claim value in the token will be that returned by the API, not the value in the directory. Some claim values cannot be overwritten by the API response. Claims that can be returned by the API correspond to the set found under **User attributes** with the exception of `email`. > [!NOTE] > The API is only invoked during an initial authentication. When using refresh tokens to silently get new access or ID tokens, the token will include the values evaluated during the initial authentication. + ## Example response ### Example of a continuation response ```http
You can also design the interaction as a validation technical profile. This is s
## Prerequisites - Complete the steps in [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy). You should have a working custom policy for sign-up and sign-in with local accounts. - Learn how to [Integrate REST API claims exchanges in your Azure AD B2C custom policy](api-connectors-overview.md).+ ## Prepare a REST API endpoint For this walkthrough, you should have a REST API that validates whether a user's Azure AD B2C objectId is registered in your back-end system. If registered, the REST API returns the user account balance. Otherwise, the REST API registers the new account in the directory and returns the starting balance `50.00`.
A claim provides temporary storage of data during an Azure AD B2C policy executi
1. Search for the [BuildingBlocks](buildingblocks.md) element. If the element doesn't exist, add it. 1. Locate the [ClaimsSchema](claimsschema.md) element. If the element doesn't exist, add it. 1. Add the following claims to the **ClaimsSchema** element. + ```xml <ClaimType Id="balance"> <DisplayName>Your Balance</DisplayName>
After you deploy your REST API, set the metadata of the `REST-GetProfile` techni
- **AuthenticationType**. Set the type of authentication being performed by the RESTful claims provider such as `Basic` or `ClientCertificate` - **AllowInsecureAuthInProduction**. In a production environment, make sure to set this metadata to `false`. + See the [RESTful technical profile metadata](restful-technical-profile.md#metadata) for more configurations. The comments above `AuthenticationType` and `AllowInsecureAuthInProduction` specify changes you should make when you move to a production environment. To learn how to secure your RESTful APIs for production, see [Secure your RESTful API](secure-rest-api.md). ## Add an orchestration step
The comments above `AuthenticationType` and `AllowInsecureAuthInProduction` spec
<OrchestrationStep Order="8" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" /> ``` 1. Repeat the last two steps for the **ProfileEdit** and **PasswordReset** user journeys.+ ## Include a claim in the token To return the `balance` claim back to the relying party application, add an output claim to the <em>`SocialAndLocalAccounts/`**`SignUpOrSignIn.xml`**</em> file. Adding an output claim will issue the claim into the token after a successful user journey, and will be sent to the application. Modify the technical profile element within the relying party section to add `balance` as an output claim.
Repeat this step for the **ProfileEdit.xml**, and **PasswordReset.xml** user jou
Save the files you changed: *TrustFrameworkBase.xml*, *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*. ## Test the custom policy 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **App registrations**. 1. Select **Identity Experience Framework**. 1. Select **Upload Custom Policy**, and then upload the policy files that you changed: *TrustFrameworkBase.xml*, *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*. 1. Select the sign-up or sign-in policy that you uploaded, and click the **Run now** button. 1. You should be able to sign up using an email address or a Facebook account. 1. The token sent back to your application includes the `balance` claim.+ ```json { "typ": "JWT",
In general, it's helpful to use the logging tools enabled by your web API servic
* A 401 or 403 HTTP status code typically indicates there's an issue with your authentication. Double-check your API's authentication layer and the corresponding configuration in the API connector. * Use more aggressive levels of logging (for example "trace" or "debug") in development if needed. * Monitor your API for long response times. + Additionally, Azure AD B2C logs metadata about the API transactions that happen during user authentications via a user flow. To find these: 1. Go to **Azure AD B2C**. 1. Under **Activities**, select **Audit logs**. 1. Filter the list view: For **Date**, select the time interval you want, and for **Activity**, select **An API was called as part of a user flow**. 1. Inspect individual logs. Each row represents an attempt to call an API connector during a user flow. If an API call fails and a retry occurs, it's still represented as a single row. The `numberOfAttempts` indicates the number of times your API was called. This value can be `1` or `2`. Other information about the API call is detailed in the logs. ![Screenshot of an example audit log with API connector transaction.](media/add-api-connector-token-enrichment/example-anonymized-audit-log.png)+ ::: zone-end ## Next steps ::: zone pivot="b2c-user-flow" - Get started with our [samples](api-connector-samples.md#api-connector-rest-api-samples). - [Secure your API Connector](secure-rest-api.md)+ ::: zone-end ::: zone pivot="b2c-custom-policy" To learn how to secure your APIs, see the following articles: - [Walkthrough: Integrate REST API claims exchanges in your Azure AD B2C user journey as an orchestration step](add-api-connector-token-enrichment.md) - [Secure your RESTful API](secure-rest-api.md) - [Reference: RESTful technical profile](restful-technical-profile.md)+ ::: zone-end
ai-services Luis Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-howto.md
If you run the container with an output [mount](luis-container-configuration.md#
## Billing
-The LUIS container sends billing information to Azure, using a _Azure AI services_ resource on your Azure account.
+The LUIS container sends billing information to Azure, using an _Azure AI services_ resource on your Azure account.
[!INCLUDE [Container's Billing Settings](../../../includes/cognitive-services-containers-how-to-billing-info.md)]
ai-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-container-support.md
Azure AI containers provide the following set of Docker containers, each of whic
| [Speech Service API][sp-containers-stt] | **Speech to text** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/speech-to-text/about)) | Transcribes continuous real-time speech into text. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Speech Service API][sp-containers-cstt] | **Custom Speech to text** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/custom-speech-to-text/about)) | Transcribes continuous real-time speech into text using a custom model. | Generally available <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
| [Speech Service API][sp-containers-ntts] | **Neural Text to speech** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/neural-text-to-speech/about)) | Converts text to natural-sounding speech using deep neural network technology, allowing for more natural synthesized speech. | Generally available. <br> This container can also [run in disconnected environments](containers/disconnected-containers.md). |
-| [Speech Service API][sp-containers-lid] | **Speech language detection** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/about)) | Determines the language of spoken audio. | Preview |
+| [Speech Service API][sp-containers-lid] | **Speech language identification** ([image](https://mcr.microsoft.com/product/azure-cognitive-services/speechservices/language-detection/about)) | Determines the language of spoken audio. | Preview |
### Vision containers
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
You can manage default network access rules for Azure AI services resources thro
## Grant access from a virtual network
-You can configure Azure AI services resources to allow access from specific subnets only. The allowed subnets might belong to a virtual network in the same subscription or in a different subscription. The other subscription can belong to a different Microsoft Entra tenant.
+You can configure Azure AI services resources to allow access from specific subnets only. The allowed subnets might belong to a virtual network in the same subscription or in a different subscription. The other subscription can belong to a different Microsoft Entra tenant. When the subnet belongs to a different subscription, the Microsoft.CognitiveServices resource provider must also be registered for that subscription.
Enable a *service endpoint* for Azure AI services within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure AI service. For more information, see [Virtual Network service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md).
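For readers who prefer the Azure CLI, the following sketch shows one way to register the resource provider and allow a subnet; the resource, network, and group names are placeholders, and the steps described in the article remain the authoritative path.

```azurecli-interactive
# Register the resource provider in the subscription that owns the subnet
# (needed when the subnet is in a different subscription than the Azure AI services resource).
az provider register --namespace Microsoft.CognitiveServices

# Enable the service endpoint on the subnet.
az network vnet subnet update --resource-group <vnet-resource-group> --vnet-name <vnet-name> \
    --name <subnet-name> --service-endpoints Microsoft.CognitiveServices

# Allow traffic from that subnet on the Azure AI services resource.
subnetId=$(az network vnet subnet show --resource-group <vnet-resource-group> --vnet-name <vnet-name> \
    --name <subnet-name> --query id --output tsv)
az cognitiveservices account network-rule add --resource-group <resource-group> \
    --name <azure-ai-services-resource> --subnet $subnetId
```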
ai-services Computer Vision Resource Container Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/computer-vision-resource-container-config.md
This setting can be found in the following place:
## Billing configuration setting
-The `Billing` setting specifies the endpoint URI of the _Azure AI services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a _Azure AI services_ resource on Azure. The container reports usage about every 10 to 15 minutes.
+The `Billing` setting specifies the endpoint URI of the _Azure AI services_ resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for an _Azure AI services_ resource on Azure. The container reports usage about every 10 to 15 minutes.
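As a rough illustration only (the image name, tag, and resource values below are placeholders, not taken from this article), the `Billing` value is typically passed to `docker run` alongside `Eula` and `ApiKey`:

```bash
docker run --rm -it -p 5000:5000 --memory 8g --cpus 1 \
    mcr.microsoft.com/azure-cognitive-services/vision/read:<tag> \
    Eula=accept \
    Billing=https://<your-resource-name>.cognitiveservices.azure.com/ \
    ApiKey=<your-resource-key>
```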
This setting can be found in the following place:
ai-services Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/managed-identities.md
In the following steps, we enable a system-assigned managed identity and grant D
## Grant access to your storage account
-You need to grant Document Intelligence access to your storage account before it can create, read, or delete blobs. Now that you've enabled Document Intelligence with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC), to give Document Intelligence access to Azure storage. The **Storage Blob Data Reader** role gives Document Intelligence (represented by the system-assigned managed identity) read and list access to the blob container and data.
+You need to grant Document Intelligence access to your storage account before it can read blobs. Now that you've enabled Document Intelligence with a system-assigned managed identity, you can use Azure role-based access control (Azure RBAC), to give Document Intelligence access to Azure storage. The **Storage Blob Data Reader** role gives Document Intelligence (represented by the system-assigned managed identity) read and list access to the blob container and data.
1. Under **Permissions** select **Azure role assignments**:
You need to grant Document Intelligence access to your storage account before it
That's it! You've completed the steps to enable a system-assigned managed identity. With managed identity and Azure RBAC, you granted Document Intelligence specific access rights to your storage resource without having to manage credentials such as SAS tokens.
+### Additional role assignment for Document Intelligence Studio
+
+If you are going to use Document Intelligence Studio and your storage account is configured with a network restriction such as a firewall or virtual network, an additional role, **Storage Blob Data Contributor**, needs to be assigned to your Document Intelligence service. Document Intelligence Studio requires this role to write blobs to your storage account when you perform Auto label, OCR upgrade, Human in the loop, or Project sharing operations.
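If you prefer to script the assignment, the following Azure CLI sketch shows one way to grant the role; the resource and group names are placeholders, and the portal steps described above remain the documented path.

```azurecli-interactive
# Object ID of the Document Intelligence resource's system-assigned managed identity.
principalId=$(az cognitiveservices account show --name <document-intelligence-resource> \
    --resource-group <resource-group> --query identity.principalId --output tsv)

# Resource ID of the storage account used by Document Intelligence Studio.
storageId=$(az storage account show --name <storage-account-name> \
    --resource-group <resource-group> --query id --output tsv)

# Grant write access to blobs for Studio operations such as Auto label and Project sharing.
az role assignment create --assignee $principalId --role "Storage Blob Data Contributor" --scope $storageId
```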
+ ## Next steps > [!div class="nextstepaction"] > [Configure secure access with managed identities and private endpoints](managed-identities-secured-access.md)
ai-services Try Document Intelligence Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/quickstarts/try-document-intelligence-studio.md
monikerRange: '>=doc-intel-3.0.0'
> [!TIP] > Create an Azure AI services resource if you plan to access multiple Azure AI services under a single endpoint/key. For Document Intelligence access only, create a Document Intelligence resource. Please note that you'll need a single-service resource if you intend to use [Microsoft Entra authentication](../../../active-directory/authentication/overview-authentication.md).
+#### Azure role assignments
+
+For document analysis and prebuilt models, the following role assignments are required for different scenarios.
+* Basic
+ * **Cognitive Services User**: You need this role for the Document Intelligence or Azure AI services resource to access the analyze page.
+* Advanced
+ * **Contributor**: You need this role to create a resource group, Document Intelligence service, or Azure AI services resource.
+ ## Models Prebuilt models help you add Document Intelligence features to your apps without having to build, train, and publish your own models. You can choose from several prebuilt models, each of which has its own set of supported data fields. The choice of model to use for the analyze operation depends on the type of document to be analyzed. Document Intelligence currently supports the following prebuilt models:
A **standard performance** [**Azure Blob Storage account**](https://portal.azure
* [**Create a storage account**](../../../storage/common/storage-account-create.md). When creating your storage account, make sure to select **Standard** performance in the **Instance details → Performance** field. * [**Create a container**](../../../storage/blobs/storage-quickstart-blobs-portal.md#create-a-container). When creating your container, set the **Public access level** field to **Container** (anonymous read access for containers and blobs) in the **New Container** window.
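If you'd rather script these two steps, the following Azure CLI sketch is one possible equivalent; the account, container, and region names are placeholders, and newer storage accounts may require allowing public access at the account level first.

```azurecli-interactive
# Standard-performance storage account for your training documents.
az storage account create --name <storage-account-name> --resource-group <resource-group> \
    --location <region> --sku Standard_LRS

# Container with anonymous read access for containers and blobs.
az storage container create --account-name <storage-account-name> --name <container-name> \
    --public-access container
```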
+### Azure role assignments
+
+For custom projects, the following role assignments are required for different scenarios.
+
+* Basic
+ * **Cognitive Services User**: You need this role for the Document Intelligence or Azure AI services resource to train the custom model or run analysis with trained models.
+ * **Storage Blob Data Contributor**: You need this role for the Storage Account to create a project and label data.
+* Advanced
+ * **Storage Account Contributor**: You need this role for the Storage Account to set up CORS settings (this is a one-time effort if the same storage account is reused).
+ * **Contributor**: You need this role to create a resource group and resources.
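As one way to grant the basic roles from the command line (the user object ID and resource names are placeholders; the Azure portal works equally well):

```azurecli-interactive
# Cognitive Services User on the Document Intelligence (or Azure AI services) resource.
az role assignment create --assignee <user-object-id> --role "Cognitive Services User" \
    --scope $(az cognitiveservices account show --name <document-intelligence-resource> \
        --resource-group <resource-group> --query id --output tsv)

# Storage Blob Data Contributor on the storage account used for labeling data.
az role assignment create --assignee <user-object-id> --role "Storage Blob Data Contributor" \
    --scope $(az storage account show --name <storage-account-name> \
        --resource-group <resource-group> --query id --output tsv)
```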
+ ### Configure CORS [CORS (Cross Origin Resource Sharing)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) needs to be configured on your Azure storage account for it to be accessible from the Document Intelligence Studio. To configure CORS in the Azure portal, you need access to the CORS tab of your storage account.
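CORS can also be configured from the Azure CLI. The sketch below assumes `https://documentintelligence.ai.azure.com` as the Studio origin (verify the origin your Studio instance actually uses) and a placeholder account name.

```azurecli-interactive
az storage cors add --account-name <storage-account-name> \
    --services b \
    --methods DELETE GET HEAD MERGE OPTIONS POST PUT \
    --origins https://documentintelligence.ai.azure.com \
    --allowed-headers "*" \
    --exposed-headers "*" \
    --max-age 120
```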
ai-services Content Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-credentials.md
keywords:
# Content Credentials
-With the improved quality of content from generative AI models, there is an increased need for transparency on the history of AI generated content. All AI-generated images from the Azure OpenAI service now include a digital credential that discloses the content as AI-generated. This is done in collaboration with The Coalition for Content Provenance and Authenticity (C2PA), a Joint Development Foundation project. Visit the [C2PA site](https://c2pa.org/) to learn more about this coalition and its initiatives.
+With the improved quality of content from generative AI models, there is an increased need for more transparency on the origin of AI-generated content. All AI-generated images from the Azure OpenAI service now include Content Credentials, a tamper-evident way to disclose the origin and history of content. Content Credentials are based on an open technical specification from the [Coalition for Content Provenance and Authenticity (C2PA)](https://www.c2pa.org), a Joint Development Foundation project.
-## What are content credentials?
+## What are Content Credentials?
-Content credentials in Azure OpenAI Service provides customers with basic, trustworthy information (detailed in the chart below) about the origin of an image generated by the DALL-E series models. This information is represented by a manifest embedded inside the image. This manifest is cryptographically signed by a certificate that customers can trace back to Azure OpenAI Service. This signature is also embedded into the manifest itself.
+Content Credentials in the Azure OpenAI Service provide customers with information about the origin of an image generated by the DALL-E series models. This information is represented by a manifest attached to the image. The manifest is cryptographically signed by a certificate that traces back to Azure OpenAI Service.
-The JSON manifest contains several key pieces of information:
+The manifest contains several key pieces of information:
| Field name | Field content | | | | | `"description"` | This field has a value of `"AI Generated Image"` for all DALL-E model generated images, attesting to the AI-generated nature of the image. | | `"softwareAgent"` | This field has a value of `"Azure OpenAI DALL-E"` for all images generated by DALL-E series models in the Azure OpenAI service. |
-|`"when"` |The timestamp of when the image was generated. |
+|`"when"` |The timestamp of when the Content Credentials were created. |
-This digital signature can help people understand when visual content is AI-generated. It's important to keep in mind that image provenance can help establish the truth about the origin of digital content, but it alone can't tell you whether the digital content is true, accurate, or factual. Content credentials are designed to be used as one tool among others to help customers validate their media. For more information on how to responsibly build solutions with Azure OpenAI service image-generation models, visit the [Azure OpenAI transparency note](/legal/cognitive-services/openai/transparency-note?tabs=text)
+Content Credentials in the Azure OpenAI Service can help people understand when visual content is AI-generated. For more information on how to responsibly build solutions with Azure OpenAI service image-generation models, visit the [Azure OpenAI transparency note](/legal/cognitive-services/openai/transparency-note?tabs=text).
## How do I leverage Content Credentials in my solution today?
No additional set-up is necessary. Content Credentials are automatically applied
There are two recommended ways today to check the Credential of an image generated by Azure OpenAI DALL-E models:
-1. **By the content credentials website (contentcredentials.org/verify)**: This web page provides a user interface that allows users to upload any image. If an image is generated by DALL-E in Azure OpenAI, the content credentials webpage shows that the image was issued by Microsoft Corporation alongside the date and time of image creation.
+1. **Content Credentials Verify webpage (contentcredentials.org/verify)**: This is a tool that allows users to inspect the Content Credentials of a piece of content. If an image was generated by DALL-E in Azure OpenAI, the tool will display that its Content Credentials were issued by Microsoft Corporation alongside the date and time of issuance.
:::image type="content" source="../media/encryption/credential-check.png" alt-text="Screenshot of the content credential verification website.":::
- This page shows that an Azure OpenAI DALL-E generated image has been issued by Microsoft.
+ This page shows that an image generated by Azure OpenAI DALL-E has Content Credentials issued by Microsoft.
-2. **With the Content Authenticity Initiative (CAI) JavaScript SDK**: the Content Authenticity Initiative open-source tools and libraries can verify the provenance information embedded in DALL-E generated images and are recommended for web-based applications that display images generated with Azure OpenAI DALL-E models. Get started with the SDK [here](https://opensource.contentauthenticity.org/docs/js-sdk/getting-started/quick-start).
-
- As a best practice, consider checking provenance information in images displayed in your application using the CAI SDK and embedding the results of the check in the application UI along with AI-generated images. Below is an example from Bing Image Creator.
-
- :::image type="content" source="../media/encryption/image-with-credential.png" alt-text="Screenshot of an image with its content credential information displayed.":::
+2. **Content Authenticity Initiative (CAI) open-source tools**: The CAI provides multiple open-source tools that validate and display C2PA Content Credentials. Find the tool right for your application and [get started here](https://opensource.contentauthenticity.org/).
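For example, one of those open-source tools is the CAI's `c2patool` command-line utility. A minimal sketch of inspecting a locally saved image (assuming a Rust toolchain for the install step; prebuilt binaries may also be available) looks like this:

```bash
# Install the CAI's open-source CLI.
cargo install c2patool

# Print the C2PA manifest (Content Credentials) embedded in a DALL-E generated image saved locally.
c2patool generated-image.png
```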
+
ai-services Switching Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md
client = AzureOpenAI(
## Keyword argument for model
-OpenAI uses the `model` keyword argument to specify what model to use. Azure OpenAI has the concept of [deployments](create-resource.md?pivots=web-portal#deploy-a-model) and uses the `deployment_id` keyword argument to describe which model deployment to use. Azure OpenAI also supports the use of `engine` interchangeably with `deployment_id`. `deployment_id` corresponds to the custom name you chose for your model during model deployment. By convention in our docs, we often show `deployment_id`'s which match the underlying model name, but if you chose a different deployment name that doesn't match the model name you need to use that name when working with models in Azure OpenAI.
-
-For OpenAI `engine` still works in most instances, but it's deprecated and `model` is preferred.
+OpenAI uses the `model` keyword argument to specify what model to use. Azure OpenAI has the concept of unique model [deployments](create-resource.md?pivots=web-portal#deploy-a-model). When using Azure OpenAI, `model` should refer to the underlying deployment name you chose when you deployed the model.
<table> <tr>
aks Artifact Streaming https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/artifact-streaming.md
Enablement on ACR is a prerequisite for Artifact Streaming on AKS. For more info
### Enable Artifact Streaming on a new node pool
-* Create a new node pool with Artifact Streaming enabled using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--enable-artifact-streaming` flag set to `true`.
+* Create a new node pool with Artifact Streaming enabled using the [`az aks nodepool add`][az-aks-nodepool-add] command with the `--enable-artifact-streaming` flag.
```azurecli-interactive az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \ --name myNodePool \
- --enable-artifact-streaming true
+ --enable-artifact-streaming
``` ## Check if Artifact Streaming is enabled
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Previously updated : 11/03/2023 Last updated : 11/04/2023 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
az aks create -n $clusterName -g $resourceGroup \
The upgrade process triggers each node pool to be re-imaged simultaneously. Upgrading each node pool separately to Overlay isn't supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged.
+### Azure CNI Cluster Upgrade
+ Update an existing Azure CNI cluster to use Overlay using the [`az aks update`][az-aks-update] command. ```azurecli-interactive
az aks update --name $clusterName \
The `--pod-cidr` parameter is required when upgrading from legacy CNI because the pods need to get IPs from a new overlay space, which doesn't overlap with the existing node subnet. The pod CIDR also can't overlap with any VNet address of the node pools. For example, if your VNet address is *10.0.0.0/8*, and your nodes are in the subnet *10.240.0.0/16*, the `--pod-cidr` can't overlap with *10.0.0.0/8* or the existing service CIDR on the cluster. +
+### Kubenet Cluster Upgrade
+
+Update an existing Kubenet cluster to use Azure CNI Overlay using the [`az aks update`][az-aks-update] command.
+
+```azurecli-interactive
+clusterName="myOverlayCluster"
+resourceGroup="myResourceGroup"
+location="westcentralus"
+
+az aks update --name $clusterName \
+--resource-group $resourceGroup \
+--network-plugin azure \
+--network-plugin-mode overlay
+```
+
+Since the cluster is already using a private CIDR for pods, you don't need to specify the `--pod-cidr` parameter and the Pod CIDR will remain the same.
+
+> [!NOTE]
+> When upgrading from Kubenet to CNI Overlay, the route table will no longer be required for pod routing. If the cluster is using a customer-provided route table, the routes that were being used to direct pod traffic to the correct node will automatically be deleted during the migration operation. If the cluster is using a managed route table (the route table was created by AKS and lives in the node resource group), then that route table will be deleted as part of the migration.
+ ## Dual-stack Networking (Preview) You can deploy your AKS clusters in a dual-stack mode when using Overlay networking and a dual-stack Azure virtual network. In this configuration, nodes receive both an IPv4 and IPv6 address from the Azure virtual network subnet. Pods receive both an IPv4 and IPv6 address from a logically different address space to the Azure virtual network subnet of the nodes. Network address translation (NAT) is then configured so that the pods can reach resources on the Azure virtual network. The source IP address of the traffic is NAT'd to the node's primary IP address of the same family (IPv4 to IPv4 and IPv6 to IPv6).
aks Cluster Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cluster-autoscaler.md
To further help improve cluster resource utilization and free up CPU and memory
[aks-faq-node-resource-group]: faq.md#can-i-modify-tags-and-other-properties-of-the-aks-resources-in-the-node-resource-group [aks-multiple-node-pools]: create-node-pools.md [aks-scale-apps]: tutorial-kubernetes-scale.md
-[aks-view-master-logs]: monitor-aks.md#resource-logs
+[aks-view-master-logs]: monitor-aks.md#aks-control-planeresource-logs
[azure-cli-install]: /cli/azure/install-azure-cli [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-update]: /cli/azure/aks#az-aks-update
aks Deploy Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/deploy-marketplace.md
Included among these solutions are Kubernetes application-based container offers
This feature is currently supported only in the following regions: -- East US, EastUS2EUAP, West US, Central US, West Central US, South Central US, East US2, West US2, West Europe, North Europe, Canada Central, South East Asia, Australia East, Central India, Japan East, Korea Central, UK South, UK West, Germany West Central, France Central, East Asia, West US3, Norway East, South African North, North Central US, Australia South East, Switzerland North, Japan West, South India
+- Australia East, Australia Southeast, Brazil South, Canada Central, Canada East, Central India, Central US, East Asia, East US, East US 2, East US 2 EUAP, France Central, France South, Germany North, Germany West Central, Japan East, Japan West, Jio India West, Korea Central, Korea South, North Central US, North Europe, Norway East, Norway West, South Africa North, South Central US, South India, Southeast Asia, Sweden Central, Switzerland North, UAE North, UK South, UK West, West Central US, West Europe, West US, West US 2, West US 3
Kubernetes application-based container offers can't be deployed on AKS for Azure Stack HCI or AKS Edge Essentials.
az k8s-extension list --cluster-name <clusterName> --resource-group <resourceGro
+ ## Manage the offer lifecycle
az k8s-extension show --name <extension-name> --cluster-name <clusterName> --res
+ ## Monitor billing and usage information
az k8s-extension delete --name <extension-name> --cluster-name <clusterName> --r
+ ## Troubleshooting
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
The following table lists [dimensions](../azure-monitor/essentials/data-platform
## Resource logs
-AKS implements control plane logs for the cluster as [resource logs in Azure Monitor.](../azure-monitor/essentials/resource-logs.md). See [Resource logs](monitor-aks.md#resource-logs) for details on creating a diagnostic setting to collect these logs and [Sample queries](monitor-aks-reference.md#resource-logs) for query examples.
+AKS implements control plane logs for the cluster as [resource logs in Azure Monitor](../azure-monitor/essentials/resource-logs.md). See [Resource logs](monitor-aks.md#aks-control-planeresource-logs) for details on creating a diagnostic setting to collect these logs and [Sample queries](monitor-aks-reference.md#resource-logs) for query examples.
The following table lists the resource log categories you can collect for AKS. It also includes the table the logs for each category are sent to when you send the logs to a Log Analytics workspace using [resource-specific mode](../azure-monitor/essentials/resource-logs.md#resource-specific). In [Azure diagnostics mode](../azure-monitor/essentials/resource-logs.md#azure-diagnostics-mode), all logs are written to the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table.
aks Monitor Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks.md
Title: Monitor Azure Kubernetes Service (AKS) description: Start here to learn how to monitor Azure Kubernetes Service (AKS). -+ Previously updated : 09/11/2023 Last updated : 11/01/2023 # Monitor Azure Kubernetes Service (AKS)
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by AKS and analyzed with [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation. This article describes the monitoring data generated by AKS and analyzed with [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/containers/monitor-kubernetes.md).
> [!IMPORTANT]
-> This article provides basic information for getting started monitoring an AKS cluster. For complete monitoring of Kuberenetes clusters in Azure [Container insights](../azure-monitor/containers/container-insights-overview.md), see [Monitor Azure Kubernetes Service (AKS) with Azure Monitor](../azure-monitor/containers/monitor-kubernetes.md).
+> Kubernetes is a complex distributed system with many moving parts, so monitoring at multiple levels is required. Although AKS is a managed Kubernetes service, the same rigor around monitoring at multiple levels is still required. This article provides high-level information and best practices for monitoring an AKS cluster. See the following articles for additional details.
+- For detailed monitoring of the complete Kubernetes stack, see [Monitor Azure Kubernetes Service (AKS) with Azure Monitor](../azure-monitor/containers/monitor-kubernetes.md)
+- For collecting metric data from Kubernetes clusters, see [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md).
+- For collecting logs in Kubernetes clusters, see [Container insights](../azure-monitor/containers/container-insights-overview.md).
+- For data visualization, see [Azure Workbooks](../azure-monitor/visualize/workbooks-overview.md) and [Azure Managed Grafana](../azure-monitor/visualize/grafana-plugin.md).
## Monitoring data
-AKS generates the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources). See [Monitoring AKS data reference](monitor-aks-reference.md) for detailed information on the metrics and logs created by AKS. [Other Azure services and features](#integrations) will collect additional data and enable other analysis options as shown in the following diagram and table.
+AKS generates the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources). See [Monitoring AKS data reference](monitor-aks-reference.md) for detailed information on the metrics and logs created by AKS. [Other Azure services and features](#integrations) collect other data and enable other analysis options as shown in the following diagram and table.
- Source | Description |
+| Source | Description |
|:|:|
| Platform metrics | [Platform metrics](monitor-aks-reference.md#metrics) are automatically collected for AKS clusters at no cost. You can analyze these metrics with [metrics explorer](../azure-monitor/essentials/analyze-metrics.md) or use them for [metric alerts](../azure-monitor/alerts/alerts-types.md#metric-alerts). |
| Prometheus metrics | When you [enable metric scraping](../azure-monitor/containers/prometheus-metrics-enable.md) for your cluster, [Prometheus metrics](../azure-monitor/containers/prometheus-metrics-scrape-default.md) are collected by [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) and stored in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md). Analyze them with [prebuilt dashboards](../azure-monitor/visualize/grafana-plugin.md#use-out-of-the-box-dashboards) in [Azure Managed Grafana](../managed-grafan). |
| Activity logs | [Activity log](monitor-aks-reference.md) is collected automatically for AKS clusters at no cost. These logs track information such as when a cluster is created or has a configuration change. Send the [Activity log to a Log Analytics workspace](../azure-monitor/essentials/activity-log.md#send-to-log-analytics-workspace) to analyze it with your other log data. |
-| Resource logs | [Control plane logs](monitor-aks-reference.md#resource-logs) for AKS are implemented as resource logs. [Create a diagnostic setting](#resource-logs) to send them to [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) where you can analyze and alert on them with log queries in [Log Analytics](../azure-monitor/logs/log-analytics-overview.md). |
+| Resource logs | Control plane logs for AKS are implemented as resource logs. [Create a diagnostic setting](#aks-control-planeresource-logs) to send them to [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) where you can analyze and alert on them with log queries in [Log Analytics](../azure-monitor/logs/log-analytics-overview.md). |
| Container insights | Container insights collects various logs and performance data from a cluster including stdout/stderr streams and stores them in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md) and [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md). Analyze this data with views and workbooks included with Container insights or with [Log Analytics](../azure-monitor/logs/log-analytics-overview.md) and [metrics explorer](../azure-monitor/essentials/analyze-metrics.md). | ## Monitoring overview page in Azure portal
-The **Monitoring** tab on the **Overview** page offers a quick way to get started viewing monitoring data in the Azure portal for each AKS cluster. This includes graphs with common metrics for the cluster separated by node pool. Click on any of these graphs to further analyze the data in [metrics explorer](../azure-monitor/essentials/analyze-metrics.md).
+The **Monitoring** tab on the **Overview** page offers a quick way to get started viewing monitoring data in the Azure portal for each AKS cluster. This includes graphs with common metrics for the cluster separated by node pool. Click on any of these graphs to further analyze the data in [metrics explorer](../azure-monitor/essentials/metrics-getting-started.md).
-The **Overview** page also includes links to [Managed Prometheus](#integrations) and [Container insights](#integrations) for the current cluster. If you haven't already enabled these tools, you'll be prompted to do so. You may also see a banner at the top of the screen recommending that you enable additional features to improve monitoring of your cluster.
+The **Overview** page also includes links to [Managed Prometheus](#integrations) and [Container insights](#integrations) for the current cluster. If you haven't already enabled these tools, you are prompted to do so. You may also see a banner at the top of the screen recommending that you enable other features to improve monitoring of your cluster.
:::image type="content" source="media/monitor-aks/overview.png" alt-text="Screenshot of AKS overview page." lightbox="media/monitor-aks/overview.png"::: -- > [!TIP] > Access monitoring features for all AKS clusters in your subscription from the **Monitoring** menu in the Azure portal, or for a single AKS cluster from the **Monitor** section of the **Kubernetes services** menu.
+## Integrations
+The following Azure services and features of Azure Monitor can be used for extra monitoring of your Kubernetes clusters. You can enable these features during AKS cluster creation from the Integrations tab in the Azure portal, Azure CLI, Terraform, Azure Policy, or onboard your cluster to them later. Each of these features may incur cost, so refer to the pricing information for each before you enable them. An example of enabling Container insights from the Azure CLI follows the table.
++
+| Service / Feature | Description |
+|:|:|
+| [Container insights](../azure-monitor/containers/container-insights-overview.md) | Uses a containerized version of the [Azure Monitor agent](../azure-monitor/agents/agents-overview.md) to collect stdout/stderr logs and Kubernetes events from each node in your cluster, supporting a [variety of monitoring scenarios for AKS clusters](../azure-monitor/containers/container-insights-overview.md#features-of-container-insights). You can enable monitoring for an AKS cluster when it's created by using [Azure CLI](../aks/learn/quick-kubernetes-deploy-cli.md), [Azure Policy](../azure-monitor/containers/container-insights-enable-aks-policy.md), Azure portal or Terraform. If you don't enable Container insights when you create your cluster, see [Enable Container insights for Azure Kubernetes Service (AKS) cluster](../azure-monitor/containers/container-insights-enable-aks.md) for other options to enable it.<br><br>Container insights stores most of its data in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md), and you'll typically use the same Log Analytics workspace as the [resource logs](monitor-aks-reference.md#resource-logs) for your cluster. See [Design a Log Analytics workspace architecture](../azure-monitor/logs/workspace-design.md) for guidance on how many workspaces you should use and where to locate them. |
+| [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io/) is a cloud-native metrics solution from the Cloud Native Compute Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed Prometheus-compatible monitoring solution in Azure. If you don't enable managed Prometheus when you create your cluster, see [Collect Prometheus metrics from an AKS cluster](../azure-monitor/essentials/prometheus-metrics-enable.md) for other options to enable it.<br><br>Azure Monitor managed service for Prometheus stores its data in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md), which is [linked to a Grafana workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace) so that you can analyze the data with Azure Managed Grafana. |
+| [Azure Managed Grafana](../managed-grafan#link-a-grafana-workspace) details on linking it to your Azure Monitor workspace so it can access Prometheus metrics for your cluster. |
++
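For example, a minimal sketch of enabling Container insights on an existing cluster with the Azure CLI (cluster, resource group, and workspace values are placeholders; omit `--workspace-resource-id` to use a default workspace) might look like the following.

```azurecli-interactive
az aks enable-addons --addons monitoring \
    --name myAKSCluster \
    --resource-group myResourceGroup \
    --workspace-resource-id <log-analytics-workspace-resource-id>
```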
+## Metrics
+Metrics play an important role in cluster monitoring, identifying issues, and optimizing performance in the AKS clusters. Platform metrics are captured using the out-of-the-box metrics server installed in the kube-system namespace, which periodically scrapes metrics from all Kubernetes nodes served by Kubelet. You should also enable Azure Managed Prometheus metrics to collect container metrics and Kubernetes object metrics, such as object state of Deployments. See [Collect Prometheus metrics from an AKS cluster](../azure-monitor/containers/prometheus-metrics-enable.md) to send data to Azure Monitor managed service for Prometheus. An example CLI command for enabling managed Prometheus on an existing cluster follows the list below.
-## Resource logs
-Control plane logs for AKS clusters are implemented as [resource logs](../azure-monitor/essentials/resource-logs.md) in Azure Monitor. Resource logs are not collected and stored until you create a diagnostic setting to route them to one or more locations. You'll typically send them to a Log Analytics workspace, which is where most of the data for Container insights is stored.
+- [List of default platform metrics](/azure/azure-monitor/reference/supported-metrics/microsoft-containerservice-managedclusters-metrics)
+- [List of default Prometheus metrics](../azure-monitor/containers/prometheus-metrics-scrape-default.md)
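For example, a minimal sketch of enabling managed Prometheus on an existing cluster with the Azure CLI (names and workspace IDs are placeholders; the linked article covers the portal, ARM, and other options) might look like this:

```azurecli-interactive
az aks update --name myAKSCluster --resource-group myResourceGroup \
    --enable-azure-monitor-metrics \
    --azure-monitor-workspace-resource-id <azure-monitor-workspace-resource-id> \
    --grafana-resource-id <managed-grafana-resource-id>
```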
-See [Create diagnostic settings](../azure-monitor/essentials/create-diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for AKS are listed in [AKS monitoring data reference](monitor-aks-reference.md#resource-logs).
+## Logs
+
+### AKS control plane/resource logs
+
+Control plane logs for AKS clusters are implemented as [resource logs](../azure-monitor/essentials/resource-logs.md) in Azure Monitor. Resource logs aren't collected and stored until you create a diagnostic setting to route them to one or more locations. You'll typically send them to a Log Analytics workspace, which is where most of the data for Container insights is stored.
+
+See [Create diagnostic settings](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for AKS are listed in [AKS monitoring data reference](monitor-aks-reference.md#resource-logs).
> [!IMPORTANT] > There can be substantial cost when collecting resource logs for AKS, particularly for *kube-audit* logs. Consider the following recommendations to reduce the amount of data collected:
Resource-specific mode is recommended for AKS for the following reasons:
- Data is easier to query because it's in individual tables dedicated to AKS. - Supports configuration as [basic logs](../azure-monitor/logs/basic-logs-configure.md) for significant cost savings.
-For more details on the difference between collection modes including how to change an existing setting, see [Select the collection mode](../azure-monitor/essentials/resource-logs.md#select-the-collection-mode).
+For more information on the difference between collection modes including how to change an existing setting, see [Select the collection mode](../azure-monitor/essentials/resource-logs.md#select-the-collection-mode).
> [!NOTE] > The ability to select the collection mode isn't available in the Azure portal in all regions yet. For those regions where it's not yet available, use CLI to create the diagnostic setting with a command such as the following:
For more details on the difference between collection modes including how to cha
> az monitor diagnostic-settings create --name AKS-Diagnostics --resource /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myresourcegroup/providers/Microsoft.ContainerService/managedClusters/my-cluster --logs '[{"category": "kube-audit","enabled": true}, {"category": "kube-audit-admin", "enabled": true}, {"category": "kube-apiserver", "enabled": true}, {"category": "kube-controller-manager", "enabled": true}, {"category": "kube-scheduler", "enabled": true}, {"category": "cluster-autoscaler", "enabled": true}, {"category": "cloud-controller-manager", "enabled": true}, {"category": "guard", "enabled": true}, {"category": "csi-azuredisk-controller", "enabled": true}, {"category": "csi-azurefile-controller", "enabled": true}, {"category": "csi-snapshot-controller", "enabled": true}]' --workspace /subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myresourcegroup/providers/microsoft.operationalinsights/workspaces/myworkspace --export-to-resource-specific true > ```
-## Sample log queries
+#### Sample log queries
> [!IMPORTANT] > When you select **Logs** from the menu for an AKS cluster, Log Analytics is opened with the query scope set to the current cluster. This means that log queries will only include data from that resource. If you want to run a query that includes data from other clusters or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details. -
-If the [diagnostic setting for your cluster](#resource-logs) uses Azure diagnostics mode, the resource logs for AKS are stored in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. You can distinguish different logs with the **Category** column. For a description of each category, see [AKS reference resource logs](monitor-aks-reference.md).
+If the [diagnostic setting for your cluster](monitor-aks-reference.md#resource-logs) uses Azure diagnostics mode, the resource logs for AKS are stored in the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table. You can distinguish different logs with the **Category** column. For a description of each category, see [AKS reference resource logs](monitor-aks-reference.md).
| Description | Log query | |:|:|
If the [diagnostic setting for your cluster](#resource-logs) uses Azure diagnost
| All audit logs excluding the get and list audit events <br>(resource-specific mode) | AKSAuditAdmin | | All API server logs<br>(resource-specific mode) | AKSControlPlane<br>\| where Category == "kube-apiserver" |
+To access a set of prebuilt queries in the Log Analytics workspace, see the [Log Analytics queries interface](../azure-monitor/logs/queries.md#queries-interface) and select resource type **Kubernetes Services**. For a list of common queries for Container insights, see [Container insights queries](../azure-monitor/containers/container-insights-log-query.md).
+### AKS data plane/Container Insights logs
+Container insights collects various types of telemetry data from containers and Kubernetes clusters to help you monitor, troubleshoot, and gain insights into your containerized applications running in your AKS clusters. For a list of the tables used by Container insights and their detailed descriptions, see the [Azure Monitor table reference](/azure/azure-monitor/reference/tables/tables-resourcetype#kubernetes-services). All these tables are available for [log queries](../azure-monitor/logs/log-query-overview.md).
-To access a set of prebuilt queries in the Log Analytics workspace, see the [Log Analytics queries interface](../azure-monitor/logs/queries.md#queries-interface) and select resource type **Kubernetes Services**. For a list of common queries for Container insights, see [Container insights queries](../azure-monitor/containers/container-insights-log-query.md).
+[Cost optimization settings](../azure-monitor/containers/container-insights-cost-config.md) allow you to customize and control the data collected through the Container insights agent. This feature supports data collection settings such as individual table selection, data collection intervals, and namespaces to exclude from data collection, configured through [Azure Monitor Data Collection Rules (DCR)](../azure-monitor/essentials/data-collection-rule-overview.md). These settings control the volume of ingestion and reduce the monitoring costs of Container insights. The data collected by Container insights can be customized through the Azure portal, using the following options. Selecting any option other than **All (Default)** leads to the Container insights experience becoming unavailable.
-## Integrations
-The following Azure services and features of Azure Monitor can be used for additional monitoring of your Kubernetes clusters. You can enable these features when you create your AKS cluster (on the Integrations tab when creating the cluster in the Azure portal), or onboard your cluster to them later. Each of these features may include additional cost, so refer to the pricing information for each before you enabled them.
+| Grouping | Tables | Notes |
+| | | |
+| All (Default) | All standard container insights tables | Required for enabling the default container insights visualizations |
+| Performance | Perf, InsightsMetrics | |
+| Logs and events | ContainerLog or ContainerLogV2, KubeEvents, KubePodInventory | Recommended if you enabled managed Prometheus metrics |
+| Workloads, Deployments, and HPAs | InsightsMetrics, KubePodInventory, KubeEvents, ContainerInventory, ContainerNodeInventory, KubeNodeInventory, KubeServices | |
+| Persistent Volumes | InsightsMetrics, KubePVInventory | |
-| Service / Feature | Description |
-|:|:|
-| [Container insights](../azure-monitor/containers/container-insights-overview.md) | Uses a containerized version of the [Azure Monitor agent](../azure-monitor/agents/agents-overview.md) to collect stdout/stderr logs, performance metrics, and Kubernetes events from each node in your cluster, supporting a [variety of monitoring scenarios for AKS clusters](../azure-monitor/containers/container-insights-overview.md#features-of-container-insights). If you don't enable Container insights when you create your cluster, see [Enable Container insights for Azure Kubernetes Service (AKS) cluster](../azure-monitor/containers/container-insights-enable-aks.md) for other options to enable it.<br><br>Container insights stores most of its data in a [Log Analytics workspace](../azure-monitor/logs/log-analytics-workspace-overview.md), and you'll typically use the same one as the [resource logs](#resource-logs) for your cluster. See [Design a Log Analytics workspace architecture](../azure-monitor/logs/workspace-design.md) for guidance on how many workspaces you should use and where to locate them. |
-| [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) | [Prometheus](https://prometheus.io/) is a cloud-native metrics solution from the Cloud Native Compute Foundation and the most common tool used for collecting and analyzing metric data from Kubernetes clusters. Azure Monitor managed service for Prometheus is a fully managed Prometheus-compatible monitoring solution in Azure. If you don't enable managed Prometheus when you create your cluster, see [Collect Prometheus metrics from an AKS cluster](../azure-monitor/essentials/prometheus-metrics-enable.md) for other options to enable it.<br><br>Azure Monitor managed service for Prometheus stores its data in an [Azure Monitor workspace](../azure-monitor/essentials/azure-monitor-workspace-overview.md), which is [linked to a Grafana workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md#link-a-grafana-workspace) so that you can analyze the data with Azure Managed Grafana. |
-| [Azure Managed Grafana](../managed-grafan#link-a-grafana-workspace) details on linking it to your Azure Monitor workspace so it can access Prometheus metrics for your cluster. |
+The **Logs and events** grouping captures the logs from the _ContainerLog_ or _ContainerLogV2_, _KubeEvents_, and _KubePodInventory_ tables, but not the metrics. The recommended path to collect metrics is to enable [Azure Monitor managed service for Prometheus](../azure-monitor/essentials/prometheus-metrics-overview.md) from your AKS cluster and to use [Azure Managed Grafana](../managed-grafan).
++
+#### ContainerLogV2 schema
+Azure Monitor Container Insights provides a schema for container logs known as ContainerLogV2, which is the recommended option. This format includes the following fields to facilitate common queries for viewing data related to AKS and Azure Arc-enabled Kubernetes clusters:
+
+- ContainerName
+- PodName
+- PodNamespace
+
+In addition, this schema is compatible with [Basic Logs](../azure-monitor/logs/basic-logs-configure.md?tabs=portal-1#set-a-tables-log-data-plan) data plan, which offers a low-cost alternative to standard analytics logs. The Basic log data plan lets you save on the cost of ingesting and storing high-volume verbose logs in your Log Analytics workspace for debugging, troubleshooting, and auditing, but not for analytics and alerts. For more information, see [Manage tables in a Log Analytics workspace](../azure-monitor/logs/manage-logs-tables.md?tabs=azure-portal).
+ContainerLogV2 is the default schema for customers who onboard Container insights with managed identity authentication by using ARM templates, Bicep, Terraform, Azure Policy, or the Azure portal. For more information about how to enable ContainerLogV2 through either the cluster's Data Collection Rule (DCR) or ConfigMap, see [Enable the ContainerLogV2 schema](../azure-monitor/containers/container-insights-logging-v2.md?tabs=configure-portal#enable-the-containerlogv2-schema-1).
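As an illustration, a minimal sketch of querying the ContainerLogV2 table from the Azure CLI follows; it assumes the `log-analytics` CLI extension is available and that `<workspace-guid>` is the workspace ID (customer ID) of your Log Analytics workspace.

```azurecli
# Query recent container logs from the ContainerLogV2 table (placeholder workspace GUID).
az extension add --upgrade --name log-analytics
az monitor log-analytics query \
  --workspace <workspace-guid> \
  --analytics-query "ContainerLogV2 | where PodNamespace == 'kube-system' | project TimeGenerated, PodName, ContainerName, LogMessage | take 20"
```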
+
+## Visualization
+Data visualization makes it easier for system administrators and operational engineers to consume the collected information. Instead of looking at raw data, they can use visual representations that quickly surface trends the raw data can hide. You can use Grafana dashboards or native Azure workbooks for data visualization.
+### Azure Managed Grafana
+The most common way to analyze and present Prometheus data is with a Grafana Dashboard. Azure Managed Grafana includes [prebuilt dashboards](../azure-monitor/visualize/grafana-plugin.md#use-out-of-the-box-dashboards) for monitoring Kubernetes clusters including several that present similar information as Container insights views. There are also various community-created dashboards to visualize multiple aspects of a Kubernetes cluster from the metrics collected by Prometheus.
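If you prefer the CLI over the portal, a minimal sketch along these lines creates a Managed Grafana instance and links it while enabling managed Prometheus. It assumes the `amg` extension and the `--enable-azure-monitor-metrics` and `--grafana-resource-id` parameters, so confirm them against the current enablement documentation.

```azurecli
# Create a Managed Grafana workspace and link it while enabling managed Prometheus (flags assumed; verify).
az extension add --upgrade --name amg
az grafana create --name <grafana-name> --resource-group <resource-group>

az aks update --name <cluster-name> --resource-group <resource-group> \
  --enable-azure-monitor-metrics \
  --grafana-resource-id $(az grafana show --name <grafana-name> --resource-group <resource-group> --query id -o tsv)
```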
+### Workbooks
+[Azure Monitor Workbooks](../azure-monitor/visualize/workbooks-overview.md) is a feature in Azure Monitor that provides a flexible canvas for data analysis and the creation of rich visual reports. Container insights includes a recommended set of out-of-the-box reports built on workbooks, and Azure provides built-in workbooks for each service, including Azure Kubernetes Service (AKS), which you can access from the Azure portal. On the **Azure Monitor** menu in the Azure portal, select **Containers**. In the **Monitoring** section, select **Insights**, choose a particular cluster, and then select the **Reports** tab. You can also view them from the [workbook gallery](../azure-monitor/visualize/workbooks-overview.md#the-gallery) in Azure Monitor.
+
+For instance, the [Cluster Optimization Workbook](../azure-monitor/containers/container-insights-reports.md#cluster-optimization-workbook) provides multiple analyzers that give you a quick view of the health and performance of your Kubernetes cluster, each covering a different aspect of the cluster. The workbook requires no configuration once Container insights is enabled on the cluster. Its capabilities include detecting liveness probe failures and their frequencies, identifying and grouping event anomalies that indicate recent increases in event volume for easier analysis, and identifying containers with high or low CPU and memory limits and requests, along with suggested limit and request values for those containers in your AKS clusters. For more information about these workbooks, see [Reports in Container insights](../azure-monitor/containers/container-insights-reports.md).
## Alerts
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
+[Azure Monitor alerts](../azure-monitor/alerts/alerts-overview.md) proactively notify you when Azure Monitor data indicates a possible problem with your cloud infrastructure or application, so you can identify and address issues before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
+Container insights uses two types of metric rules, based on either Prometheus metrics or platform metrics.
-### Metric alerts
-The following table lists the recommended metric alert rules for AKS clusters. You can choose to automatically enable these
-alert rules when the cluster is created. These alerts are based on [platform metrics](#monitoring-data) for the cluster.
+### Prometheus metrics based alerts
+After you [enable collection of Prometheus metrics](#integrations) for your cluster, you can download a collection of [recommended Prometheus alert rules](../azure-monitor/containers/container-insights-metric-alerts.md#enable-prometheus-alert-rules), which includes the following rules (a deployment sketch follows the table):
+
+| Level | Alerts |
+|:|:|
+| Pod level | KubePodCrashLooping<br>Job didn't complete in time<br>Pod container restarted in last 1 hour<br>Ready state of pods is less than 80%<br>Number of pods in failed state are greater than 0<br>KubePodNotReadyByController<br>KubeStatefulSetGenerationMismatch<br>KubeJobNotCompleted<br>KubeJobFailed<br>Average CPU usage per container is greater than 95%<br>Average Memory usage per container is greater than 95%<br>KubeletPodStartUpLatencyHigh |
+| Cluster level | Average PV usage is greater than 80%<br>KubeDeploymentReplicasMismatch<br>KubeStatefulSetReplicasMismatch<br>KubeHpaReplicasMismatch<br>KubeHpaMaxedOut<br>KubeCPUQuotaOvercommit<br>KubeMemoryQuotaOvercommit<br>KubeVersionMismatch<br>KubeClientErrors<br>CPUThrottlingHigh<br>KubePersistentVolumeFillingUp<br>KubePersistentVolumeInodesFillingUp<br>KubePersistentVolumeErrors |
+| Node level | Average node CPU utilization is greater than 80%<br>Working set memory for a node is greater than 80%<br>Number of OOM killed containers is greater than 0<br>KubeNodeUnreachable<br>KubeNodeNotReady<br>KubeNodeReadinessFlapping<br>KubeContainerWaiting<br>KubeDaemonSetNotScheduled<br>KubeDaemonSetMisScheduled<br>KubeletPlegDurationHigh<br>KubeletServerCertificateExpiration<br>KubeletClientCertificateRenewalErrors<br>KubeletServerCertificateRenewalErrors<br>KubeQuotaAlmostFull<br>KubeQuotaFullyUsed<br>KubeQuotaExceeded |
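The recommended rules are distributed as ARM templates, so deploying them from the CLI looks roughly like the following sketch; the template file name and parameter names are placeholders taken from the downloaded template, not fixed values.

```azurecli
# Deploy the downloaded recommended Prometheus alert rule template (file and parameter names are placeholders).
az deployment group create \
  --resource-group <resource-group> \
  --template-file recommendedPrometheusAlerts.json \
  --parameters clusterName=<cluster-name> \
               actionGroupResourceId=<action-group-resource-id> \
               azureMonitorWorkspaceResourceId=<azure-monitor-workspace-resource-id>
```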
+### Platform metric based alerts
+
+The following table lists the recommended metric alert rules for AKS clusters. These alerts are based on [platform metrics](#monitoring-data) for the cluster.
| Condition | Description |
-|:|:|:|
+|:|:|
| CPU Usage Percentage > 95 | Fires when the average CPU usage across all nodes exceeds the threshold. |
| Memory Working Set Percentage > 100 | Fires when the average working set across all nodes exceeds the threshold. |
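As a hedged example, a rule like the first row could be created with the Azure CLI roughly as follows; the metric name `node_cpu_usage_percentage` is an assumption, so check the supported platform metrics list for your cluster before using it.

```azurecli
# Alert when average node CPU usage across the cluster exceeds 95% (metric name assumed; verify).
az monitor metrics alert create \
  --name "AKS node CPU usage over 95 percent" \
  --resource-group <resource-group> \
  --scopes $(az aks show --name <cluster-name> --resource-group <resource-group> --query id -o tsv) \
  --condition "avg node_cpu_usage_percentage > 95" \
  --window-size 5m --evaluation-frequency 5m \
  --action <action-group-resource-id>
```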
-### Prometheus alerts
-When you [enable collection of Prometheus metrics](#integrations) for your cluster, then you can download a collection of [recommended Prometheus alert rules](../azure-monitor/containers/container-insights-metric-alerts.md#enable-prometheus-alert-rules). This includes the following rules:
+### Log based alerts
+[Log alerts](../azure-monitor/alerts/alerts-types.md#log-alerts) let you alert on your [data plane](#aks-data-planecontainer-insights-logs) and [control plane](#aks-control-planeresource-logs) logs. Log alert rules run queries at predefined intervals and fire based on the results. You can check for the count of certain records or perform calculations based on numeric columns.
+
+See [How to create log alerts from Container Insights](../azure-monitor/containers/container-insights-log-alerts.md) and [How to query logs from Container Insights](../azure-monitor/containers/container-insights-log-query.md).
+[Log alerts](../azure-monitor/alerts/alerts-unified-log.md) can measure two different things, which you can use to monitor different scenarios:
+
+- [Result count](../azure-monitor/alerts/alerts-unified-log.md#result-count): Counts the number of rows returned by the query and can be used to work with events such as Windows event logs, Syslog, and application exceptions.
+- [Calculation of a value](../azure-monitor/alerts/alerts-unified-log.md#calculation-of-a-value): Makes a calculation based on a numeric column and can be used to include any number of resources. An example is CPU percentage.
+
+Depending on the alerting scenario, log queries typically compare a DateTime column to the present time by using the `now` operator and looking back over a period such as one hour. To learn how to build log-based alerts, see [Create log alerts from Container insights](../azure-monitor/containers/container-insights-log-alerts.md).
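A minimal sketch of a log alert rule created with the Azure CLI might look like the following; it assumes the `scheduled-query` extension and its current flag names, and the query, thresholds, and names are placeholders to adapt to your own scenario.

```azurecli
# Fire when any pods have been in a Failed state in the evaluation window (placeholder query and names).
# The scheduled-query extension and the --action-groups flag are assumptions; verify against your CLI version.
az extension add --upgrade --name scheduled-query
az monitor scheduled-query create \
  --name "Failed pods" \
  --resource-group <resource-group> \
  --scopes <log-analytics-workspace-resource-id> \
  --condition "count FailedPods > 0" \
  --condition-query FailedPods="KubePodInventory | where PodStatus == 'Failed'" \
  --window-size 15m --evaluation-frequency 15m \
  --action-groups <action-group-resource-id>
```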
-- Average CPU usage per container is greater than 95%-- Average Memory usage per container is greater than 95%-- Number of OOM killed containers is greater than 0-- Average PV usage is greater than 80%-- Pod container restarted in last 1 hour-- Node is not ready-- Ready state of pods is less than 80%-- Job did not complete in time-- Average node CPU utilization is greater than 80%-- Working set memory for a node is greater than 80%-- Disk space usage for a node is greater than 85%-- Number of pods in failed state are greater than 0
+## Network Observability
+[Network observability](./network-observability-overview.md) is an important part of maintaining a healthy and performant Kubernetes cluster. By collecting and analyzing data about network traffic, you can gain insights into how your cluster is operating and identify potential problems before they cause outages or performance degradation.
+When the [Network Observability](/azure/aks/network-observability-overview) add-on is enabled, it collects and converts useful metrics into Prometheus format, and the collected metrics are automatically ingested into Azure Monitor managed service for Prometheus. A Grafana dashboard is available in the Grafana public dashboard repo to visualize the network observability metrics. For detailed setup instructions, see [Network Observability setup](./network-observability-managed-cli.md).
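A rough CLI sketch for turning the add-on on might look like this; the `--enable-network-observability` flag comes from the preview tooling and is an assumption, so follow the linked setup guide for the authoritative steps.

```azurecli
# Enable the Network Observability add-on on an existing cluster (flag assumed from preview tooling; verify).
az extension add --upgrade --name aks-preview
az aks update --name <cluster-name> --resource-group <resource-group> \
  --enable-network-observability
```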
## Next steps <!-- Add additional links. You can change the wording of these and add more if useful. --> -- See [Monitoring AKS data reference](monitor-aks-reference.md) for a reference of the metrics, logs, and other important values created by AKS.
+- See [Monitoring AKS data reference](monitor-aks-reference.md) for a reference of the metrics, logs, and other important values created by AKS.
aks Rdp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/rdp.md
If you need more troubleshooting data, you can [view the Kubernetes primary node
[install-azure-cli]: /cli/azure/install-azure-cli [install-azure-powershell]: /powershell/azure/install-az-ps [ssh-steps]: ssh.md
-[view-primary-logs]: monitor-aks.md#resource-logs
+[view-primary-logs]: monitor-aks.md#aks-control-planeresource-logs
[azure-bastion]: ../bastion/bastion-overview.md
application-gateway How To Header Rewrite Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-header-rewrite-ingress-api.md
Previously updated : 11/6/2023 Last updated : 11/27/2023
metadata:
spec: rules: - host: contoso.com
- httpPort: 80
rewrites: - type: RequestHeaderModifier requestHeaderModifier:
Via the response we should see:
} ```
-Congratulations, you have installed ALB Controller, deployed a backend application and modified header values via Gateway API on Application Gateway for Containers.
+Congratulations, you have installed ALB Controller, deployed a backend application and modified header values via Gateway API on Application Gateway for Containers.
application-gateway How To Url Rewrite Ingress Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/how-to-url-rewrite-ingress-api.md
Previously updated : 11/07/2023 Last updated : 11/27/2023
metadata:
spec: rules: - host: contoso.com
- httpPort: 80
rewrites: - type: URLRewrite urlRewrite:
application-gateway Ipv6 Application Gateway Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ipv6-application-gateway-portal.md
You can also complete this quickstart using [Azure PowerShell](ipv6-application-
## Regions and availability
-The IPv6 Application Gateway preview is available to all public cloud regions where Application Gateway v2 SKU is supported.
+The IPv6 Application Gateway preview is available in all public cloud regions where the Application Gateway v2 SKU is supported. It's also available in [Microsoft Azure operated by 21Vianet](https://www.azure.cn/) and [Azure Government](https://azure.microsoft.com/overview/clouds/government/).
## Limitations
application-gateway Ipv6 Application Gateway Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/ipv6-application-gateway-powershell.md
If you choose to install and use PowerShell locally, this article requires the A
## Regions and availability
-The IPv6 Application Gateway preview is available to all public cloud regions where Application Gateway v2 SKU is supported.
+The IPv6 Application Gateway preview is available in all public cloud regions where the Application Gateway v2 SKU is supported. It's also available in [Microsoft Azure operated by 21Vianet](https://www.azure.cn/) and [Azure Government](https://azure.microsoft.com/overview/clouds/government/).
## Limitations
automation Runbook Input Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runbook-input-parameters.md
Runbook input parameters increase the flexibility of a runbook by allowing data to be passed to it when it's started. These parameters allow runbook actions to be targeted for specific scenarios and environments. This article describes the configuration and use of input parameters in your runbooks.
-You can configure input parameters for PowerShell, PowerShell Workflow, graphical, and Python runbooks. A runbook can have multiple parameters with different data types, or no parameters at all. Input parameters can be mandatory or optional, and you can use default values for optional parameters.
+You can configure input parameters for PowerShell, PowerShell Workflow, graphical, and Python runbooks. A runbook can have multiple parameters with different data types or no parameters. Input parameters can be mandatory or optional, and you can use default values for optional parameters.
You assign values to the input parameters for a runbook when you start it. You can start a runbook from the Azure portal, a web service, or PowerShell. You can also start one as a child runbook that is called inline in another runbook.
+## Input types
+
+Azure Automation supports various input parameter values across the different runbook types. Supported input types for each type of runbook are listed in the following table.
+
+| Runbook type | Supported parameter inputs |
+|||
+| PowerShell | - String <br>- Security.SecureString <br>- INT32 <br>- Boolean <br>- DateTime <br>- Array <br>- Collections.Hashtable <br>- Management.Automation.SwitchParameter |
+| PowerShell Workflow | - String <br>- Security.SecureString <br>- INT32 <br>- Boolean <br>- DateTime <br>- Array <br>- Collections.Hashtable <br>- Management.Automation.SwitchParameter |
+| Graphical PowerShell| - String <br>- INT32 <br>- INT64 <br>- Boolean <br>- Decimal <br>- DateTime <br>- Object |
+| Python | - String |
+ ## Configure input parameters in PowerShell runbooks PowerShell and PowerShell Workflow runbooks in Azure Automation support input parameters that are defined through the following properties. | **Property** | **Description** | |: |: |
-| Type |Required. The data type expected for the parameter value. Any .NET type is valid. |
+| Type |Required. The data type expected for the parameter value. Any .NET type is valid. |
| Name |Required. The name of the parameter. This name must be unique within the runbook, must start with a letter, and can contain only letters, numbers, or underscore characters. |
-| Mandatory |Optional. Boolean value specifying if the parameter requires a value. If you set this to True, a value must be provided when the runbook is started. If you set this to False, a value is optional. If you don't specify a value for the `Mandatory` property, PowerShell considers the input parameter optional by default. |
+| Mandatory |Optional. A Boolean value that specifies whether the parameter requires a value. If you set this to True, a value must be provided when the runbook is started. If you set this to False, a value is optional. If you don't specify a value for the `Mandatory` property, PowerShell considers the input parameter optional by default. |
| Default value |Optional. A value that is used for the parameter if no input value is passed in when the runbook starts. The runbook can set a default value for any parameter. | Windows PowerShell supports more attributes of input parameters than those listed above, such as validation, aliases, and parameter sets. However, Azure Automation currently supports only the listed input parameter properties.
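For illustration, input values can also be supplied when starting a runbook from the Azure CLI. This is a hedged sketch in which the `automation` extension and the `--parameters` argument are assumptions, and the account, runbook, and parameter names are placeholders.

```azurecli
# Start a runbook and pass input parameter values (extension and --parameters argument assumed; verify).
az extension add --upgrade --name automation
az automation runbook start \
  --automation-account-name <automation-account> \
  --resource-group <resource-group> \
  --name <runbook-name> \
  --parameters VMName=contoso-vm RestartCount=3
```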
azure-app-configuration Howto Create Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-create-snapshots.md
In your App Configuration store, go to **Operations** > **Configuration explorer
## Create a snapshot
- > [!IMPORTANT]
- > You may see any error "You are not authorized to view this configuration store data" when you switch to the Snapshots blade in the Azure portal if you opt to use Microsoft Entra authentication in the Configuration explorer or the Feature manager blades. This is a known issue in the Azure portal, and we are working on addressing it. It doesn't affect any scenarios other than the Azure Portal regarding accessing snapshots with Microsoft Entra authentication.
-
-As a temporary workaround, you can switch to using Access keys authentication from either the Configuration explorer or the Feature manager blades. You should then see the Snapshot blade displayed properly, assuming you have permission for the access keys.
- Under **Operations** > **Snapshots**, select **Create a new snapshot**. 1. Enter a **snapshot name** and optionally also add **Tags**.
azure-app-configuration Quickstart Azure Kubernetes Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-kubernetes-service.md
Add following key-values to the App Configuration store and leave **Label** and
mountPath: /app volumes: - name: config-volume
- configMap: configmap-created-by-appconfig-provider
- items:
- - key: mysettings.json
- path: mysettings.json
+ configMap:
+ name: configmap-created-by-appconfig-provider
+ items:
+ - key: mysettings.json
+ path: mysettings.json
``` 3. Run the following command to deploy the changes. Replace the namespace if you are using your existing AKS application.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge overview description: Learn how to use Azure Arc resource bridge to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 11/15/2023 Last updated : 11/27/2023
Azure Arc resource bridge is a Microsoft managed product that is part of the core Azure Arc platform. It is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/index.yml)), and System Center Virtual Machine Manager (SCVMM) [Arc-enabled SCVMM](../system-center-virtual-machine-manager/index.yml).
-Azure Arc resource bridge is a Kubernetes management cluster installed on the customerΓÇÖs on-premises infrastructure. The resource bridge is provided credentials to the infrastructure control plane that allows it to apply guest management services on the on-premises resources. Arc resource bridge enables projection of on-premises resources as ARM resources and management from ARM as ΓÇ£arc-enabledΓÇ¥ Azure resources.
+Azure Arc resource bridge is a Kubernetes management cluster installed on the customer's on-premises infrastructure. The resource bridge is provided credentials to the infrastructure control plane that allows it to apply guest management services on the on-premises resources. Arc resource bridge enables projection of on-premises resources as ARM resources and management from ARM as "Arc-enabled" Azure resources.
Arc resource bridge delivers the following benefits:
By registering resource pools, networks, and VM templates, you can represent a s
### System Center Virtual Machine Manager (SCVMM)
-You can connect an SCVMM management server to Azure by deploying Azure Arc resource bridgeΓÇ»(preview) in the VMM environment. Azure Arc resource bridge enables you to represent the SCVMM resources (clouds, VMs, templates etc.) in Azure and perform various operations on them:
+You can connect an SCVMM management server to Azure by deploying Azure Arc resource bridge in the VMM environment. Azure Arc resource bridge enables you to represent the SCVMM resources (clouds, VMs, templates, and so on) in Azure and perform various operations on them (a CLI sketch follows the list):
* Start, stop, and restart a virtual machine * Control access and add Azure tags * Add, remove, and update network interfaces * Add, remove, and update disks and update VM size (CPU cores and memory)
+* Enable guest management
+* Install extensions
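For example, once a VM is managed through Azure, lifecycle operations can be scripted. The following hedged sketch assumes the `scvmm` CLI extension and uses placeholder names.

```azurecli
# Stop and then start an Arc-enabled SCVMM VM (extension and names are assumptions; verify).
az extension add --upgrade --name scvmm
az scvmm vm stop --resource-group <resource-group> --name <vm-name>
az scvmm vm start --resource-group <resource-group> --name <vm-name>
```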
## Example scenarios
Arc resource bridge typically releases a new version on a monthly cadence, at th
## Next steps
-* Learn more about [how Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md).
-* Learn more about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+* Learn how [Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md).
+* Learn how [Azure Arc-enabled SCVMM extends Azure's governance and management capabilities to System Center managed infrastructure](../system-center-virtual-machine-manager/overview.md).
+* Learn about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.--
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Title: Upgrade Arc resource bridge description: Learn how to upgrade Arc resource bridge using either cloud-managed upgrade or manual upgrade. Previously updated : 11/15/2023 Last updated : 11/27/2023
There are two ways to upgrade Arc resource bridge: cloud-managed upgrades manage
## Cloud-managed upgrade
-As a Microsoft-managed product, Arc resource bridges on a supported [private cloud provider](#private-cloud-providers) with an appliance version 1.0.15 or higher are automatically opted into cloud-manaaged upgrade. With cloud-managed upgrade, Microsoft will manage the upgrade of your Arc resource bridge to be within supported versions provided prerequisites are met. If prerequisites are not met, then cloud managed upgrade will fail.
+As a Microsoft-managed product, Arc resource bridges on a supported [private cloud provider](#private-cloud-providers) with an appliance version 1.0.15 or higher are automatically opted into cloud-managed upgrade. With cloud-managed upgrade, Microsoft manages the upgrade of your Arc resource bridge to be within supported versions, as long as the prerequisites are met. If the prerequisites aren't met, cloud-managed upgrade fails.
-While Microsoft manages the upgrade of your Arc resource bridge, you are still responsible for checking that your resource bridge is healthy, online, in a "Running" status and within the supported versions. Disruptions could cause cloud-managed upgrade to fail and you should remain proactive on the health, version and status of your appliance VM. You can check on the health, version and status of your appliance by using the az arcappliance show command from your management machine or checking the Azure resource of your Arc resource bridge.
+While Microsoft manages the upgrade of your Arc resource bridge, it's still important for you to ensure that your resource bridge is healthy, online, in a `Running` status, and on a supported version. To do so, run the `az arcappliance show` command from your management machine, or check the Azure resource of your Arc resource bridge. If your appliance VM isn't in a healthy state, cloud-managed upgrade might fail, and your version may become unsupported.
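For example, a check from the management machine might look like the following, with placeholder names:

```azurecli
# Check the status, version, and provisioning state of the Arc resource bridge (placeholder names).
az arcappliance show --resource-group <resource-group> --name <resource-bridge-name>
```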
Cloud-managed upgrades are handled through Azure. A notification is pushed to Azure to reflect the state of the appliance VM as it upgrades. As the resource bridge progresses through the upgrade, its status might switch back and forth between different upgrade steps. Upgrade is complete when the appliance VM `status` is `Running` and `provisioningState` is `Succeeded`.
Arc resource bridge can be manually upgraded from the management machine. You mu
Manual upgrade generally takes between 30-90 minutes, depending on network speeds. The upgrade command takes your Arc resource bridge to the next appliance version, which might not be the latest available appliance version. Multiple upgrades could be needed to reach a [supported version](#supported-versions). You can check your appliance version by checking the Azure resource of your Arc resource bridge.
-To manually upgrade your Arc resource bridge, make sure you have installed the latest `az arcappliance` CLI extension by running the extension upgrade command from the management machine:
+To manually upgrade your Arc resource bridge, make sure you're using the latest `az arcappliance` CLI extension by running the extension upgrade command from the management machine:
```azurecli az extension add --upgrade --name arcappliance
To manually upgrade your resource bridge, use the following command:
az arcappliance upgrade <private cloud> --config-file <file path to ARBname-appliance.yaml> ```
-For example, to upgrade a resource bridge on VMware: `az arcappliance upgrade vmware --config-file c:\contosoARB01-appliance.yaml`
+For example, to upgrade a resource bridge on VMware, run: `az arcappliance upgrade vmware --config-file c:\contosoARB01-appliance.yaml`
+
+To upgrade a resource bridge on System Center Virtual Machine Manager (SCVMM), run: `az arcappliance upgrade scvmm --config-file c:\contosoARB01-appliance.yaml`
Or to upgrade a resource bridge on Azure Stack HCI, run: `az arcappliance upgrade hci --config-file c:\contosoARB01-appliance.yaml`
Or to upgrade a resource bridge on Azure Stack HCI, run: `az arcappliance upgrad
Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider.
-For Arc-enabled VMware vSphere, manual upgrade is available, but appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, then another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This will deploy a new Arc resource bridge using the latest version and reconnect pre-existing Azure resources.
+For Arc-enabled VMware vSphere, manual upgrade is available, but appliances on version 1.0.15 and higher automatically receive cloud-managed upgrade as the default experience. Appliances that are earlier than version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This deploys a new Arc resource bridge using the latest version and reconnects pre-existing Azure resources.
+
+Azure Arc VM management (preview) on Azure Stack HCI supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade. However, HCI version 22H2 won't be supported for appliance version 1.0.15 or higher because it's being deprecated. Customers on HCI 22H2 will receive limited support. To use appliance version 1.0.15 or higher, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
-Azure Arc VM management (preview) on Azure Stack HCI supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade. However, HCI version 22H2 will not be supported for appliance version 1.0.15 or higher because it is being deprecated. Customers on HCI 22h2 will receive limited support. To use appliance version 1.0.15 or higher, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
+For Arc-enabled System Center Virtual Machine Manager (SCVMM), the manual upgrade feature is available for appliance version 1.0.14 and higher. Appliances below version 1.0.14 need to perform the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.
-For Arc-enabled System Center Virtual Machine Manager (SCVMM) (preview), the manual upgrade feature is available for appliance version 1.0.14 and higher. Appliances below version 1.0.14 need to perform the recovery option to get to version 1.0.15 or higher. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.
-
## Version releases The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there's a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month. For detailed release info, see the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
For example, if the current version is 1.0.18, then the typical n-3 supported ve
There might be instances where supported versions aren't sequential. For example, version 1.0.18 is released and later found to contain a bug. A hot fix is released in version 1.0.19 and version 1.0.18 is removed. In this scenario, n-3 supported versions become 1.0.19, 1.0.17, 1.0.16, 1.0.15.
-Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month, although it's possible that delays could push the release date further out. Regardless of when a new release comes out, if you are within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
+Arc resource bridge typically releases a new version on a monthly cadence, at the end of the month, although it's possible that delays could push the release date further out. Regardless of when a new release comes out, if you're within n-3 supported versions, then your Arc resource bridge version is supported. To stay updated on releases, visit the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
If a resource bridge isn't upgraded to one of the supported versions (n-3), then it will fall outside the support window and be unsupported. If this happens, it might not always be possible to upgrade an unsupported resource bridge to a newer version, as component services used by Arc resource bridge could no longer be compatible. In addition, the unsupported resource bridge might not be able to provide reliable monitoring and health metrics.
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Title: Overview of the Azure Connected System Center Virtual Machine Manager description: This article provides a detailed overview of the Azure Arc-enabled System Center Virtual Machine Manager. Previously updated : 11/15/2023 Last updated : 11/27/2023 ms.
Arc-enabled System Center VMM allows you to:
- Perform various VM lifecycle operations such as start, stop, pause, and delete VMs on SCVMM managed VMs directly from Azure. - Empower developers and application teams to self-serve VM operations on demand using [Azure role-based access control (RBAC)](https://learn.microsoft.com/azure/role-based-access-control/overview).-- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you a single pane view for your infrastructure across both environments.
+- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments.
- Discover and onboard existing SCVMM managed VMs to Azure.-- Install the Arc-connected machine agents at scale on SCVMM VMs to [govern, protect, configure, and monitor them](https://learn.microsoft.com/azure/azure-arc/servers/overview#supported-cloud-operations).
+- Install the Arc-connected machine agents at scale on SCVMM VMs to [govern, protect, configure, and monitor them](../servers/overview.md#supported-cloud-operations).
## Onboard resources to Azure management at scale
The following image shows the architecture for the Arc-enabled SCVMM:
- Azure Arc-enabled servers interact on the guest operating system level, with no awareness of the underlying infrastructure fabric and the virtualization platform that they're running on. Since Arc-enabled servers also support bare-metal machines, there might, in fact, not even be a host hypervisor in some cases. - Azure Arc-enabled SCVMM is a superset of Arc-enabled servers that extends management capabilities beyond the guest operating system to the VM itself. This provides lifecycle management and CRUD (Create, Read, Update, and Delete) operations on an SCVMM VM. These lifecycle management capabilities are exposed in the Azure portal and look and feel just like a regular Azure VM. Azure Arc-enabled SCVMM also provides guest operating system management, in fact, it uses the same components as Azure Arc-enabled servers.
-You have the flexibility to start with either option, or incorporate the other one later without any disruption. With both options, you will enjoy the same consistent experience.
+You have the flexibility to start with either option, or incorporate the other one later without any disruption. With both options, you'll enjoy the same consistent experience.
### Supported scenarios
The following scenarios are supported in Azure Arc-enabled SCVMM:
- Administrators can use the Azure portal to browse SCVMM inventory and register SCVMM cloud, virtual machines, VM networks, and VM templates into Azure. - Administrators can provide app teams/developers fine-grained permissions on those SCVMM resources through Azure RBAC. - App teams can use Azure interfaces (portal, CLI, or REST API) to manage the lifecycle of on-premises VMs they use for deploying their applications (CRUD, Start/Stop/Restart).-- Administrators can install Arc agents on SCVMM VMs at-scale and install corresponding extensions to leverage Azure management services like Microsoft Defender for Cloud, Azure Update Manager, Azure Monitor, etc.
+- Administrators can install Arc agents on SCVMM VMs at-scale and install corresponding extensions to use Azure management services like Microsoft Defender for Cloud, Azure Update Manager, Azure Monitor, etc.
### Supported VMM versions
-Azure Arc-enabled SCVMM works with VMM 2019 and 2022 versions and supports SCVMM management servers with a maximum of 15000 VMs.
+Azure Arc-enabled SCVMM works with VMM 2019 and 2022 versions and supports SCVMM management servers with a maximum of 15,000 VMs.
### Supported regions Azure Arc-enabled SCVMM is currently supported in the following regions: - East US-- East US2-- West US2-- West US3
+- East US 2
+- West US 2
+- West US 3
+- Central US
- South Central US - UK South - North Europe
azure-arc Quickstart Connect System Center Virtual Machine Manager To Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md
Title: Quick Start for Azure Arc-enabled System Center Virtual Machine Manager
-description: In this QuickStart, you will learn how to use the helper script to connect your System Center Virtual Machine Manager management server to Azure Arc.
+description: In this QuickStart, you learn how to use the helper script to connect your System Center Virtual Machine Manager management server to Azure Arc.
ms. Previously updated : 11/15/2023 Last updated : 11/27/2023
This QuickStart shows you how to connect your SCVMM management server to Azure A
## Prerequisites >[!Note]
->- If VMM server is running on Windows Server 2016 machine, ensure that [Open SSH package](https://github.com/PowerShell/Win32-OpenSSH/releases) is installed.
->- If you deploy an older version of appliance (version lesser than 0.2.25), Arc operation fails with the error *Appliance cluster is not deployed with AAD authentication*. To fix this issue, download the latest version of the onboarding script and deploy the resource bridge again.
->- Azure Arc Resource Bridge deployment using private link is currently not supported.
+> - If the VMM server is running on a Windows Server 2016 machine, ensure that the [Open SSH package](https://github.com/PowerShell/Win32-OpenSSH/releases) is installed.
+> - If you deploy an older version of the appliance (a version earlier than 0.2.25), the Arc operation fails with the error *Appliance cluster is not deployed with AAD authentication*. To fix this issue, download the latest version of the onboarding script and deploy the resource bridge again.
+> - Azure Arc Resource Bridge deployment using private link is currently not supported.
| **Requirement** | **Details** | | | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |
-| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud with minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP is not supported. |
-| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of local administrator account in the SCVMM server. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM as well as the deployment of the Arc Resource bridge VM. |
+| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud with minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP isn't supported. |
+| **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be a part of the local administrator account on the SCVMM server. If the SCVMM server is installed in a high-availability configuration, the user should be a part of the local administrator accounts on all the SCVMM cluster nodes. <br/><br/>This account is used for the ongoing operation of Azure Arc-enabled SCVMM and the deployment of the Arc resource bridge VM. |
| **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. | ## Prepare SCVMM management server
The script execution will take up to half an hour and you'll be prompted for var
| **Parameter** | **Details** | | | |
-| **Azure login** | You would be asked to log in to Azure by visiting [this site](https://www.microsoft.com/devicelogin) and pasting the prompted code. |
+| **Azure login** | You would be asked to sign in to Azure by visiting [this site](https://www.microsoft.com/devicelogin) and pasting the prompted code. |
| **SCVMM management server FQDN/Address** | FQDN for the VMM server (or an IP address). </br> Provide role name if itΓÇÖs a Highly Available VMM deployment. </br> For example: nyc-scvmm.contoso.com or 10.160.0.1 | | **SCVMM Username**</br> (domain\username) | Username for the SCVMM administrator account. The required permissions for the account are listed in the prerequisites above.</br> Example: contoso\contosouser | | **SCVMM password** | Password for the SCVMM admin account | | **Private cloud selection** | Select the name of the private cloud where the Arc resource bridge VM should be deployed. | | **Virtual Network selection** | Select the name of the virtual network to which *Arc resource bridge VM* needs to be connected. This network should allow the appliance to talk to the VMM management server and the Azure endpoints (or internet). |
-| **Static IP pool** | Select the VMM static IP pool that will be used to allot IP address. |
+| **Static IP pool** | Select the VMM static IP pool that will be used to allot the IP address. |
| **Control Plane IP** | Provide a reserved IP address in the same subnet as the static IP pool used for Resource Bridge deployment. This IP address should be outside of the range of static IP pool used for Resource Bridge deployment and shouldn't be assigned to any other machine on the network. |
-| **Appliance proxy settings** | Type ΓÇÿYΓÇÖ if there's a proxy in your appliance network, else type ΓÇÿNΓÇÖ.|
+| **Appliance proxy settings** | Enter *Y* if there's a proxy in your appliance network, else enter *N*.|
| **http** | Address of the HTTP proxy server. | | **https** | Address of the HTTPS proxy server.| | **NoProxy** | Addresses to be excluded from proxy.| |**CertificateFilePath** | For SSL based proxies, provide the path to the certificate. |
-Once the command execution is completed, your setup is complete, and you can try out the capabilities of Azure Arc- enabled SCVMM.
+Once the command execution is completed, your setup is complete, and you can try out the capabilities of Azure Arc-enabled SCVMM.
### Retry command - Windows If for any reason, the appliance creation fails, you need to retry it. Run the command with ```-Force``` to clean up and onboard again. ```powershell-interactive
- ./resource-bridge-onboarding-script.ps1-Force -Subscription <Subscription> -ResourceGroup <ResourceGroup> -AzLocation <AzLocation> -ApplianceName <ApplianceName> -CustomLocationName <CustomLocationName> -VMMservername <VMMservername>
+ ./resource-bridge-onboarding-script.ps1 -Force -Subscription <Subscription> -ResourceGroup <ResourceGroup> -AzLocation <AzLocation> -ApplianceName <ApplianceName> -CustomLocationName <CustomLocationName> -VMMservername <VMMservername>
``` ### Retry command - Linux
If for any reason, the appliance creation fails, you need to retry it. Run the c
>[!NOTE] > - After successful deployment, we recommend maintaining the state of **Arc Resource Bridge VM** as *online*. > - Intermittently appliance might become unreachable when you shut down and restart the VM.
->- After successful deployment, save the config YAML files in a secure location. The config files are required to perform management operations on the resource bridge.
+> - After successful deployment, save the config YAML files in a secure location. The config files are required to perform management operations on the resource bridge.
> - After the execution of command, your setup is complete, and you can try out the capabilities of Azure Arc-enabled SCVMM.
azure-functions Functions Bindings Azure Sql Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-trigger.md
- devx-track-js - devx-track-python - ignite-2023 Previously updated : 11/14/2023 Last updated : 11/27/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure SQL trigger for Functions > [!NOTE]
-> In consumption plan functions, automatic scaling is not available for SQL trigger. Use premium or dedicated plans for [scaling benefits](functions-scale.md) with SQL trigger.
+> In consumption plan functions, automatic scaling is not supported for SQL trigger. If the automatic scaling process stops the function, all processing of events will stop and it will need to be manually restarted.
+>
+> Use premium or dedicated plans for [scaling benefits](functions-scale.md) with SQL trigger.
+>
The Azure SQL trigger uses [SQL change tracking](/sql/relational-databases/track-changes/about-change-tracking-sql-server) functionality to monitor a SQL table for changes and trigger a function when a row is created, updated, or deleted. For configuration details for change tracking for use with the Azure SQL trigger, see [Set up change tracking](#set-up-change-tracking-required). For information on setup details of the Azure SQL extension for Azure Functions, see the [SQL binding overview](./functions-bindings-azure-sql.md).
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
These are the currently supported maximum scale-out values for a single plan in
|Japan West| 100 | 20 | |Jio India West| 100 | 20 | |Korea Central| 100 | 20 |
-|Korea South| Not Available | 20 |
+|Korea South| 40 | 20 |
|North Central US| 100 | 20 | |North Europe| 100 | 100 | |Norway East| 100 | 20 |
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| **Data sent to** | | | | | | | Azure Monitor Logs | Γ£ô | Γ£ô | | | | Azure Monitor Metrics<sup>1</sup> | Γ£ô (Public preview) | | Γ£ô (Public preview) |
-| | Azure Storage | Γ£ô (Preview) | | Γ£ô |
-| | Event Hubs | Γ£ô (Preview) | | Γ£ô |
+| | Azure Storage - for Azure VMs only | Γ£ô (Preview) | | Γ£ô |
+| | Event Hubs - for Azure VMs only | Γ£ô (Preview) | | Γ£ô |
| **Services and features supported** | | | | | | | Microsoft Sentinel | Γ£ô ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | Γ£ô | | | | VM Insights | Γ£ô | Γ£ô | |
-| | Microsoft Defender for Cloud | Γ£ô (Public preview) | Γ£ô | |
-| | Automation Update Management | | Γ£ô | |
+| | Microsoft Defender for Cloud - Only uses MDE agent | | | |
+| | Automation Update Management - Moved to Azure Update Manager | Γ£ô | Γ£ô | |
| | Azure Stack HCI | Γ£ô | | |
-| | Update Manager | N/A (Public preview, independent of monitoring agents) | | |
-| | Change Tracking | Γ£ô (Public preview) | Γ£ô | |
-| | SQL Best Practices Assessment | Γ£ô | | |
+| | Update Manager - no longer uses agents | | | |
+| | Change Tracking | Γ£ô | Γ£ô | |
+| | SQL Best Practices Assessment | Γ£ô | | |
### Linux agents
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| **Data sent to** | | | | | | | | Azure Monitor Logs | Γ£ô | Γ£ô | | | | | Azure Monitor Metrics<sup>1</sup> | Γ£ô (Public preview) | | | Γ£ô (Public preview) |
-| | Azure Storage | Γ£ô (Preview) | | Γ£ô | |
-| | Event Hubs | Γ£ô (Preview) | | Γ£ô | |
+| | Azure Storage - for Azure VMs only | Γ£ô (Preview) | | Γ£ô | |
+| | Event Hubs - for Azure VMs only | Γ£ô (Preview) | | Γ£ô | |
| **Services and features supported** | | | | | | | | Microsoft Sentinel | Γ£ô ([View scope](./azure-monitor-agent-migration.md#migrate-additional-services-and-features)) | Γ£ô | | | | VM Insights | Γ£ô | Γ£ô | |
-| | Microsoft Defender for Cloud | Γ£ô (Public preview) | Γ£ô | |
-| | Automation Update Management | | Γ£ô | |
-| | Update Manager | N/A (Public preview, independent of monitoring agents) | | |
-| | Change Tracking | Γ£ô (Public preview) | Γ£ô | |
+| | Microsoft Defender for Cloud - Only uses MDE agent | | | |
+| | Automation Update Management - Moved to Azure Update Manager | Γ£ô | Γ£ô | |
+| | Update Manager - no longer uses agents | | | |
+| | Change Tracking | Γ£ô | Γ£ô | |
<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
azure-monitor Alerts Troubleshoot Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-troubleshoot-metric.md
To avoid a deployment failure when you try to validate the custom metric's defin
> [!NOTE] > Using the `skipMetricValidation` parameter might also be required when you define an alert rule on an existing custom metric that hasn't been emitted in several days.
-## Process data for a metric alert rule in a specific region
+## How to process data for a metric alert rule in a specific region
You can make sure that an alert rule is processed in a specified region if your metric alert rule is defined with a scope of that region and if it monitors a custom metric.
To enable regional data processing in one of these regions, select the specified
> [!NOTE] > We're continually adding more regions for regional data processing.
-## Alert rule with dynamic threshold fires too much or is too noisy
+## Metric alert rule with dynamic threshold fires too much or is too noisy
If an alert rule that uses dynamic thresholds is too noisy or fires too much, you may need to reduce the sensitivity of your dynamic thresholds alert rule. Use one of the following options: - **Threshold sensitivity:** Set the sensitivity to **Low** to be more tolerant for deviations. - **Number of violations (under Advanced settings):** Configure the alert rule to trigger only if several deviations occur within a certain period of time. This setting makes the rule less susceptible to transient deviations.
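As a hedged illustration, both settings map to the dynamic condition syntax in the Azure CLI; the metric and resource below are placeholders.

```azurecli
# Lower sensitivity and require 4 violations out of 6 evaluation periods before firing (placeholders).
az monitor metrics alert create \
  --name "CPU dynamic threshold" \
  --resource-group <resource-group> \
  --scopes <resource-id> \
  --condition "avg Percentage CPU > dynamic low 4 of 6" \
  --window-size 5m --evaluation-frequency 5m
```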
-## Alert rule with dynamic threshold doesn't fire enough
+## Metric alert rule with dynamic threshold doesn't fire enough
You may find that an alert rule that uses dynamic thresholds doesn't fire or isn't sensitive enough, even though it's configured with high sensitivity. This can happen when the metric's distribution is highly irregular. Consider one of the following solutions to fix the issue: - Move to monitoring a complementary metric that's suitable for your scenario, if applicable. For example, check for changes in success rate rather than failure rate.
If you've reached the quota limit, the following steps might help resolve the is
- Subscription IDs for which the quota limit needs to be increased. - Resource type for the quota increase. Select **Metric alerts** or **Metric alerts (Classic)**. - Requested quota limit.+ ## `Metric not found` error: - **For a platform metric:** Make sure you're using the **Metric** name from [the Azure Monitor supported metrics page](../essentials/metrics-supported.md) and not the **Metric Display Name**. - **For a custom metric:** Make sure that the metric is already being emitted because you can't create an alert rule on a custom metric that doesn't yet exist. Also ensure that you're providing the custom metric's namespace. For a Resource Manager template example, see [Create a metric alert with a Resource Manager template](./alerts-metric-create-templates.md#template-for-a-static-threshold-metric-alert-that-monitors-a-custom-metric). - If you're creating [metric alerts on logs](./alerts-metric-logs.md), ensure appropriate dependencies are included. For a sample template, see [Create Metric Alerts for Logs in Azure Monitor](./alerts-metric-logs.md#resource-template-for-metric-alerts-for-logs).
+## Network error when creating a metric alert rule using dimensions
+
+If you're creating a metric alert rule that uses dimensions, you might encounter a network error. This can happen when the rule specifies a large number of dimension values. For example, a rule that monitors the heartbeat metric for 200 computers, with each computer specified as a unique dimension value, creates a payload that is too large to be sent over the network, and you might receive the following error: `The network connectivity issue encountered for 'microsoft.insights'; cannot fulfill the request`.
+
+To resolve this, we recommend one of the following approaches (a sketch follows the list):
+
+- Define multiple rules, each with a subset of the dimension values.
+- Use the **StartsWith** operator if the dimension values have common names.
+- If there's no need to individually monitor specific dimension values, configure the rule to monitor all dimension values.
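For instance, splitting into multiple rules can be sketched with the CLI's dimension filter syntax; the metric, dimension, and values below are placeholders, not a prescription for your workload.

```azurecli
# One rule per subset of dimension values, using the 'where ... includes' filter (placeholder metric and values).
az monitor metrics alert create \
  --name "Transactions - subset 1" \
  --resource-group <resource-group> \
  --scopes <storage-account-resource-id> \
  --condition "total Transactions > 100 where ApiName includes GetBlob or PutBlob" \
  --window-size 5m --evaluation-frequency 5m
```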
azure-monitor Container Insights Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-cost.md
The following types of data collected from a Kubernetes cluster with Container i
- Container environment variables from every monitored container in the cluster - Completed Kubernetes jobs/pods in the cluster that don't require monitoring - Active scraping of Prometheus metrics-- [Resource log collection](../../aks/monitor-aks.md#resource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
+- [Resource log collection](../../aks/monitor-aks.md#aks-control-planeresource-logs) of Kubernetes main node logs in your Azure Kubernetes Service (AKS) cluster to analyze log data generated by main components, such as `kube-apiserver` and `kube-controller-manager`.
## Control ingestion to reduce cost
azure-monitor Monitor Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/monitor-kubernetes.md
If you have an existing solution for collection of logs, then follow the guidanc
#### Collect control plane logs for AKS clusters
-The logs for AKS control plane components are implemented in Azure as [resource logs](../essentials/resource-logs.md). Container Insights doesn't use these logs, so you need to create your own log queries to view and analyze them. For details on log structure and queries, see [How to query logs from Container Insights](../../aks/monitor-aks.md#resource-logs).
+The logs for AKS control plane components are implemented in Azure as [resource logs](../essentials/resource-logs.md). Container Insights doesn't use these logs, so you need to create your own log queries to view and analyze them. For details on log structure and queries, see [How to query logs from Container Insights](../../aks/monitor-aks.md#aks-control-planeresource-logs).
-[Create a diagnostic setting](../../aks/monitor-aks.md#resource-logs) for each AKS cluster to send resource logs to a Log Analytics workspace. Use [Azure Policy](../essentials/diagnostic-settings-policy.md) to ensure consistent configuration across multiple clusters.
+[Create a diagnostic setting](../../aks/monitor-aks.md#aks-control-planeresource-logs) for each AKS cluster to send resource logs to a Log Analytics workspace. Use [Azure Policy](../essentials/diagnostic-settings-policy.md) to ensure consistent configuration across multiple clusters.
There's a cost for sending resource logs to a workspace, so you should only collect those log categories that you intend to use. For a description of the categories that are available for AKS, see [Resource logs](../../aks/monitor-aks-reference.md#resource-logs). Start by collecting a minimal number of categories and then modify the diagnostic setting to collect additional categories as your needs increase and as you understand your associated costs. You can send logs to an Azure storage account to reduce costs if you need to retain the information for compliance reasons. For details on the cost of ingesting and retaining log data, see [Azure Monitor Logs pricing details](../logs/cost-logs.md).
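As a hedged illustration, a diagnostic setting that collects only the two control plane categories named above might be created with the Azure CLI as follows (all resource IDs are placeholders):

```azurecli
# Sketch: collect only kube-apiserver and kube-controller-manager logs
# from an AKS cluster into a Log Analytics workspace (IDs are placeholders).
az monitor diagnostic-settings create \
  --name "aks-control-plane-logs" \
  --resource "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.ContainerService/managedClusters/myAKSCluster" \
  --workspace "/subscriptions/<sub-id>/resourceGroups/myRG/providers/Microsoft.OperationalInsights/workspaces/myWorkspace" \
  --logs '[{"category": "kube-apiserver", "enabled": true}, {"category": "kube-controller-manager", "enabled": true}]'
```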
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
For details on how to create a diagnostic setting, see [Create diagnostic settin
> [!NOTE] > * Entries in the Activity Log are system generated and can't be changed or deleted. > * Entries in the Activity Log represent control plane changes like a virtual machine restart; any unrelated entries should be written into [Azure Resource Logs](resource-logs.md)
+> * Entries in the Activity Log are typically a result of changes (create, update or delete operations) or an action having been initiated. Operations focused on reading details of a resource are not typically captured.
## Retention period
If a log profile already exists, you first must remove the existing log profile,
|enabled | Yes |True or False. Used to enable or disable the retention policy. If True, then the `days` parameter must be a value greater than zero. | categories |Yes |Space-separated list of event categories that should be collected. Possible values are Write, Delete, and Action. | + ### Data structure changes
Learn more about:
* [Platform logs](./platform-logs-overview.md) * [Activity log event schema](activity-log-schema.md) * [Activity log insights](activity-log-insights.md)+
azure-monitor Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md
Last updated 06/05/2023
-#customer-intent: As an IT manager, I want to understand the data and service resilience benefits Azure Monitor availability zones provide so that can ensure my data and services are sufficiently protected in the event of datacenter failure.
+#customer-intent: As an IT manager, I want to understand the data and service resilience benefits Azure Monitor availability zones provide to ensure my data and services are sufficiently protected in the event of datacenter failure.
# Enhance data and service resilience in Azure Monitor Logs with availability zones
-[Azure availability zones](../../availability-zones/az-overview.md) protect applications and data from datacenter failures and can enhance the resilience of Azure Monitor features that rely on a Log Analytics workspace. This article describes the data and service resilience benefits Azure Monitor availability zones provide by default to [dedicated clusters](logs-dedicated-clusters.md) in supported regions.
+[Azure availability zones](../../reliability/availability-zones-overview.md) protect applications and data from datacenter failures and can enhance the resilience of Azure Monitor features that rely on a Log Analytics workspace. This article describes the data and service resilience benefits Azure Monitor availability zones provide by default to [dedicated clusters](logs-dedicated-clusters.md) in supported regions.
## Prerequisites
> [!NOTE] > Application Insights resources can use availability zones only if they're workspace-based and the workspace uses a dedicated cluster. Classic Application Insights resources can't use availability zones.
-
-## Data resilience - supported regions
-Availability zones protect your data from datacenter failures by relying on datacenters in different physical locations, equipped with independent power, cooling, and networking.
+## How availability zones enhance data and service resilience in Azure Monitor Logs
-> [!NOTE]
-> Moving to a dedicated cluster in a region that supports availablility zones protects data ingested after the move, not historical data.
+Each Azure region that supports availability zones is made of one or more datacenters, or zones, equipped with independent power, cooling, and networking infrastructure.
+
+Azure Monitor Logs availability zones are [zone-redundant](../../reliability/availability-zones-overview.md#zonal-and-zone-redundant-services), which means that Microsoft manages spreading service requests and replicating data across different zones in supported regions. If one zone is affected by an incident, Microsoft manages failover to a different availability zone in the region automatically. You don't need to take any action because switching between zones is seamless.
+
+A subset of the availability zones that support data resilience currently also support service resilience for Azure Monitor Logs, as listed in the [Service resilience - supported regions](#service-resiliencesupported-regions) section. In regions that support service resilience, Azure Monitor Logs service operations - for example, log ingestion, queries, and alerts - can continue in the event of a zone failure. In regions that only support data resilience, your stored data is protected against zonal failures, but service operations might be impacted by regional incidents.
+## Data resilience - supported regions
+ Azure Monitor currently supports data resilience for availability-zone-enabled dedicated clusters in these regions: | Americas | Europe | Middle East | Africa | Asia Pacific |
Azure Monitor currently supports data resilience for availability-zone-enabled d
| West US 3 | Switzerland North | | | | | | Poland Central | | | |
+> [!NOTE]
+> Moving to a dedicated cluster in a region that supports availability zones protects data ingested after the move, not historical data.
+ ## Service resilience - supported regions When available in your region, Azure Monitor availability zones enhance your Azure Monitor service resilience automatically. Physical separation and independent infrastructure makes interruption of service availability in your Log Analytics workspace far less likely because the Log Analytics workspace can rely on resources from a different zone.
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
Previously updated : 04/04/2022 Last updated : 11/26/2023 # Use customer-managed storage accounts in Azure Monitor Logs
To configure your Azure Storage account to use CMKs with Key Vault, use the [Azu
## Link storage accounts to your Log Analytics workspace > [!NOTE]
-> If you link a storage account for queries, or for log alerts, existing queries will be removed from the workspace. Copy saved searches and log alerts that you need before you undertake this configuration. For directions on moving saved queries and log alerts, see [Workspace move procedure](./move-workspace-region.md).
->
-> You can connect up to:
-> - Five storage accounts for the ingestion of custom logs and IIS logs.
-> - One storage account for saved queries.
-> - One storage account for saved log alert queries.
+> - When you link a storage account for privacy and compliance, saved queries and log alerts are typically deleted from the workspace permanently and can't be restored. To prevent the loss of existing saved queries and log alerts, copy them using a template as described in [Workspace move procedure](./move-workspace-region.md).
+> - A single storage account can be linked for custom log and IIS log ingestion, queries, and alerts.
+> - When linking a storage account for custom log and IIS log ingestion, you might need to link more storage accounts, depending on the ingestion rate. You can link up to five storage accounts to a workspace.
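If you manage linked storage from the command line, a minimal Azure CLI sketch might look like the following; the resource group, workspace, and storage account are placeholders, and the `CustomLogs` data source type is assumed for custom log and IIS log ingestion:

```azurecli
# Sketch: link a storage account to a workspace for custom log ingestion
# (resource group, workspace, and storage account names are placeholders).
az monitor log-analytics workspace linked-storage create \
  --resource-group "myResourceGroup" \
  --workspace-name "myWorkspace" \
  --type CustomLogs \
  --storage-accounts "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount"
```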
### Use the Azure portal On the Azure portal, open your workspace menu and select **Linked storage accounts**. A pane shows the linked storage accounts by the use cases previously mentioned (ingestion over Private Link, applying CMKs to saved queries or to alerts).
azure-monitor Workspace Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/workspace-design.md
The following table presents criteria to consider when you design your workspace
| [Azure regions](#azure-regions) | Each workspace resides in a particular Azure region. You might have regulatory or compliance requirements to store data in specific locations. | | [Data ownership](#data-ownership) | You might choose to create separate workspaces to define data ownership. For example, you might create workspaces by subsidiaries or affiliated companies. | | [Split billing](#split-billing) | By placing workspaces in separate subscriptions, they can be billed to different parties. |
-| [Data retention and archive](#data-retention-and-archive) | You can set different retention settings for each table in a workspace. You need a separate workspace if you require different retention settings for different resources that send data to the same tables. |
+| [Data retention and archive](#data-retention-and-archive) | You can set different retention settings for each workspace and each table in a workspace. You need a separate workspace if you require different retention settings for different resources that send data to the same tables. |
| [Commitment tiers](#commitment-tiers) | Commitment tiers allow you to reduce your ingestion cost by committing to a minimum amount of daily data in a single workspace. | | [Legacy agent limitations](#legacy-agent-limitations) | Legacy virtual machine agents have limitations on the number of workspaces they can connect to. | | [Data access control](#data-access-control) | Configure access to the workspace and to different tables and data from different resources. |
+|[Resilience](#resilience)| To ensure that data in your workspace is available in the event of a region failure, you can ingest data into multiple workspaces in different regions.|
### Operational and security data The decision whether to combine your operational data from Azure Monitor in the same workspace as security data from Microsoft Sentinel or separate each into their own workspace depends on your security requirements and the potential cost implications for your environment.
For example, you might grant access to only specific tables collected by Microso
- **If you don't require granular access control by table:** Grant the operations and security team access to their resources and allow resource owners to use resource-context RBAC for their resources. - **If you require granular access control by table:** Grant or deny access to specific tables by using table-level RBAC.
+### Resilience
+
+To ensure that critical data in your workspace is available in the event of a region failure, you can ingest some or all of your data into multiple workspaces in different regions.
+
+This option requires managing integration with other services and products separately for each workspace. Even though the data will be available in the alternate workspace in case of failure, resources that rely on the data, such as alerts and workbooks, won't know to switch over to the alternate workspace. Consider storing ARM templates for critical resources with configuration for the alternate workspace in Azure DevOps, or as disabled policies that can quickly be enabled in a failover scenario.
+ ## Work with multiple workspaces Many designs will include multiple workspaces, so Azure Monitor and Microsoft Sentinel include features to assist you in analyzing this data across workspaces. For more information, see:
azure-netapp-files Azure Netapp Files Resize Capacity Pools Or Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resize-capacity-pools-or-volumes.md
For information about monitoring a volume's capacity, see [Monitor the capacit
## Considerations * Volume quotas are indexed against `maxfiles` limits. Once a volume has surpassed a `maxfiles` limit, you cannot reduce the volume size below the quota that corresponds to that `maxfiles` limit. For more information and specific limits, see [`maxfiles` limits](azure-netapp-files-resource-limits.md#maxfiles-limits-).
-* Capacity pools with Basic network features have a minimum size of 4 TiB. For capacity pools with Standard network features, the minimum size is 2 TiB. For more information, see [Resource limits](azure-netapp-files-resource-limits.md)
+* Capacity pools with Basic network features have a minimum size of 4 TiB. For capacity pools with Standard network features, the minimum size is 1 TiB. For more information, see [Resource limits](azure-netapp-files-resource-limits.md)
## Resize the capacity pool using the Azure portal
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
The following table describes resource limits for Azure NetApp Files:
| Number of volumes per capacity pool | 500 | Yes | | Number of snapshots per volume | 255 | No | | Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No |
-| Minimum size of a single capacity pool | 2 TiB* | No |
+| Minimum size of a single capacity pool | 1 TiB* | No |
| Maximum size of a single capacity pool | 1000 TiB | Yes | | Minimum size of a single regular volume | 100 GiB | No | | Maximum size of a single regular volume | 100 TiB | No |
The following table describes resource limits for Azure NetApp Files:
| Maximum number of manual backups per volume per day | 5 | No | | Maximum number of volumes supported for cool access per subscription per region | 10 | Yes |
-\* [!INCLUDE [Limitations for capacity pool minimum of 2 TiB](includes/2-tib-capacity-pool.md)]
+
+\* [!INCLUDE [Limitations for capacity pool minimum of 1 TiB](includes/2-tib-capacity-pool.md)]
For more information, see [Capacity management FAQs](faq-capacity-management.md).
azure-netapp-files Azure Netapp Files Set Up Capacity Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-set-up-capacity-pool.md
Creating a capacity pool enables you to create volumes within it.
## Before you begin
-* You must have already [created a NetApp account](azure-netapp-files-create-netapp-account.md).
+* You must have already [created a NetApp account](azure-netapp-files-create-netapp-account.md).
* If you are using Azure CLI, ensure that you are using the latest version. For more information, see [How to update the Azure CLI](/cli/azure/update-azure-cli). * If you are using PowerShell, ensure that you are using the latest version of the Az.NetAppFiles module. To update to the latest version, use the 'Update-Module Az.NetAppFiles' command. For more information, see [Update-Module](/powershell/module/powershellget/update-module). * If you are using the Azure REST API, ensure that you are specifying the latest version.
+* If this is your first time using a 1-TiB capacity pool, you must first register the feature:
+ 1. Register the feature:
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANF1TiBPoolSize
+ ```
+ 2. Check the status of the feature registration:
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANF1TiBPoolSize
+ ```
+ You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
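A minimal sketch of those Azure CLI calls, using the same feature name as in the PowerShell steps above:

```azurecli
# Register the 1-TiB capacity pool feature with Azure CLI
az feature register --namespace Microsoft.NetApp --name ANF1TiBPoolSize

# Check the registration state; wait until it shows "Registered"
az feature show --namespace Microsoft.NetApp --name ANF1TiBPoolSize --query properties.state
```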
## Steps
Creating a capacity pool enables you to create volumes within it.
* **Size** Specify the size of the capacity pool that you are purchasing.
- The minimum capacity pool size is 2 TiB. You can change the size of a capacity pool in 1-TiB increments.
- > [!NOTE]
- > You can only take advantage of the 2-TiB minimum if all the volumes in the capacity pool are using Standard network features. If any volume is using Basic network features, the minimum size is 4 TiB.
+ The minimum capacity pool size is 1 TiB. You can change the size of a capacity pool in 1-TiB increments.
+
+ >[!NOTE]
+ >[!INCLUDE [Limitations for capacity pool minimum of 1 TiB](includes/2-tib-capacity-pool.md)]
* **Enable cool access** *(for Standard service level only)* This option specifies whether volumes in the capacity pool will support cool access. This option is currently supported for the Standard service level only. For details about using this option, see [Manage Azure NetApp Files standard storage with cool access](manage-cool-access.md).
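As a hedged illustration of the **Size** step above, a capacity pool can also be created from the Azure CLI; the account, pool name, region, and service level below are placeholders, and `--size` is expressed in TiB:

```azurecli
# Sketch: create a 1-TiB capacity pool (names and region are placeholders).
az netappfiles pool create \
  --resource-group "myResourceGroup" \
  --account-name "myNetAppAccount" \
  --name "mypool1" \
  --location "eastus" \
  --size 1 \
  --service-level Premium
```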
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
na Previously updated : 02/23/2023 Last updated : 07/27/2023 # Storage hierarchy of Azure NetApp Files
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
na Previously updated : 11/08/2023 Last updated : 11/27/2023 # What's new in Azure NetApp Files Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.-
+
## November 2023
-* [Standard network features is US Gov regions](azure-netapp-files-network-topologies.md#supported-regions) is now generally available (GA)
+* [Capacity pool enhancement:](azure-netapp-files-set-up-capacity-pool.md) New lower limits
+
+ * 2 TiB capacity pool: The 2 TiB lower limit for capacity pools using Standard network features is now generally available (GA).
+
+ * 1 TiB capacity pool: Azure NetApp Files now supports a lower limit of 1 TiB for capacity pool sizing with Standard network features. This feature is currently in preview.
+
+* [Metrics enhancement: Throughput limits](azure-netapp-files-metrics.md#volumes)
+
+ Azure NetApp Files now supports a "throughput limit reached" metric for volumes. The metric is a Boolean value that denotes whether the volume is hitting its QoS limit. With this metric, you know whether to adjust volumes so they meet the specific needs of your workloads.
+
+* [Standard network features in US Gov regions](azure-netapp-files-network-topologies.md#supported-regions) is now generally available (GA)
Azure NetApp Files now supports Standard network features for new volumes in US Gov Arizona, US Gov Texas, and US Gov Virginia. Standard network features provide an enhanced virtual networking experience for a seamless and consistent security posture across all workloads, including Azure NetApp Files.
Azure NetApp Files is updated regularly. This article provides a summary about t
In addition to Citrix App Layering, FSLogix user profiles including FSLogix ODFC containers, and Microsoft SQL Server, Azure NetApp Files now supports [MSIX app attach](../virtual-desktop/create-netapp-files.md) with SMB Continuous Availability shares to enhance resiliency during storage service maintenance operations. Continuous Availability enables SMB transparent failover to eliminate disruptions as a result of service maintenance events and improves reliability and user experience.
-* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md#supported-regions) in select US Gov regions
+* [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md#supported-regions) in US Gov regions
Azure NetApp Files now supports [Azure NetApp Files datastores for Azure VMware Solution](../azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md?tabs=azure-portal) in US Gov Arizona and US Gov Virginia regions. Azure NetApp Files datastores for Azure VMware Solution provide the ability to scale storage independently of compute and can go beyond the limits of the local instance storage provided by vSAN, reducing total cost of ownership.
azure-resource-manager Bicep Config Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-linter.md
Title: Linter settings for Bicep config
description: Describes how to customize configuration values for the Bicep linter Previously updated : 10/05/2023 Last updated : 11/27/2023 # Add linter settings in the Bicep config file
The following example shows the rules that are available for configuration.
"decompiler-cleanup": { "level": "warning" },
+ "explicit-values-for-loc-params": {
+ "level": "warning"
+ },
"max-outputs": { "level": "warning" },
The following example shows the rules that are available for configuration.
}, "nested-deployment-template-scoping": { "level": "error"
- }
+ },
"no-conflicting-metadata" : { "level": "warning" }, "no-deployments-resources" : { "level": "warning"
- }
+ },
"no-hardcoded-env-urls": { "level": "warning" },
For the rule about hardcoded environment URLs, you can customize which URLs are
"api.loganalytics.io", "api.loganalytics.iov1", "asazure.windows.net",
- "azuredatalakestore.net",
"azuredatalakeanalytics.net",
+ "azuredatalakestore.net",
"batch.core.windows.net", "core.windows.net", "database.windows.net",
azure-resource-manager Bicep Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-string.md
The following example shows a comparison between using interpolation and using t
```bicep param prefix string = 'prefix'
-output concatOutput string = concat(prefix, uniqueString(resourceGroup().id))
+output concatOutput string = concat(prefix, 'And', uniqueString(resourceGroup().id))
output interpolationOutput string = '${prefix}And${uniqueString(resourceGroup().id)}' ```
azure-resource-manager Bicep Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-import.md
Functionality that has been imported from another file can be used without restr
### Example
-module.bicep
+exports.bicep
```bicep @export()
azure-resource-manager Bicep Using https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-using.md
Last updated 10/11/2023
The `using` statement in [Bicep parameter files](./parameter-files.md) ties the [Bicep parameters file](./parameter-files.md) to a [Bicep file](./file.md), an [ARM JSON template](../templates/syntax.md), a [Bicep module](./modules.md), or a [template spec](./template-specs.md). A `using` declaration must be present in any Bicep parameters file. > [!NOTE]
-> The Bicep parameters file is only supported in [Bicep CLI](./install.md) version 0.18.4 or later, and [Azure CLI](/azure/developer/azure-developer-cli/install-azd?tabs=winget-windows%2Cbrew-mac%2Cscript-linux&pivots=os-windows) version 2.47.0 or later.
+> The Bicep parameters file is only supported in [Bicep CLI](./install.md) version 0.18.4 or newer, [Azure CLI](/cli/azure/install-azure-cli) version 2.47.0 or newer, and [Azure PowerShell](/powershell/azure/install-azure-powershell) version 9.7.1 or newer.
>
-> To use the statement with ARM JSON templates, Bicep modules, and template specs, you need to have [Bicep CLI](./install.md) version 0.22.6 or later, and [Azure CLI](/azure/developer/azure-developer-cli/install-azd?tabs=winget-windows%2Cbrew-mac%2Cscript-linux&pivots=os-windows) version 2.53.0 or later.
+> To use the statement with ARM JSON templates, Bicep modules, and template specs, you need to have [Bicep CLI](./install.md) version 0.22.6 or later, and [Azure CLI](/cli/azure/install-azure-cli) version 2.53.0 or later.
## Syntax
azure-resource-manager Deployment Script Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-script-bicep.md
The benefits of deployment script:
- Allow passing command-line arguments to the script. - Can specify script outputs and pass them back to the deployment.
-The deployment script resource is only available in the regions where Azure Container Instance is available. See [Resource availability for Azure Container Instances in Azure regions](../../container-instances/container-instances-region-availability.md). Currently, deployment script only uses public networking.
+The deployment script resource is only available in the regions where Azure Container Instance is available. See [Resource availability for Azure Container Instances in Azure regions](../../container-instances/container-instances-region-availability.md).
> [!IMPORTANT] > The deployment script service requires two supporting resources for script execution and troubleshooting: a storage account and a container instance. You can specify an existing storage account, otherwise the script service creates one for you. The two automatically-created supporting resources are usually deleted by the script service when the deployment script execution gets in a terminal state. You are billed for the supporting resources until they are deleted. For the price information, see [Container Instances pricing](https://azure.microsoft.com/pricing/details/container-instances/) and [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). To learn more, see [Clean-up deployment script resources](#clean-up-deployment-script-resources).
azure-resource-manager Linter Rule Explicit Values For Loc Params https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter-rule-explicit-values-for-loc-params.md
Title: Linter rule - use explicit values for module location parameters
description: Linter rule - use explicit values for module location parameters Previously updated : 06/23/2023 Last updated : 11/27/2023 # Linter rule - use explicit values for module location parameters
azure-resource-manager Linter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/linter.md
Title: Use Bicep linter
description: Learn how to use Bicep linter. Previously updated : 10/13/2023 Last updated : 11/27/2023 # Use Bicep linter
The default set of linter rules is minimal and taken from [arm-ttk test cases](.
- [adminusername-should-not-be-literal](./linter-rule-admin-username-should-not-be-literal.md) - [artifacts-parameters](./linter-rule-artifacts-parameters.md) - [decompiler-cleanup](./linter-rule-decompiler-cleanup.md)
+- [explicit-values-for-loc-params](./linter-rule-explicit-values-for-loc-params.md)
- [max-outputs](./linter-rule-max-outputs.md) - [max-params](./linter-rule-max-parameters.md) - [max-resources](./linter-rule-max-resources.md)
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
Last updated 11/03/2023
Rather than passing parameters as inline values in your script, you can use a Bicep parameters file with the `.bicepparam` file extension or a JSON parameters file that contains the parameter values. This article shows how to create parameters files. > [!NOTE]
-> The Bicep parameters file is only supported in [Bicep CLI](./install.md) version 0.18.4 or newer, and [Azure CLI](/azure/developer/azure-developer-cli/install-azd?tabs=winget-windows%2Cbrew-mac%2Cscript-linux&pivots=os-windows) version 2.47.0 or newer.
+> The Bicep parameters file is only supported in [Bicep CLI](./install.md) version 0.18.4 or newer, [Azure CLI](/cli/azure/install-azure-cli) version 2.47.0 or newer, and [Azure PowerShell](/powershell/azure/install-azure-powershell) version 9.7.1 or newer.
A single Bicep file can have multiple Bicep parameters files associated with it. However, each Bicep parameters file is intended for one particular Bicep file. This relationship is established using the [`using` statement](./bicep-using.md) within the Bicep parameters file.
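For example, once a Bicep parameters file is associated with its Bicep file, it can be passed at deployment time. A minimal Azure CLI sketch with placeholder file and resource group names:

```azurecli
# Sketch: deploy a Bicep file together with its .bicepparam parameters file
# (file names and resource group are placeholders).
az deployment group create \
  --resource-group "exampleRG" \
  --template-file main.bicep \
  --parameters main.bicepparam
```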
communication-services Known Limitations Acs Telephony https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/known-limitations-acs-telephony.md
This article provides information about limitations and known issues related to
- Location-based routing isn't supported. - No quality dashboard is available for customers. - Enhanced 911 isn't supported.
+- In-band DTMF is not supported; use RFC 2833 DTMF instead.
## Next steps
communication-services Monitor Direct Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/monitoring-troubleshooting-telephony/monitor-direct-routing.md
Title: "Monitor Azure Communication Services direct routing" Previously updated : 11/15/2023 Last updated : 11/27/2023 audience: ITPro
description: Learn how to monitor Azure Communication Services direct routing configuration, including Session Border Controllers, cloud components, and Telecom trunks.
-# Monitor direct routing
+# Monitor Direct Routing
This article describes how to monitor your direct routing configuration.
-The ability to make and receive calls by using direct routing involves the following components:
+The process of making and receiving calls through direct routing involves the following components:
- Session Border Controllers (SBCs) - Direct routing components in the Microsoft Cloud
If you have difficulties troubleshooting issues, you can open a support case wit
Microsoft is working on providing more tools for troubleshooting and monitoring. Check the documentation periodically for updates.
-## Monitoring availability of Session Border Controllers using Session Initiation Protocol (SIP) OPTIONS messages
+## Monitoring Availability of Session Border Controllers using Session Initiation Protocol (SIP) OPTIONS Messages
Azure Communication Services direct routing uses SIP OPTIONS sent by the Session Border Controller to monitor SBC health. There are no actions required from the Azure administrator to enable the SIP OPTIONS monitoring. The collected information is taken into consideration when routing decisions are made.
When an SBC stops sending OPTIONS but isn't yet marked as demoted, Azure tries to
If two (or more) SBCs in one route are considered healthy and equal, Fisher-Yates shuffle is applied to distribute the calls between the SBCs.
-## Monitor with Azure portal and SBC logs
+## Monitor with Azure Portal and SBC logs
-In some cases, especially during the initial pairing, there might be issues related to misconfiguration of the SBCs or the direct routing service.
+During the initial pairing phase, there may be issues particularly related to misconfiguration of the SBCs or the direct routing service.
-You can use the following tools to monitor your configuration:
+The following tools can be used to monitor your configuration:
- Azure portal - SBC logs In the direct routing section of Azure portal, you can check [SBC connection status](../direct-routing-provisioning.md#session-border-controller-connection-status).
-If calls can be made, you can also check [Azure monitors logs](../../analytics/logs/voice-and-video-logs.md) that provide descriptive SIP error codes
+If calls can be made, [Azure Monitor logs](../../analytics/logs/voice-and-video-logs.md) can help provide descriptive SIP error codes.
SBC logs are also a great source of data for troubleshooting. Refer to your SBC vendor's documentation on how to configure and collect those logs.
communication-services Audio Conferencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/calling-sdk/audio-conferencing.md
+
+ Title: Teams Meeting Audio Conferencing
+
+description: Use Azure Communication Services SDKs to retrieve Teams Meeting Audio Conferencing Details
+++++ Last updated : 09/28/2023++++
+# Microsoft Teams Meeting Audio Conferencing
+In this article, you learn how to use the Azure Communication Services Calling SDK to retrieve Microsoft Teams Meeting audio conferencing details. This functionality allows users who are already connected to a Microsoft Teams Meeting to get the conference ID and dial-in phone number associated with the meeting. At present, the Teams audio conferencing feature returns a conference ID and only one dial-in toll or toll-free phone number, depending on the priority assigned. In the future, the Teams audio conferencing feature will return a collection of all toll and toll-free numbers, giving users control over which Teams meeting dial-in details to use.
+
+## Prerequisites
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- A deployed Communication Services resource. [Create a Communication Services resource](../../quickstarts/create-communication-resource.md).
+- A user access token to enable the calling client. For more information, see [Create and manage access tokens](../../quickstarts/identity/access-tokens.md).
+- Optional: Complete the quickstart to [add voice calling to your application](../../quickstarts/voice-video-calling/getting-started-with-calling.md)
++
+## Next steps
+- [Learn how to manage calls](./manage-calls.md)
+- [Learn how to manage video](./manage-video.md)
+- [Learn how to record calls](./record-calls.md)
+- [Learn how to transcribe calls](./call-transcription.md)
confidential-computing Confidential Enclave Nodes Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-enclave-nodes-aks-get-started.md
spec:
metadata: labels: app: oe-helloworld
- spec::
+ spec:
affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution:
az aks delete --resource-group myResourceGroup --cluster-name myAKSCluster
<!-- LINKS --> [az-group-create]: /cli/azure/group#az_group_create [az-aks-create]: /cli/azure/aks#az_aks_create
-[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
confidential-ledger Create Blob Managed App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-blob-managed-app.md
Once a Managed Application is created, you're able to then connect the Managed A
### Create a topic and event subscription for the storage account
-The Managed Application uses an Azure Service Bus Queue to track and record all **Create Blob** events. You can add this Queue as an Event Subscriber for any storage account that you're creating blobs for.
+The Managed Application uses an Azure Service Bus Queue to track and record all **Create Blob** events. You will use the Queue created in the Managed Resource Group by the Managed Application and add it as an Event Subscriber for any storage account that you're creating blobs for.
-#### Azure portal
+### [Azure portal](#tab/azure-portal)
:::image type="content" source="./media/managed-application/managed-app-event-subscription-inline.png" alt-text="Screenshot of the Azure portal in a web browser, showing how to set up a storage event subscription." lightbox="./media/managed-application/managed-app-event-subscription-enhanced.png":::
On the Azure portal, you can navigate to the storage account that you would like
The queue uses sessions to maintain ordering across multiple storage accounts so you will also need to navigate to the `Delivery Properties` tab and to enter a unique session ID for this event subscription.
-#### Azure CLI
+### [CLI](#tab/cli-or-sdk)
**Creating the Event Topic:**
-```bash
+```azurecli
az eventgrid system-topic create \ --resource-group {resource_group} \ --name {sample_topic_name} \
az eventgrid system-topic create \
**Creating the Event Subscription:**
-```bash
+```azurecli
az eventgrid system-topic event-subscription create \ --name {sample_subscription_name} \ --system-topic-name {sample_topic_name} \
az eventgrid system-topic event-subscription create \
`endpoint` - Resource ID of the service bus queue that is subscribing to the storage account Topic ++ ### Add required role to storage account The Managed Application requires the `Storage Blob Data Owner` role to read and create hashes for each blob and this role is required to be added in order for the digest to be calculated correctly.
-#### Azure portal
+### [Azure portal](#tab/azure-portal)
:::image type="content" source="./media/managed-application/managed-app-managed-identity-inline.png" alt-text="Screenshot of the Azure portal in a web browser, showing how to set up a managed identity for the managed app." lightbox="./media/managed-application/managed-app-managed-identity-enhanced.png":::
-#### Azure CLI
+### [CLI](#tab/cli-or-sdk)
-```bash
+```azurecli
az role assignment create \ --role "Storage Blob Data Owner" \ --assignee-object-id {function_oid} \
az role assignment create \
`scope` - Resource ID of storage account to create the role for ++ > [!NOTE] > Multiple storage accounts can be connected to a single Managed Application instance. We currently recommend a maximum of **10 storage accounts** that contain high usage blob containers.
The transaction table holds information about each blob and a unique hash that i
The block table holds information related to every digest that is created for the blob container, and the associated transaction ID for the digest is stored in Azure Confidential Ledger.
+> [!NOTE]
+> Not every blob creation event results in a digest being created. Digests are created after a certain block size is reached. Currently, a digest is created for every **4 blob creation events**.
### Viewing digest on Azure Confidential Ledger
An audit can be triggered by adding the following message to the Service Bus
} ```
-#### Azure portal
+### [Azure portal](#tab/azure-portal)
:::image type="content" source="./media/managed-application/managed-app-queue-trigger-audit-inline.png" alt-text="Screenshot of the Azure portal in a web browser, how to trigger an audit by adding a message to the queue." lightbox="./media/managed-application/managed-app-queue-trigger-audit-enhanced.png"::: Be sure to include a `Session ID` as the queue has sessions enabled.
-#### Azure Service Bus Python SDK
+### [Python SDK](#tab/cli-or-sdk)
```python import json
message = {
message = ServiceBusMessage(json.dumps(message), session_id=SESSION_ID) sender.send_messages(message) ```+ ### Viewing audit results
data-factory Parameters Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameters-data-flow.md
- Title: Parameterizing mapping data flows-
-description: Learn how to parameterize a mapping data flow from Azure Data Factory and Azure Synapse Analytics pipelines
------- Previously updated : 07/20/2023--
-# Parameterizing mapping data flows
--
-Mapping data flows in Azure Data Factory and Synapse pipelines support the use of parameters. Define parameters inside of your data flow definition and use them throughout your expressions. The parameter values are set by the calling pipeline via the Execute Data Flow activity. You have three options for setting the values in the data flow activity expressions:
-
-* Use the pipeline control flow expression language to set a dynamic value
-* Use the data flow expression language to set a dynamic value
-* Use either expression language to set a static literal value
-
-Use this capability to make your data flows general-purpose, flexible, and reusable. You can parameterize data flow settings and expressions with these parameters.
-
-## Create parameters in a mapping data flow
-
-To add parameters to your data flow, click on the blank portion of the data flow canvas to see the general properties. In the settings pane, you will see a tab called **Parameter**. Select **New** to generate a new parameter. For each parameter, you must assign a name, select a type, and optionally set a default value.
--
-## Use parameters in a mapping data flow
-
-Parameters can be referenced in any data flow expression. Parameters begin with $ and are immutable. You will find the list of available parameters inside of the Expression Builder under the **Parameters** tab.
--
-You can quickly add additional parameters by selecting **New parameter** and specifying the name and type.
--
-## Assign parameter values from a pipeline
-
-Once you've created a data flow with parameters, you can execute it from a pipeline with the Execute Data Flow Activity. After you add the activity to your pipeline canvas, you will be presented with the available data flow parameters in the activity's **Parameters** tab.
-
-When assigning parameter values, you can use either the [pipeline expression language](control-flow-expression-language-functions.md) or the [data flow expression language](data-transformation-functions.md) based on spark types. Each mapping data flow can have any combination of pipeline and data flow expression parameters.
--
-### Pipeline expression parameters
-
-Pipeline expression parameters allow you to reference system variables, functions, pipeline parameters, and variables similar to other pipeline activities. When you click **Pipeline expression**, a side-nav will open allowing you to enter an expression using the expression builder.
--
-When referenced, pipeline parameters are evaluated and then their value is used in the data flow expression language. The pipeline expression type doesn't need to match the data flow parameter type.
-
-#### String literals vs expressions
-
-When assigning a pipeline expression parameter of type string, by default quotes will be added and the value will be evaluated as a literal. To read the parameter value as a data flow expression, check the expression box next to the parameter.
--
-If data flow parameter `stringParam` references a pipeline parameter with value `upper(column1)`.
--- If expression is checked, `$stringParam` evaluates to the value of column1 all uppercase.-- If expression is not checked (default behavior), `$stringParam` evaluates to `'upper(column1)'`-
-#### Passing in timestamps
-
-In the pipeline expression language, System variables such as `pipeline().TriggerTime` and functions like `utcNow()` return timestamps as strings in format 'yyyy-MM-dd\'T\'HH:mm:ss.SSSSSSZ'. To convert these into data flow parameters of type timestamp, use string interpolation to include the desired timestamp in a `toTimestamp()` function. For example, to convert the pipeline trigger time into a data flow parameter, you can use `toTimestamp(left('@{pipeline().TriggerTime}', 23), 'yyyy-MM-dd\'T\'HH:mm:ss.SSS')`.
--
-> [!NOTE]
-> Data Flows can only support up to 3 millisecond digits. The `left()` function is used trim off additional digits.
-
-#### Pipeline parameter example
-
-Say you have an integer parameter `intParam` that is referencing a pipeline parameter of type String, `@pipeline.parameters.pipelineParam`.
--
-`@pipeline.parameters.pipelineParam` is assigned a value of `abs(1)` at runtime.
--
-When `$intParam` is referenced in an expression such as a derived column, it will evaluate `abs(1)` return `1`.
--
-### Data flow expression parameters
-
-Select **Data flow expression** will open up the data flow expression builder. You will be able to reference functions, other parameters and any defined schema column throughout your data flow. This expression will be evaluated as is when referenced.
-
-> [!NOTE]
-> If you pass in an invalid expression or reference a schema column that doesn't exist in that transformation, the parameter will evaluate to null.
--
-### Passing in a column name as a parameter
-
-A common pattern is to pass in a column name as a parameter value. If the column is defined in the data flow schema, you can reference it directly as a string expression. If the column isn't defined in the schema, use the `byName()` function. Remember to cast the column to its appropriate type with a casting function such as `toString()`.
-
-For example, if you wanted to map a string column based upon a parameter `columnName`, you can add a derived column transformation equal to `toString(byName($columnName))`.
--
-> [!NOTE]
-> In data flow expressions, string interpolation (substituting variables inside of the string) is not supported. Instead, concatenate the expression into string values. For example, `'string part 1' + $variable + 'string part 2'`
-
-## Next steps
-* [Execute data flow activity](control-flow-execute-data-flow-activity.md)
-* [Control flow expressions](control-flow-expression-language-functions.md)
+
+ Title: Parameterizing mapping data flows
+
+description: Learn how to parameterize a mapping data flow from Azure Data Factory and Azure Synapse Analytics pipelines
+++++++ Last updated : 11/15/2023++
+# Parameterizing mapping data flows
++
+Mapping data flows in Azure Data Factory and Synapse pipelines support the use of parameters. Define parameters inside of your data flow definition and use them throughout your expressions. The parameter values are set by the calling pipeline via the Execute Data Flow activity. You have three options for setting the values in the data flow activity expressions:
+
+* Use the pipeline control flow expression language to set a dynamic value
+* Use the data flow expression language to set a dynamic value
+* Use either expression language to set a static literal value
+
+Use this capability to make your data flows general-purpose, flexible, and reusable. You can parameterize data flow settings and expressions with these parameters.
+
+## Create parameters in a mapping data flow
+
+To add parameters to your data flow, click on the blank portion of the data flow canvas to see the general properties. In the settings pane, you'll see a tab called **Parameter**. Select **New** to generate a new parameter. For each parameter, you must assign a name, select a type, and optionally set a default value.
++
+## Use parameters in a mapping data flow
+
+Parameters can be referenced in any data flow expression. Parameters begin with $ and are immutable. You'll find the list of available parameters inside of the Expression Builder under the **Parameters** tab.
++
+You can quickly add additional parameters by selecting **New parameter** and specifying the name and type.
++
+## Using parameterized linked services in a mapping data flow
+
+Parameterized linked services can be used in a mapping data flow (for either dataset or inline source types).
+
+For the inline source type, the linked service parameters are exposed in the data flow activity settings within the pipeline as shown below.
++
+For the dataset source type, the linked service parameters are exposed directly in the dataset configuration.
+
+## Assign parameter values from a pipeline
+
+Once you've created a data flow with parameters, you can execute it from a pipeline with the Execute Data Flow Activity. After you add the activity to your pipeline canvas, you'll be presented with the available data flow parameters in the activity's **Parameters** tab.
+
+When assigning parameter values, you can use either the [pipeline expression language](control-flow-expression-language-functions.md) or the [data flow expression language](data-transformation-functions.md) based on spark types. Each mapping data flow can have any combination of pipeline and data flow expression parameters.
++
+### Pipeline expression parameters
+
+Pipeline expression parameters allow you to reference system variables, functions, pipeline parameters, and variables similar to other pipeline activities. When you click **Pipeline expression**, a side-nav will open allowing you to enter an expression using the expression builder.
++
+When referenced, pipeline parameters are evaluated and then their value is used in the data flow expression language. The pipeline expression type doesn't need to match the data flow parameter type.
+
+#### String literals vs expressions
+
+When assigning a pipeline expression parameter of type string, by default quotes will be added and the value will be evaluated as a literal. To read the parameter value as a data flow expression, check the expression box next to the parameter.
++
+If data flow parameter `stringParam` references a pipeline parameter with value `upper(column1)`.
+
+- If expression is checked, `$stringParam` evaluates to the value of column1 all uppercase.
+- If expression isn't checked (default behavior), `$stringParam` evaluates to `'upper(column1)'`
+
+#### Passing in timestamps
+
+In the pipeline expression language, System variables such as `pipeline().TriggerTime` and functions like `utcNow()` return timestamps as strings in format 'yyyy-MM-dd\'T\'HH:mm:ss.SSSSSSZ'. To convert these into data flow parameters of type timestamp, use string interpolation to include the desired timestamp in a `toTimestamp()` function. For example, to convert the pipeline trigger time into a data flow parameter, you can use `toTimestamp(left('@{pipeline().TriggerTime}', 23), 'yyyy-MM-dd\'T\'HH:mm:ss.SSS')`.
++
+> [!NOTE]
+> Data Flows can only support up to 3 millisecond digits. The `left()` function is used to trim off additional digits.
+
+#### Pipeline parameter example
+
+Say you have an integer parameter `intParam` that is referencing a pipeline parameter of type String, `@pipeline.parameters.pipelineParam`.
++
+`@pipeline.parameters.pipelineParam` is assigned a value of `abs(1)` at runtime.
++
+When `$intParam` is referenced in an expression such as a derived column, it will evaluate `abs(1)` and return `1`.
++
+### Data flow expression parameters
+
+Selecting **Data flow expression** will open up the data flow expression builder. You'll be able to reference functions, other parameters, and any defined schema column throughout your data flow. This expression will be evaluated as is when referenced.
+
+> [!NOTE]
+> If you pass in an invalid expression or reference a schema column that doesn't exist in that transformation, the parameter will evaluate to null.
++
+### Passing in a column name as a parameter
+
+A common pattern is to pass in a column name as a parameter value. If the column is defined in the data flow schema, you can reference it directly as a string expression. If the column isn't defined in the schema, use the `byName()` function. Remember to cast the column to its appropriate type with a casting function such as `toString()`.
+
+For example, if you wanted to map a string column based upon a parameter `columnName`, you can add a derived column transformation equal to `toString(byName($columnName))`.
++
+> [!NOTE]
+> In data flow expressions, string interpolation (substituting variables inside of the string) isn't supported. Instead, concatenate the expression into string values. For example, `'string part 1' + $variable + 'string part 2'`
+
+## Next steps
+* [Execute data flow activity](control-flow-execute-data-flow-activity.md)
+* [Control flow expressions](control-flow-expression-language-functions.md)
defender-for-cloud Advanced Configurations For Malware Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/advanced-configurations-for-malware-scanning.md
Title: Microsoft Defender for Storage - advanced configurations for malware scanning description: Learn about the advanced configurations of Microsoft Defender for Storage malware scanning Previously updated : 08/21/2023 Last updated : 11/20/2023
Overriding the settings of the subscriptions are usually used for the following
To configure the settings of individual storage accounts different from those configured on the subscription level using the Azure portal:
-1. Sign in to the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Navigate to your storage account that you want to configure custom settings.
defender-for-cloud Concept Agentless Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-data-collection.md
Agentless scanning for VMs provides vulnerability assessment and software invent
||| |Release state:| GA | |Pricing:|Requires either [Defender Cloud Security Posture Management (CSPM)](concept-cloud-security-posture-management.md) or [Microsoft Defender for Servers Plan 2](plan-defender-for-servers-select-plan.md#plan-features)|
-| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management)<br />:::image type="icon" source="./media/icons/yes-icon.png":::Secret scanning (Preview) |
+| Supported use cases:| :::image type="icon" source="./media/icons/yes-icon.png"::: Vulnerability assessment (powered by Defender Vulnerability Management)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Software inventory (powered by Defender Vulnerability Management)<br />:::image type="icon" source="./media/icons/yes-icon.png":::Secret scanning |
| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Azure Commercial clouds<br> :::image type="icon" source="./media/icons/no-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Microsoft Azure operated by 21Vianet<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects | | Operating systems: | :::image type="icon" source="./media/icons/yes-icon.png"::: Windows<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Linux | | Instance and disk types: | **Azure**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Standard VMs<br>:::image type="icon" source="./media/icons/no-icon.png"::: Unmanaged disks<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Virtual machine scale set - Flex<br>:::image type="icon" source="./media/icons/no-icon.png"::: Virtual machine scale set - Uniform<br><br>**AWS**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: EC2<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Auto Scale instances<br>:::image type="icon" source="./media/icons/no-icon.png"::: Instances with a ProductCode (Paid AMIs)<br><br>**GCP**<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Compute instances<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Instance groups (managed and unmanaged) |
defender-for-cloud Concept Regulatory Compliance Standards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-regulatory-compliance-standards.md
Title: Regulatory compliance standards in Microsoft Defender for Cloud
description: Learn about regulatory compliance standards in Microsoft Defender for Cloud Previously updated : 01/10/2023 Last updated : 11/27/2023 # Regulatory compliance standards
You can drill down into controls to get information about resources that have pa
By default, when you enable Defender for Cloud, the following standards are enabled: - **Azure**: The [Microsoft Cloud Security Benchmark (MCSB)](concept-regulatory-compliance.md) is enabled for Azure subscriptions.-- **AWS**: AWS accounts get the [AWS Foundational Security Best Practices standard](https://docs.aws.amazon.com/securityhub/latest/userguide/fsbp-standard.html) assigned. This standard contains AWS-specific guidelines for security and compliance best practices based on common compliance frameworks. AWS accounts also have MCSB assigned by default.
+- **AWS**: AWS accounts get the [AWS Foundational Security Best Practices standard](https://docs.aws.amazon.com/securityhub/latest/userguide/fsbp-standard.html) and [Microsoft Cloud Security Benchmark (MCSB)](concept-regulatory-compliance.md) assigned by default. The AWS Foundational Security Best Practices standard contains AWS-specific guidelines for security and compliance best practices based on common compliance frameworks.
- **GCP**: GCP projects get the GCP Default standard assigned.
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Title: Create custom security standards for Azure resources in Microsoft Defende
description: Learn how to create custom security standards for Azure resources in Microsoft Defender for Cloud Previously updated : 10/30/2023 Last updated : 11/27/2023 zone_pivot_groups: manage-asc-initiatives
Security recommendations in Microsoft Defender for Cloud help you to improve and
:::image type="content" source="media/custom-security-policies/create-custom-standard.png" alt-text="Screenshot that shows how to create a custom security standard." lightbox="media/custom-security-policies/create-custom-standard.png":::
-1. In **Create a new standard** > **Basics**, enter a name and description. Make sure the name is unique. If you create a custom standard with the same name as an existing standard, it causes a conflict in the information displayed in the dashboard.
+1. Enter a name and description.
+
+ > [!IMPORTANT]
+ > Make sure the name is unique. If you create a custom standard with the same name as an existing standard, it causes a conflict in the information displayed in the dashboard.
1. Select **Next**.
Security recommendations in Microsoft Defender for Cloud help you to improve and
:::image type="content" source="media/custom-security-policies/select-recommendations.png" alt-text="Screenshot that shows the list of all of the recommendations that are available to select for the custom standard." lightbox="media/custom-security-policies/select-recommendations.png":::
-1. (Optional) Select **...** > **Manage effect and parameters** to manage the effects and parameters of each recommendation, and save the setting.
+1. (Optional) Select the three dot button (**...**) > **Manage effect and parameters** to manage the effects and parameters of each recommendation, and save the setting.
1. Select **Next**. 1. In **Review + create**, select **Create**.
-Your new standard takes effect after you create it. Here's what you'll see:
+Your new standard takes effect after you create it. You can see the effects of your new standard:
-- In Defender for Cloud > **Regulatory compliance**, the compliance dashboard shows the new custom standard alongside existing standards.
+- On the Regulatory compliance page, you will see the new custom standard alongside existing standards.
- If your environment doesn't align with the custom standard, you begin to receive recommendations to fix issues found on the **Recommendations** page.

## Create a custom recommendation

If you want to create a custom recommendation for Azure resources, you currently need to do that in Azure Policy, as follows:
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
After creating the exemption it can take up to 30 minutes to take effect. After
- If you've exempted specific resources, they'll be listed in the **Not applicable** tab of the recommendation details page. - If you've exempted a recommendation, it will be hidden by default on Defender for Cloud's recommendations page. This is because the default options of the **Recommendation status** filter on that page are to exclude **Not applicable** recommendations. The same is true if you exempt all recommendations in a security control. --- ## Next steps [Review exempted resources](review-exemptions.md) in Defender for Cloud.
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
description: Learn how to drive remediation of security recommendations with gov
Previously updated : 10/29/2023 Last updated : 11/27/2023 # Drive remediation with governance rules
For tracking, you can review the progress of the remediation tasks by subscripti
## Before you begin -- To use governance rules, the [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md) must be enabled.-- You need **Contributor**, **Security Admin**, or **Owner** permissions on Azure subscriptions.-- For AWS accounts and GCP projects, you need **Contributor**, **Security Admin**, or **Owner** permissions on the Defender for Cloud AWS/GCP connectors.
+- The [Defender Cloud Security Posture Management (CSPM) plan](concept-cloud-security-posture-management.md) must be enabled.
+- You need **Contributor**, **Security Admin**, or **Owner** permissions on the Azure subscriptions.
+- For AWS accounts and GCP projects, you need **Contributor**, **Security Admin**, or **Owner** permissions on the Defender for Cloud AWS or GCP connectors.
## Define a governance rule
-Define a governance rule as follows.
+**You can define a governance rule as follows**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings** > **Governance rules**.
-1. In Defender for Cloud, open the **Environment settings** page, and select **Governance rules**.
1. Select **Create governance rule**.
-1. In **Create governance rule** > **General details**, specify a rule name, and the scope in which the rule applies.
+
+ :::image type="content" source="./media/governance-rules/add-rule.png" alt-text="Screenshot of page for adding a governance rule." lightbox="media/governance-rules/add-rule.png":::
+
+1. Specify a rule name and scope in which to apply the rule.
- Rules for management scope (Azure management groups, AWS master accounts, GCP organizations) are applied prior to the rules on a single scope. - You can define exclusions within the scope as needed.
-1. Priority is assigned automatically. Rules are run in priority order from the highest (1) to the lowest (1000).
-1. Specify a description to help you identify the rule. Then select **Next**.
+1. Set a priority level.
- :::image type="content" source="./media/governance-rules/add-rule.png" alt-text="Screenshot of page for adding a governance rule." lightbox="media/governance-rules/add-rule.png":::
+ Rules are run in priority order from the highest (1) to the lowest (1000).
+
+1. Specify a description to help you identify the rule.
+
+1. Select **Next**.
+
+1. Specify how recommendations are impacted by the rule.
-1. In the **Conditions** tab, specify how recommendations are impacted by the rule.
- **By severity** - The rule assigns the owner and due date to any recommendation in the subscription that doesn't already have them assigned. - **By specific recommendations** - Select the specific built-in or custom recommendations that the rule applies to.
-1. In **Set owner**, specify who's responsible for fixing recommendations covered by the rule.
+
+ :::image type="content" source="./media/governance-rules/create-rule-conditions.png" alt-text="Screenshot of page for adding conditions for a governance rule." lightbox="media/governance-rules/create-rule-conditions.png":::
+
+1. Set the owner to specify who's responsible for fixing recommendations covered by the rule.
+ - **By resource tag** - Enter the resource tag on your resources that defines the resource owner. - **By email address** - Enter the email address of the owner to assign to the recommendations.
-1. In **Set remediation timeframe**, specify the time that can elapse between when resources are identified as requiring remediation, and the time that the remediation is due.
-1. For recommendations issued by MCSB, if you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**.
-1. By default owners and their managers are notified weekly about open and overdue tasks. If you don't want them to receive these weekly emails, clear the notification options.
-1. Select **Create**.
+1. Specify the remediation timeframe to set how much time can elapse between when resources are identified as requiring remediation and when the remediation is due.
- :::image type="content" source="./media/governance-rules/create-rule-conditions.png" alt-text="Screenshot of page for adding conditions for a governance rule." lightbox="media/governance-rules/create-rule-conditions.png":::
+ For recommendations issued by MCSB, if you don't want the resources to affect your secure score until they're overdue, select **Apply grace period**.
+1. (Optional) By default, owners and their managers are notified weekly about open and overdue tasks. If you don't want them to receive these weekly emails, clear the notification options.
-- If there are existing recommendations that match the definition of the governance rule, you can either:
+1. Select **Create**.
+
+If there are existing recommendations that match the definition of the governance rule, you can either:
- - Assign an owner and due date to recommendations that don't already have an owner or due date.
- - Overwrite the owner and due date of existing recommendations.
-- When you delete or disable a rule, all existing assignments and notifications remain.
+- Assign an owner and due date to recommendations that don't already have an owner or due date.
+- Overwrite the owner and due date of existing recommendations.
+
+When you delete or disable a rule, all existing assignments and notifications remain.
## View effective rules

You can view the effect of governance rules in your environment.
-1. In the Defender for Cloud portal, open the **Governance rules** page.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings** > **Governance rules**.
+ 1. Review governance rules. The default list shows all the governance rules applicable in your environment.+ 1. You can search for rules, or filter rules.+ - Filter on **Environment** to identify rules for Azure, AWS, and GCP.
+
- Filter on rule name, owner, or time between the recommendation being issued and due date.
+
- Filter on **Grace period** to find MCSB recommendations that won't affect your secure score.
+
- Identify by status. :::image type="content" source="./media/governance-rules/view-filter-rules.png" alt-text="Screenshot of page for viewing and filtering rules." lightbox="media/governance-rules/view-filter-rules.png":::
+## Review the governance report
+The governance report lets you select subscriptions that have governance rules and, for each rule and owner, shows you how many recommendations are completed, on time, overdue, or unassigned.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to **Microsoft Defender for Cloud** > **Environment settings** > **Governance rules** >**Governance report**.
-## Review the governance report
+ :::image type="content" source="media/governance-rules/governance-report.png" alt-text="Screenshot of the governance rules page that shows where the governance report button is located." lightbox="media/governance-rules/governance-report.png":::
-The governance report lets you select subscriptions that have governance rules and, for each rule and owner, shows you how many recommendations are completed, on time, overdue, or unassigned.
-
-1. In Defender for Cloud > **Environment settings** > **Governance rules**, select **Governance report**.
-1. In **Governance**, select a subscription.
+1. Select a subscription.
:::image type="content" source="./media/governance-rules/governance-in-workbook.png" alt-text="Screenshot of governance status by rule and owner in the governance workbook." lightbox="media/governance-rules/governance-in-workbook.png":::
-1. From the governance report, you drill down into recommendations by rule and owner.
-
+From the governance report, you can drill down into recommendations by scope, display name, priority, remediation timeframe, owner type, owner details, grace period and cloud.
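If you want to track the same assignments outside the portal, Azure Resource Graph exposes Defender for Cloud data that you can query at scale. The following is a minimal sketch only: the resource type string and the property names (`owner`, `remediationDueDate`) are assumptions about how governance assignments are surfaced, so verify them in your tenant before relying on the query.

```kusto
// Sketch: list governance assignments (owner and due date per recommendation).
// The type string and property names below are assumptions; verify them in your tenant.
securityresources
| where type == "microsoft.security/assessments/governanceassignments"
| extend owner = tostring(properties.owner),
         dueDate = todatetime(properties.remediationDueDate)
| project id, owner, dueDate, subscriptionId
| order by dueDate asc
```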
## Next steps - Learn how to [Implement security recommendations](implement-security-recommendations.md).
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
description: Learn how to remediate security recommendations in Microsoft Defend
Previously updated : 11/08/2023 Last updated : 11/22/2023 + # Remediate security recommendations Resources and workloads protected by Microsoft Defender for Cloud are assessed against built-in and custom security standards enabled in your Azure subscriptions, AWS accounts, and GCP projects. Based on those assessments, security recommendations provide practical steps to remediate security issues, and improve security posture.
This article describes how to remediate security recommendations in your Defende
Before you attempt to remediate a recommendation you should review it in detail. Learn how to [review security recommendations](review-security-recommendations.md).
+> [!IMPORTANT]
+> This page describes how to use the new recommendations experience, in which you can prioritize your recommendations by their effective risk level. To view this experience, you must select **Try it now**.
+>
+> :::image type="content" source="media/review-security-recommendations/try-it-now.png" alt-text="Screenshot that shows where the try it now button is located on the recommendation page." lightbox="media/review-security-recommendations/try-it-now.png":::
+ ## Group recommendations by risk level Before you start remediating, we recommend grouping your recommendations by risk level in order to remediate the most critical recommendations first.
Before you start remediating, we recommend grouping your recommendations by risk
Recommendations are displayed in groups of risk levels.
-1. Review critical and other recommendations to understand the recommendation and remediation steps. Use the graph to understand the risk to your business, including which resources are exploitable, and the effect that the recommendation has on your business.
-
+You can now review critical and other recommendations to understand the recommendation and remediation steps. Use the graph to understand the risk to your business, including which resources are exploitable, and the effect that the recommendation has on your business.
## Remediate recommendations
After reviewing recommendations by risk, decide which one to remediate first.
In addition to risk level, we recommend that you prioritize the security controls in the default [Microsoft Cloud Security Benchmark (MCSB)](concept-regulatory-compliance.md) standard in Defender for Cloud, since these controls affect your [secure score](secure-score-security-controls.md).
+1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the **Recommendations** page, select the recommendation you want to remediate.
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
-1. In the recommendation details page, select **Take action** > **Remediate**.
-1. Follow the remediation instructions.
+1. Select a recommendation to remediate.
- As an example, the following screenshot shows remediation steps for configuring applications to only allow traffic over HTTPS.
+1. Select **Take action**.
- :::image type="content" source="./media/implement-security-recommendations/security-center-remediate-recommendation.png" alt-text="This screenshots shows manual remediation steps for a recommendation." lightbox="./media/implement-security-recommendations/security-center-remediate-recommendation.png":::
+1. Locate the Remediate section and follow the remediation instructions.
-1. Once completed, a notification appears informing you whether the issue is resolved.
+ :::image type="content" source="./media/implement-security-recommendations/security-center-remediate-recommendation.png" alt-text="This screenshot shows manual remediation steps for a recommendation." lightbox="./media/implement-security-recommendations/security-center-remediate-recommendation.png":::
## Use the Fix option
-To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a **Fix** option to help you quickly remediate a recommendation on multiple resources.
+To simplify remediation and improve your environment's security (and increase your secure score), many recommendations include a **Fix** option to help you quickly remediate a recommendation on multiple resources. If a recommendation doesn't show a **Fix** button, a quick fix isn't available for it.
+
+**To remediate a recommendation with the Fix button**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Recommendations**.
+
+1. Select a recommendation to remediate.
-1. In the **Recommendations** page, select a recommendation that shows the **Fix** action icon: :::image type="icon" source="media/implement-security-recommendations/fix-icon.png" border="false":::.
+1. Select **Take action** > **Fix**.
:::image type="content" source="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png" alt-text="This screenshot shows recommendations with the Fix action" lightbox="./media/implement-security-recommendations/microsoft-defender-for-cloud-recommendations-fix-action.png":::
-1. In **Take action**, select **Fix**.
1. Follow the rest of the remediation steps. -
-After remediation completes, it can take several minutes to see the resources appear in the **Findings** tab when the status is filtered to view **Healthy** resources.
+After remediation completes, it can take several minutes for the change to take place.
## Next steps [Learn about](governance-rules.md) using governance rules in your remediation processes.--
defender-for-cloud Integration Servicenow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-servicenow.md
Title: Integrate ServiceNow with Microsoft Defender for Cloud description: Learn about integrating ServiceNow with Microsoft Defender for Cloud to protect Azure, hybrid, and multicloud machines.--- Previously updated : 11/13/2023 Last updated : 11/26/2023 # Integrate ServiceNow with Microsoft Defender for Cloud (preview)
As part of the integration, you can create and monitor tickets in ServiceNow dir
| Prerequisite | Details | |--||
-| Environment | - Have an application registry in ServiceNow. For more information, see [Create a ServiceNow API Client ID and Client Secret for the SCOM ServiceNow Incident Connector (opslogix.com)](https://www.opslogix.com/knowledgebase/servicenow/kb-create-a-servicenow-api-key-and-secret-for-the-scom-servicenow-incident-connector) <br>- Enable Defender Cloud Security Posture Management (DCSPM) |
+| Environment | - Have an application registry in ServiceNow. <br>- Enable Defender Cloud Security Posture Management (DCSPM) |
| Roles | To create an integration:<br>- Security Admin<br>- Contributor<br>- Owner<br><br>To create an assignment:<br>- The user should have admin permissions to ServiceNow |
| Cloud | &#x2705; Azure <br> &#10060; Azure Government, Azure China 21Vianet, air-gapped clouds |
As part of the integration, you can create and monitor tickets in ServiceNow dir
To onboard ServiceNow to Defender for Cloud, you need a Client ID and Client Secret for the ServiceNow instance. If you don't have a Client ID and Client Secret, follow these steps to create them: 1. Sign in to ServiceNow with an account that has permission to modify the Application Registry.
-1. Browse to **System OAuth**, and select **Application Registry**.
+
+1. Navigate to **System OAuth** > **Application Registry**.
:::image type="content" border="true" source="./media/integration-servicenow/app-registry.png" alt-text="Screenshot of application registry."::: 1. In the upper right corner, select **New**.
- :::image type="content" border="true" source="./media/integration-servicenow/new.png" alt-text="Screenshot of where to start a new instance.":::
+ :::image type="content" border="true" source="./media/integration-servicenow/new.png" alt-text="Screenshot of where to start a new instance." lightbox="media/integration-servicenow/new.png":::
1. Select **Create an OAuth API endpoint for external clients**.
Secret:
>[!NOTE] >The default value of Refresh Token Lifespan is too small. Increase the value as much as possible so that you don't need to refresh the token soon.
- :::image type="content" border="true" source="./media/integration-servicenow/app-details.png" alt-text="Screenshot of application details.":::
+ :::image type="content" border="true" source="./media/integration-servicenow/app-details.png" alt-text="Screenshot of application details." lightbox="media/integration-servicenow/app-details.png":::
1. Select **Submit** to save the API Client ID and Client Secret.
After you complete these steps, you can use this integration name (MDCIntegratio
1. Select **Add integration** > **ServiceNow**.
- :::image type="content" border="true" source="./media/integration-servicenow/add-servicenow.png" alt-text="Screenshot of how to add ServiceNow.":::
+ :::image type="content" border="true" source="./media/integration-servicenow/add-servicenow.png" alt-text="Screenshot of how to add ServiceNow." lightbox="media/integration-servicenow/add-servicenow.png":::
Use the instance URL, name, password, Client ID, and Client Secret that you previously created for the application registry to help complete the ServiceNow general information.
After you complete these steps, you can use this integration name (MDCIntegratio
For simplicity, we recommend creating the integration at the highest scope that your permissions allow. For example, if you have permission for a management group, create a single integration for the management group rather than separate integrations for each of its subscriptions.
-1. Choose **Default** or **Customized** based on your requirement.
-
+1. Select **Default** or **Customized** based on your requirement.
+
The default option creates a Title, Description and Short description in the backend. The customized option lets you choose other fields such as **Incident data**, **Problems data**, and **Changes data**. :::image type="content" border="true" source="./media/integration-servicenow/customize-fields.png" alt-text="Screenshot of how to customize fields.":::
defender-for-cloud Manage Mcsb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/manage-mcsb.md
This article describes how you can manage recommendations provided by MCSB.
- **Enforce** lets you take advantage of the **DeployIfNotExist** effect in Azure Policy, and automatically remediate non-compliant resources upon creation.
+ > [!NOTE]
+ > Enforce and Deny are applicable to Azure recommendations and are supported on a subset of recommendations.
To review which recommendations you can deny and enforce, in the **Security policies** page, on the **Standards** tab, select **Microsoft cloud security benchmark** and drill into a recommendation to see if the deny/enforce actions are available.
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
To connect your Azure DevOps organization to Defender for Cloud by using a nativ
- Select **all existing organizations** to auto-discover all projects and repositories in organizations you are currently a Project Collection Administrator in. - Select **all existing and future organizations** to auto-discover all projects and repositories in all current and future organizations you are a Project Collection Administrator in.
+> [!NOTE]
+> **Third-party application access via OAuth** must be set to `On` for each Azure DevOps organization. [Learn more about OAuth and how to enable it in your organizations](/azure/devops/organizations/accounts/change-application-access-policies).
+ Since Azure DevOps repositories are onboarded at no additional cost, autodiscover is applied across the organization to ensure Defender for Cloud can comprehensively assess the security posture and respond to security threats across your entire DevOps ecosystem. Organizations can later be manually added and removed through **Microsoft Defender for Cloud** > **Environment settings**.
-1. Select **Next: Review and generate**.
+11. Select **Next: Review and generate**.
-1. Review the information, and then select **Create**.
+12. Review the information, and then select **Create**.
> [!NOTE] > To ensure proper functionality of advanced DevOps posture capabilities in Defender for Cloud, only one instance of an Azure DevOps organization can be onboarded to the Azure Tenant you are creating a connector in.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 11/23/2023 Last updated : 11/27/2023 # What's new in Microsoft Defender for Cloud?
If you're looking for items older than six months, you can find them in the [Arc
| Date | Update | |--|--|
+| November 27 | [General availability of agentless secret scanning in Defender for Servers and Defender CSPM](#general-availability-of-agentless-secret-scanning-in-defender-for-servers-and-defender-cspm) |
| November 22 | [Enable permissions management with Defender for Cloud (Preview)](#enable-permissions-management-with-defender-for-cloud-preview) | | November 22 | [Defender for Cloud integration with ServiceNow](#defender-for-cloud-integration-with-servicenow) | | November 20| [General Availability of the autoprovisioning process for SQL Servers on machines plan](#general-availability-of-the-autoprovisioning-process-for-sql-servers-on-machines-plan)|
If you're looking for items older than six months, you can find them in the [Arc
| November 15 | [General Availability release of sensitive data discovery for databases](#general-availability-release-of-sensitive-data-discovery-for-databases) | | November 6 | [New version of the recommendation to find missing system updates is now GA](#new-version-of-the-recommendation-to-find-missing-system-updates-is-now-ga) |
+### General availability of agentless secret scanning in Defender for Servers and Defender CSPM
+
+November 27, 2023
+
+Agentless secret scanning enhances the security of cloud-based virtual machines (VMs) by identifying plaintext secrets on VM disks. It provides comprehensive information to help you prioritize detected findings and mitigate lateral movement risks before they occur. This proactive approach prevents unauthorized access, ensuring your cloud environment remains secure.
+
+We're announcing the General Availability (GA) of agentless secret scanning, which is included in both the [Defender for Servers P2](tutorial-enable-servers-plan.md) and the [Defender CSPM](tutorial-enable-cspm-plan.md) plans.
+
+Agentless secret scanning utilizes cloud APIs to capture snapshots of your disks, conducting out-of-band analysis with no effect on your VM's performance. Agentless secret scanning broadens the coverage offered by Defender for Cloud over cloud assets across Azure, AWS, and GCP environments to enhance your cloud security.
+
+With this release, Defender for Cloud's detection capabilities now support additional database types, data store signed URLs, access tokens, and more.
+
+Learn how to [manage secrets with agentless secret scanning](secret-scanning.md).
+ ### Enable permissions management with Defender for Cloud (Preview) November 22, 2023
defender-for-cloud Review Exemptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-exemptions.md
Title: Exempt a recommendation in Microsoft Defender for Cloud.
+ Title: Exempt a recommendation in Microsoft Defender for Cloud
description: Learn how to exempt recommendations so they're not taken into account in Microsoft Defender for Cloud. Previously updated : 01/02/2022 Last updated : 11/22/2023 # Review resources exempted from recommendations
-In Microsoft Defender for Cloud, you can exempt protected resources from Defender for Cloud security recommendations. [Learn more](exempt-resource.md). This article describes how to review and work with exempted resources.
+In Microsoft Defender for Cloud, you can [exempt protected resources from Defender for Cloud security recommendations](exempt-resource.md). This article describes how to review and work with exempted resources.
+> [!IMPORTANT]
+> This page describes how to use the new recommendations experience, in which you can prioritize your recommendations by their effective risk level. To view this experience, you must select **Try it now**.
+>
+> :::image type="content" source="media/review-security-recommendations/try-it-now.png" alt-text="Screenshot that shows where the try it now button is located on the recommendation page." lightbox="media/review-security-recommendations/try-it-now.png":::
## Review exempted resources in the portal
-1. In Defender for Cloud, open the **Recommendations** page.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Defender for Cloud** > **Recommendations**.
+ 1. Select **Add filter** > **Is exempt**.
-1. Select whether you want to see recommendations that have exempted resources, or those without exemptions.
+
+1. Select **All**, **Yes** or **No**.
+
+1. Select **Apply**.
:::image type="content" source="media/review-exemptions/filter-exemptions.png" alt-text="Steps to create an exemption rule to exempt a recommendation from your subscription or management group." lightbox="media/review-exemptions/filter-exemptions.png":::
In Microsoft Defender for Cloud, you can exempt protected resources from Defende
1. For each resource, the **Reason** column shows why the resource is exempted. To modify the exemption settings for a resource, select the ellipsis in the resource > **Manage exemption**.
-You can also review exempted resources on the Defender for Cloud > **Inventory** page. In the page, select **Add filter**. In the **Filter** dropdown list, select **Contains Exemptions** to find all resources that have been exempted from one or more recommendations.
+You can also find all resources that have been exempted from one or more recommendations on the Inventory page.
+
+**To review exempted resources on Defender for Cloud's Inventory page**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Defender for Cloud** > **Inventory**.
+
+1. Select **Add filter**.
+
+ :::image type="content" source="media/review-exemptions/inventory-exemptions.png" alt-text="Defender for Cloud's asset inventory page and the filter to find resources with exemptions." lightbox="media/review-exemptions/inventory-exemptions.png":::
+
+1. Select **Contains Exemptions**.
+1. Select **Yes**.
+1. Select **OK**.
## Review exempted resources with Azure Resource Graph
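As a starting point, the following is a minimal sketch that assumes Defender for Cloud exemptions are backed by Azure Policy exemptions surfaced in the `policyresources` table; the property names shown are illustrative, so verify them against your environment before relying on the query.

```kusto
// Sketch: list Azure Policy exemptions, which back Defender for Cloud recommendation exemptions.
// Property names are assumptions for illustration; adjust them to match your results.
policyresources
| where type == "microsoft.authorization/policyexemptions"
| extend exemptionCategory = tostring(properties.exemptionCategory),
         displayName = tostring(properties.displayName),
         expiresOn = todatetime(properties.expiresOn)
| project name, displayName, exemptionCategory, expiresOn, subscriptionId
```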
defender-for-cloud Review Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/review-security-recommendations.md
Title: Review security recommendations in Microsoft Defender for Cloud description: Learn how to review security recommendations in Microsoft Defender for Cloud Previously updated : 11/08/2023 Last updated : 11/21/2023 # Review security recommendations
This article describes how to review security recommendations in your Defender f
## Get an overview
-In the Defender for Cloud portal > **Overview** dashboard, get a holistic look at your environment, including security recommendations.
+In Defender for Cloud, navigate to the **Overview** dashboard to get a holistic look at your environments, including:
- **Active recommendations**: Recommendations that are active in your environment. - **Unassigned recommendations**: See which recommendations don't have owners assigned to them. - **Overdue recommendations**: Recommendations that have an expired due date. - **Attack paths**: See the number of attack paths. - ## Review recommendations
-1. In Defender for Cloud, open the **Recommendations** page.
+> [!IMPORTANT]
+> This page describes how to use the new recommendations experience, in which you can prioritize your recommendations by their effective risk level. To view this experience, you must select **Try it now**.
+>
+> :::image type="content" source="media/review-security-recommendations/try-it-now.png" alt-text="Screenshot that shows where the try it now button is located on the recommendation page." lightbox="media/review-security-recommendations/try-it-now.png":::
+
+**To review recommendations**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Defender for Cloud** > **Recommendations**.
+ 1. For each recommendation, review: - **Risk level** - Specifies whether the recommendation risk is Critical, High, Medium or Low.
In the Defender for Cloud portal > **Overview** dashboard, get a holistic look a
- **Attack Paths** - The number of attack paths. - **Owner** - The person assigned to this recommendation. - **Due date** - Indicates the due date for fixing the recommendation.
- - **Recommendation status** indicates whether the recommendation has been assigned, and whether the due date for fixing the recommendation has passed.
-
+ - **Recommendation status** indicates whether the recommendation is assigned, and the status of the due date for fixing the recommendation.
## Review recommendation details
-1. In the **Recommendations** page, select the recommendation.
+Review all of a recommendation's details before trying to resolve it, and confirm that the details are correct before you act on the recommendation.
+
+**To review a recommendation's details**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Defender for Cloud** > **Recommendations**.
+
+1. Select a recommendation.
+ 1. In the recommendation page, review the details: - **Description** - A short description of the security issue. - **Attack Paths** - The number of attack paths.
In the Defender for Cloud portal > **Overview** dashboard, get a holistic look a
## Explore a recommendation
-You can perform a number of actions to interact with recommendations. If an option isn't available, it isn't relevant for the recommendation.
+You can perform many actions to interact with recommendations. If an option isn't available, it isn't relevant for the recommendation.
-1. In the **Recommendations** page, select a recommendation.
-1. Select **Open query** to view detailed information about the affected resources using an Azure Resource Graph Explorer query
-1. Select **View policy definition** to view the Azure Policy entry for the underlying recommendation (if relevant).
-1. In **Review findings**, you can review affiliated findings by severity.
+**To explore a recommendation**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Defender for Cloud** > **Recommendations**.
+
+1. Select a recommendation.
+
+1. In the recommendation, you can perform the following actions:
+
+ - Select **Open query** to view detailed information about the affected resources using an Azure Resource Graph Explorer query.
+
+ - Select **View policy definition** to view the Azure Policy entry for the underlying recommendation (if relevant).
+
+1. In **Findings**, you can review affiliated findings by severity.
:::image type="content" source="media/review-security-recommendations/recommendation-findings.png" alt-text="Screenshot of the findings tab in a recommendation that shows all of the attack paths for that recommendation." lightbox="media/review-security-recommendations/recommendation-findings.png"::: 1. In **Take action**: - **Remediate**: A description of the manual steps required to remediate the security issue on the affected resources. For recommendations with the **Fix** option, you can select **View remediation logic** before applying the suggested fix to your resources.
+
- **Assign owner and due date**: If you have a [governance rule](governance-rules.md) turned on for the recommendation, you can assign an owner and due date.
+
- **Exempt**: You can exempt resources from the recommendation, or disable specific findings using disable rules.
+
- **Workflow automation**: Set a logic app to trigger with this recommendation.
-1. In **Graph**, you can view and investigate all context that is used for risk prioritization, including [attack paths](how-to-manage-attack-path.md).
+
+ :::image type="content" source="media/review-security-recommendations/recommendation-take-action.png" alt-text="Screenshot that shows what you can see in the recommendation when you select the take action tab." lightbox="media/review-security-recommendations/recommendation-take-action.png":::
+
+1. In **Graph**, you can view and investigate all context that is used for risk prioritization, including [attack paths](how-to-manage-attack-path.md). You can select a node in an attack path to view the details of the selected node.
:::image type="content" source="media/review-security-recommendations/recommendation-graph.png" alt-text="Screenshot of the graph tab in a recommendation that shows all of the attack paths for that recommendation." lightbox="media/review-security-recommendations/recommendation-graph.png"::: -- ## Manage recommendations assigned to you Defender for Cloud supports governance rules for recommendations, to specify a recommendation owner or due date for action. Governance rules help ensure accountability and an SLA for recommendations.
Defender for Cloud supports governance rules for recommendations, to specify a r
[Learn more](governance-rules.md) about configuring governance rules.
-Manage recommendations assigned to you as follows:
+**To manage recommendations assigned to you**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. In the Defender for Cloud portal > **Recommendations** page, select **Add filter** > **Owner**.
+1. Navigate to **Defender for Cloud** > **Recommendations**.
+
+1. Select **Add filter** > **Owner**.
1. Select your user entry.+
+1. Select **Apply**.
+ 1. In the recommendation results, review the recommendations, including affected resources, risk factors, attack paths, due dates, and status.+ 1. Select a recommendation to review it further.
-1. In **Take action** > **Change owner & due date**, you change the recommendation owner and due date if needed.
+
+1. In **Take action** > **Change owner & due date**, select **Edit assignment** to change the recommendation owner and due date if needed.
- By default the owner of the resource gets a weekly email listing the recommendations assigned to them. - If you select a new remediation date, in **Justification** specify reasons for remediation by that date. - In **Set email notifications** you can: - Override the default weekly email to the owner. - Notify owners weekly with a list of open/overdue tasks. - Notify the owner's direct manager with an open task list.+ 1. Select **Save**. > [!NOTE]
Manage recommendations assigned to you as follows:
## Review recommendations in Azure Resource Graph
-You can use [Azure Resource Graph](../governance/resource-graph/index.yml) to query Defender for Cloud security posture data across multiple subscriptions. Azure Resource Graph provides an efficient way to query at scale across cloud environments by viewing, filtering, grouping, and sorting data.
+You can use [Azure Resource Graph](../governance/resource-graph/index.yml) to run [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) queries against Defender for Cloud security posture data across multiple subscriptions. Azure Resource Graph provides an efficient way to query at scale across cloud environments by viewing, filtering, grouping, and sorting data.
+
+**To review recommendations in Azure Resource Graph**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Defender for Cloud** > **Recommendations**.
+
+1. Select a recommendation.
-1. In the Defender for Cloud portal > **Recommendations** page > select **Open query**.
+1. Select **Open query**.
-1. In [Azure Resource Graph](../governance/resource-graph/index.yml), write a [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/).
1. You can open the query in one of two ways: - **Query returning affected resource** - Returns a list of all of the resources affected by this recommendation. - **Query returning security findings** - Returns a list of all security issues found by the recommendation.
+1. Select **Run query**.
-### Example
-
-In this example, this recommendation details page shows 15 affected resources:
+ :::image type="content" source="./media/review-security-recommendations/run-query.png" alt-text="Screenshot of Azure Resource Graph Explorer showing the results for the recommendation shown in the previous screenshot." lightbox="media/review-security-recommendations/run-query.png":::
+1. Review the results.
-When you open the underlying query, and run it, Azure Resource Graph Explorer returns the same affected resources for this recommendation:
+### Example
+In this example, this recommendation details page shows 15 affected resources:
+When you open and run the underlying query, Azure Resource Graph Explorer returns the same affected resources for this recommendation.
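If you prefer to work outside the portal, you can run a similar query directly in Azure Resource Graph Explorer. The following is a minimal sketch, not the exact query that **Open query** generates; the property names are illustrative, so adjust them to match your results.

```kusto
// Sketch: list unhealthy (non-compliant) Defender for Cloud assessments across subscriptions.
// Property paths are illustrative; compare against the query generated by "Open query".
securityresources
| where type == "microsoft.security/assessments"
| extend statusCode = tostring(properties.status.code),
         recommendationName = tostring(properties.displayName)
| where statusCode == "Unhealthy"
| project recommendationName, resourceId = tostring(properties.resourceDetails.Id), subscriptionId
| order by recommendationName asc
```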
## Next steps
defender-for-cloud Secret Scanning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secret-scanning.md
Title: Manage secrets with agentless secret scanning (preview)
+ Title: Manage secrets with agentless secret scanning
description: Learn how to scan your servers for secrets with Defender for Server's agentless secret scanning. Previously updated : 08/15/2023 Last updated : 11/27/2023
-# Manage secrets with agentless secret scanning (preview)
+# Manage secrets with agentless secret scanning
Attackers can move laterally across networks, find sensitive data, and exploit vulnerabilities to damage critical information systems by accessing internet-facing workloads and exploiting exposed credentials and secrets. Defender for Cloud's agentless secret scanning for Virtual Machines (VM) locates plaintext secrets that exist in your environment. If secrets are detected, Defender for Cloud can assist your security team to prioritize and take actionable remediation steps to minimize the risk of lateral movement, all without affecting your machine's performance.
-By using agentless secret scanning, you can proactively discover the following types of secrets across your environments:
--- **Insecure SSH private keys (Azure, AWS, GCP)** - supports RSA algorithm for PuTTy files, PKCS#8 and PKCS#1 standards-- **Plaintext Azure SQL connection strings (Azure, AWS)** - supports SQL PAAS-- **Plaintext Azure storage account connection strings (Azure, AWS)**-- **Plaintext Azure storage account SAS tokens (Azure, AWS)**-- **Plaintext AWS access keys (Azure, AWS)**-- **Plaintext AWS RDS SQL connection string (Azure, AWS)** -supports SQL PAAS-
-In addition to detecting SSH private keys, the agentless scanner verifies whether they can be used to move laterally in the network. Keys that we didn't successfully verify are categorized as **unverified** in the **Recommendation** pane.
+By using agentless secret scanning, you can proactively discover the following types of secrets across your environments (in Azure, AWS and GCP cloud providers):
+
+- Insecure SSH private keys:
+ - Supports RSA algorithm for PuTTy files.
+ - PKCS#8 and PKCS#1 standards.
+ - OpenSSH standard.
+- Plaintext Azure SQL connection strings, supports SQL PaaS.
+- Plaintext Azure database for PostgreSQL.
+- Plaintext Azure database for MySQL.
+- Plaintext Azure database for MariaDB.
+- Plaintext Azure Cosmos DB, including PostgreSQL, MySQL and MariaDB.
+- Plaintext AWS RDS connection string, supports SQL PaaS:
+ - Plaintext Amazon Aurora with Postgres and MySQL flavors.
+ - Plaintext Amazon custom RDS with Oracle and SQL Server flavors.
+- Plaintext Azure storage account connection strings.
+- Plaintext Azure storage account SAS tokens.
+- Plaintext AWS access keys.
+- Plaintext AWS S3 pre-signed URL.
+- Plaintext Google storage signed URL.
+- Plaintext Azure AD Client Secret.
+- Plaintext Azure DevOps Personal Access Token.
+- Plaintext GitHub Personal Access Token.
+- Plaintext Azure App Configuration Access Key.
+- Plaintext Azure Cognitive Service Key.
+- Plaintext Azure AD User Credentials.
+- Plaintext Azure Container Registry Access Key.
+- Plaintext Azure App Service Deployment Password.
+- Plaintext Azure Databricks Personal Access Token.
+- Plaintext Azure SignalR Access Key.
+- Plaintext Azure API Management Subscription Key.
+- Plaintext Azure Bot Framework Secret Key.
+- Plaintext Azure Machine Learning Web Service API Key.
+- Plaintext Azure Communication Services Access Key.
+- Plaintext Azure EventGrid Access Key.
+- Plaintext Amazon Marketplace Web Service (MWS) Access Key.
+- Plaintext Azure Maps Subscription Key.
+- Plaintext Azure Web PubSub Access Key.
+- Plaintext OpenAI API Key.
+- Plaintext Azure Batch Shared Access Key.
+- Plaintext NPM Author Token.
+- Plaintext Azure Subscription Management Certificate.
+
+You can view secret findings in the [Cloud Security Explorer](#remediate-secrets-with-cloud-security-explorer) and on the [Secrets tab](#remediate-secrets-from-your-asset-inventory), along with metadata such as secret type, file name, file path, and last access time.
+
+The following secrets also appear in `Security Recommendations` and `Attack Path`, across the Azure, AWS, and GCP cloud providers:
+
+- Insecure SSH private keys:
+ - Supporting RSA algorithm for PuTTy files.
+ - PKCS#8 and PKCS#1 standards.
+ - OpenSSH standard.
+- Plaintext Azure database connection string:
+  - Plaintext Azure SQL connection strings, supports SQL PaaS.
+ - Plaintext Azure database for PostgreSQL.
+ - Plaintext Azure database for MySQL.
+ - Plaintext Azure database for MariaDB.
+ - Plaintext Azure Cosmos DB, including PostgreSQL, MySQL and MariaDB.
+- Plaintext AWS RDS connection string, supports SQL PaaS:
+ - Plaintext Amazon Aurora with Postgres and MySQL flavors.
+ - Plaintext Amazon custom RDS with Oracle and SQL Server flavors.
+- Plaintext Azure storage account connection strings.
+- Plaintext Azure storage account SAS tokens.
+- Plaintext AWS access keys.
+- Plaintext AWS S3 pre-signed URL.
+- Plaintext Google storage signed URL.
+
+The agentless scanner verifies whether SSH private keys can be used to move laterally in your network. Keys that aren't successfully verified are categorized as `unverified` on the Recommendation page.
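If you want to review secret findings at scale rather than per machine, Azure Resource Graph is one option. The following is a minimal sketch, assuming secret findings are surfaced as Defender for Cloud sub-assessments; the category filter and property names are assumptions for illustration, so validate them against what Cloud Security Explorer returns.

```kusto
// Sketch: surface secret findings reported as Defender for Cloud sub-assessments.
// The "secrets" category filter and property names are assumptions; verify with Cloud Security Explorer.
securityresources
| where type == "microsoft.security/assessments/subassessments"
| extend category = tostring(properties.category),
         findingName = tostring(properties.displayName)
| where category =~ "secrets"
| project findingName, category, id, subscriptionId
```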
## Prerequisites
defender-for-cloud Secure Score Security Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/secure-score-security-controls.md
Title: Secure score in Microsoft Defender for Cloud description: Learn about the Microsoft Cloud Security Benchmark secure score in Microsoft Defender for Cloud Previously updated : 11/16/2023 Last updated : 11/27/2023 # Secure score
defender-for-cloud Security Policy Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/security-policy-concept.md
Title: Security policies, standards, and recommendations in Microsoft Defender for Cloud description: Learn about security policies, standards, and recommendations in Microsoft Defender for Cloud. Previously updated : 01/24/2023 Last updated : 11/27/2023 # Security policies in Defender for Cloud
Security standards in Defender for Cloud come from a couple of sources:
- **Regulatory compliance standards**. In addition to MCSB, when you enable one or more [Defender for Cloud plans](defender-for-cloud-introduction.md) you can add standards from a wide range of predefined regulatory compliance programs. [Learn more](regulatory-compliance-dashboard.md). - **Custom standards**. You can create custom security standards in Defender for Cloud, and add built-in and custom recommendations to those custom standards as needed.
-Security standards in Defender for Cloud are based on the Defender for Cloud platform, or on [Azure Policy](../governance/policy/overview.md) [initiatives](../governance/policy/concepts/initiative-definition-structure.md). At the time of writing (November 2023) AWS and GCP standards are Defender for Cloud platform-based, and Azure standards are currently based on Azure Policy.
+Security standards in Defender for Cloud are based on [Azure Policy](../governance/policy/overview.md) [initiatives](../governance/policy/concepts/initiative-definition-structure.md) or on the Defender for Cloud native platform. At the time of writing (November 2023), AWS and GCP standards are Defender for Cloud platform-based, and Azure standards are currently based on Azure Policy.
Security standards in Defender for Cloud simplify the complexity of Azure Policy. In most cases, you can work directly with security standards and recommendations in the Defender for Cloud portal, without needing to directly configure Azure Policy.
defender-for-cloud Support Matrix Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-servers.md
This table summarizes Azure cloud support for Defender for Servers features.
[Adaptive application controls](./adaptive-application-controls.md) | GA | GA | GA
[Adaptive network hardening](./adaptive-network-hardening.md) | GA | NA | NA
[Docker host hardening](./harden-docker-hosts.md) | GA | GA | GA
-[Agentless secret scanning](secret-scanning.md) | Preview | NA | NA
+[Agentless secret scanning](secret-scanning.md) | GA | NA | NA
## Windows machine support
The following table shows feature support for AWS and GCP machines.
| Third-party vulnerability assessment | - | - |
| [Network security assessment](protect-network-resources.md) | - | - |
| [Cloud security explorer](how-to-manage-cloud-security-explorer.md) | ✔ | - |
-| [Agentless secret scanning](secret-scanning.md) | ✔ | - |
+| [Agentless secret scanning](secret-scanning.md) | ✔ | ✔ |
## Endpoint protection support
defender-for-cloud Update Regulatory Compliance Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/update-regulatory-compliance-packages.md
Title: Assign regulatory compliance standards in Microsoft Defender for Cloud description: Learn how to assign regulatory compliance standards in Microsoft Defender for Cloud. Previously updated : 10/10/2023 Last updated : 11/20/2023 # Assign security standards
-Regulatory standards and benchmarks are represented in Microsoft Defender for Cloud as [security standards](security-policy-concept.md). Each standard is an initiative defined in Azure Policy.
+Defender for Cloud's regulatory standards and benchmarks are represented as [security standards](security-policy-concept.md). Each standard is an initiative defined in Azure Policy.
+In Defender for Cloud, you assign security standards to specific scopes such as Azure subscriptions, AWS accounts, and GCP projects that have Defender for Cloud enabled.
-In Defender for Cloud you assign security standards to specific scopes such as Azure subscriptions, AWS accounts, and GCP projects that have Defender for Cloud enabled.
-
-Defender for Cloud continually assesses the environment-in-scope against standards. Based on assessments, it shows in-scope resources as being compliant or non-compliant with the standard, and provides remediation recommendations.
+Defender for Cloud continually assesses the environment-in-scope against standards. Based on assessments, it shows in-scope resources as being compliant or noncompliant with the standard, and provides remediation recommendations.
This article describes how to add regulatory compliance standards as security standards in an Azure subscription, AWS account, or GCP project. ## Before you start - To add compliance standards, at least one Defender for Cloud plan must be enabled.-- You need Owner or Policy Contributor permissions to add a standard.-
+- You need `Owner` or `Policy Contributor` permissions to add a standard.
## Assign a standard (Azure)
-1. In the Defender for Cloud portal, select **Regulatory compliance**. For each standard, you can see the subscription in which it's applied.
+**To assign regulatory compliance standards on Azure**:
-1. From the top of the page, select **Manage compliance policies**.
+1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Select the subscription or management group on which you want to assign the security standard.
+1. Navigate to **Microsoft Defender for Cloud** > **Regulatory compliance**. For each standard, you can see the applied subscription.
-We recommend selecting the highest scope for which the standard is applicable so that compliance data is aggregated and tracked for all nested resources.
+1. Select **Manage compliance policies**.
-1. Select **Security policies**.
+ :::image type="content" source="media/update-regulatory-compliance-packages/manage-compliance.png" alt-text="Screenshot of the regulatory compliance page that shows you where to select the manage compliance policy button." lightbox="media/update-regulatory-compliance-packages/manage-compliance.png":::
+
+1. Select the subscription or management group on which you want to assign the security standard.
-1. For the standard you want to enable, in the **Status** column, switch the toggle button to **On**.
+ > [!NOTE]
+ > We recommend selecting the highest scope for which the standard is applicable so that compliance data is aggregated and tracked for all nested resources.
-1. If any information is needed in order to enable the standard, the **Set parameters** page appears for you to type in the information.
+1. Select **Security policies**.
+1. Locate the standard you want to enable and toggle the status to **On**.
:::image type="content" source="media/update-regulatory-compliance-packages/turn-standard-on.png" alt-text="Screenshot showing regulatory compliance dashboard options." lightbox="media/update-regulatory-compliance-packages/turn-standard-on.png":::
-1. From the menu at the top of the page, select **Regulatory compliance** again to go back to the regulatory compliance dashboard.
+ If any information is needed in order to enable the standard, the **Set parameters** page appears for you to type in the information.
+The selected standard appears in the **Regulatory compliance** dashboard as enabled for the subscription on which you enabled it.
-The selected standard appears in **Regulatory compliance** dashboard as enabled for the subscription.
+## Assign a standard (AWS)
+**To assign regulatory compliance standards on AWS accounts**:
-## Assign a standard (AWS)
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Regulatory compliance**. For each standard, you can see the applied subscription.
-To assign regulatory compliance standards on AWS accounts:
+1. Select **Manage compliance policies**.
-1. Navigate to **Environment settings**.
1. Select the relevant AWS account.+ 1. Select **Security policies**.+ 1. In the **Standards** tab, select the three dots in the standard you want to assign > **Assign standard**. :::image type="content" source="media/update-regulatory-compliance-packages/assign-standard-aws-from-list.png" alt-text="Screenshot that shows where to select a standard to assign." lightbox="media/update-regulatory-compliance-packages/assign-standard-aws-from-list.png":::
The selected standard appears in **Regulatory compliance** dashboard as enabled
## Assign a standard (GCP)
-To assign regulatory compliance standards on GCP projects:
+**To assign regulatory compliance standards on GCP projects**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Regulatory compliance**. For each standard, you can see the applied subscription.
+
+1. Select **Manage compliance policies**.
-1. Navigate to **Environment settings**.
1. Select the relevant GCP project.+ 1. Select **Security policies**.+ 1. In the **Standards** tab, select the three dots alongside an unassigned standard and select **Assign standard**.+
+ :::image type="content" source="media/update-regulatory-compliance-packages/assign-standard-gcp-from-list.png" alt-text="Screenshot that shows how to assign a standard to your GCP project." lightbox="media/update-regulatory-compliance-packages/assign-standard-gcp-from-list.png":::
+ 1. At the prompt, select **Yes**. The standard is assigned to your GCP project. The selected standard appears in the **Regulatory compliance** dashboard as enabled for the project.
deployment-environments Concept Environment Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/concept-environment-yaml.md
+
+ Title: environment.yaml schema
+description: Learn how to use environment.yaml to define parameters in your environment definition.
++++ Last updated : 11/17/2023+
+#customer intent: As a developer, I want to know which parameters I can assign for parameters in environment.yaml.
+++
+# Parameters and data types in environment.yaml
+
+Azure Deployment Environments (ADE) environment definitions are infrastructure as code (IaC), written in Bicep or Terraform, and stored in repositories. Environment definitions can be modified and adapted for your specific requirements and then used to create a deployment environment on Azure. The environment.yaml schema defines and describes the types of Azure resources included in environment definitions.
++
+## What is environment.yaml?
+
+The environment.yaml file acts as a manifest, describing the resources used and the template location for the environment definition.
+
+### Sample environment.yaml
+The following script is a generic example of an environment.yaml required for your environment definition.
+
+```yml
+name: WebApp
+version: 1.0.0
+summary: Azure Web App Environment
+description: Deploys a web app in Azure without a datastore
+runner: ARM
+templatePath: azuredeploy.json
+```
+### Definitions
+The following table describes the properties that you can use in environment.yaml.
+
+| **Property** | **Type** | **Description** | **Required** | **Examples** |
+| | -- | -- | | -- |
+| name | string | The display name of the catalog item. | Yes | |
+| version | string | The version of the catalog item. | | 1.0.0 |
+| summary | string | A short summary string about the catalog item. | | |
+| description | string | A description of the catalog item. | | |
+| runner | string | The container image to use when executing actions. | | ARM template </br> Terraform |
+| templatePath | string | The relative path of the entry template file. | Yes | main.tf </br> main.bicep </br> azuredeploy.json |
+| parameters | array | Input parameters to use when creating the environment and executing actions. | | #/definitions/Parameter |
+
+## Parameters in environment.yaml
+
+Parameters enable you to reuse an environment definition in different scenarios. For example, you might want developers in different regions to deploy the same environment. You can define a location parameter to prompt the developer to enter the desired location as they create their environment.
+
+### Sample environment.yaml with parameters
+
+The following script is an example of an environment.yaml file that includes two parameters: `location` and `name`:
+
+```yml
+name: WebApp
+summary: Azure Web App Environment
+description: Deploys a web app in Azure without a datastore
+runner: ARM
+templatePath: azuredeploy.json
+parameters:
+- id: "location"
+ name: "location"
+ description: "Location to deploy the environment resources"
+ default: "[resourceGroup().location]"
+ type: "string"
+ required: false
+- id: "name"
+ name: "name"
+ description: "Name of the Web App "
+ default: ""
+ type: "string"
+ required: false
+```
+
+### Parameter definitions
+
+The following table describes the data types that you can use in environment.yaml. The data type names used in the environment.yaml manifest file differ from the ones used in ARM templates.
+
+Each parameter can use any of the following properties:
+
+| **Properties** | **Type** | **Description** | **Further Settings** |
+| -- | -- | | |
+| ID | string | Unique ID of the parameter. | |
+| name | string | Display name of the parameter. | |
+| description | string | Description of the parameter. | |
+| default | array </br> boolean </br> integer </br> number </br> object </br> string | The default value of the parameter. | |
+| type | array </br> boolean </br> integer </br> number </br> object </br> string | The data type of the parameter. This data type must match the parameter data type in the ARM template, Bicep file, or Terraform file with the corresponding parameter name. | **Default type:** string |
+| readOnly | boolean | Whether or not this parameter is read-only. | |
+| required | boolean | Whether or not this parameter is required. | |
+| allowed | array | An array of allowed values. | "items": { </br> "type": "string" </br> }, </br> "minItems": 1, </br> "uniqueItems": true, |
+
+## YAML schema
+
+There's a defined schema for Azure Deployment Environments environment.yaml files, which can make editing these files a little easier. You can add the schema definition to the beginning of your environment.yaml file:
+
+```yml
+# yaml-language-server: $schema=https://github.com/Azure/deployment-environments/releases/download/2022-11-11-preview/manifest.schema.json
+```
+
+Here's an example environment definition that uses the schema:
+
+```yml
+# yaml-language-server: $schema=https://github.com/Azure/deployment-environments/releases/download/2022-11-11-preview/manifest.schema.json
+name: FunctionApp
+version: 1.0.0
+summary: Azure Function App Environment
+description: Deploys an Azure Function App, Storage Account, and Application Insights
+runner: ARM
+templatePath: azuredeploy.json
+
+parameters:
+ - id: name
+ name: Name
+ description: 'Name of the Function App.'
+ type: string
+ required: true
+
+ - id: supportsHttpsTrafficOnly
+ name: 'Supports Https Traffic Only'
+ description: 'Allows https traffic only to Storage Account and Functions App if set to true.'
+ type: boolean
+
+ - id: runtime
+ name: Runtime
+ description: 'The language worker runtime to load in the function app.'
+ type: string
+ allowed:
+ - 'dotnet'
+ - 'dotnet-isolated'
+ - 'java'
+ - 'node'
+ - 'powershell'
+ - 'python'
+ default: 'dotnet-isolated'
+```
+
+## Related content
+
+- [Add and configure an environment definition in Azure Deployment Environments](configure-environment-definition.md)
+- [Parameters in ARM templates](../azure-resource-manager/templates/parameters.md)
+- [Data types in ARM templates](../azure-resource-manager/templates/data-types.md)
deployment-environments Configure Environment Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/configure-environment-definition.md
An environment definition comprises at least two files:
>[!NOTE] > Azure Deployment Environments currently supports only ARM templates.
-The IaC template contains the environment definition (template), and the environment file, that provides metadata about the template. Your development teams use the environment definitions that you provide in the catalog to deploy environments in Azure.
+The IaC template contains the environment definition (template), and the environment file provides metadata about the template. Your development teams use the environment definitions that you provide in the catalog to deploy environments in Azure.
We offer a [sample catalog](https://aka.ms/deployment-environments/SampleCatalog) that you can use as your repository. You also can use your own private repository, or you can fork and customize the environment definitions in the sample catalog.
To add an environment definition:
- [Understand the structure and syntax of ARM templates](../azure-resource-manager/templates/syntax.md): Describes the structure of an ARM template and the properties that are available in the different sections of a template. - [Use linked templates](../azure-resource-manager/templates/linked-templates.md?tabs=azure-powershell#use-relative-path-for-linked-templates): Describes how to use linked templates with the new ARM template `relativePath` property to easily modularize your templates and share core components between environment definitions.
- - A environment as a YAML file.
+ - An environment as a YAML file.
The *environment.yaml* file contains metadata related to the ARM template.
To add an environment definition:
:::image type="content" source="../deployment-environments/media/configure-environment-definition/create-subfolder-path.png" alt-text="Screenshot that shows a folder path with a subfolder that contains an ARM template and an environment file.":::
+ To learn more about the options and data types you can use in environment.yaml, see [Parameters and data types in environment.yaml](concept-environment-yaml.md#what-is-environmentyaml).
+ 1. In your dev center, go to **Catalogs**, select the repository, and then select **Sync**. :::image type="content" source="../deployment-environments/media/configure-environment-definition/sync-catalog-list.png" alt-text="Screenshot that shows how to sync the catalog." :::
The service scans the repository to find new environment definitions. After you
You can specify parameters for your environment definitions to allow developers to customize their environments.
-Parameters are defined in the environment.yaml file. You can use the following options for parameters:
-
-|Option |Description |
-|||
-|ID |Enter an ID for the parameter.|
-|name |Enter a name for the parameter.|
-|description |Enter a description for the parameter.|
-|default |Optional. Enter a default value for the parameter. The default value can be overwritten at creation.|
-|type |Enter the data type for the parameter.|
-|required|Enter `true` for a required value, and `false` for an optional value.|
+Parameters are defined in the environment.yaml file.
The following script is an example of an *environment.yaml* file that includes two parameters: `location` and `name`:
parameters:
type: "string" required: false ```
+To learn more about the parameters and their data types you can use in environment.yaml, see [Parameters and data types in environment.yaml](concept-environment-yaml.md#parameters-in-environmentyaml).
Developers can supply values for specific parameters for their environments through the developer portal.
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
You can use a catalog to provide your development teams with a curated set of in
Deployment Environments supports catalogs hosted in Azure Repos (the repository service in Azure, commonly referred to as Azure DevOps) and catalogs hosted in GitHub. Azure DevOps supports authentication by assigning permissions to a managed identity. Azure DevOps and GitHub both support the use of PATs for authentication. To further secure your templates, the catalog is encrypted; Azure Deployment Environments supports encryption at rest with platform-managed encryption keys, which Microsoft manages for Azure services.
-A catalog is a repository that's hosted in [GitHub](https://github.com) or [Azure DevOps](https://dev.azure.com/).
+A catalog is a repository hosted in [GitHub](https://github.com) or [Azure DevOps](https://dev.azure.com/).
- To learn how to host a repository in GitHub, see [Get started with GitHub](https://docs.github.com/get-started). - To learn how to host a Git repository in an Azure DevOps project, see [Azure Repos](https://azure.microsoft.com/services/devops/repos/).
Get the path to the secret you created in the key vault.
### Add your repository as a catalog 1. In the [Azure portal](https://portal.azure.com/), go to your dev center.
-1. Ensure that the [identity](./how-to-configure-managed-identity.md) that's attached to the dev center has [access to the key vault secret](./how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) where your personal access token is stored.
+1. Ensure that the [identity](./how-to-configure-managed-identity.md) attached to the dev center has [access to the key vault secret](./how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) where your personal access token is stored.
1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**. 1. In **Add catalog**, enter the following information, and then select **Add**:
To add a catalog, you complete these tasks:
### Create a personal access token in GitHub
+Azure Deployment Environments supports authenticating to GitHub repositories by using either classic tokens or fine-grained tokens. In this example, you create a fine-grained token.
+ 1. Go to the home page of the GitHub repository that contains the template definitions. 1. In the upper-right corner of GitHub, select the profile image, and then select **Settings**. 1. On the left sidebar, select **Developer settings** > **Personal access tokens** > **Fine-grained tokens**.
To add a catalog, you complete these tasks:
|**Token name**|Enter a descriptive name for the token.| |**Expiration**|Select the token expiration period in days.| |**Description**|Enter a description for the token.|
- |**Repository access**|Select **Public Repositories (read-only)**.|
-
- Leave the other options at their defaults.
+ |**Resource owner**|Select the owner of the repository.|
+ |**Repository access**|Select **Only select repositories**.|
+ |**Select repositories**|Select the repository that contains the environment definitions.|
+ |**Repository permissions**|Expand **Repository permissions**, and for **Contents**, from the **Access** list, select **Code read**.|
+
+ :::image type="content" source="media/how-to-configure-catalog/github-repository-permissions.png" alt-text="Screenshot of the GitHub New fine-grained personal access token page, showing the Repository permissions with Contents highlighted." lightbox="media/how-to-configure-catalog/github-repository-permissions.png":::
+ 1. Select **Generate token**. 1. Save the generated token. You use the token later.
+> [!IMPORTANT]
+> When working with a private repository stored within a GitHub organization, you must ensure that the GitHub PAT is configured to give access to the correct organization and the repositories within it.
+> - Classic tokens within the organization must be SSO authorized to the specific organization after they are created.
+> - Fine-grained tokens must have the token owner set to the organization itself to be authorized.
+>
+> Incorrectly configured PATs can result in a *Repository not found* error.
+ ### Create a Key Vault You need an Azure Key Vault to store the personal access token (PAT) that is used to grant Azure access to your repository. Key vaults can control access with either access policies or role-based access control (RBAC). If you have an existing key vault, you can use it, but you should check whether it uses access policies or RBAC assignments to control access. For help with configuring an access policy for a key vault, see [Assign a Key Vault access policy](/azure/key-vault/general/assign-access-policy?branch=main&tabs=azure-portal).
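If you script this setup, a minimal Azure CLI sketch for creating a key vault and storing the PAT as a secret might look like the following. The vault, secret, and resource group names are placeholders, and you still need to grant the dev center's managed identity access to the secret as described in this article.

```azurecli
# Create an RBAC-enabled key vault (placeholder names and location).
az keyvault create \
  --name "<keyvault-name>" \
  --resource-group "<resource-group>" \
  --location "<location>" \
  --enable-rbac-authorization true

# Store the GitHub or Azure DevOps personal access token as a secret.
az keyvault secret set \
  --vault-name "<keyvault-name>" \
  --name "<secret-name>" \
  --value "<personal-access-token>"
```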
Get the path to the secret you created in the key vault.
### Add your repository as a catalog 1. In the [Azure portal](https://portal.azure.com/), go to your dev center.
-1. Ensure that the [identity](./how-to-configure-managed-identity.md) that's attached to the dev center has [access to the key vault secret](./how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) where your personal access token is stored.
+1. Ensure that the [identity](./how-to-configure-managed-identity.md) attached to the dev center has [access to the key vault secret](./how-to-configure-managed-identity.md#grant-the-managed-identity-access-to-the-key-vault-secret) where your personal access token is stored.
1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**. 1. In **Add catalog**, enter the following information, and then select **Add**:
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
Access management is a critical function for any service or resource. The entitl
## Groups
-The entitlements service of Azure Data Manager for Energy allows you to create groups and manage memberships of the groups. An entitlement group defines permissions on services/data sources for your Azure Data Manager for Energy instance. Users added to a given group obtain the associated permissions.
+The entitlements service of Azure Data Manager for Energy allows you to create groups and manage memberships of the groups. An entitlement group defines permissions on services/data sources for a given data partition in your Azure Data Manager for Energy instance. Users added to a given group obtain the associated permissions. Note that different groups and associated user entitlements must be set up for each new data partition, even within the same Azure Data Manager for Energy instance.
The entitlements service enables three use cases for authorization:
The entitlements service enables three use cases for authorization:
- **Service groups** used for service authorization (for example, service.storage.user, service.storage.admin) - **User groups** used for hierarchical grouping of user and service identities (for example, users.datalake.viewers, users.datalake.editors)
-Some user, data, and service groups are created by default when a data partition is provisioned with details in [Bootstrapped OSDU Entitlements Groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md).
+Some user, data, and service groups are created by default when a data partition is provisioned. Details of these groups and their hierarchy scope are described in [Bootstrapped OSDU Entitlements Groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md).
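As an illustration of how groups are managed per data partition, the following curl sketch creates a custom user group by calling the Entitlements API. It's a hedged example only: the host name, data partition name, and group name are placeholders, and the path assumes the standard OSDU Entitlements v2 surface exposed by the instance.

```bash
curl --location --request POST 'https://<adme-instance>.energy.azure.com/api/entitlements/v2/groups' \
--header 'data-partition-id: <data-partition-name>' \
--header 'Authorization: Bearer <access_token>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "users.myapp.editors",
    "description": "Example custom user group"
}'
```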
## Group naming
energy-data-services How To Convert Segy To Ovds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-ovds.md
If the user isn't part of the required group, you can add the required entitleme
[![Screenshot that shows the API call to get register a user as an admin in Postman.](media/how-to-convert-segy-to-vds/postman-api-add-user-to-admins.png)](media/how-to-convert-segy-to-vds/postman-api-add-user-to-admins.png#lightbox)
-If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
+If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user-in-a-data-partition). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
### Prepare Subproject
OSDU&trade; is a trademark of The Open Group.
## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]
-> [How to convert a segy to zgy file](./how-to-convert-segy-to-zgy.md)
+> [How to convert a segy to zgy file](./how-to-convert-segy-to-zgy.md)
energy-data-services How To Convert Segy To Zgy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-convert-segy-to-zgy.md
If the user isn't part of the required group, you can add the required entitleme
[![Screenshot that shows the API call to get register a user as an admin in Postman.](media/how-to-convert-segy-to-zgy/postman-api-add-user-to-admins.png)](media/how-to-convert-segy-to-zgy/postman-api-add-user-to-admins.png#lightbox)
-If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
+If you haven't yet created entitlements groups, follow the directions as outlined in [How to manage users](how-to-manage-users.md). If you would like to see what groups you have, use [Get entitlements groups for a given user](how-to-manage-users.md#get-entitlements-groups-for-a-given-user-in-a-data-partition). Data access isolation is achieved with this dedicated ACL (access control list) per object within a given data partition.
### Prepare Subproject
OSDU&trade; is a trademark of The Open Group.
## Next steps <!-- Add a context sentence for the following links --> > [!div class="nextstepaction"]
-> [How to convert SEGY to OVDS](./how-to-convert-segy-to-ovds.md)
+> [How to convert SEGY to OVDS](./how-to-convert-segy-to-ovds.md)
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
# How to manage users
-In this article, you'll learn how to manage users and their memberships in OSDU groups in Azure Data Manager for Energy. [Entitlements APIs](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) are used to add or remove users to OSDU groups and to check the entitlements when the user tries to access the OSDU services or data. For more information about OSDU groups, see [entitlement services](concepts-entitlements.md).
+In this article, you learn how to manage users and their memberships in OSDU groups in Azure Data Manager for Energy. [Entitlements APIs](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) are used to add or remove users to OSDU groups and to check the entitlements when the user tries to access the OSDU services or data. For more information about OSDU groups, see [entitlement services](concepts-entitlements.md).
## Prerequisites 1. Create an Azure Data Manager for Energy instance using the tutorial at [How to create Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). 2. Generate the access token needed to call the Entitlements APIs. 3. Get various parameters of your instance such as client-id, client-secret, etc.
-4. Keep all these parameter values handy as they will be needed for executing different user management requests via the Entitlements API.
+4. Keep all these parameter values handy as they are needed for executing different user management requests via the Entitlements API.
## Fetch Parameters #### Find `tenant-id`
In this article, you'll learn how to manage users and their memberships in OSDU
:::image type="content" source="media/how-to-manage-users/tenant-id.png" alt-text="Screenshot of finding the tenant-id."::: #### Find `client-id`
-It's the same value that you used to register your application during the provisioning of your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). It is often referred to as `app-id`.
+It's the same value that you use to register your application during the provisioning of your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). It is often referred to as `app-id`.
1. Find the `client-id` in the *Essentials* pane of Azure Data Manager for Energy *Overview* page. 2. Copy the `client-id` and paste it into an editor to be used later.
curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa
"access_token": "abcdefgh123456............." } ```
-2. Copy the `access_token` value from the response. You'll need it to pass as one of the headers in all calls to the Entitlements APIs.
+2. Copy the `access_token` value from the response. You need it to pass as one of the headers in all calls to the Entitlements APIs.
## Fetch OID `object-id` (OID) is the Microsoft Entra user Object ID. 1. Find the 'object-id' (OID) of the user(s) first. If you are managing an application's access, you must find and use the application ID (or client ID) instead of the OID.
-2. Input the `object-id` (OID) of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy Instance.
+2. Input the `object-id` (OID) of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy instance.
:::image type="content" source="media/how-to-manage-users/azure-active-directory-object-id.png" alt-text="Screenshot of finding the object-id from Microsoft Entra I D."::: :::image type="content" source="media/how-to-manage-users/profile-object-id.png" alt-text="Screenshot of finding the object-id from the profile.":::
-## Get the list of all available groups
+## First time addition of users in a new data partition
+To add entitlements to a new data partition of an Azure Data Manager for Energy instance, use the SPN token of the app that was used to provision the instance. If you try to directly use user tokens for adding entitlements, it results in a 401 error. The SPN token must be used to add the initial users in the system, and those users (with admin access) can then manage additional users.
+
+The SPN token is generated using the client_credentials flow:
+```bash
+curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oauth2/token' \
+--header 'Content-Type: application/x-www-form-urlencoded' \
+--data-urlencode 'grant_type=client_credentials' \
+--data-urlencode 'scope=<client-id>.default' \
+--data-urlencode 'client_id=<client-id>' \
+--data-urlencode 'client_secret=<client-secret>' \
+--data-urlencode 'resource=<client-id>'
+```
+
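The following curl sketch shows how the SPN token obtained above might then be used to add the first admin user to a group. It's an illustrative example only: `<adme-instance>`, `<data-partition-id>`, `<group-email>`, and `<user-oid>` are placeholders, and the group email can be retrieved with the list-groups call in the next section.

```bash
curl --location --request POST 'https://<adme-instance>.energy.azure.com/api/entitlements/v2/groups/<group-email>/members' \
--header 'data-partition-id: <data-partition-id>' \
--header 'Authorization: Bearer <spn_access_token>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "email": "<user-oid>",
    "role": "OWNER"
}'
```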
+## Get the list of all available groups in a data partition
Run the below curl command in Azure Cloud Bash to get all the groups that are available for your Azure Data Manager for Energy instance and its data partitions.
Run the below curl command in Azure Cloud Bash to get all the groups that are av
--header 'Authorization: Bearer <access_token>' ```
-## Add user(s) to a OSDU group
+## Add user(s) to an OSDU group in a data partition
1. Run the below curl command in Azure Cloud Bash to add the user(s) to the "Users" group using the Entitlement service. 2. The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email.
Consider an Azure Data Manager for Energy instance named "medstest" with a data
> The app-id is the default OWNER of all the groups. :::image type="content" source="media/how-to-manage-users/appid.png" alt-text="Screenshot of app-d in Microsoft Entra ID.":::
-## Add user(s) to an entitlements group
+## Add user(s) to an entitlements group in a data partition
1. Run the below curl command in Azure Cloud Bash to add the user(s) to an entitlement group using the Entitlement service. 2. The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email.
Consider an Azure Data Manager for Energy instance named "medstest" with a data
} ```
-## Get entitlements groups for a given user
+## Get entitlements groups for a given user in a data partition
1. Run the below curl command in Azure Cloud Bash to get all the groups associated with the user.
Consider an Azure Data Manager for Energy instance named "medstest" with a data
} ```
-## Delete entitlement groups of a given user
+## Delete entitlement groups of a given user in a data partition
1. Run the below curl command in Azure Cloud Bash to delete a given user from a given data partition. 2. As stated above, **DO NOT** delete the OWNER of a group unless you have another OWNER who can manage users in that group.
event-grid Blob Event Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/blob-event-quickstart-portal.md
Title: 'Use Azure Event Grid to send Blob storage events to web endpoint - portal' description: 'Quickstart: Use Azure Event Grid and Azure portal to create Blob storage account, and subscribe its events. Send the events to a Webhook.' Previously updated : 10/27/2022 Last updated : 11/27/2023
In this article, you use the Azure portal to do the following tasks:
When you're finished, you see that the event data has been sent to the web app.
-![View results.](./media/blob-event-quickstart-portal/view-results.png)
## Create a storage account
When you're finished, you see that the event data has been sent to the web app.
>[!NOTE] > Only storage accounts of kind **StorageV2 (general purpose v2)** and **BlobStorage** support event integration. **Storage (general purpose v1)** does *not* support integration with Event Grid.
-1. The deployment may take a few minutes to complete. On the **Deployment** page, select **Go to resource**.
+1. The deployment takes a few minutes to complete. On the **Deployment** page, select **Go to resource**.
:::image type="content" source="./media/blob-event-quickstart-portal/go-to-resource-link.png" alt-text="Screenshot showing the deployment succeeded page with a link to go to the resource."::: 1. On the **Storage account** page, select **Events** on the left menu.
When you're finished, you see that the event data has been sent to the web app.
1. Keep this page in the web browser open. ## Create a message endpoint
-Before subscribing to the events for the Blob storage, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [pre-built web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
+Before subscribing to the events for the Blob storage, let's create the endpoint for the event message. Typically, the endpoint takes actions based on the event data. To simplify this quickstart, you deploy a [prebuilt web app](https://github.com/Azure-Samples/azure-event-grid-viewer) that displays the event messages. The deployed solution includes an App Service plan, an App Service web app, and source code from GitHub.
1. Select **Deploy to Azure** to deploy the solution to your subscription.
Before subscribing to the events for the Blob storage, let's create the endpoint
:::image type="content" source="./media/blob-event-quickstart-portal/template-deploy-parameters.png" alt-text="Screenshot showing the Custom deployment page."::: 1. On the **Review + create** page, select **Create**.
-1. The deployment may take a few minutes to complete. On the **Deployment** page, select **Go to resource group**.
+1. The deployment takes a few minutes to complete. On the **Deployment** page, select **Go to resource group**.
:::image type="content" source="./media/blob-event-quickstart-portal/navigate-resource-group.png" alt-text="Screenshot showing the deployment succeeded page with a link to go to the resource group."::: 4. On the **Resource group** page, in the list of resources, select the web app that you created. You also see the App Service plan and the storage account in this list.
You subscribe to a topic to tell Event Grid which events you want to track, and
2. Select **Web Hook** for **Endpoint type**. :::image type="content" source="./media/blob-event-quickstart-portal/select-web-hook-end-point-type.png" alt-text="Screenshot showing the Create Event Subscription page with Web Hook selected as an endpoint.":::
-4. For **Endpoint**, click **Select an endpoint**, and enter the URL of your web app and add `api/updates` to the home page URL (for example: `https://spegridsite.azurewebsites.net/api/updates`), and then select **Confirm Selection**.
+4. For **Endpoint**, choose **Select an endpoint**, and enter the URL of your web app and add `api/updates` to the home page URL (for example: `https://spegridsite.azurewebsites.net/api/updates`), and then select **Confirm Selection**.
:::image type="content" source="./media/blob-event-quickstart-portal/confirm-endpoint-selection.png" lightbox="./media/blob-event-quickstart-portal/confirm-endpoint-selection.png" alt-text="Screenshot showing the Select Web Hook page."::: 5. Now, on the **Create Event Subscription** page, select **Create** to create the event subscription.
Now, let's trigger an event to see how Event Grid distributes the message to you
## Send an event to your endpoint
-You trigger an event for the Blob storage by uploading a file. The file doesn't need any specific content. The articles assumes you have a file named testfile.txt, but you can use any file.
+You trigger an event for the Blob storage by uploading a file. The file doesn't need any specific content.
1. In the Azure portal, navigate to your Blob storage account, and select **Containers** on the left menu. 1. Select **+ Container**. Give your container a name, use any access level, and select **Create**.
You trigger an event for the Blob storage by uploading a file. The file doesn't
:::image type="content" source="./media/blob-event-quickstart-portal/select-container.png" alt-text="Screenshot showing the selection of the container."::: 1. To upload a file, select **Upload**. On the **Upload blob** page, browse and select a file that you want to upload for testing, and then select **Upload** on that page.
- :::image type="content" source="./media/blob-event-quickstart-portal/upload-file.png" alt-text="Screenshot showing Upload blob page.":::
+ :::image type="content" source="./media/blob-event-quickstart-portal/upload-file.png" alt-text="Screenshot showing Upload blob page." lightbox="./media/blob-event-quickstart-portal/upload-file.png":::
1. Browse to your test file and upload it. 1. You've triggered the event, and Event Grid sent the message to the endpoint you configured when subscribing. The message is in the JSON format and it contains an array with one or more events. In the following example, the JSON message contains an array with one event. View your web app and notice that a **blob created** event was received.
event-grid Security Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-authorization.md
Title: Azure Event Grid security and authentication description: Describes Azure Event Grid and its concepts. Previously updated : 10/25/2022 Last updated : 11/27/2023 # Authorizing access to Event Grid resources
The following operations return potentially secret information, which gets filte
## Built-in roles Event Grid provides the following three built-in roles.
-The Event Grid Subscription Reader and Event Grid Subscription Contributor roles are for managing event subscriptions. They're important when implementing [event domains](event-domains.md) because they give users the permissions they need to subscribe to topics in your event domain. These roles are focused on event subscriptions and don't grant access for actions such as creating topics.
-
-The Event Grid Contributor role allows you to create and manage Event Grid resources.
- | Role | Description | | - | -- |
The Event Grid Contributor role allows you to create and manage Event Grid resou
| [`EventGrid Contributor`](../role-based-access-control/built-in-roles.md#eventgrid-contributor) | Lets you create and manage Event Grid resources. | | [`EventGrid Data Sender`](../role-based-access-control/built-in-roles.md#eventgrid-data-sender) | Lets you send events to Event Grid topics. |
+The **Event Grid Subscription Reader** and **Event Grid Subscription Contributor** roles are for managing event subscriptions. They're important when implementing [event domains](event-domains.md) because they give users the permissions they need to subscribe to topics in your event domain. These roles are focused on event subscriptions and don't grant access for actions such as creating topics.
+
+The **Event Grid Contributor** role allows you to create and manage Event Grid resources.
++ > [!NOTE] > Select links in the first column to navigate to an article that provides more details about the role. For instructions on how to assign users or groups to RBAC roles, see [this article](../role-based-access-control/quickstart-assign-role-user-portal.md).
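As a hedged example, assigning one of these built-in roles with the Azure CLI might look like the following; the principal object ID and topic resource ID are placeholders.

```azurecli
# Grant a user or service principal permission to send events to a custom topic.
az role assignment create \
  --assignee "<principal-object-id>" \
  --role "EventGrid Data Sender" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.EventGrid/topics/<topic-name>"
```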
The Event Grid Contributor role allows you to create and manage Event Grid resou
## Custom roles
-If you need to specify permissions that are different than the built-in roles, you can create custom roles.
+If you need to specify permissions that are different than the built-in roles, create custom roles.
The following are sample Event Grid role definitions that allow users to take different actions. These custom roles are different from the built-in roles because they grant broader access than just event subscriptions.
event-hubs Event Hubs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-create.md
Title: Azure Quickstart - Create an event hub using the Azure portal description: In this quickstart, you learn how to create an Azure event hub using Azure portal. Previously updated : 10/10/2022- Last updated : 11/27/2023 # Quickstart: Create an event hub using Azure portal
Azure Event Hubs is a Big Data streaming platform and event ingestion service th
In this quickstart, you create an event hub using the [Azure portal](https://portal.azure.com). ## Prerequisites-
-To complete this quickstart, make sure that you have:
--- Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
+To complete this quickstart, make sure that you have an Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
## Create a resource group
A resource group is a logical collection of Azure resources. All resources are d
An Event Hubs namespace provides a unique scoping container, in which you create one or more event hubs. To create a namespace in your resource group using the portal, do the following actions:
-1. In the Azure portal, and select **Create a resource** at the top left of the screen.
-1. Select **All services** in the left menu, and select **star (`*`)** next to **Event Hubs** in the **Analytics** category. Confirm that **Event Hubs** is added to **FAVORITES** in the left navigational menu.
+1. In the Azure portal, select **All services** in the left menu, and select **star (`*`)** next to **Event Hubs** in the **Analytics** category. Confirm that **Event Hubs** is added to **FAVORITES** in the left navigational menu.
:::image type="content" source="./media/event-hubs-quickstart-portal/select-event-hubs-menu.png" alt-text="Screenshot showing the selection of Event Hubs in the All services page."::: 1. Select **Event Hubs** under **FAVORITES** in the left navigational menu, and select **Create** on the toolbar.
event-hubs Event Hubs Scalability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-scalability.md
Title: Scalability - Azure Event Hubs | Microsoft Docs description: This article provides information on how to scale Azure Event Hubs by using partitions and throughput units. Previously updated : 10/25/2022 Last updated : 11/23/2023 # Scaling with Event Hubs
-There are two factors which influence scaling with Event Hubs.
-* Throughput units (standard tier) or processing units (premium tier)
-* Partitions
+There are two factors that influence scaling with Event Hubs.
+
+- Throughput units (standard tier) or processing units (premium tier)
+- Partitions
## Throughput units
-The throughput capacity of Event Hubs is controlled by *throughput units*. Throughput units are pre-purchased units of capacity. A single throughput unit lets you:
+The throughput capacity of event hubs is controlled by **throughput units**. Throughput units are prepurchased units of capacity. A single throughput unit lets you:
-* Ingress: Up to 1 MB per second or 1000 events per second (whichever comes first).
-* Egress: Up to 2 MB per second or 4096 events per second.
+* Ingress: Up to 1 MB per second or 1,000 events per second (whichever comes first).
+* Egress: Up to 2 MB per second or 4,096 events per second.
-Beyond the capacity of the purchased throughput units, ingress is throttled and a [ServerBusyException](/dotnet/api/microsoft.azure.eventhubs.serverbusyexception) is returned. Egress does not produce throttling exceptions, but is still limited to the capacity of the purchased throughput units. If you receive publishing rate exceptions or are expecting to see higher egress, be sure to check how many throughput units you have purchased for the namespace. You can manage throughput units on the **Scale** blade of the namespaces in the [Azure portal](https://portal.azure.com). You can also manage throughput units programmatically using the [Event Hubs APIs](./event-hubs-samples.md).
+Beyond the capacity of the purchased throughput units, ingress is throttled and Event Hubs throws a [ServerBusyException](/dotnet/api/microsoft.azure.eventhubs.serverbusyexception). Egress doesn't produce throttling exceptions, but is still limited to the capacity of the purchased throughput units. If you receive publishing rate exceptions or are expecting to see higher egress, be sure to check how many throughput units you have purchased for the namespace. You can manage throughput units on the **Scale** page of the namespaces in the [Azure portal](https://portal.azure.com). You can also manage throughput units programmatically using the [Event Hubs APIs](./event-hubs-samples.md).
-Throughput units are pre-purchased and are billed per hour. Once purchased, throughput units are billed for a minimum of one hour. Up to 40 throughput units can be purchased for an Event Hubs namespace and are shared across all event hubs in that namespace.
+Throughput units are prepurchased and are billed per hour. Once purchased, throughput units are billed for a minimum of one hour. Up to 40 throughput units can be purchased for an Event Hubs namespace and are shared across all event hubs in that namespace.
The **Auto-inflate** feature of Event Hubs automatically scales up by increasing the number of throughput units, to meet usage needs. Increasing throughput units prevents throttling scenarios, in which:
The **Auto-inflate** feature of Event Hubs automatically scales up by increasing
The Event Hubs service increases the throughput when load increases beyond the minimum threshold, without any requests failing with ServerBusy errors.
-For more information about the auto-inflate feature, see [Automatically scale throughput units](event-hubs-auto-inflate.md).
+For more information about the autoinflate feature, see [Automatically scale throughput units](event-hubs-auto-inflate.md).
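As a hedged sketch, configuring purchased throughput units and auto-inflate on an existing standard-tier namespace with the Azure CLI might look like this; the names and values are placeholders.

```azurecli
# Set the purchased throughput units and enable auto-inflate up to a ceiling
# on an existing standard-tier namespace (placeholder values).
az eventhubs namespace update \
  --name "<namespace-name>" \
  --resource-group "<resource-group>" \
  --capacity 2 \
  --enable-auto-inflate true \
  --maximum-throughput-units 10
```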
## Processing units
- [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a *Processing Unit* (PU). You can purchase 1, 2, 4, 8 or 16 processing Units for each Event Hubs Premium namespace.
+ [Event Hubs Premium](./event-hubs-premium-overview.md) provides superior performance and better isolation within a managed multitenant PaaS environment. The resources in a Premium tier are isolated at the CPU and memory level so that each tenant workload runs in isolation. This resource container is called a **Processing Unit** (PU). You can purchase 1, 2, 4, 8 or 16 processing Units for each Event Hubs Premium namespace.
How much you can ingest and stream with a processing unit depends on various factors such as your producers, consumers, the rate at which you're ingesting and processing, and much more.
-For example, Event Hubs Premium namespace with 1 PU and 1 event hub (100 partitions) can approximately offer core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress for both AMQP or Kafka workloads.
+For example, an Event Hubs Premium namespace with one PU and one event hub (100 partitions) can approximately offer core capacity of ~5-10 MB/s ingress and 10-20 MB/s egress for both AMQP and Kafka workloads.
To learn about configuring PUs for a premium tier namespace, see [Configure processing units](configure-processing-units-premium-namespace.md).
expressroute Expressroute About Virtual Network Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-about-virtual-network-gateways.md
ErGwScale is free of charge during public preview. For information about Express
* For more information about creating ExpressRoute gateways, see [Create a virtual network gateway for ExpressRoute](expressroute-howto-add-gateway-resource-manager.md).
-* For more information on how to deploy ErGwScale, see [How to configure ErGwScale]().
+* For more information on how to deploy ErGwScale, see [Configure a virtual network gateway for ExpressRoute using the Azure portal](https://learn.microsoft.com/azure/expressroute/expressroute-howto-add-gateway-portal-resource-manager).
* For more information about configuring zone-redundant gateways, see [Create a zone-redundant virtual network gateway](../../articles/vpn-gateway/create-zone-redundant-vnet-gateway.md).
firewall Protect Office 365 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-office-365.md
Previously updated : 03/28/2023 Last updated : 11/27/2023
You can use the Azure Firewall built-in Service Tags and FQDN tags to allow outbound communication to [Office 365 endpoints and IP addresses](/microsoft-365/enterprise/urls-and-ip-address-ranges).
+> [!NOTE]
+> Office 365 service tags and FQDN tags are supported in Azure Firewall policy only. They aren't supported in classic rules.
+ ## Tags creation For each Office 365 product and category, Azure Firewall automatically retrieves the required endpoints and IP addresses, and creates tags accordingly:
hdinsight-aks Monitor With Prometheus Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/monitor-with-prometheus-grafana.md
This article covers the details of enabling the monitoring feature in HDInsight
## Prerequisites
-* An Azure Managed Prometheus workspace. You can think of this workspace as a unique Azure Monitor logs environment with its own data repository, data sources, and solutions. For the instructions, see [Create a Azure Managed Prometheus workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md).
-* Azure Managed Grafana workspace. For the instructions, see [Create a Azure Managed Grafana workspace](../managed-grafan).
+* An Azure Managed Prometheus workspace. You can think of this workspace as a unique Azure Monitor logs environment with its own data repository, data sources, and solutions. For the instructions, see [Create an Azure Managed Prometheus workspace](../azure-monitor/essentials/azure-monitor-workspace-manage.md).
+* Azure Managed Grafana workspace. For the instructions, see [Create an Azure Managed Grafana workspace](../managed-grafan).
* An [HDInsight on AKS cluster](./quickstart-create-cluster.md). Currently, you can use Azure Managed Prometheus with the following HDInsight on AKS cluster types: * Apache Spark™ * Apache Flink®
iot-central Howto Administer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-administer.md
Use the **Delete** button to permanently delete your IoT Central application. Th
To delete an application, you must also have permissions to delete resources in the Azure subscription you chose when you created the application. To learn more, see [Assign Azure roles to manage access to your Azure subscription resources](../../role-based-access-control/role-assignments-portal.md).
+> [!IMPORTANT]
+> If you delete an IoT Central application, it's not possible to recover it. You can create a new application with the same name, but it will be a new application with no data. You need to wait several minutes before you can create a new application with the same name.
+ ## Manage programmatically IoT Central Azure Resource Manager SDK packages are available for Node, Python, C#, Ruby, Java, and Go. You can use these packages to create, list, update, or delete IoT Central applications. The packages include helpers to manage authentication and error handling.
iot-central Howto Configure File Uploads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-configure-file-uploads.md
description: How to configure, implement, and manage file uploads from your devi
Previously updated : 08/25/2022 Last updated : 11/27/2023
iot-central Howto Connect Eflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md
Title: Connect Azure IoT Edge for Linux on Windows (EFLOW)
description: Learn how to connect an Azure IoT Edge for Linux on Windows (EFLOW) device to an IoT Central application Previously updated : 10/11/2022 Last updated : 11/27/2023
You've now finished configuring your IoT Central application to enable an IoT Ed
To install and provision your EFLOW device:
-1. In an elevated PowerShell session, run each of the following commands to download IoT Edge for Linux on Windows.
+1. In an elevated PowerShell session, run the following commands to download IoT Edge for Linux on Windows.
```powershell $msiPath = $([io.Path]::Combine($env:TEMP, 'AzureIoTEdge.msi'))
iot-central Howto Connect Rigado Cascade 500 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-rigado-cascade-500.md
Title: Connect a Rigado Cascade 500 in Azure IoT Central
description: Learn how to configure and connect a Rigado Cascade 500 gateway device to your IoT Central application. Previously updated : 11/01/2022 Last updated : 11/27/2023
To connect the Cascade 500 device to your IoT Central application, you need to r
1. Now select **SAS-IoT-Edge-Devices** and make a note of the **Primary key**:
- :::image type="content" source="media/howto-connect-rigado-cascade-500/primary-key-sas.png" alt-text="Screenshot that shows the primary SAS key for you device connection group." lightbox="media/howto-connect-rigado-cascade-500/primary-key-sas.png":::
+ :::image type="content" source="media/howto-connect-rigado-cascade-500/primary-key-sas.png" alt-text="Screenshot that shows the primary SAS key for your device connection group." lightbox="media/howto-connect-rigado-cascade-500/primary-key-sas.png":::
## Contact Rigado to connect the gateway
iot-central Howto Create Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-rules.md
Title: Extend Azure IoT Central by using custom rules
description: Configure an IoT Central application to send notifications when a device stops sending telemetry by using Azure Stream Analytics, Azure Functions, and SendGrid. Previously updated : 11/28/2022 Last updated : 11/27/2023
# Extend Azure IoT Central with custom rules using Stream Analytics, Azure Functions, and SendGrid
-This how-to guide shows you how to extend your IoT Central application with custom rules and notifications. The example shows sending a notification to an operator when a device stops sending telemetry. The solution uses an [Azure Stream Analytics](../../stream-analytics/index.yml) query to detect when a device has stopped sending telemetry. The Stream Analytics job uses [Azure Functions](../../azure-functions/index.yml) to send notification emails using [SendGrid](https://sendgrid.com/docs/for-developers/partners/microsoft-azure/).
+This how-to guide shows you how to extend your IoT Central application with custom rules and notifications. The example shows sending a notification to an operator when a device stops sending telemetry. The solution uses an [Azure Stream Analytics](../../stream-analytics/index.yml) query to detect when a device stops sending telemetry. The Stream Analytics job uses [Azure Functions](../../azure-functions/index.yml) to send notification emails using [SendGrid](https://sendgrid.com/docs/for-developers/partners/microsoft-azure/).
This how-to guide shows you how to extend IoT Central beyond what it can already do with the built-in rules and actions. In this how-to guide, you learn how to: * Stream telemetry from an IoT Central application using *continuous data export*.
-* Create a Stream Analytics query that detects when a device has stopped sending data.
+* Create a Stream Analytics query that detects when a device stops sending data.
* Send an email notification using the Azure Functions and SendGrid services. ## Prerequisites
To tidy up after this how-to and avoid unnecessary costs, delete the **DetectSto
In this how-to guide, you learned how to: * Stream telemetry from an IoT Central application using the data export feature.
-* Create a Stream Analytics query that detects when a device has stopped sending data.
+* Create a Stream Analytics query that detects when a device stops sending data.
* Send an email notification using the Azure Functions and SendGrid services. Now that you know how to create custom rules and notifications, the suggested next step is to learn how to [Extend Azure IoT Central with custom analytics](howto-create-custom-analytics.md).
iot-central Howto Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data.md
Title: Transform data for an IoT Central application
description: IoT devices send data in various formats that you may need to transform. This article describes how to transform data both on the way in and out of IoT Central. Previously updated : 01/10/2023 Last updated : 11/27/2023
To build the custom module in the [Azure Cloud Shell](https://shell.azure.com/):
This scenario uses an IoT Edge gateway device to transform the data from any downstream devices. This section describes how to create IoT Central device template for the gateway device in your IoT Central application. IoT Edge devices use a deployment manifest to configure their modules.
-In this example, the downstream device doesn't need a device template. The downstream device is registered in IoT Central so you can generate the credentials it needs to connect the IoT Edge device. Because the IoT Edge module transforms the data, all the downstream device telemetry arrives in IoT Central as if the IoT Edge device had sent it.
+In this example, the downstream device doesn't need a device template. The downstream device is registered in IoT Central so you can generate the credentials it needs to connect to the IoT Edge device. Because the IoT Edge module transforms the data, all the downstream device telemetry arrives in IoT Central as if the IoT Edge device itself sent it.
To create a device template for the IoT Edge gateway device:
To check that the IoT Edge gateway device is running correctly:
1. Open your IoT Central application. Then navigate to the **IoT Edge Gateway device** on the list of devices on the **Devices** page.
-1. Select the **Modules** tab and check the status of the three modules. It takes a few minutes for the IoT Edge runtime to start up in the virtual machine. When it's started, the status of the three modules is **Running**. If the IoT Edge runtime doesn't start, see [Troubleshoot your IoT Edge device](../../iot-edge/troubleshoot.md).
+1. Select the **Modules** tab and check the status of the three modules. It takes a few minutes for the IoT Edge runtime to start up in the virtual machine. When the virtual machine is running, the status of the three modules is **Running**. If the IoT Edge runtime doesn't start, see [Troubleshoot your IoT Edge device](../../iot-edge/troubleshoot.md).
For your IoT Edge device to function as a gateway, it needs some certificates to prove its identity to any downstream devices. This article uses demo certificates. In a production environment, use certificates from your certificate authority.
Set up the data export to send data to your Device bridge:
### Verify
-The sample device you use to test the scenario is written in Node.js. Make sure you have Node.js and npm installed on your local machine. If you don't want to install these prerequisites, use the [Azure Cloud Shell](https://shell.azure.com/) that has them preinstalled.
+The sample device you use to test the scenario is written in Node.js. Make sure you have Node.js and npm installed on your local machine. If you don't want to install these prerequisites, use the [Azure Cloud Shell](https://shell.azure.com/) where they are preinstalled.
To run a sample device that tests the scenario:
iot Howto Use Iot Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/howto-use-iot-explorer.md
To use the Azure IoT explorer tool, you need:
## Install Azure IoT explorer
-Go to [Azure IoT explorer releases](https://github.com/Azure/azure-iot-explorer/releases) and expand the list of assets for the most recent release. Download and install the most recent version of the application.
+Go to [Azure IoT explorer releases](https://github.com/Azure/azure-iot-explorer/releases) and expand the list of assets for the most recent release. Download and install the most recent version of the application. The installation package configures a way for you to launch the application on your platform. For example, in Windows you can launch the application from the Start menu.
>[!Important] > Update to version 0.13.x or greater to resolve models from any repository based on [https://github.com/Azure/iot-plugandplay-models](https://github.com/Azure/iot-plugandplay-models)
iot Iot Mqtt Connect To Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/iot-mqtt-connect-to-iot-hub.md
The following table explains the differences in MQTT support between the two ser
| Limited feature support for MQTT v3.1.1, and limited feature support for [MQTT v5 in preview](./iot-mqtt-5-preview.md). More feature support isn't planned. | MQTT v3.1.1 and v5 protocol support, with more feature support and industry compliance planned. | | Static, predefined topics. | Custom hierarchical topics with wildcard support. | | No support for cloud-to-device broadcasts and device-to-device communication. | Supports device-to-cloud, high fan-out cloud-to-device broadcasts, and device-to-device communication patterns. |
-| 256-kb max message size. | 512-kb max message size. |
+| 256-KB max message size. | 512-KB max message size. |
## Connecting to IoT Hub
machine-learning How To Troubleshoot Secure Connection Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-secure-connection-workspace.md
Check if DNS over HTTP is enabled in your web browser. DNS over HTTP can prevent
* Mozilla Firefox: For more information, see [Disable DNS over HTTPS in Firefox](https://support.mozilla.org/en-US/kb/firefox-dns-over-https). * Microsoft Edge:
- 1. Search for DNS in Microsoft Edge settings: image.png
- 2. Disable __Use secure DNS to specify how to look up the network address for websites__.
+ 1. In Edge, select __...__ and then select __Settings__.
+ 1. From settings, search for `DNS` and then disable __Use secure DNS to specify how to look up the network address for websites__.
+
+ :::image type="content" source="./media/how-to-troubleshoot-secure-connection-workspace/disable-dns-over-http.png" alt-text="Screenshot of the use secure DNS setting in Microsoft Edge.":::
## Proxy configuration
machine-learning How To Retrain Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-retrain-designer.md
Use the following steps to submit a parameterized pipeline endpoint job from the
You can find the REST endpoint of a published pipeline in the overview panel. By calling the endpoint, you can retrain the published pipeline.
-To make a REST call, you need an OAuth 2.0 bearer-type authentication header. For information about setting up authentication to your workspace and making a parameterized REST call, see [Build an Azure Machine Learning pipeline for batch scoring](../tutorial-pipeline-batch-scoring-classification.md#publish-and-run-from-a-rest-endpoint).
+To make a REST call, you need an OAuth 2.0 bearer-type authentication header. For information about setting up authentication to your workspace and making a parameterized REST call, see [Use REST to manage resources](../how-to-manage-rest.md).
## Next steps
In this article, you learned how to create a parameterized training pipeline end
For a complete walkthrough of how you can deploy a model to make predictions, see the [designer tutorial](tutorial-designer-automobile-price-train-score.md) to train and deploy a regression model.
-For how to publish and submit a job to pipeline endpoint using the SDK v1, see [this article](how-to-deploy-pipelines.md).
+For how to publish and submit a job to pipeline endpoint using the SDK v1, see [Publish pipelines](how-to-deploy-pipelines.md).
mariadb Whats Happening To Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/whats-happening-to-mariadb.md
description: The Azure Database for MariaDB service is being deprecated.
Previously updated : 09/19/2023 Last updated : 11/27/2023
A. Your existing Azure Database for MariaDB workloads will continue to function
A. Unfortunately, we don't plan to support Azure Database for MariaDB beyond the sunset date of September 19, 2025. Hence, we advise that you start planning your migration as soon as possible.
+**Q. How do I manage my reserved instances for MariaDB?**
+
+A. You won't be able to purchase or renew MariaDB reserved instances starting **December 1, 2023**. You can renew reserved instances before December 1, 2023 by using the Azure portal. Any reserved instances that expire after **December 1, 2023** are converted to the pay-as-you-go billing model. After migrating your workload to Azure Database for MySQL Flexible Server, you can [purchase reserved instances](../mysql/single-server/concept-reserved-pricing.md) for MySQL Flexible Server.
+ **Q. After the Azure Database for MariaDB retirement announcement, what if I still need to create a new MariaDB server to meet my business needs?** A. As part of this retirement, we'll no longer support creating new MariaDB instances from the Azure portal beginning **January 19, 2024**. Suppose you still need to create MariaDB instances to meet business continuity needs. In that case, you can use [Azure CLI](/azure/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli) until **March 19, 2024**.
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
Storage of logs is charged separately. For relevant prices, see [Azure Blob Stor
## Related content - To learn how to manage NSG flow logs, see [Create, change, disable, or delete NSG flow logs using the Azure portal](nsg-flow-logging.md).-- To find answers to some of the most frequently asked questions about NSG flow logs, see [NSG flow logs FAQ](frequently-asked-questions.yml#nsg-flow-logs).
+- To find answers to some of the most frequently asked questions about NSG flow logs, see [Flow logs FAQ](frequently-asked-questions.yml#flow-logs).
- To learn about traffic analytics, see [Traffic analytics overview](traffic-analytics.md).
network-watcher Required Rbac Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/required-rbac-permissions.md
Previously updated : 10/09/2023 Last updated : 11/27/2023 #CustomerIntent: As an Azure administrator, I want to know the required Azure role-based access control (Azure RBAC) permissions to use each of the Network Watcher capabilities, so I can assign them correctly to users using any of those capabilities.
Since traffic analytics is enabled as part of the Flow log resource, the followi
> | Microsoft.Network/virtualNetworkGateways/read | Get a VirtualNetworkGateway | > | Microsoft.Network/virtualNetworks/read | Get a virtual network definition | > | Microsoft.Network/expressRouteCircuits/read | Get an ExpressRouteCircuit |
-> | Microsoft.OperationalInsights/workspaces/* | Perform actions on a workspace |
+> | Microsoft.OperationalInsights/workspaces/read | Get an existing workspace |
+> | Microsoft.OperationalInsights/workspaces/sharedkeys/action | Retrieve the shared keys for the workspace |
> | Microsoft.Insights/dataCollectionRules/read <sup>1</sup> | Read a data collection rule | > | Microsoft.Insights/dataCollectionRules/write <sup>1</sup> | Create or update a data collection rule | > | Microsoft.Insights/dataCollectionRules/delete <sup>1</sup> | Delete a data collection rule |
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
Previously updated : 10/27/2023 Last updated : 11/27/2023 #CustomerIntent: As an Azure administrator, I want to use Traffic analytics to analyze Network Watcher flow logs so that I can view network activity, secure my networks, and optimize performance.
Traffic analytics requires the following prerequisites:
- `Microsoft.Network/virtualNetworkGateways/read` - `Microsoft.Network/virtualNetworks/read` - `Microsoft.Network/expressRouteCircuits/read`
- - `Microsoft.OperationalInsights/workspaces/*` <sup>1</sup>
+ - `Microsoft.OperationalInsights/workspaces/read` <sup>1</sup>
+ - `Microsoft.OperationalInsights/workspaces/sharedkeys/action` <sup>1</sup>
- `Microsoft.Insights/dataCollectionRules/read` <sup>2</sup> - `Microsoft.Insights/dataCollectionRules/write` <sup>2</sup> - `Microsoft.Insights/dataCollectionRules/delete` <sup>2</sup>
postgresql Generative Ai Azure Cognitive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-cognitive.md
Azure AI extension gives the ability to invoke the [language services](../../ai-
## Prerequisites
-1. [Create a Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal to get your key and endpoint.
-1. After it deploys, selectΓÇ»**Go to resource**.
+1. [Create a Language resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesTextAnalytics) in the Azure portal to get your key and endpoint.
+1. After it deploys, select **Go to resource**.
> [!NOTE] > You will need the key and endpoint from the resource you create to connect the extension to the API. ## Configure Azure Cognitive Services endpoint and key
-In the Azure AI services under **Resource Management** > **Keys and Endpoints** you can find the **Endpoint and Keys** for your Azure AI resource. Use the endpoint and key to enable `azure_ai` extension to invoke the model deployment.
+In the Language resource, under **Resource Management** > **Keys and Endpoints**, you can find the endpoint and keys for the resource. Use the endpoint and a key to enable the `azure_ai` extension to invoke the Language service.
```postgresql
-select azure_ai.set_setting('azure_cognitive.endpoint','https://<endpoint>.openai.azure.com');
+select azure_ai.set_setting('azure_cognitive.endpoint','https://<endpoint>.cognitiveservices.azure.com');
select azure_ai.set_setting('azure_cognitive.subscription_key', '<API Key>'); ```
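Before calling any of the functions that follow, you can read the values back to confirm they were stored. This is a minimal sketch that assumes the extension's companion `azure_ai.get_setting` function described in the `azure_ai` extension overview.

```postgresql
-- Read back the stored configuration to confirm the endpoint and key are set.
select azure_ai.get_setting('azure_cognitive.endpoint');
select azure_ai.get_setting('azure_cognitive.subscription_key');
```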
azure_cognitive.analyze_sentiment(text text, language text, timeout_ms integer D
`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
-#### `timeout_ms`
+##### `timeout_ms`
`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-#### `throw_on_error`
+##### `throw_on_error`
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-#### `disable_service_logs`
+##### `disable_service_logs`
-`boolean DEFAULT false` The Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai.
azure_cognitive.detect_language(text TEXT, timeout_ms INTEGER DEFAULT 3600000, t
`text` input to be processed.
-#### `timeout_ms`
+##### `timeout_ms`
`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-#### `throw_on_error`
+##### `throw_on_error`
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-#### `disable_service_logs`
+##### `disable_service_logs`
-`boolean DEFAULT false` The Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.language_detection_result`, a result containing the detected language name, its two-letter ISO 639-1 representation and the confidence score for the detection.
+`azure_cognitive.language_detection_result`, a result containing the detected language name, its two-letter ISO 639-1 representation and the confidence score for the detection. For example, in `(Portuguese,pt,0.97)` the language is `Portuguese`, and the detection confidence is `0.97`.
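As an illustration, a minimal call that relies on the default values of the optional arguments might look like the following sketch; the sample sentence is hypothetical and any Portuguese text would do.

```postgresql
-- Detect the language of a short sentence; timeout, error handling, and logging keep their defaults.
select azure_cognitive.detect_language('O meu escritório fica em Lisboa.');
```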
### `azure_cognitive.extract_key_phrases`
azure_cognitive.extract_key_phrases(text TEXT, language TEXT, timeout_ms INTEGER
`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
-#### `timeout_ms`
+##### `timeout_ms`
`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-#### `throw_on_error`
+##### `throw_on_error`
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-#### `disable_service_logs`
+##### `disable_service_logs`
-`boolean DEFAULT false` The Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`text[]`, a collection of key phrases identified in the text.
+`text[]`, a collection of key phrases identified in the text. For example, if invoked with a `text` set to `'For more information, see Cognitive Services Compliance and Privacy notes.'`, and `language` set to `'en'`, it could return `{"Cognitive Services Compliance","Privacy notes",information}`.
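For reference, the result above corresponds to an invocation like this sketch, which supplies only the two required arguments:

```postgresql
-- Extract key phrases from English text using the default optional arguments.
select azure_cognitive.extract_key_phrases(
    'For more information, see Cognitive Services Compliance and Privacy notes.',
    'en');
```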
### `azure_cognitive.linked_entities`
azure_cognitive.linked_entities(text text, language text, timeout_ms integer DEF
`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
-#### `timeout_ms`
+##### `timeout_ms`
`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-#### `throw_on_error`
+##### `throw_on_error`
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-#### `disable_service_logs`
+##### `disable_service_logs`
-`boolean DEFAULT false` The Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.linked_entity[]`, a collection of linked entities, where each defines the name, data source entity identifier, language, data source, URL, collection of `azure_cognitive.linked_entity_match` (defining the text and confidence score) and finally a Bing entity search API identifier.
+`azure_cognitive.linked_entity[]`, a collection of linked entities, where each defines the name, data source entity identifier, language, data source, URL, collection of `azure_cognitive.linked_entity_match` (defining the text and confidence score) and finally a Bing entity search API identifier. For example, if invoked with a `text` set to `'For more information, see Cognitive Services Compliance and Privacy notes.'`, and `language` set to `'en'`, it could return `{"(\"Cognitive computing\",\"Cognitive computing\",en,Wikipedia,https://en.wikipedia.org/wiki/Cognitive_computing,\"{\"\"(\\\\\"\"Cognitive Services\\\\\"\",0.78)\"\"}\",d73f7d5f-fddb-0908-27b0-74c7db81cd8d)","(\"Regulatory compliance\",\"Regulatory compliance\",en,Wikipedia,https://en.wikipedia.org/wiki/Regulatory_compliance,\"{\"\"(Compliance,0.28)\"\"}\",89fefaf8-e730-23c4-b519-048f3c73cdbd)","(\"Information privacy\",\"Information privacy\",en,Wikipedia,https://en.wikipedia.org/wiki/Information_privacy,\"{\"\"(Privacy,0)\"\"}\",3d0f2e25-5829-4b93-4057-4a805f0b1043)"}`.
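The corresponding invocation is a sketch like the following, again passing only the required text and language:

```postgresql
-- Return linked entities for English text; optional arguments keep their defaults.
select azure_cognitive.linked_entities(
    'For more information, see Cognitive Services Compliance and Privacy notes.',
    'en');
```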
### `azure_cognitive.recognize_entities`
azure_cognitive.recognize_entities(text text, language text, timeout_ms integer
`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
-#### `timeout_ms`
+##### `timeout_ms`
`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-#### `throw_on_error`
+##### `throw_on_error`
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-#### `disable_service_logs`
+##### `disable_service_logs`
-`boolean DEFAULT false` The Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.entity[]`, a collection of entities, where each defines the text identifying the entity, category of the entity and confidence score of the match.
+`azure_cognitive.entity[]`, a collection of entities, where each defines the text identifying the entity, category of the entity and confidence score of the match. For example, if invoked with a `text` set to `'For more information, see Cognitive Services Compliance and Privacy notes.'`, and `language` set to `'en'`, it could return `{"(\"Cognitive Services\",Skill,\"\",0.94)"}`.
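A matching invocation sketch, using the same sample sentence as the documentation:

```postgresql
-- Recognize named entities in English text with only the required arguments.
select azure_cognitive.recognize_entities(
    'For more information, see Cognitive Services Compliance and Privacy notes.',
    'en');
```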
### `azure_cognitive.recognize_pii_entities`
azure_cognitive.recognize_pii_entities(text text, language text, timeout_ms inte
`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
-#### `timeout_ms`
+##### `timeout_ms`
`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-#### `throw_on_error`
+##### `throw_on_error`
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-#### `domain`
+##### `domain`
`text DEFAULT 'none'::text`, the personal data domain used for personal data Entity Recognition. Valid values are `none` for no domain specified and `phi` for Personal Health Information.
-#### `disable_service_logs`
+##### `disable_service_logs`
-`boolean DEFAULT true` The Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`boolean DEFAULT true` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.pii_entity_recognition_result`, a result containing the redacted text and entities as `azure_cognitive.entity[]`. Each entity contains the nonredacted text, personal data category, subcategory and a score indicating the confidence that the entity correctly matches the identified substring.
+`azure_cognitive.pii_entity_recognition_result`, a result containing the redacted text and entities as `azure_cognitive.entity[]`. Each entity contains the nonredacted text, personal data category, subcategory and a score indicating the confidence that the entity correctly matches the identified substring. For example, if invoked with a `text` set to `'My phone number is +1555555555, and the address of my office is 16255 NE 36th Way, Redmond, WA 98052.'`, and `language` set to `'en'`, it could return `("My phone number is ***********, and the address of my office is ************************************.","{""(+1555555555,PhoneNumber,\\""\\"",0.8)"",""(\\""16255 NE 36th Way, Redmond, WA 98052\\"",Address,\\""\\"",1)""}")`.
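The example above corresponds to a call such as this sketch; the default `none` domain is used because no `domain` argument is passed:

```postgresql
-- Redact personal data from English text; the personal data domain defaults to 'none'.
select azure_cognitive.recognize_pii_entities(
    'My phone number is +1555555555, and the address of my office is 16255 NE 36th Way, Redmond, WA 98052.',
    'en');
```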
### `azure_cognitive.summarize_abstractive`
azure_cognitive.summarize_abstractive(text text, language text, timeout_ms integ
`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
-#### `timeout_ms`
+##### `timeout_ms`
`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-#### `throw_on_error`
+##### `throw_on_error`
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-#### `sentence_count`
+##### `sentence_count`
`integer DEFAULT 3`, maximum number of sentences that the summarization should contain.
-#### `disable_service_logs`
+##### `disable_service_logs`
-`boolean DEFAULT false` The Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`text[]`, a collection of summaries with each one not exceeding the defined `sentence_count`.
+`text[]`, a collection of summaries with each one not exceeding the defined `sentence_count`. For example, if invoked with a `text` set to `'PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. It was the default database for macOS Server and is also available for Linux, FreeBSD, OpenBSD, and Windows.'`, and `language` set to `'en'`, it could return `{"PostgreSQL is a database system with advanced features such as atomicity, consistency, isolation, and durability (ACID) properties. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. PostgreSQL was the default database for macOS Server and is available for Linux, BSD, OpenBSD, and Windows."}`.
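An invocation sketch that produces a summary like the one above, keeping the default maximum of three sentences:

```postgresql
-- Summarize English text abstractively; sentence_count keeps its default of 3.
select azure_cognitive.summarize_abstractive(
    'PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. It was the default database for macOS Server and is also available for Linux, FreeBSD, OpenBSD, and Windows.',
    'en');
```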
### `azure_cognitive.summarize_extractive`
azure_cognitive.summarize_extractive(text text, language text, timeout_ms intege
`text` two-letter ISO 639-1 representation of the language that the input text is written in. Check [language support](../../ai-services/language-service/concepts/language-support.md) for allowed values.
-#### `timeout_ms`
+##### `timeout_ms`
`integer DEFAULT 3600000` timeout in milliseconds after which the operation is stopped.
-#### `throw_on_error`
+##### `throw_on_error`
`boolean DEFAULT true` on error should the function throw an exception resulting in a rollback of wrapping transactions.
-#### `sentence_count`
+##### `sentence_count`
`integer DEFAULT 3`, maximum number of sentences to extract.
-#### `sort_by`
+##### `sort_by`
`text DEFAULT ``offset``::text`, order of extracted sentences. Valid values are `rank` and `offset`.
-#### `disable_service_logs`
+##### `disable_service_logs`
-`boolean DEFAULT false` The Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
+`boolean DEFAULT false` the Language service logs your input text for 48 hours solely to allow for troubleshooting issues. Setting this property to `true` disables input logging and might limit our ability to investigate issues that occur.
For more information, see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. #### Return type
-`azure_cognitive.sentence[]`, a collection of extracted sentences along with their rank score.
+`azure_cognitive.sentence[]`, a collection of extracted sentences along with their rank score. For example, if invoked with a `text` set to `'PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. It was the default database for macOS Server and is also available for Linux, FreeBSD, OpenBSD, and Windows.'`, and `language` set to `'en'`, it could return `{"(\"PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures.\",0.16)","(\"It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users.\",0)","(\"It was the default database for macOS Server and is also available for Linux, FreeBSD, OpenBSD, and Windows.\",1)"}`.
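To show the optional arguments in use, this sketch asks for the two highest-ranked sentences instead of the defaults; it assumes PostgreSQL's named-argument (`=>`) notation so that `timeout_ms` and `throw_on_error` can keep their default values.

```postgresql
-- Extract the two top-ranked sentences, ordered by rank rather than by position in the text.
select azure_cognitive.summarize_extractive(
    'PostgreSQL features transactions with atomicity, consistency, isolation, durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures. It is designed to handle a range of workloads, from single machines to data warehouses or web services with many concurrent users. It was the default database for macOS Server and is also available for Linux, FreeBSD, OpenBSD, and Windows.',
    'en',
    sentence_count => 2,
    sort_by => 'rank');
```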
## Next steps
postgresql Generative Ai Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-openai.md
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-Invoke [Azure OpenAI embeddings](../../ai-services/openai/reference.md#embeddings) easily to get a vector representation of the input, which can be used then in [vector similarity](./how-to-use-pgvector.md) searches and consumed by machine learning models.
+Invoke [Azure OpenAI embeddings](../../ai-services/openai/reference.md#embeddings) easily to get a vector representation of the input, which can be used then in [vector similarity](./how-to-use-pgvector.md#vector-similarity) searches and consumed by machine learning models.
## Prerequisites 1. Create an Open AI account and [request access to Azure OpenAI Service](https://aka.ms/oai/access).
-1. Grant Access to Azure OpenAI in the desired subscription
-1. Grant permissions toΓÇ»[create Azure OpenAI resources and to deploy models](../../ai-services/openai/how-to/role-based-access-control.md).
+1. Grant access to Azure OpenAI in the desired subscription.
+1. Grant permissions to [create Azure OpenAI resources and to deploy models](../../ai-services/openai/how-to/role-based-access-control.md).
[Create and deploy an Azure OpenAI service resource and a model](../../ai-services/openai/how-to/create-resource.md), for example deploy the embeddings model [text-embedding-ada-002](../../ai-services/openai/concepts/models.md#embeddings-models). Copy the deployment name as it is needed to create embeddings. ## Configure OpenAI endpoint and key
-In the Azure AI services under **Resource Management** > **Keys and Endpoints** you can find the **Endpoint and Keys** for your Azure AI resource. Use the endpoint and key to enable `azure_ai` extension to invoke the model deployment.
+In the Azure OpenAI resource, under **Resource Management** > **Keys and Endpoints**, you can find the endpoint and the keys for your Azure OpenAI resource. Use the endpoint and one of the keys to enable the `azure_ai` extension to invoke the model deployment.
```postgresql select azure_ai.set_setting('azure_openai.endpoint','https://<endpoint>.openai.azure.com');
postgresql Generative Ai Azure Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-azure-overview.md
CREATE EXTENSION azure_ai;
``` > [!NOTE]
-> To remove the extension from the currently connected database use `DROP EXTENSION vector;`.
+> To remove the extension from the currently connected database use `DROP EXTENSION azure_ai;`.
Installing the extension `azure_ai` creates the following three schemas:
postgresql How To Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-cli.md
az postgres flexible-server stop [--name]
**Example without local context:** ```azurecli
-az postgres flexible-server stop --resource-group --name myservername
+az postgres flexible-server stop --resource-group resourcegroupname --name myservername
``` **Example with local context:**
remote-rendering Point Cloud Rendering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/overview/features/point-cloud-rendering.md
Conversion settings specifically for point cloud files are explained in the [con
## Size limitations
-Point cloud asset conversion has a hard limit of 2.5 billion points per converted asset.
+Point cloud asset conversion has a hard limit of 12.5 billion points per converted asset. If you need to render larger data sets, split the source file into multiple assets that each stay within the 12.5 billion point limit. The renderer doesn't limit the number of unique assets being loaded, and the [streaming data technique](#point-cloud-data-streaming) ensures that prioritization works seamlessly across all loaded instances.
For the overall maximum number of allowed points loaded and rendered by ARR, the same kind of distinctions between a `standard` and `premium` rendering session applies, as described in paragraph about [server size limits](../../reference/limits.md#overall-number-of-primitives). ## Global rendering properties
void ChangeGlobalPointCloudSettings(ApiHandle<RenderingSession> session)
} ```
+## Point cloud data streaming
+
+Point cloud asset files are automatically configured for dynamic data streaming during conversion. That means that unlike triangular mesh assets, point cloud assets of significant size aren't fully downloaded to the rendering VM, but rather partially loaded from storage as needed.
+
+Regardless of the point cloud file size, the great benefit of the data streaming approach is that the renderer can start presenting the data early. The renderer decides which data to prioritize based on camera view and proximity across all loaded point cloud models. No custom interaction through the API is necessary. Furthermore, data streaming automatically manages the budget and priorities based on how relevant particular data is for the current view.
+If multiple point cloud assets are instantiated in the scene, the streaming system prioritizes data seamlessly across all point clouds, just as it would for a single asset. Accordingly, splitting the source file is a convenient way to work around the size limitation per file.
+ ## API documentation * [C# RenderingConnection.PointCloudSettings_Experimental property](/dotnet/api/microsoft.azure.remoterendering.renderingconnection.pointcloudsettings_experimental)
remote-rendering Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/reference/limits.md
The following limitations apply to the frontend API (C++ and C#):
* **Animation:** Animations are limited to animating individual transforms of [game objects](../concepts/entities.md). Skeletal animations with skinning or vertex animations aren't supported. Animation tracks from the source asset file aren't preserved. Instead, object transform animations have to be driven by client code. * **Custom shaders:** Authoring of custom shaders isn't supported. Only built-in [Color materials](../overview/features/color-materials.md) or [PBR materials](../overview/features/pbr-materials.md) can be used.
-* **Maximum number of distinct materials** in a singular triangular mesh asset: 65,535. For more information about automatic material count reduction, see the [material de-duplication](../how-tos/conversion/configure-model-conversion.md#material-deduplication) chapter.
+* **Maximum number of distinct materials** in a singular triangular mesh asset: 65,535. For more information about automatic material count reduction, see the [material deduplication](../how-tos/conversion/configure-model-conversion.md#material-deduplication) chapter.
* **Maximum number of distinct textures**: There's no hard limit on the number of distinct textures. The only constraint is overall GPU memory and the number of distinct materials.
-* **Maximum dimension of a single texture**: 16,384 x 16,384. Larger textures can't be used by the renderer. The conversion process can sometimes reduce larger textures in size, but in general it will fail to process textures larger than this limit.
-* **Maximum number of points in a single point cloud asset**: 2.5 billion.
+* **Maximum dimension of a single texture**: 16,384 x 16,384. Larger textures can't be used by the renderer. The conversion process can sometimes reduce larger textures in size, but in general it fails to process textures larger than this limit.
+* **Maximum number of points in a single point cloud asset**: 12.5 billion.
### Overall number of primitives
remote-rendering Vm Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/reference/vm-sizes.md
Azure Remote Rendering is available in two server configurations: `Standard` and
A primitive is either a single triangle (in triangular meshes) or a single point (in point cloud meshes). Triangular meshes can be instantiated together with point clouds, in which case the sum of all points and triangles in the session are counted against the limit.
-Remote Rendering with `Standard` size server has a maximum scene size of 20 million primitives. Remote Rendering with `Premium` size doesn't enforce a hard maximum, but performance may be degraded if your content exceeds the rendering capabilities of the service.
+### Standard size
-When the renderer on a 'Standard' server size hits this limitation, it switches rendering to a checkerboard background:
+Remote Rendering with `Standard` size server has a maximum scene size of 20 million primitives. When the renderer on a 'Standard' server size hits this limitation, it switches rendering to a checkerboard background:
![Screenshot shows a grid of black and white squares with a Tools menu.](media/checkerboard.png)
+### Premium size
+
+Remote Rendering with `Premium` size doesn't enforce a hard maximum, but performance may be degraded if your content exceeds the rendering capabilities of the service. Furthermore, for triangular meshes (and unlike point clouds), the available amount of graphics memory is a hard limit. It's not possible to map the amount of graphics memory to a specific number of triangles, because there are many contributing factors that depend on the source mesh and settings:
+
+* number and resolution of textures,
+* amount of unique geometry versus sub-mesh instantiation inside the mesh (see also [instancing objects](../how-tos/conversion/configure-model-conversion.md#instancing)),
+* [vertex streams](../how-tos/conversion/configure-model-conversion.md#vertex-format) being used,
+* the [rendering composition mode](../concepts/rendering-modes.md) used with the `Premium` size.
+
+For [point clouds](../overview/features/point-cloud-rendering.md) there's no real limit since point cloud assets use the [data streaming approach](../overview/features/point-cloud-rendering.md#point-cloud-data-streaming). With data streaming, the renderer automatically manages the memory budget on the graphics card, based on the actual visible geometry.
+ ## Specify the server size The desired type of server configuration has to be specified at rendering session initialization time. It can't be changed within a running session. The following code examples show the place where the server size must be specified:
Accordingly, it's possible to write an application that targets the `standard` s
There are two ways to determine the number of primitives of a model or scene that contribute to the budget limit of the `standard` configuration size: * On the model conversion side, retrieve the [conversion output json file](../how-tos/conversion/get-information.md), and check the `numFaces` entry in the [*inputStatistics* section](../how-tos/conversion/get-information.md#the-inputstatistics-section). This number denotes the triangle count in triangular meshes and number of points in point clouds respectively.
-* If your application is dealing with dynamic content, the number of rendered primitives can be queried dynamically during runtime. Use a [performance assessment query](../overview/features/performance-queries.md#performance-assessment-queries) and check for the sum of the values in the two members `PolygonsRendered` and `PointsRendered` in the `PerformanceAssessment` struct. The `PolygonsRendered` / `PointsRendered` field will be set to `bad` when the renderer hits the primitive limitation. The checkerboard background is always faded in with some delay to ensure user action can be taken after this asynchronous query. User action can, for instance, be hiding or deleting model instances.
+* If your application is dealing with dynamic content, the number of rendered primitives can be queried dynamically during runtime. Use a [performance assessment query](../overview/features/performance-queries.md#performance-assessment-queries) and check for the sum of the values in the two members `PolygonsRendered` and `PointsRendered` in the `PerformanceAssessment` struct. The `PolygonsRendered` / `PointsRendered` field is set to `bad` when the renderer hits the primitive limitation. The checkerboard background is always faded in with some delay to ensure user action can be taken after this asynchronous query. User action can, for instance, be hiding or deleting model instances.
## Pricing
search Cognitive Search How To Debug Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-how-to-debug-skillset.md
If skills produce output but the search index is empty, check the field mappings
## Debug a custom skill locally
-Custom skills can be more challenging to debug because the code runs externally, so the debug session can't be used to debug them. This section describes how to locally debug your Custom Web API skill, debug session, Visual Studio Code and [ngrok](https://ngrok.com/docs). This technique works with custom skills that execute in [Azure Functions](../azure-functions/functions-overview.md) or any other Web Framework that runs locally (for example, [FastAPI](https://fastapi.tiangolo.com/)).
+Custom skills can be more challenging to debug because the code runs externally, so the debug session can't be used to debug them. This section describes how to locally debug your Custom Web API skill by using a debug session, Visual Studio Code, and [ngrok](https://ngrok.com/docs) or [Tunnelmole](https://github.com/robbie-cahill/tunnelmole-client). This technique works with custom skills that execute in [Azure Functions](../azure-functions/functions-overview.md) or any other web framework that runs locally (for example, [FastAPI](https://fastapi.tiangolo.com/)).
-### Run ngrok
+### Get a public URL
-[**ngrok**](https://ngrok.com/docs) is a cross-platform application that can create a tunneling or forwarding URL, so that internet requests reach your local machine. Use ngrok to forward requests from an enrichment pipeline in your search service to your machine to allow local debugging.
+#### Using Tunnelmole
+Tunnelmole is an open source tunneling tool that can create a public URL that forwards requests to your local machine through a tunnel.
+
+1. Install Tunnelmole:
+ - npm: `npm install -g tunnelmole`
+ - Linux: `curl -s https://tunnelmole.com/sh/install-linux.sh | sudo bash`
+ - Mac: `curl -s https://tunnelmole.com/sh/install-mac.sh --output install-mac.sh && sudo bash install-mac.sh`
+ - Windows: Install by using npm. Or if you don't have NodeJS installed, download the [precompiled .exe file for Windows](https://tunnelmole.com/downloads/tmole.exe) and put it somewhere in your PATH.
+
+2. Run this command to create a new tunnel:
+ ```console
+    tmole 7071
+ http://m5hdpb-ip-49-183-170-144.tunnelmole.net is forwarding to localhost:7071
+ https://m5hdpb-ip-49-183-170-144.tunnelmole.net is forwarding to localhost:7071
+ ```
+
+In the preceding example, `https://m5hdpb-ip-49-183-170-144.tunnelmole.net` forwards to port `7071` on your local machine, which is the default port where Azure functions are exposed.
+
+#### Using ngrok
+
+[**ngrok**](https://ngrok.com/docs) is a popular, closed source, cross-platform application that can create a tunneling or forwarding URL, so that internet requests reach your local machine. Use ngrok to forward requests from an enrichment pipeline in your search service to your machine to allow local debugging.
1. Install ngrok.
-1. Open a terminal and go to the folder with the ngrok executable.
+2. Open a terminal and go to the folder with the ngrok executable.
-1. Run ngrok with the following command to create a new tunnel:
+3. Run ngrok with the following command to create a new tunnel:
```console ngrok http 7071
Custom skills can be more challenging to debug because the code runs externally,
> [!NOTE] > By default, Azure functions are exposed on 7071. Other tools and configurations might require that you provide a different port.
-1. When ngrok starts, copy and save the public forwarding URL for the next step. The forwarding URL is randomly generated.
+4. When ngrok starts, copy and save the public forwarding URL for the next step. The forwarding URL is randomly generated.
:::image type="content" source="media/cognitive-search-debug/ngrok.png" alt-text="Screenshot of ngrok terminal." border="false"::: ### Configure in Azure portal
-Within the debug session, modify your Custom Web API Skill URI to call the ngrok forwarding URL. Ensure that you append "/api/FunctionName" when using Azure Function for executing the skillset code.
+Within the debug session, modify your Custom Web API Skill URI to call the Tunnelmole or ngrok forwarding URL. Ensure that you append `/api/FunctionName` when you use an Azure Function to execute the skillset code.
You can edit the skill definition in the portal.
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
Title: Upgrade REST API versions
-description: Review differences in API versions and learn which actions are required to migrate existing code to the newest Azure AI Search service REST API version.
+description: Review differences in API versions and learn the steps for migrating code to the newest Azure AI Search service REST API version.
- ignite-2023 Previously updated : 10/03/2022 Last updated : 11/27/2023 # Upgrade to the latest REST API in Azure AI Search
-If you're using an earlier version of the [**Search REST API**](/rest/api/searchservice/), this article will help you upgrade your application to the newest generally available API version, **2020-06-30**.
+Use this article to migrate data plane REST API calls to newer *stable* versions of the [**Search REST API**](/rest/api/searchservice/).
-Version 2020-06-30 includes an important new feature ([knowledge store](knowledge-store-concept-intro.md)), and introduces several minor behavior changes. As such, this version is mostly backward compatible so code changes should be minimal if you're upgrading from the previous version (2019-05-06).
++ [**2023-11-01**](/rest/api/searchservice/search-service-api-versions#2023-11-01) is the most recent stable version. Semantic ranking and vector search support are generally available in this version.
+
++ [**2023-10-01-preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-preview) is the most recent preview version. Integrated data chunking and vectorization using the [Text Split](cognitive-search-skill-textsplit.md) skill and [AzureOpenAiEmbedding](cognitive-search-skill-azure-openai-embedding.md) skill are introduced in this version. There's no migration guidance for preview API versions, but you can review code samples and walkthroughs for guidance. See [Integrated vectorization (preview)](vector-search-integrated-vectorization.md) for your first step.

> [!NOTE]
-> A search service supports a range of REST API versions, including earlier ones. You can continue to use those API versions, but we recommend migrating your code to the newest version so that you can access new capabilities. Over time, the most outdated versions of the REST API will be deprecated and [no longer supported](search-api-versions.md#unsupported-versions).
+> New filter controls on the table of contents provide version-specific API reference pages. To get the right information, open a reference page and then apply the filter.
<a name="UpgradeSteps"></a> ## How to upgrade
-When upgrading to a new version, you probably won't have to make many changes to your code, other than to change the version number. The only situations in which you may need to change code are when:
-
-* Your code fails when unrecognized properties are returned in an API response. By default your application should ignore properties that it doesn't understand.
-
-* Your code persists API requests and tries to resend them to the new API version. For example, this might happen if your application persists continuation tokens returned from the Search API (for more information, look for `@search.nextPageParameters` in the [Search API Reference](/rest/api/searchservice/Search-Documents)).
+Azure AI Search strives for backward compatibility. To upgrade and continue with existing functionality, you can usually just change the API version number. Conversely, situations that call for code changes include:
-* Your code references an API version that predates 2019-05-06 and is subject to one or more of the breaking changes in that release. The section [Upgrade to 2019-05-06](#upgrade-to-2019-05-06) provides more detail.
-
-If any of these situations apply to you, then you may need to change your code accordingly. Otherwise, no changes should be necessary, although you might want to start using features added in the new version.
++ Your code fails when unrecognized properties are returned in an API response. As a best practice, your application should ignore properties that it doesn't understand.
+
++ Your code persists API requests and tries to resend them to the new API version. For example, this might happen if your application persists continuation tokens returned from the Search API (for more information, look for `@search.nextPageParameters` in the [Search API Reference](/rest/api/searchservice/Search-Documents)).
+
++ Your code references an API version that predates 2019-05-06 and is subject to one or more of the breaking changes in that release. The section [Upgrade to 2019-05-06](#upgrade-to-2019-05-06) provides more detail.
+
+If any of these situations apply to you, change your code to maintain existing functionality. Otherwise, no changes should be necessary, although you might want to start using features added in the new version.
+
+## Upgrade to 2023-11-01
+
+This version has breaking changes and behavioral differences for semantic ranking and vector search support.
+
+[Semantic ranking](semantic-search-overview.md) no longer uses `queryLanguage`. It also requires a `semanticConfiguration` definition. If you're migrating from 2020-06-30-preview, a semantic configuration replaces `searchFields`. See [Migrate from preview version](semantic-how-to-query-request.md#migrate-from-preview-versions) for steps.
+
+[Vector search](vector-search-overview.md) support was introduced in [Create or Update Index (2023-07-01-preview)](/rest/api/searchservice/preview-api/create-or-update-index). If you're migrating from that version, there are new options and several breaking changes. New options include vector filter mode, vector profiles, and an exhaustive K-nearest neighbors algorithm and query-time exhaustive k-NN flag. Breaking changes include renaming and restructuring the vector configuration in the index, and vector query syntax.
+
+If you added vector support using 2023-10-01-preview, there are no breaking changes. There's one behavior difference: the `vectorFilterMode` default changed from postfilter to prefilter. Change the API version and test your code to confirm the migration from the previous preview version (2023-07-01-preview).
+
+> [!TIP]
+> You can upgrade a 2023-07-01-preview index in the Azure portal. The portal detects the previous version and provides a **Migrate** button. Select **Edit JSON** to review the updated schema before selecting **Migrate**. The new and changed schema conforms to the steps described in this section. Portal migration only handles indexes with one vector field. Indexes with more than one vector field require manual migration.
+
+Here are the steps for migrating from 2023-07-01-preview:
+
+1. Call [Get Index](/rest/api/searchservice/indexes/get?view=rest-searchservice-2023-11-01&tabs=HTTP&preserve-view=true) to retrieve the existing definition.
+
+1. Modify the vector search configuration. This API introduces the concept of "vector profiles" which bundles together vector-related configurations under one name. It also renames `algorithmConfigurations` to `algorithms`.
+
+ + Rename `algorithmConfigurations` to `algorithms`. This is only a renaming of the array. The contents are backwards compatible. This means your existing HNSW configuration parameters can be used.
+
+ + Add `profiles`, giving a name and an algorithm configuration for each one.
+
+ **Before migration (2023-07-01-preview)**:
+
+ ```http
+ "vectorSearch": {
+ "algorithmConfigurations": [
+ {
+ "name": "myHnswConfig",
+ "kind": "hnsw",
+ "hnswParameters": {
+ "m": 4,
+ "efConstruction": 400,
+ "efSearch": 500,
+ "metric": "cosine"
+ }
+ }
+ ]}
+ ```
+
+ **After migration (2023-11-01)**:
+
+ ```http
+ "vectorSearch": {
+ "profiles": [
+ {
+ "name": "myHnswProfile",
+ "algorithm": "myHnswConfig"
+ }
+ ],
+ "algorithms": [
+ {
+ "name": "myHnswConfig",
+ "kind": "hnsw",
+ "hnswParameters": {
+ "m": 4,
+ "efConstruction": 400,
+ "efSearch": 500,
+ "metric": "cosine"
+ }
+ }
+ ]
+ }
+ ```
+
+1. Modify vector field definitions, replacing `vectorSearchConfiguration` with `vectorSearchProfile`. Other vector field properties remain unchanged. For example, they can't be filterable, sortable, or facetable, nor use analyzers or normalizers or synonym maps.
+
+ **Before (2023-07-01-preview)**:
+
+ ```http
+ {
+ "name": "contentVector",
+ "type": "Collection(Edm.Single)",
+ "key": false,
+ "searchable": true,
+ "retrievable": true,
+ "filterable": false,
+ "sortable": false,
+ "facetable": false,
+ "analyzer": "",
+ "searchAnalyzer": "",
+ "indexAnalyzer": "",
+ "normalizer": "",
+ "synonymMaps": "",
+ "dimensions": 1536,
+ "vectorSearchConfiguration": "myHnswConfig"
+ }
+ ```
+
+ **After (2023-11-01)**:
+
+ ```http
+ {
+ "name": "contentVector",
+ "type": "Collection(Edm.Single)",
+ "searchable": true,
+ "retrievable": true,
+ "filterable": false,
+ "sortable": false,
+ "facetable": false,
+ "analyzer": "",
+ "searchAnalyzer": "",
+ "indexAnalyzer": "",
+ "normalizer": "",
+ "synonymMaps": "",
+ "dimensions": 1536,
+ "vectorSearchProfile": "myHnswProfile"
+ }
+ ```
+
+1. Call [Create or Update Index](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-11-01&tabs=HTTP&preserve-view=true) to post the changes.
+
+1. Modify [Search POST](/rest/api/searchservice/documents/search-post?view=rest-searchservice-2023-11-01&tabs=HTTP&preserve-view=true) to change the query syntax. This API change enables support for polymorphic vector query types.
+
+ + Rename `vectors` to `vectorQueries`.
+ + For each vector query, add `kind`, setting it to "vector".
+ + For each vector query, rename `value` to `vector`.
+ + Optionally, add `vectorFilterMode` if you're using [filter expressions](vector-search-filters.md). The default is prefilter for indexes created after 2023-10-01. Indexes created before that date only support postfilter, regardless of how you set the filter mode.
+
+ **Before (2023-07-01-preview)**:
+
+ ```http
+ {
+ "search": (this parameter is ignored in vector search),
+ "vectors": [{
+ "value": [
+ 0.103,
+ 0.0712,
+ 0.0852,
+ 0.1547,
+ 0.1183
+ ],
+ "fields": "contentVector",
+ "k": 5
+ }],
+ "select": "title, content, category"
+ }
+ ```
+
+ **After (2023-11-01)**:
+
+ ```http
+ {
+ "search": "(this parameter is ignored in vector search)",
+ "vectorQueries": [
+ {
+ "kind": "vector",
+ "vector": [
+ 0.103,
+ 0.0712,
+ 0.0852,
+ 0.1547,
+ 0.1183
+ ],
+ "fields": "contentVector",
+ "k": 5
+ }
+ ],
+ "vectorFilterMode": "preFilter",
+ "select": "title, content, category"
+ }
+ ```
+
+These steps complete the migration to the 2023-11-01 API version.
## Upgrade to 2020-06-30
-Version 2020-06-30 is the new generally available release of the REST API. There's one breaking change and several behavioral differences.
-
-Features are now generally available in this API version include:
+In this version, there's one breaking change and several behavioral differences. Generally available features include:
-* [Knowledge store](knowledge-store-concept-intro.md), persistent storage of enriched content created through skillsets, created for downstream analysis and processing through other applications. With this capability, an indexer-driven AI enrichment pipeline can populate a knowledge store in addition to a search index. If you used the preview version of this feature, it's equivalent to the generally available version. The only code change required is modifying the api-version.
++ [Knowledge store](knowledge-store-concept-intro.md), persistent storage of enriched content created through skillsets, intended for downstream analysis and processing through other applications. A knowledge store exists in Azure Storage, which you provision and whose connection details you provide to a skillset. With this capability, an indexer-driven AI enrichment pipeline can populate a knowledge store in addition to a search index. If you used the preview version of this feature, it's equivalent to the generally available version. The only code change required is modifying the api-version. ### Breaking change
Existing code written against earlier API versions will break on api-version=202
### Behavior changes
-* [BM25 ranking algorithm](index-ranking-similarity.md) replaces the previous ranking algorithm with newer technology. New services will use this algorithm automatically. For existing services, you must set parameters to use the new algorithm.
+* [BM25 ranking algorithm](index-ranking-similarity.md) replaces the previous ranking algorithm with newer technology. New services use this algorithm automatically. For existing services, you must set parameters to use the new algorithm, as shown in the example after this list.
* Ordered results for null values have changed in this version, with null values appearing first if the sort is `asc` and last if the sort is `desc`. If you wrote code to handle how null values are sorted, be aware of this change.
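
To illustrate the BM25 opt-in referenced above, the index definition on an existing service might include a `similarity` property along the following lines. This is a minimal sketch; the `k1` and `b` values are examples only, and you can omit them to accept the defaults.

```http
"similarity": {
    "@odata.type": "#Microsoft.Azure.Search.BM25Similarity",
    "k1": 1.3,
    "b": 0.5
}
```
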
From API versions 2019-05-06 and 2019-05-06-Preview onwards, the data source API
#### Named Entity Recognition cognitive skill is now discontinued
-If you called the [Name Entity Recognition](cognitive-search-skill-named-entity-recognition.md) skill in your code, the call will fail. Replacement functionality is [Entity Recognition Skill (V3)](cognitive-search-skill-entity-recognition-v3.md). Follow the recommendations in [Deprecated skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill.
+If you called the [Named Entity Recognition](cognitive-search-skill-named-entity-recognition.md) skill in your code, the call fails. The replacement is [Entity Recognition Skill (V3)](cognitive-search-skill-entity-recognition-v3.md). Follow the recommendations in [Deprecated skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill.
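+
+In many cases, migrating means changing the skill's `@odata.type` to the V3 skill and reviewing its inputs and outputs. The following fragment is a rough sketch rather than a complete skillset definition; the input source and output target names are illustrative:
+
+```http
+{
+  "@odata.type": "#Microsoft.Skills.Text.V3.EntityRecognitionSkill",
+  "categories": [ "Person", "Organization", "Location" ],
+  "defaultLanguageCode": "en",
+  "inputs": [
+    { "name": "text", "source": "/document/content" }
+  ],
+  "outputs": [
+    { "name": "persons", "targetName": "people" },
+    { "name": "organizations", "targetName": "organizations" },
+    { "name": "locations", "targetName": "locations" }
+  ]
+}
+```
+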
### Upgrading complex types API version 2019-05-06 added formal support for complex types. If your code implemented previous recommendations for complex type equivalency in 2017-11-11-Preview or 2016-09-01-Preview, there are some new and changed limits starting in version 2019-05-06 of which you need to be aware:
-+ The limits on the depth of subfields and the number of complex collections per index have been lowered. If you created indexes that exceed these limits using the preview api-versions, any attempt to update or recreate them using API version 2019-05-06 will fail. If you find yourself in this situation, you'll need to redesign your schema to fit within the new limits and then rebuild your index.
++ The limits on the depth of subfields and the number of complex collections per index have been lowered. If you created indexes that exceed these limits using the preview api-versions, any attempt to update or recreate them using API version 2019-05-06 will fail. If you find yourself in this situation, you need to redesign your schema to fit within the new limits and then rebuild your index.
-+ There's a new limit starting in api-version 2019-05-06 on the number of elements of complex collections per document. If you created indexes with documents that exceed these limits using the preview api-versions, any attempt to reindex that data using api-version 2019-05-06 will fail. If you find yourself in this situation, you'll need to reduce the number of complex collection elements per document before reindexing your data.
++ There's a new limit starting in api-version 2019-05-06 on the number of elements of complex collections per document. If you created indexes with documents that exceed these limits using the preview api-versions, any attempt to reindex that data using api-version 2019-05-06 will fail. If you find yourself in this situation, you need to reduce the number of complex collection elements per document before reindexing your data. For more information, see [Service limits for Azure AI Search](search-limits-quotas-capacity.md). #### How to upgrade an old complex type structure
-If your code is using complex types with one of the older preview API versions, you may be using an index definition format that looks like this:
+If your code is using complex types with one of the older preview API versions, you might be using an index definition format that looks like this:
```json {
You can update "flat" indexes to the new format with the following steps using A
1. Perform a GET request to retrieve your index. If it's already in the new format, you're done.
-2. Translate the index from the "flat" format to the new format. You'll have to write code for this task since there's no sample code available at the time of this writing.
+2. Translate the index from the "flat" format to the new format. You have to write code for this task since there's no sample code available at the time of this writing.
3. Perform a PUT request to update the index to the new format. Avoid changing any other details of the index, such as the searchability/filterability of fields, because changes that affect the physical expression of an existing index aren't allowed by the Update Index API.
search Vector Search How To Chunk Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-chunk-documents.md
mountains. /n You can both ski in winter and swim in summer.
## Try it out: Chunking and vector embedding generation sample
-A [fixed-sized chunking and embedding generation sample](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Vector/EmbeddingGenerator/README.md) demonstrates both chunking and vector embedding generation using [Azure OpenAI](/azure/ai-services/openai/) embedding models. This sample uses a [Azure AI Search custom skill](cognitive-search-custom-skill-web-api.md) in the [Power Skills repo](https://github.com/Azure-Samples/azure-search-power-skills/tree/main#readme) to wrap the chunking step.
+A [fixed-sized chunking and embedding generation sample](https://github.com/Azure-Samples/azure-search-power-skills/blob/main/Vector/EmbeddingGenerator/README.md) demonstrates both chunking and vector embedding generation using [Azure OpenAI](/azure/ai-services/openai/) embedding models. This sample uses an [Azure AI Search custom skill](cognitive-search-custom-skill-web-api.md) in the [Power Skills repo](https://github.com/Azure-Samples/azure-search-power-skills/tree/main#readme) to wrap the chunking step.
This sample is built on LangChain, Azure OpenAI, and Azure AI Search.
search Vector Search How To Create Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-create-index.md
- ignite-2023 Previously updated : 11/04/2023 Last updated : 11/27/2023 # Add vector fields to a search index
Follow these steps to index vector data:
This article applies to the generally available, non-preview version of [vector search](vector-search-overview.md), which assumes your application code calls external resources for chunking and encoding. > [!NOTE]
-> Code samples in the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries.
+> Looking for migration guidance from 2023-07-01-preview? See [Upgrade REST APIs](search-api-migration.md).
## Prerequisites
A vector configuration specifies the [vector search algorithm](vector-search-ran
If you choose HNSW on a field, you can opt in for exhaustive KNN at query time. But the other direction won't work: if you choose exhaustive, you can't later request HNSW search because the extra data structures that enable approximate search don't exist.
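
For example, in the stable query syntax, exhaustive search over an HNSW field is a per-query opt-in via the `exhaustive` property. The following fragment is a minimal sketch; the vector values are truncated placeholders:

```http
"vectorQueries": [
    {
        "kind": "vector",
        "vector": [ 0.103, 0.0712, 0.0852, 0.1547, 0.1183 ],
        "fields": "contentVector",
        "k": 5,
        "exhaustive": true
    }
]
```
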
+Looking for preview-to-stable version migration guidance? See [Upgrade REST APIs](search-api-migration.md) for steps.
+ ### [**2023-11-01**](#tab/config-2023-11-01) REST API version [**2023-11-01**](/rest/api/searchservice/search-service-api-versions#2023-11-01) supports a vector configuration having:
-+ `hnsw` and `exhaustiveKnn` nearest neighbors algorithm for indexing vector content.
-+ Parameters for specifying the similarity metric used for scoring.
++ `vectorSearch` algorithms, `hnsw` and `exhaustiveKnn` nearest neighbors, with parameters for indexing and scoring. + `vectorProfiles` for multiple combinations of algorithm configurations. Be sure to have a strategy for [vectorizing your content](vector-search-how-to-generate-embeddings.md). The stable version doesn't provide [vectorizers](vector-search-how-to-configure-vectorizer.md) for built-in embedding.
Be sure to have a strategy for [vectorizing your content](vector-search-how-to-g
REST API version [**2023-10-01-Preview**](/rest/api/searchservice/search-service-api-versions#2023-10-01-Preview) supports external and [internal vectorization](vector-search-how-to-configure-vectorizer.md). This section assumes an external vectorization strategy. This API supports:
-+ `hnsw` and `exhaustiveKnn` nearest neighbors algorithm for indexing vector content.
-+ Parameters for specifying the similarity metric used for scoring.
++ `vectorSearch` algorithms, `hnsw` and `exhaustiveKnn` nearest neighbors, with parameters for indexing and scoring. + `vectorProfiles` for multiple combinations of algorithm configurations. 1. Use the [Create or Update Index Preview REST API](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) to create the index.
api-key: {{admin-api-key}}
As a next step, we recommend [Query vector data in a search index](vector-search-how-to-query.md).
-You might also consider reviewing the demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet) or [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript).
+Code samples in the [azure-search-vector](https://github.com/Azure/cognitive-search-vector-pr) repository demonstrate end-to-end workflows that include schema definition, vectorization, indexing, and queries.
+
+There's demo code for [Python](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-python), [C#](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-dotnet), [Java](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-java), and [JavaScript](https://github.com/Azure/cognitive-search-vector-pr/tree/main/demo-javascript).
search Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/whats-new.md
Previously updated : 11/15/2023 Last updated : 11/27/2023 - references_regions - ignite-2023
| [**Integrated vectorization (preview)**](vector-search-integrated-vectorization.md) | Feature | Adds data chunking and text-to-vector conversions during indexing, and also adds text-to-vector conversions at query time. | | [**Import and vectorize data wizard (preview)**](search-get-started-portal-import-vectors.md) | Feature | A new wizard in the Azure portal that automates data chunking and vectorization. It targets the [2023-10-01-Preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) REST API. | | [**Index projections (preview)**](index-projections-concept-intro.md) | Feature | A component of a skillset definition that defines the shape of a secondary index. Index projections are used for a one-to-many index pattern, where content from an enrichment pipeline can target multiple indexes. You can define index projections using the [2023-10-01-Preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) REST API, the Azure portal, and any Azure SDK beta packages that are updated to use this feature. |
-| [**2023-11-01 Search REST API**](/rest/api/searchservice/search-service-api-versions#2023-11-01) | API | New stable version of the Search REST APIs for [vector fields](vector-search-how-to-create-index.md), [vector queries](vector-search-how-to-query.md), and [semantic ranking](semantic-how-to-query-request.md). |
+| [**2023-11-01 Search REST API**](/rest/api/searchservice/search-service-api-versions#2023-11-01) | API | New stable version of the Search REST APIs for [vector fields](vector-search-how-to-create-index.md), [vector queries](vector-search-how-to-query.md), and [semantic ranking](semantic-how-to-query-request.md). See [Upgrade REST APIs](search-api-migration.md) for migration steps to generally available features.|
| [**2023-11-01 Management REST API**](/rest/api/searchmanagement/operation-groups?view=rest-searchmanagement-2023-11-01&preserve-view=true) | API | New stable version of the Management REST APIs for control plane operations. This version adds APIs that [enable or disable semantic ranking](/rest/api/searchmanagement/services/create-or-update#searchsemanticsearch). | | [**Azure OpenAI Embedding skill (preview)**](cognitive-search-skill-azure-openai-embedding.md) | Skill | Connects to a deployed embedding model on your Azure OpenAI resource to generate embeddings during skillset execution. This skill is available through the [2023-10-01-Preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) REST API, the Azure portal, and any Azure SDK beta packages that are updated to use this feature.| | [**Text Split skill (preview)**](cognitive-search-skill-textsplit.md) | Skill | Updated in [2023-10-01-Preview](/rest/api/searchservice/skillsets/create-or-update?view=rest-searchservice-2023-10-01-preview&preserve-view=true) to support native data chunking. |
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
Previously updated : 07/17/2023 Last updated : 11/27/2023
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| [DigiCert Global Root G2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 | | [DigiCert Global Root G3](https://cacerts.digicert.com/DigiCertGlobalRootG3.crt) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E | | [Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
-| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/archived/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 29c87039f4dbfdb94dbcda6ca792836b<br>ee68c3e94ab5d55eb9395116424e25b0cadd9009 |
+| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73a5e64a3bff8316ff0edccc618a906e4eae4d74 |
### Subordinate Certificate Authorities
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| Certificate Authority | Serial Number<br>Thumbprint | |- |- |
-| [**Baltimore CyberTrust Root**](https://crt.sh/?d=76) | 020000b9<br>d4de20d05e66fc53fe1a50882c78db2852cae474 |
+| [**Baltimore CyberTrust Root**](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | 020000b9<br>d4de20d05e66fc53fe1a50882c78db2852cae474 |
| Γöö [Microsoft RSA TLS CA 01](https://crt.sh/?d=3124375355) | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A | | Γöö [Microsoft RSA TLS CA 02](https://crt.sh/?d=3124375356) | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 |
-| [**DigiCert Global Root CA**](https://crt.sh/?d=853428) | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 |
+| [**DigiCert Global Root CA**](https://cacerts.digicert.com/DigiCertGlobalRootCA.crt) | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 |
| Γöö [DigiCert Basic RSA CN CA G2](https://crt.sh/?d=2545289014) | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 | | Γöö [DigiCert Cloud Services CA-1](https://crt.sh/?d=12624881) | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 | | Γöö [DigiCert SHA2 Secure Server CA](https://crt.sh/?d=3422153451) | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| Γöö [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD | | Γöö [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF | | Γöö [GeoTrust TLS DV RSA Mixed SHA256 2020 CA-1](https://crt.sh/?d=3112858728) |0x0c08966535b942a9735265e4f97540bc<br>2F7AA2D86056A8775796F798C481A079E538E004 |
-| [**DigiCert Global Root G2**](https://crt.sh/?d=8656329) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 |
+| [**DigiCert Global Root G2**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 |
| Γöö [Microsoft Azure TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001.cer) | 0x0aafa6c5ca63c45141ea3be1f7c75317<br>2F2877C5D778C31E0F29C7E371DF5471BD673173 | | Γöö [Microsoft Azure TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002.cer) | 0x0c6ae97cced599838690a00a9ea53214<br>E7EEA674CA718E3BEFD90858E09F8372AD0AE2AA | | Γöö [*Microsoft Azure RSA TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x05196526449a5e3d1a38748f5dcfebcc<br>F9388EA2C9B7D632B66A2B0B406DF1D37D3901F6 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| Γöö [*Microsoft Azure RSA TLS Issuing CA 08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0efb7e547edf0ff1069aee57696d7ba0<br>31600991ED5FEC63D355A5484A6DCC787EAD89BC | | Γöö [Microsoft Azure TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2005.cer) | 0x0d7bede97d8209967a52631b8bdd18bd<br>6C3AF02E7F269AA73AFD0EFF2A88A4A1F04ED1E5 | | Γöö [Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer) | 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0 |
-| [**DigiCert Global Root G3**](https://crt.sh/?d=8568700) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E |
+| [**DigiCert Global Root G3**](https://cacerts.digicert.com/DigiCertGlobalRootG3.crt) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E |
| Γöö [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer) | 0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0 | | Γöö [Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer) | 0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1 | | Γöö [*Microsoft Azure ECC TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x01529ee8368f0b5d72ba433e2d8ea62d<br>56D955C849887874AA1767810366D90ADF6C8536 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| Γöö [*Microsoft Azure ECC TLS Issuing CA 08*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0ef2e5d83681520255e92c608fbc2ff4<br>716DF84638AC8E6EEBE64416C8DD38C2A25F6630 | | Γöö [Microsoft Azure ECC TLS Issuing CA 05](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2005.cer) | 0x0ce59c30fd7a83532e2d0146b332f965<br>C6363570AF8303CDF31C1D5AD81E19DBFE172531 | | Γöö [Microsoft Azure ECC TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2006.cer) | 0x066e79cd7624c63130c77abeb6a8bb94<br>7365ADAEDFEA4909C1BAADBAB68719AD0C381163 |
-| [**Microsoft ECC Root Certificate Authority 2017**](https://crt.sh/?d=2565145421) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
+| [**Microsoft ECC Root Certificate Authority 2017**](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
| Γöö [Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805) | 0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268 | | Γöö [Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233) | 0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6 | | Γöö [*Microsoft Azure ECC TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003.crt) | 0x330000003322a2579b5e698bcc000000000033<br>91503BE7BF74E2A10AA078B48B71C3477175FEC3 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| Γöö [Microsoft ECC TLS Issuing AOC CA 02](https://crt.sh/?d=4814787086) |33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 | | Γöö [Microsoft ECC TLS Issuing EOC CA 01](https://crt.sh/?d=4814787088) |330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 | | Γöö [Microsoft ECC TLS Issuing EOC CA 02](https://crt.sh/?d=4814787085) |330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a |
-| [**Microsoft RSA Root Certificate Authority 2017**](https://crt.sh/?id=2565151295) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 |
+| [**Microsoft RSA Root Certificate Authority 2017**](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 |
| Γöö [*Microsoft Azure RSA TLS Issuing CA 03*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003.crt) | 0x330000003968ea517d8a7e30ce000000000039<br>37461AACFA5970F7F2D2BAC5A659B53B72541C68 | | Γöö [*Microsoft Azure RSA TLS Issuing CA 04*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004.crt) | 0x330000003cd7cb44ee579961d000000000003c<br>7304022CA8A9FF7E3E0C1242E0110E643822C45E | | Γöö [*Microsoft Azure RSA TLS Issuing CA 07*](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007.crt) | 0x330000003bf980b0c83783431700000000003b<br>0E5F41B697DAADD808BF55AD080350A2A5DFCA93 |
sentinel Automate Incident Handling With Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md
The order of automation rules that add [incident tasks](incident-tasks.md) deter
Rules based on the update trigger have their own separate order queue. If such rules are triggered to run on a just-created incident (by a change made by another automation rule), they will run only after all the applicable rules based on the create trigger have run.
+#### Notes on execution order and priority
+
+- Setting the **order** number in automation rules determines their order of execution.
+- Each trigger type maintains its own queue.
+- For rules created in the Azure portal, the **order** field is automatically populated with the next number after the highest order number used by existing rules of the same trigger type.
+- However, for rules created in other ways (command line, API, etc.), you must assign the **order** number manually (see the example after this list).
+- There is no validation mechanism preventing multiple rules from having the same order number, even within the same trigger type.
+- You can allow two or more rules of the same trigger type to have the same order number, if you don't care which order they run in.
+- For rules of the same trigger type with the same order number, the execution engine randomly selects which rules will run in which order.
+- For rules of different *incident trigger* types, all applicable rules with the *incident creation* trigger type will run first (according to their order numbers), and only then the rules with the *incident update* trigger type (according to *their* order numbers).
+- Rules always run sequentially, never in parallel.
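+As a rough illustration of assigning the order through the API, a request along the following lines sets `order` in the automation rule properties. Treat this as a sketch only: the resource path, API version, and the omitted `triggeringLogic` and `actions` details are assumptions to verify against the current automation rules API reference.
+
+```http
+PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.OperationalInsights/workspaces/{workspaceName}/providers/Microsoft.SecurityInsights/automationRules/{automationRuleId}?api-version=2023-02-01
+
+{
+  "properties": {
+    "displayName": "Tag high-severity incidents",
+    "order": 2,
+    "triggeringLogic": { ... },
+    "actions": [ ... ]
+  }
+}
+```
+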
+ ## Common use cases and scenarios ### Incident tasks
When you're configuring an automation rule and adding a **run playbook** action,
#### Permissions in a multi-tenant architecture
-Automation rules fully support cross-workspace and [multi-tenant deployments](extend-sentinel-across-workspaces-tenants.md#manage-workspaces-across-tenants-using-azure-lighthouse) (in the case of multi-tenant, using [Azure Lighthouse](../lighthouse/index.yml)).
+Automation rules fully support cross-workspace and [multitenant deployments](extend-sentinel-across-workspaces-tenants.md#manage-workspaces-across-tenants-using-azure-lighthouse) (in the case of multitenant, using [Azure Lighthouse](../lighthouse/index.yml)).
-Therefore, if your Microsoft Sentinel deployment uses a multi-tenant architecture, you can have an automation rule in one tenant run a playbook that lives in a different tenant, but permissions for Sentinel to run the playbooks must be defined in the tenant where the playbooks reside, not in the tenant where the automation rules are defined.
+Therefore, if your Microsoft Sentinel deployment uses a multitenant architecture, you can have an automation rule in one tenant run a playbook that lives in a different tenant, but permissions for Sentinel to run the playbooks must be defined in the tenant where the playbooks reside, not in the tenant where the automation rules are defined.
In the specific case of a Managed Security Service Provider (MSSP), where a service provider tenant manages a Microsoft Sentinel workspace in a customer tenant, there are two particular scenarios that warrant your attention:
sentinel Create Manage Use Automation Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-manage-use-automation-rules.md
In this article you'll learn how to define the triggers and conditions that will
### Determine the scope
-The first step in designing and defining your automation rule is figuring out which incidents (or alerts) you want it to apply to. This determination will directly impact how you create the rule.
+The first step in designing and defining your automation rule is figuring out which incidents or alerts you want it to apply to. This determination will directly impact how you create the rule.
You also want to determine your use case. What are you trying to accomplish with this automation? Consider the following options:
You also want to determine your use case. What are you trying to accomplish with
### Determine the trigger
-Do you want this automation to be activated when new incidents (or alerts, in preview) are created? Or anytime an incident gets updated?
+Do you want this automation to be activated when new incidents or alerts are created? Or anytime an incident gets updated?
Automation rules are triggered **when an incident is created or updated** or **when an alert is created**. Recall that incidents include alerts, and that both alerts and incidents are created by analytics rules, of which there are several types, as explained in [Detect threats with built-in analytics rules in Microsoft Sentinel](detect-threats-built-in.md).
You can change the order of actions in your rule even after you've added them. S
### Finish creating your rule
-1. Set an **expiration date** for your automation rule if you want it to have one.
+1. Under **Rule expiration**, if you want your automation rule to expire, set an expiration date (and optionally, a time). Otherwise, leave it as *Indefinite*.
-1. Enter a number under **Order** to determine where in the sequence of automation rules this rule will run.
+1. The **Order** field is pre-populated with the next available number for your rule's trigger type. This number determines where in the sequence of automation rules (of the same trigger type) this rule will run. You can change the number if you want this rule to run before an existing rule.
+
+ See [Notes on execution order and priority](automate-incident-handling-with-automation-rules.md#notes-on-execution-order-and-priority) for more information.
1. Click **Apply**. You're done! + ## Audit automation rule activity Find out what automation rules may have done to a given incident. You have a full record of incident chronicles available to you in the *SecurityIncident* table in the **Logs** blade. Use the following query to see all your automation rule activity:
SecurityIncident
## Automation rules execution
-Automation rules are run sequentially, according to the order you determine. Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined.
+Automation rules are run sequentially, according to the order you determine. Each automation rule is executed after the previous one has finished its run. Within an automation rule, all actions are run sequentially in the order in which they are defined. See [Notes on execution order and priority](automate-incident-handling-with-automation-rules.md#notes-on-execution-order-and-priority) for more information.
Playbook actions within an automation rule may be treated differently under some circumstances, according to the following criteria:
service-bus-messaging Service Bus Azure And Service Bus Queues Compared Contrasted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
Title: Compare Azure Storage queues and Service Bus queues
description: Analyzes differences and similarities between two types of queues offered by Azure. Previously updated : 10/25/2022 Last updated : 11/27/2023 # Storage queues and Service Bus queues - compared and contrasted
This article analyzes the differences and similarities between the two types of
## Introduction Azure supports two types of queue mechanisms: **Storage queues** and **Service Bus queues**.
-**Storage queues** are part of the [Azure Storage](https://azure.microsoft.com/services/storage/) infrastructure. They allow you to store large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue may contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously. For more information, see [What are Azure Storage queues](../storage/queues/storage-queues-introduction.md).
+**Storage queues** are part of the [Azure Storage](https://azure.microsoft.com/services/storage/) infrastructure. They allow you to store large numbers of messages. You access messages from anywhere in the world via authenticated calls using HTTP or HTTPS. A queue message can be up to 64 KB in size. A queue might contain millions of messages, up to the total capacity limit of a storage account. Queues are commonly used to create a backlog of work to process asynchronously. For more information, see [What are Azure Storage queues](../storage/queues/storage-queues-introduction.md).
-**Service Bus queues** are part of a broader [Azure messaging](https://azure.microsoft.com/services/service-bus/) infrastructure that supports queuing, publish/subscribe, and more advanced integration patterns. They're designed to integrate applications or application components that may span multiple communication protocols, data contracts, trust domains, or network environments. For more information about Service Bus queues/topics/subscriptions, see the [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md).
+**Service Bus queues** are part of a broader [Azure messaging](https://azure.microsoft.com/services/service-bus/) infrastructure that supports queuing, publish/subscribe, and more advanced integration patterns. They're designed to integrate applications or application components that might span multiple communication protocols, data contracts, trust domains, or network environments. For more information about Service Bus queues/topics/subscriptions, see the [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md).
## Technology selection considerations
This section compares advanced capabilities provided by Storage queues and Servi
* The duplication detection feature of Service Bus queues automatically removes duplicate messages sent to a queue or topic, based on the value of the message ID property. ## Capacity and quotas
-This section compares Storage queues and Service Bus queues from the perspective of [capacity and quotas](service-bus-quotas.md) that may apply.
+This section compares Storage queues and Service Bus queues from the perspective of [capacity and quotas](service-bus-quotas.md) that might apply.
| Comparison Criteria | Storage queues | Service Bus queues | | | | | | Maximum queue size |500 TB<br/><br/>(limited to a [single storage account capacity](../storage/common/storage-introduction.md#queue-storage)) |1 GB to 80 GB<br/><br/>(defined upon creation of a queue and [enabling partitioning](service-bus-partitioning.md) – see the "Additional Information" section) |
-| Maximum message size |64 KB<br/><br/>(48 KB when using Base64 encoding)<br/><br/>Azure supports large messages by combining queues and blobs – at which point you can enqueue up to 200 GB for a single item. |256 KB or 100 MB<br/><br/>(including both header and body, maximum header size: 64 KB).<br/><br/>Depends on the [service tier](service-bus-premium-messaging.md). |
+| Maximum message size |64 KB<br/><br/>(48 KB when using Base 64 encoding)<br/><br/>Azure supports large messages by combining queues and blobs – at which point you can enqueue up to 200 GB for a single item. |256 KB or 100 MB<br/><br/>(including both header and body, maximum header size: 64 KB).<br/><br/>Depends on the [service tier](service-bus-premium-messaging.md). |
| Maximum message TTL |Infinite (api-version 2017-07-27 or later) |TimeSpan.MaxValue | | Maximum number of queues |Unlimited |10,000<br/><br/>(per service namespace) | | Maximum number of concurrent clients |Unlimited |5,000 |
This section discusses the authentication and authorization features supported b
### Additional information * Every request to either of the queuing technologies must be authenticated. Public queues with anonymous access aren't supported. * Using shared access signature (SAS) authentication, you can create a shared access authorization rule on a queue that can give users a write-only, read-only, or full access. For more information, see [Azure Storage - SAS authentication](../storage/common/storage-sas-overview.md) and [Azure Service Bus - SAS authentication](service-bus-sas.md).
-* Both queues support authorizing access using Microsoft Entra ID. Authorizing users or applications using OAuth 2.0 token returned by Microsoft Entra ID provides superior security and ease of use over shared access signatures (SAS). With Microsoft Entra ID, there is no need to store the tokens in your code and risk potential security vulnerabilities. For more information, see [Azure Storage - Microsoft Entra authentication](../storage/queues/assign-azure-role-data-access.md) and [Azure Service Bus - Microsoft Entra authentication](service-bus-authentication-and-authorization.md#azure-active-directory).
+* Both queues support authorizing access using Microsoft Entra ID. Authorizing users or applications using an OAuth 2.0 token returned by Microsoft Entra ID provides superior security and ease of use over shared access signatures (SAS). With Microsoft Entra ID, there's no need to store the tokens in your code and risk potential security vulnerabilities. For more information, see [Azure Storage - Microsoft Entra authentication](../storage/queues/assign-azure-role-data-access.md) and [Azure Service Bus - Microsoft Entra authentication](service-bus-authentication-and-authorization.md#azure-active-directory).
## Conclusion
-By gaining a deeper understanding of the two technologies, you can make a more informed decision on which queue technology to use, and when. The decision on when to use Storage queues or Service Bus queues clearly depends on many factors. These factors may depend heavily on the individual needs of your application and its architecture.
+By gaining a deeper understanding of the two technologies, you can make a more informed decision on which queue technology to use, and when. The decision on when to use Storage queues or Service Bus queues clearly depends on many factors. These factors depend heavily on the individual needs of your application and its architecture.
-You may prefer to choose Storage queues for reasons such as the following ones:
+You might prefer to choose Storage queues for reasons such as the following ones:
- If your application already uses the core capabilities of Microsoft Azure - If you require basic communication and messaging between services - Need queues that can be larger than 80 GB in size
-Service Bus queues provide many advanced features such as the following ones. So, they may be a preferred choice if you're building a hybrid application or if your application otherwise requires these features.
+Service Bus queues provide many advanced features such as the following ones. So, they might be a preferred choice if you're building a hybrid application or if your application otherwise requires these features.
- [Sessions](message-sessions.md) - [Transactions](service-bus-transactions.md)
service-connector How To Integrate Redis Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-redis-cache.md
Use the environment variable names and application properties listed below to co
| | - | - | | AZURE_REDIS_CONNECTIONSTRING | node-redis connection string | `rediss://:<redis-key>@<redis-server-name>.redis.cache.windows.net:6380/0` |
+#### [Other](#tab/none)
+
+| Default environment variable name | Description | Example value |
+| | -- | -- |
+| AZURE_REDIS_HOST | Redis host | `<redis-server-name>.redis.cache.windows.net` |
+| AZURE_REDIS_PORT | Redis port | `6380` |
+| AZURE_REDIS_DATABASE | Redis database | `0` |
+| AZURE_REDIS_PASSWORD | Redis key | `<redis-key>` |
+| AZURE_REDIS_SSL | SSL setting | `true` |
+ #### Sample code
service-connector How To Integrate Storage Blob https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/how-to-integrate-storage-blob.md
For default environment variables and sample code of other authentication type,
| azure.storage.account-name | Your Blob storage-account-name | `<storage-account-name>` | | azure.storage.account-key | Your Blob Storage account key | `<account-key>` | | azure.storage.blob-endpoint | Your Blob Storage endpoint | `https://<storage-account-name>.blob.core.windows.net/` |
+| spring.cloud.azure.storage.blob.account-name | Your Blob storage-account-name for Spring Cloud Azure version 4.0 or above | `<storage-account-name>` |
+| spring.cloud.azure.storage.blob.account-key | Your Blob Storage account key for Spring Cloud Azure version 4.0 or above | `<account-key>` |
+| spring.cloud.azure.storage.blob.endpoint | Your Blob Storage endpoint for Spring Cloud Azure version 4.0 or above | `https://<storage-account-name>.blob.core.windows.net/` |
#### Other client types | Default environment variable name | Description | Example value | ||--|| | AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
-| Default environment variable name | Description | Example value |
-| - | | - |
-| AZURE_STORAGEBLOB_CONNECTIONSTRING | Blob Storage connection string | `DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account-key>;EndpointSuffix=core.windows.net` |
#### Sample code
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Guest/server disk with 4K logical and 512-bytes physical sector size | No
Guest/server volume with striped disk >4 TB | Yes Logical volume management (LVM)| Thick provisioning - Yes <br></br> Thin provisioning - Yes, it is supported from [Update Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) onwards. It wasn't supported in earlier Mobility service versions. Guest/server - Storage Spaces | No
-Guest/server - NVMe interface | Yes
+Guest/server - NVMe interface | Yes, for Windows machines. Not supported for Linux machines.
Guest/server hot add/remove disk | No Guest/server - exclude disk | Yes Guest/server multipath (MPIO) | No
spring-apps How To Enterprise Deploy Polyglot Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md
These buildpacks support building with source code or artifacts for Java, .NET C
All the builders configured in an Azure Spring Apps service instance are listed on the **Build Service** page, as shown in the following screenshot: Select **Add** to create a new builder. The following screenshot shows the resources you should use to create the custom builder. The [OS Stack](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/services/tanzu-buildpacks/GUID-stacks.html) includes `Bionic Base`, `Bionic Full`, `Jammy Tiny`, `Jammy Base`, and `Jammy Full`. Bionic is based on `Ubuntu 18.04 (Bionic Beaver)` and Jammy is based on `Ubuntu 22.04 (Jammy Jellyfish)`. For more information, see the [OS stack recommendations](#os-stack-recommendations) section. We recommend using `Jammy OS Stack` to create your builder because VMware is deprecating `Bionic OS Stack`. You can also edit a custom builder when the builder isn't used in a deployment. You can update the buildpacks or the OS Stack, but the builder name is read only. The builder is a resource that continuously contributes to your deployments. It provides the latest runtime images and latest buildpacks.
You can't delete a builder when existing active deployments are being built with
In Azure Spring Apps, we recommend using `Jammy OS Stack` to create your builder because `Bionic OS Stack` is in line for deprecation by VMware. The following list describes the options available: -- Jammy Tiny - suitable for building a minimal image for the smallest possible size and security footprint. Like building a Java Native Image, it can make the final container image smaller. The integrated libraries are limited. For example, you can't [connect to an app instance for troubleshooting](how-to-connect-to-app-instance-for-troubleshooting.md) because there's no `shell` library.
+- Jammy Tiny: Suitable for building a minimal image for the smallest possible size and security footprint. Like building a Java Native Image, it can make the final container image smaller. The integrated libraries are limited. For example, you can't [connect to an app instance for troubleshooting](how-to-connect-to-app-instance-for-troubleshooting.md) because there's no `shell` library.
+ - Most Go apps.
+ - Java apps. Some Apache Tomcat configuration options, such as setting *bin/setenv.sh*, aren't available because Tiny has no shell.
-- Jammy Base - suitable for most apps without native extensions.
+- Jammy Base: Suitable for most apps without native extensions.
+ - Java apps and .NET Core apps.
+ - Go apps that require some C libraries.
+ - Node.js, Python, or Web Servers apps without native extensions.
-- Jammy Full - includes the most libraries, and is suitable for apps with native extensions. For example, it includes a more complete library of fonts. If your app relies on the native extension, then use the `Full` stack.
+- Jammy Full: Includes most libraries, and is suitable for apps with native extensions. For example, it includes a more complete library of fonts. If your app relies on the native extension, then use the `Full` stack.
+ - Node.js or Python apps with native extensions.
For more information, see [Ubuntu Stacks](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/services/tanzu-buildpacks/GUID-stacks.html#ubuntu-stacks) in the VMware documentation.
Use the following steps to show, add, edit, and delete the container registry:
1. Select **Container registry** in the navigation pane. 1. Select **Add** to create a container registry.
- :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/add-container-registry.png" alt-text="Screenshot of Azure portal showing the Container registry page with Add container registry button." lightbox="media/how-to-enterprise-deploy-polyglot-apps/add-container-registry.png":::
+ :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/add-container-registry.png" alt-text="Screenshot of Azure portal that shows the Container registry page with Add container registry button." lightbox="media/how-to-enterprise-deploy-polyglot-apps/add-container-registry.png":::
1. For a container registry, select the ellipsis (**...**) button, then select **Edit** to view the registry configuration.
- :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/show-container-registry.png" alt-text="Screenshot of the Azure portal showing the Container registry page." lightbox="media/how-to-enterprise-deploy-polyglot-apps/show-container-registry.png":::
+ :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/show-container-registry.png" alt-text="Screenshot of the Azure portal that shows the Container registry page." lightbox="media/how-to-enterprise-deploy-polyglot-apps/show-container-registry.png":::
1. Review the values on the **Edit container registry** page.
- :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/edit-container-registry.png" alt-text="Screenshot of the Azure portal showing the Container registry page with Edit container registry pane open for the current container registry in the list." lightbox="media/how-to-enterprise-deploy-polyglot-apps/edit-container-registry.png":::
+ :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/edit-container-registry.png" alt-text="Screenshot of the Azure portal that shows the Container registry page with Edit container registry pane open for the current container registry in the list." lightbox="media/how-to-enterprise-deploy-polyglot-apps/edit-container-registry.png":::
1. To delete a container registry, select the ellipsis (**...**) button, then select **Delete** to delete the registry. If the container registry is used by build service, it can't be deleted.
- :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/delete-container-registry.png" alt-text="Screenshot of Azure portal showing the Container registry page with Delete container registry pane open for the current container registry in the list." lightbox="media/how-to-enterprise-deploy-polyglot-apps/delete-container-registry.png":::
+ :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/delete-container-registry.png" alt-text="Screenshot of Azure portal that shows the Container registry page with Delete container registry pane open for the current container registry in the list." lightbox="media/how-to-enterprise-deploy-polyglot-apps/delete-container-registry.png":::
#### [Azure CLI](#tab/Azure-CLI)
If the build service is using the container registry, then you can't delete it.
-The build service can use a container registry, and can also change the associated container registry. This process is time consuming. When the change happens, all the builder and build resources under the build service are rebuilt, and then the final container images are pushed to the new container registry.
+The build service can use a container registry, and can also change the associated container registry. This process is time consuming. When the change happens, all the builder and build resources under the build service rebuild, and then the final container images get pushed to the new container registry.
#### [Azure portal](#tab/Portal)
Use the following steps to switch the container registry associated with the bui
1. Select **Build Service** in the navigation pane. 1. Select **Referenced container registry** to update the container registry for the build service.
- :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/switch-build-service-container-registry.png" alt-text="Screenshot of the Azure portal showing the Build Service page with referenced container registry highlighted." lightbox="media/how-to-enterprise-deploy-polyglot-apps/switch-build-service-container-registry.png":::
+ :::image type="content" source="media/how-to-enterprise-deploy-polyglot-apps/switch-build-service-container-registry.png" alt-text="Screenshot of the Azure portal that shows the Build Service page with referenced container registry highlighted." lightbox="media/how-to-enterprise-deploy-polyglot-apps/switch-build-service-container-registry.png":::
#### [Azure CLI](#tab/Azure-CLI)
For more information about the supported configurations for different language a
- Serialization - Bytecode isn't available at runtime anymore, so debugging and monitoring with tools targeted to the JVMTI isn't possible.
-The following features aren't supported in Azure Spring Apps due to the limitation of Java Native Image. Azure Spring Apps will support them as long as Java Native Image and the community overcomes the limitation.
+The following features aren't supported in Azure Spring Apps due to the limitation of Java Native Image. Azure Spring Apps will support them when Java Native Image and the community overcome the limitation.
| Feature | Why it isn't supported | ||-|
The following table lists the features supported in Azure Spring Apps:
| Integrate with Application Insights, Dynatrace, Elastic, New Relic, App Dynamic APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Deploy WAR package with Apache Tomcat or TomEE. | Set the application server to use. Set to *tomcat* to use Tomcat and *tomee* to use TomEE. The default value is *tomcat*. | `BP_JAVA_APP_SERVER` | `--build-env BP_JAVA_APP_SERVER=tomee` | | Support Spring Boot applications. | Indicates whether to contribute Spring Cloud Bindings support for the image at build time. The default value is *false*. | `BP_SPRING_CLOUD_BINDINGS_DISABLED` | `--build-env BP_SPRING_CLOUD_BINDINGS_DISABLED=false` |
-| | Indicates whether to autoconfigure Spring Boot environment properties from bindings at runtime. This feature requires Spring Cloud Bindings to have been installed at build time or it does nothing. The default value is *false*. | `BPL_SPRING_CLOUD_BINDINGS_DISABLED` | `--env BPL_SPRING_CLOUD_BINDINGS_DISABLED=false` |
+| | Indicates whether to autoconfigure Spring Boot environment properties from bindings at runtime. This feature requires Spring Cloud Bindings to have already been installed at build time or it does nothing. The default value is *false*. | `BPL_SPRING_CLOUD_BINDINGS_DISABLED` | `--env BPL_SPRING_CLOUD_BINDINGS_DISABLED=false` |
| Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` | | Support building Gradle-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_GRADLE_BUILT_MODULE` | `--build-env BP_GRADLE_BUILT_MODULE=./gateway` |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> see more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
-| Integrate JProfiler agent. | Indicates whether to integrate JProfiler support. The default value is *false*. | `BP_JPROFILER_ENABLED` | build phase: <br>`--build-env BP_JPROFILER_ENABLED=true` <br> runtime phase: <br> `--env BPL_JPROFILER_ENABLED=true` <br> `BPL_JPROFILER_PORT=<port>` (optional, defaults to *8849*) <br> `BPL_JPROFILER_NOWAIT=true` (optional. Indicates whether the JVM executes before JProfiler has attached. The default value is *true*.) |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> see more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Integrate JProfiler agent. | Indicates whether to integrate JProfiler support. The default value is *false*. | `BP_JPROFILER_ENABLED` | build phase: <br>`--build-env BP_JPROFILER_ENABLED=true` <br> runtime phase: <br> `--env BPL_JPROFILER_ENABLED=true` <br> `BPL_JPROFILER_PORT=<port>` (optional, defaults to *8849*) <br> `BPL_JPROFILER_NOWAIT=true` (optional. Indicates whether the JVM executes before JProfiler gets attached. The default value is *true*.) |
| | Indicates whether to enable JProfiler support at runtime. The default value is *false*. | `BPL_JPROFILER_ENABLED` | `--env BPL_JPROFILER_ENABLED=false` | | | Indicates which port the JProfiler agent listens on. The default value is *8849*. | `BPL_JPROFILER_PORT` | `--env BPL_JPROFILER_PORT=8849` |
-| | Indicates whether the JVM executes before JProfiler has attached. The default value is *true*. | `BPL_JPROFILER_NOWAIT` | `--env BPL_JPROFILER_NOWAIT=true` |
+| | Indicates whether the JVM executes before JProfiler gets attached. The default value is *true*. | `BPL_JPROFILER_NOWAIT` | `--env BPL_JPROFILER_NOWAIT=true` |
| Integrate [JRebel](https://www.jrebel.com/) agent. | The application should contain a *rebel-remote.xml* file. | N/A | N/A | | AES encrypts an application at build time and then decrypts it at launch time. | The AES key to use at build time. | `BP_EAR_KEY` | `--build-env BP_EAR_KEY=<value>` | | | The AES key to use at run time. | `BPL_EAR_KEY` | `--env BPL_EAR_KEY=<value>` |
The following table lists the features supported in Azure Spring Apps:
| Configure the .NET Core runtime version. | Supports *Net6.0* and *Net7.0*. <br> You can configure through a *runtimeconfig.json* or MSBuild Project file. <br> The default runtime is *6.0.\**. | N/A | N/A | | Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Integrate with the Dynatrace and New Relic APM agents. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
### Deploy Python applications
The following table lists the features supported in Azure Spring Apps:
||--|--|-| | Specify a Python version. | Supports *3.7.\**, *3.8.\**, *3.9.\**, *3.10.\**, *3.11.\**. The default value is *3.10.\**<br> You can specify the version via the `BP_CPYTHON_VERSION` environment variable during build. | `BP_CPYTHON_VERSION` | `--build-env BP_CPYTHON_VERSION=3.8.*` | | Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
### Deploy Go applications
The following table lists the features supported in Azure Spring Apps:
| Configure multiple targets. | Specifies multiple targets for a Go build. | `BP_GO_TARGETS` | `--build-env BP_GO_TARGETS=./some-target:./other-target` | | Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
### Deploy Node.js applications
The following table lists the features supported in Azure Spring Apps:
| Specify a Node version. | Supports *14.\**, *16.\**, *18.\**, *19.\**. The default value is *18.\**. <br>You can specify the Node version via an *.nvmrc* or *.node-version* file at the application directory root. `BP_NODE_VERSION` overrides the settings. | `BP_NODE_VERSION` | `--build-env BP_NODE_VERSION=19.*` | | Add CA certificates to the system trust store at build and runtime. | See the [Configure CA certificates for app builds and deployments](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md#configure-ca-certificates-for-app-builds-and-deployments) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A | | Integrate with Dynatrace, Elastic, New Relic, App Dynamic APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-integration-and-ca-certificates.md). | N/A | N/A |
-| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
| Deploy an Angular application with Angular Live Development Server. | Specify the host before running `ng serve` in the [package.json](https://github.com/paketo-buildpacks/samples/blob/main/nodejs/angular-npm/package.json): `ng serve --host 0.0.0.0 --port 8080 --public-host <your application domain name>`. The domain name of the application is available in the application **Overview** page, in the **URL** section. Remove the protocol `https://` before proceeding. | `BP_NODE_RUN_SCRIPTS` <br> `NODE_ENV` | `--build-env BP_NODE_RUN_SCRIPTS=build NODE_ENV=development` | ### Deploy WebServer applications
The following table lists the features supported in Azure Spring Apps:
| Integrate with Bellsoft OpenJDK. | Configures the JDK version. Currently supported: JDK 8, 11, and 17. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=17` | | Configure arguments for the `native-image` command. | Arguments to pass directly to the native-image command. These arguments must be valid and correctly formed or the native-image command fails. | `BP_NATIVE_IMAGE_BUILD_ARGUMENTS` | `--build-env BP_NATIVE_IMAGE_BUILD_ARGUMENTS="--no-fallback"` | | Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md#use-ca-certificates) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | Not applicable. | Not applicable. |
-| Enable configuration of labels on the created image | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more envs [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Enable configuration of labels on the created image | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
| Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` | There are some limitations for Java Native Image. For more information, see the [Java Native Image limitations](#java-native-image-limitations) section.
spring-apps How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-prepare-app-deployment.md
The following table lists the supported Spring Boot and Spring Cloud combination
| Spring Boot version | Spring Cloud version | End of commercial support | ||||
+| 3.2.x | 2022.0.3+ also known as Kilburn | 2026-02-23 |
| 3.1.x | 2022.0.3+ also known as Kilburn | 2025-08-18 | | 3.0.x | 2022.0.3+ also known as Kilburn | 2025-02-24 | | 2.7.x | 2021.0.3+ also known as Jubilee | 2025-08-24 |
The following table lists the supported Spring Boot and Spring Cloud combination
| Spring Boot version | Spring Cloud version | End of support | |||-|
+| 3.2.x | 2022.0.3+ also known as Kilburn | 2024-11-23 |
| 3.1.x | 2022.0.3+ also known as Kilburn | 2024-05-18 | | 3.0.x | 2022.0.3+ also known as Kilburn | 2023-11-24 | | 2.7.x | 2021.0.3+ also known as Jubilee | 2023-11-24 |
static-web-apps Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/custom-domain.md
For custom domain verification to work with Static Web Apps, the DNS must be pub
* Ensure that the public Internet CNAME DNS record used to add the custom domain to the Static Web App via CNAME validation is still present. This option is only valid if CNAME validation was used to add the domain to the static web app. * Ensure that the custom domain resolves to the static web app over public internet. This option is valid regardless of the validation method used to add the domain to the web app. This approach is valid even if private endpoints are enabled, because private endpoints for Static Web Apps block internet access to the site contents but do not block internet DNS resolution to the site.
+## Zero downtime migration
+
+You may want to migrate a custom domain that's currently serving a production website to your static web app with zero downtime. Because DNS providers don't accept multiple records for the same name/host, you validate your ownership of the domain and route traffic to your web app as two separate steps.
+
+1. Open your static web app in the Azure portal.
+1. Add a **TXT record** for your custom domain (APEX or subdomain). Instead of entering the *Host* value as displayed, enter the *Host* in your DNS provider as follows:
+ * For APEX domains, enter `_dnsauth.www.<YOUR-DOMAIN.COM>`.
+ * For subdomains, enter `_dnsauth.<SUBDOMAIN>.<YOUR-DOMAIN.COM>`.
+1. Once your domain is validated, you can migrate your traffic to your static web app by updating your `CNAME`, `ALIAS`, or `A` record to point to your [default host name](./apex-domain-external.md). A CLI sketch of the validation step follows this list.
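+
+If you prefer the Azure CLI over the portal, the following sketch shows one way to request TXT-based validation. The app name, resource group, and domain are placeholders, and the sketch assumes the `dns-txt-token` validation method exposed by `az staticwebapp hostname set`; it isn't a required part of the procedure above.
+
+```powershell
+# Hypothetical CLI alternative: request TXT-record validation for a custom domain
+# so your existing DNS record can keep serving production traffic during validation.
+az staticwebapp hostname set `
+    --name "<APP-NAME>" `
+    --resource-group "<RESOURCE-GROUP>" `
+    --hostname "www.<YOUR-DOMAIN.COM>" `
+    --validation-method "dns-txt-token"
+```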
## Next steps
storage Azcopy Cost Estimation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/azcopy-cost-estimation.md
+
+ Title: 'Estimate costs: AzCopy with Azure Blob Storage'
+description: Learn how to estimate the cost to transfer data to, from, or between containers in Azure Blob Storage.
++++ Last updated : 11/27/2023++++
+# Estimate the cost of using AzCopy to transfer blobs
+
+This article helps you estimate the cost to transfer blobs by using AzCopy.
+
+All calculations are based on a fictitious price. You can find each price in the [sample prices](#sample-prices) section at the end of this article.
+
+> [!IMPORTANT]
+> These prices are meant only as examples, and shouldn't be used to calculate your costs. For official prices, see the [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) or [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/) pages. For more information about how to choose the correct pricing page, see [Understand the full billing model for Azure Blob Storage](../common/storage-plan-manage-costs.md).
+
+## The cost to upload
+
+When you run the [azcopy copy](../common/storage-use-azcopy-blobs-upload.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) command, you'll specify a destination endpoint. That endpoint can be either a Blob Service (`blob.core.windows.net`) endpoint or a Data Lake Storage (`dfs.core.windows.net`) endpoint. This section calculates the cost of using each endpoint to upload **1,000** blobs that are **5 GiB** each in size.
+
+### Cost of uploading to the Blob Service endpoint
+
+If you upload data to the Blob Service endpoint, then by default, AzCopy uploads each blob in 8-MiB blocks. This size is configurable.
+
+AzCopy uses the [Put Block](/rest/api/storageservices/put-block) operation to upload each block. After the final block is uploaded, AzCopy commits those blocks by using the [Put Block List](/rest/api/storageservices/put-block-list) operation. Both operations are billed as _write_ operations.
+
+The following table calculates the number of write operations required to upload these blobs.
+
+| Calculation | Value |
+|--|-|
+| Number of MiB in 5 GiB | 5,120 |
+| PutBlock operations per blob (5,120 MiB / 8-MiB block) | 640 |
+| PutBlockList operations per blob | 1 |
+| **Total write operations (1,000 * 641)** | **641,000** |
+
+> [!TIP]
+> You can reduce the number of operations by configuring AzCopy to use a larger block size.
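+
+For example, a hedged sketch of such an upload follows. The account, container, SAS token, and local path are placeholders, and `--block-size-mb` is the AzCopy option that controls the block size, raised here to 64 MiB to cut the number of `Put Block` calls roughly eightfold compared to the 8-MiB default.
+
+```powershell
+# Hypothetical upload: copy a local folder to a container using 64-MiB blocks
+# instead of the default 8-MiB blocks, reducing the number of Put Block calls.
+azcopy copy "C:\local\data" `
+    "https://<storage-account>.blob.core.windows.net/<container>?<SAS-token>" `
+    --recursive `
+    --block-size-mb 64
+```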
+
+After each blob is uploaded, AzCopy uses the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation as part of validating the upload. The [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation is billed as an _All other operations_ operation.
+
+Using the [Sample prices](#sample-prices) that appear in this article, the following table calculates the cost to upload these blobs.
+
+| Price factor | Hot | Cool | Cold | Archive |
+||-|-|--|-|
+| Price of a single write operation (price / 10,000) | $0.0000055 | $0.00001 | $0.000018 | $0.00001 |
+| **Cost of write operations (641,000 * operation price)** | **$3.5255** | **$6.4100** | **$11.5380** | **$3.5255** |
+| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 | $0.00000044 |
+| **Cost to get blob properties (1000 * _other_ operation price)** | **$0.0004** | **$0.0004** | **$0.0005** | **$0.0004** |
+| **Total cost (write + properties)** | **$3.53** | **$6.41** | **$11.54** | **$3.53** |
+
+> [!NOTE]
+> If you upload to the archive tier, each [Put Block](/rest/api/storageservices/put-block) operation is charged at the price of a **hot** write operation. Each [Put Block List](/rest/api/storageservices/put-block-list) operation is charged the price of an **archive** write operation.
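+
+As a cross-check, the following minimal PowerShell sketch reproduces the hot-tier arithmetic from the preceding table. The prices are the fictitious sample prices from this article, not real rates.
+
+```powershell
+# Rough sketch of the hot-tier upload estimate above (sample prices only).
+$blobCount    = 1000
+$blobSizeMiB  = 5 * 1024          # 5 GiB per blob
+$blockSizeMiB = 8                 # AzCopy default for the Blob Service endpoint
+$writePrice   = 0.055 / 10000     # sample hot-tier price per write operation
+$otherPrice   = 0.0044 / 10000    # sample price per "all other" operation
+
+# Put Block calls per blob plus one Put Block List call, for every blob.
+$writeOps  = $blobCount * ([math]::Ceiling($blobSizeMiB / $blockSizeMiB) + 1)
+$writeCost = $writeOps * $writePrice
+$propsCost = $blobCount * $otherPrice   # one Get Blob Properties call per blob
+
+'{0:N0} write operations, estimated total of ${1:N2}' -f $writeOps, ($writeCost + $propsCost)
+```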
+
+### Cost of uploading to the Data Lake Storage endpoint
+
+If you upload data to the Data Lake Storage endpoint, then AzCopy uploads each blob in 4-MiB blocks. This value is not configurable.
+
+AzCopy uploads each block by using the [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) operation with the action parameter set to `append`. After the final block is uploaded, AzCopy commits those blocks by using the [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) operation with the action parameter set to `flush`. Both operations are billed as _write_ operations.
+
+The following table calculates the number of write operations required to upload these blobs.
+
+| Calculation | Value |
+|--|-|
+| Number of MiB in 5 GiB | 5,120 |
+| Path - Update (append) operations per blob (5,120 MiB / 4-MiB block) | 1,280 |
+| Path - Update (flush) operations per blob | 1 |
+| **Total write operations (1,000 * 1,281)** | **1,281,000** |
+
+After each blob is uploaded, AzCopy uses the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation as part of validating the upload. The [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation is billed as an _All other operations_ operation.
+
+Using the [Sample prices](#sample-prices) that appear in this article, the following table calculates the cost to upload these blobs.
+
+| Price factor | Hot | Cool | Cold | Archive |
+||-|--|--|--|
+| Price of a single write operation (price / 10,000) | $0.00000715 | $0.000013 | $0.0000234 | $0.0000143 |
+| **Cost of write operations (1,281,000 * operation price)** | **$9.1592** | **$16.6530** | **$29.9754** | **$18.3183** |
+| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 | $0.00000044 |
+| **Cost to get blob properties (1000 * operation price)** | **$0.0004** | **$0.0004** | **$0.0005** | **$0.0004** |
+| **Total cost (write + properties)** | **$9.16** | **$16.65** | **$29.98** | **$18.32** |
+
+## The cost to download
+
+When you run the [azcopy copy](../common/storage-use-azcopy-blobs-download.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) command, you'll specify a source endpoint. That endpoint can be either a Blob Service (`blob.core.windows.net`) endpoint or a Data Lake Storage (`dfs.core.windows.net`) endpoint. This section calculates the cost of using each endpoint to download **1,000** blobs that are **5 GiB** each in size.
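+
+For example, a download of an entire container might look like the following sketch; the account, container, SAS token, and local path are placeholders.
+
+```powershell
+# Hypothetical download: copy every blob in a container to a local folder.
+azcopy copy `
+    "https://<storage-account>.blob.core.windows.net/<container>?<SAS-token>" `
+    "C:\local\download" `
+    --recursive
+```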
+
+### Cost of downloading from the Blob Service endpoint
+
+If you download blobs from the Blob Service endpoint, AzCopy uses the [List Blobs](/rest/api/storageservices/list-blobs) operation to enumerate blobs. A [List Blobs](/rest/api/storageservices/list-blobs) operation is billed as a _List and create container_ operation. One [List Blobs](/rest/api/storageservices/list-blobs) operation returns up to 5,000 blobs. Therefore, in this example, only one [List Blobs](/rest/api/storageservices/list-blobs) operation is required.
+
+For each blob, AzCopy uses the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation, and the [Get Blob](/rest/api/storageservices/get-blob) operation. The [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation is billed as an _All other operations_ operation and the [Get Blob](/rest/api/storageservices/get-blob) operation is billed as a _read_ operation.
+
+If you download blobs from the cool or cold tier, you're also charged a data retrieval fee per GiB downloaded.
+
+Using the [Sample prices](#sample-prices) that appear in this article, the following table calculates the cost to download these blobs.
+
+> [!NOTE]
+> This table excludes the archive tier because you can't download directly from that tier. See [Blob rehydration from the archive tier](archive-rehydrate-overview.md).
+
+| Price factor | Hot | Cool | Cold |
+|-|-|-|-|
+| Price of a single list operation (price/ 10,000) | $0.0000055 | $0.0000055 | $0.0000065 |
+| **Cost of listing operations (1 * operation price)** | **$0.0000055** | **$0.0000055** | **$0.0000065** |
+| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 |
+| **Cost to get blob properties (1000 * operation price)** | **$0.00044** | **$0.00044** | **$0.00052** |
+| Price of a single read operation (price / 10,000) | $0.00000044 | $0.000001 | $0.00001 |
+| **Cost of read operations (1000 * operation price)** | **$0.00044** | **$0.001** | **$0.01** |
+| Price of data retrieval (per GiB) | $0.00 | $0.01 | $0.03 |
+| **Cost of data retrieval (5 * operation price)** | **$0.00** | **$0.05** | **$0.15** |
+| **Total cost (list + properties + read + retrieval)** | **$0.001** | **$0.051** | **$0.161** |
++
+### Cost of downloading from the Data Lake Storage endpoint
+
+If you download blobs from the Data Lake Storage endpoint, AzCopy uses the [List Blobs](/rest/api/storageservices/list-blobs) operation to enumerate blobs. A [List Blobs](/rest/api/storageservices/list-blobs) operation is billed as a _List and create container_ operation. One [List Blobs](/rest/api/storageservices/list-blobs) operation returns up to 5,000 blobs. Therefore, in this example, only one [List Blobs](/rest/api/storageservices/list-blobs) operation is required.
+
+For each blob, AzCopy uses the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation, which is billed as an _All other operations_ operation. AzCopy downloads each block (4 MiB in size) by using the [Path - Read](/rest/api/storageservices/datalakestoragegen2/path/read) operation. Each [Path - Read](/rest/api/storageservices/datalakestoragegen2/path/read) call is billed as a _read_ operation.
+
+If you download blobs from the cool or cold tier, you're also charged a data retrieval fee per GiB downloaded.
+
+The following table calculates the number of read operations required to download these blobs.
+
+| Calculation | Value |
+|-||
+| Number of MiB in 5 GiB | 5,120 |
+| Path - Read operations per blob (5,120 MiB / 4-MiB block) | 1,280 |
+| **Total read operations (1,000 * 1,280)** | **1,280,000** |
+
+Using the [Sample prices](#sample-prices) that appear in this article, the following table calculates the cost to download these blobs.
+
+> [!NOTE]
+> This table excludes the archive tier because you can't download directly from that tier. See [Blob rehydration from the archive tier](archive-rehydrate-overview.md).
+
+| Price factor | Hot | Cool | Cold |
+|--|-|-|-|
+| Price of a single list operation (price/ 10,000) | $0.0000055 | $0.0000055 | $0.0000065 |
+| **Cost of listing operations (1 * operation price)** | **$0.0000055** | **$0.0000055** | **$0.0000065** |
+| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 |
+| **Cost to get blob properties (1000 * operation price)** | **$0.00044** | **$0.00044** | **$0.00052** |
+| Price of a single read operation (price / 10,000) | $0.00000057 | $0.00000130 | $0.00001300 |
+| **Cost of read operations (1,281,000 * operation price)** | **$0.73017** | **$1.6653** | **$16.653** |
+| Price of data retrieval (per GiB) | $0.00000000 | $0.01000000 | $0.03000000 |
+| **Cost of data retrieval (5 * operation price)** | **$0.00** | **$0.05** | **$0.15** |
+| **Total cost (list + properties + read + retrieval)** | **$0.731** | **$1.716** | **$16.804** |
++
+## The cost to copy between containers
+
+When you run the [azcopy copy](../common/storage-use-azcopy-blobs-copy.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) command, you'll specify a source and destination endpoint. These endpoints can be either Blob Service (`blob.core.windows.net`) or Data Lake Storage (`dfs.core.windows.net`) endpoints. This section calculates the cost to copy **1,000** blobs that are **5 GiB** each in size.
+
+> [!NOTE]
+> Blobs in the archive tier can be copied only to an online tier. Because all of these examples assume the same tier for source and destination, the archive tier is excluded from these tables.
+
+### Cost of copying blobs within the same account
+
+Regardless of which endpoint you specify (Blob Service or Data Lake Storage), AzCopy uses the [List Blobs](/rest/api/storageservices/list-blobs) operation to enumerate blobs at the source location. A [List Blobs](/rest/api/storageservices/list-blobs) operation is billed as a _List and create container_ operation. One [List Blobs](/rest/api/storageservices/list-blobs) operation returns up to 5,000 blobs. Therefore, in this example, only one [List Blobs](/rest/api/storageservices/list-blobs) operation is required.
+
+For each blob, AzCopy uses the [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation for both the source blob and the blob that is copied to the destination. The [Get Blob Properties](/rest/api/storageservices/get-blob-properties) operation is billed as an _All other operations_ operation. AzCopy uses the [Copy Blob](/rest/api/storageservices/copy-blob) operation to copy blobs to another container, which is billed as a _write_ operation based on the destination tier.
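+
+A container-to-container copy of this kind might be issued as in the following sketch (placeholder URLs and SAS tokens). AzCopy performs the copy service-side, so the blob data isn't routed through the machine that runs the command.
+
+```powershell
+# Hypothetical service-to-service copy between two containers.
+azcopy copy `
+    "https://<source-account>.blob.core.windows.net/<source-container>?<SAS-token>" `
+    "https://<destination-account>.blob.core.windows.net/<destination-container>?<SAS-token>" `
+    --recursive
+```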
+
+| Price factor | Hot | Cool | Cold |
+|-|-|-|-|
+| Price of a single list operation (price/ 10,000) | $0.0000055 | $0.0000055 | $0.0000065 |
+| **Cost of listing operations (1 * operation price)** | **$0.0000055** | **$0.0000055** | **$0.0000065** |
+| Price of a single _other_ operation (price / 10,000) | $0.00000044 | $0.00000044 | $0.00000052 |
+| **Cost to get blob properties (2000 * operation price)** | **$0.00088** | **$0.00088** | **$0.00104** |
+| Price of a single write operation (price / 10,000) | $0.0000055 | $0.00001 | $0.000018 |
+| **Cost to write (1000 * operation price)** | **$3.53** | **$0.0055** | **$0.01** |
+| **Total cost (listing + properties + write)** | **$3.5309** | **$0.0064** | **$0.0110** |
+
+### Cost of copying blobs to another account in the same region
+
+This scenario is identical to the previous one, except that you're also billed for data retrieval and for read operations based on the source tier.
+
+| Price factor | Hot | Cool | Cold |
+|-|--|-|-|
+| **Total from previous section** | **$3.5309** | **$0.0064** | **$0.0110** |
+| Price of a single read operation (price / 10,000) | $0.00000044 | $0.000001 | $0.00001 |
+| **Cost of read operations (1,000 * operation price)** | **$0.00044** | **$0.001** | **$0.01** |
+| Price of data retrieval (per GiB) | Free | $0.01 | $0.03 |
+| **Cost of data retrieval (5 * operation price)** | **$0.00** | **$0.05** | **$0.15** |
+| **Total cost (previous section + retrieval + read)** | **$3.53134** | **$0.0574** | **$0.171** |
+
+### Cost of copying blobs to an account located in another region
+
+This scenario is identical to the previous one, except that you're also billed for network egress.
+
+| Price factor | Hot | Cool | Cold |
+|--|--|-|-|
+| **Total cost from previous section** | **$3.53134** | **$0.0574** | **$0.171** |
+| Price of network egress (per GiB) | $0.02 | $0.02 | $0.02 |
+| **Total cost of network egress (5 * price of egress)** | **$0.10** | **$0.10** | **$0.10** |
+| **Total cost (previous section + egress)** | **$3.5513** | **$0.0774** | **$0.191** |
+
+## The cost to synchronize changes
+
+When you run the [azcopy sync](../common/storage-use-azcopy-blobs-synchronize.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) command, you'll specify a source and destination endpoint. These endpoints can be either Blob Service (`blob.core.windows.net`) or Data Lake Storage (`dfs.core.windows.net`) endpoints.
+
+> [!NOTE]
+> Blobs in the archive tier can be copied only to an online tier. Because all of these examples assume the same tier for source and destination, the archive tier is excluded from these tables.
+
+### Cost to synchronize a container with a local file system
+
+If you want to keep a container updated with changes to a local file system, then AzCopy performs the same tasks described in the [Cost of uploading to the Blob Service endpoint](#cost-of-uploading-to-the-blob-service-endpoint) section in this article. Blobs are uploaded only if the last modified time of a local file is different than the last modified time of the blob in the container. Therefore, you're billed for _write_ transactions only for blobs that are uploaded.
+
+If you want to keep a local file system updated with changes to a container, then AzCopy performs the same tasks described in the [Cost of downloading from the Blob Service endpoint](#cost-of-downloading-from-the-blob-service-endpoint) section of this article. Blobs are downloaded only if the last modified time of a local file is different than the last modified time of the blob in the container. Therefore, you're billed for _read_ transactions only for blobs that are downloaded.
+
+### Cost to synchronize containers
+
+If you want to keep two containers synchronized, then AzCopy performs the same tasks described in [The cost to copy between containers](#the-cost-to-copy-between-containers) earlier in this article. A blob is copied only if the last modified time of a blob in the source container is different than the last modified time of a blob in the destination container. Therefore, you're billed for _write_ and _read_ transactions only for blobs that are copied.
+
+The [azcopy sync](../common/storage-use-azcopy-blobs-synchronize.md?toc=/azure/storage/blobs/toc.json&bc=/azure/storage/blobs/breadcrumb/toc.json) command uses the [List Blobs](/rest/api/storageservices/list-blobs) operation on both source and destination accounts when synchronizing containers that exist in separate accounts.
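+
+A hedged sketch of such a synchronization between two containers follows (placeholder URLs and SAS tokens); only blobs whose last modified times differ are copied.
+
+```powershell
+# Hypothetical sync: make the destination container mirror the source container.
+azcopy sync `
+    "https://<source-account>.blob.core.windows.net/<source-container>?<SAS-token>" `
+    "https://<destination-account>.blob.core.windows.net/<destination-container>?<SAS-token>"
+```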
++
+## Summary of calculations
+
+The following table contains all of the estimates presented in this article. All estimates are based on transferring **1,000** blobs that are each **5 GiB** in size and use the sample prices listed in the next section.
+
+| Scenario | Hot | Cool | Cold | Archive |
+||-||||
+| Upload blobs (Blob Service endpoint) | $3.53 | $6.41 | $11.54 | $3.53 |
+| Upload blobs (Data Lake Storage endpoint) | $9.16 | $16.65 | $29.98 | $18.32 |
+| Download blobs (Blob Service endpoint) | $0.001 | $0.051 | $0.161 | N/A |
+| Download blobs (Data Lake Storage endpoint) | $0.731 | $1.716 | $16.804 | N/A |
+| Copy blobs | $3.5309 | $0.0064 | $0.0110 | N/A |
+| Copy blobs to another account | $3.53134 | $0.0574 | $0.171 | N/A |
+| Copy blobs to an account in another region | $3.5513 | $0.0774 | $0.191 | N/A |
+
+## Sample prices
+
+The following table includes sample (fictitious) prices for each request to the Blob Service endpoint (`blob.core.windows.net`). For official prices, see [Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/).
+
+| Price factor | Hot | Cool | Cold | Archive |
+|--|||||
+| Price of write transactions (per 10,000) | $0.055 | $0.10 | $0.18 | $0.10 |
+| Price of read transactions (per 10,000) | $0.0044 | $0.01 | $0.10 | $5.00 |
+| Price of data retrieval (per GiB) | Free | $0.01 | $0.03 | $0.02 |
+| List and create container operations (per 10,000) | $0.055 | $0.055 | $0.065 | $0.055 |
+| All other operations (per 10,000) | $0.0044 | $0.0044 | $0.0052 | $0.0044 |
+
+The following table includes sample (fictitious) prices for each request to the Data Lake Storage endpoint (`dfs.core.windows.net`). For official prices, see [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/).
+
+| Price factor | Hot | Cool | Cold | Archive |
+|--|-|-|-||
+| Price of write transactions (every 4 MiB, per 10,000) | $0.0715 | $0.13 | $0.234 | $0.143 |
+| Price of read transactions (every 4 MiB, per 10,000) | $0.0057 | $0.013 | $0.13 | $7.15 |
+| Price of data retrieval (per GiB) | Free | $0.01 | $0.03 | $0.022 |
+| Iterative Read operations (per 10,000) | $0.0715 | $0.0715 | $0.0845 | $0.0715 |
+
+## Operations used by AzCopy commands
+
+The following table shows the operations that are used by each AzCopy command. To map each operation to a price, see [Map each REST operation to a price](map-rest-apis-transaction-categories.md).
+
+### Commands that target the Blob Service Endpoint
+
+| Command | Scenario | Operations |
+||-|--|
+| [azcopy bench](../common/storage-ref-azcopy-bench.md?toc=/azure/storage/blobs/toc.json) | Upload | [Put Block](/rest/api/storageservices/put-block) and [Put Block List](/rest/api/storageservices/put-block-list) |
+| [azcopy bench](../common/storage-ref-azcopy-bench.md?toc=/azure/storage/blobs/toc.json) | Download | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Upload | [Put Block](/rest/api/storageservices/put-block), [Put Block List](/rest/api/storageservices/put-block-list), and [Get Blob Properties](/rest/api/storageservices/get-blob-properties) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Download | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Perform a dry run | [List Blobs](/rest/api/storageservices/list-blobs) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy from Amazon S3| [Put Blob from URL](/rest/api/storageservices/put-blob-from-url) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy from Google Cloud Storage | [Put Blob from URL](/rest/api/storageservices/put-blob-from-url) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy to another container | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Copy Blob](/rest/api/storageservices/copy-blob) |
+| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Update local with changes to container | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
+| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Update container with changes to local file system | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), [Put Block](/rest/api/storageservices/put-block), and [Put Block List](/rest/api/storageservices/put-block-list) |
+| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Synchronize containers | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Copy Blob](/rest/api/storageservices/copy-blob) |
+| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set blob tier | [Set Blob Tier](/rest/api/storageservices/set-blob-tier) |
+| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set metadata | [Set Blob Metadata](/rest/api/storageservices/set-blob-metadata) |
+| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set blob tags | [Set Blob Tags](/rest/api/storageservices/set-blob-tags) |
+| [azcopy list](../common/storage-ref-azcopy-list.md?toc=/azure/storage/blobs/toc.json) | List blobs in a container| [List Blobs](/rest/api/storageservices/list-blobs) |
+| [azcopy make](../common/storage-ref-azcopy-make.md?toc=/azure/storage/blobs/toc.json) | Create a container | [Create Container](/rest/api/storageservices/create-container) |
+| [azcopy remove](../common/storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json) | Delete a container | [Delete Container](/rest/api/storageservices/delete-container) |
+| [azcopy remove](../common/storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json) | Delete a blob | [Delete Blob](/rest/api/storageservices/delete-blob) |
+
+### Commands that target the Data Lake Storage endpoint
+
+| Command | Scenario | Operations |
+||-|--|
+| [azcopy bench](../common/storage-ref-azcopy-bench.md?toc=/azure/storage/blobs/toc.json) | Upload | [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) (Append), and [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) (Flush) |
+| [azcopy bench](../common/storage-ref-azcopy-bench.md?toc=/azure/storage/blobs/toc.json) | Download | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Path - Read](/rest/api/storageservices/datalakestoragegen2/path/read)|
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Upload | [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update), and [Get Blob Properties](/rest/api/storageservices/get-blob-properties) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Download |[List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Path - Read](/rest/api/storageservices/datalakestoragegen2/path/read) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Perform a dry run | [List Blobs](/rest/api/storageservices/list-blobs) |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy from Amazon S3| Not supported |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy from Google Cloud Storage | Not supported |
+| [azcopy copy](../common/storage-ref-azcopy-copy.md?toc=/azure/storage/blobs/toc.json) | Copy to another container | [List Blobs](/rest/api/storageservices/list-blobs) and [Copy Blob](/rest/api/storageservices/copy-blob). If `--preserve-permissions=true`, then [Path - Get Properties](/rest/api/storageservices/datalakestoragegen2/path/get-properties) (Get Access Control List) and [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) (Set Access Control); otherwise, [Get Blob Properties](/rest/api/storageservices/get-blob-properties). |
+| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Update local with changes to container | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Get Blob](/rest/api/storageservices/get-blob) |
+| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Update container with changes to local file system | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) (Append), and [Path - Update](/rest/api/storageservices/datalakestoragegen2/path/update) (Flush)|
+| [azcopy sync](../common/storage-ref-azcopy-sync.md?toc=/azure/storage/blobs/toc.json) | Synchronize containers | [List Blobs](/rest/api/storageservices/list-blobs), [Get Blob Properties](/rest/api/storageservices/get-blob-properties), and [Copy Blob](/rest/api/storageservices/copy-blob) |
+| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set blob tier | Not supported |
+| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set metadata | Not supported |
+| [azcopy set-properties](../common/storage-ref-azcopy-set-properties.md?toc=/azure/storage/blobs/toc.json) | Set blob tags | Not supported |
+| [azcopy list](../common/storage-ref-azcopy-list.md?toc=/azure/storage/blobs/toc.json) | List blobs in a container| [List Blobs](/rest/api/storageservices/list-blobs)|
+| [azcopy make](../common/storage-ref-azcopy-make.md?toc=/azure/storage/blobs/toc.json) | Create a container | [Filesystem - Create](/rest/api/storageservices/datalakestoragegen2/filesystem/create) |
+| [azcopy remove](../common/storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json) | Delete a container | [Filesystem - Delete](/rest/api/storageservices/datalakestoragegen2/filesystem/delete) |
+| [azcopy remove](../common/storage-ref-azcopy-remove.md?toc=/azure/storage/blobs/toc.json) | Delete a blob | [Filesystem - Delete](/rest/api/storageservices/datalakestoragegen2/filesystem/delete) |
+
+## See also
+
+- [Plan and manage costs for Azure Blob Storage](../common/storage-plan-manage-costs.md)
+- [Map each REST operation to a price](map-rest-apis-transaction-categories.md)
+- [Get started with AzCopy](../common/storage-use-azcopy-v10.md)
storage Map Rest Apis Transaction Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/map-rest-apis-transaction-categories.md
The price of each type appears in the [Azure Blob Storage pricing](https://azure
| [Lease Blob](/rest/api/storageservices/find-blobs-by-tags) (acquire, release, renew) | Other | Other | Read | | [Lease Blob](/rest/api/storageservices/find-blobs-by-tags) (break, change) | Other | Other | Write | | [Snapshot Blob](/rest/api/storageservices/snapshot-blob) | Other | Other | Read |
-| [Copy Blob](/rest/api/storageservices/copy-blob) | Write | Write | Write |
+| [Copy Blob](/rest/api/storageservices/copy-blob) | Write<sup>2</sup> | Write<sup>2</sup> | Write<sup>2</sup> |
| [Copy Blob from URL](/rest/api/storageservices/copy-blob-from-url) | Write | Write | Write | | [Abort Copy Blob](/rest/api/storageservices/abort-copy-blob) | Other | Other | Write | | [Delete Blob](/rest/api/storageservices/delete-blob) | Free | Free | Free |
The price of each type appears in the [Azure Blob Storage pricing](https://azure
| [Append Block](/rest/api/storageservices/append-block) | Write | Write | Write | | [Append Block from URL](/rest/api/storageservices/append-block-from-url) | Write | Write | Write | | [Append Blob Seal](/rest/api/storageservices/append-blob-seal) | Write | Write | Write |
-| [Set Blob Expiry](/rest/api/storageservices/set-blob-expiry) | Other | Other | Write |
+| [Set Blob Expiry](/rest/api/storageservices/set-blob-expiry) | Other | Other | Write |
<sup>1</sup> In addition to a read charge, charges are incurred for the **Query Acceleration - Data Scanned**, and **Query Acceleration - Data Returned** transaction categories that appear on the [Azure Data Lake Storage pricing](https://azure.microsoft.com/pricing/details/storage/data-lake/) page.
+<sup>2</sup> When the source object is in a different account, the source account incurs one transaction for each read request to the source object.
+ ## Operation type of each Data Lake Storage Gen2 REST operation The following table maps each Data Lake Storage Gen2 REST operation to an operation type.
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Each authorization option is briefly described below:
- **On-premises Active Directory Domain Services (AD DS, or on-premises AD DS) authentication** for Azure Files. Azure Files supports identity-based authorization over SMB through AD DS. Your AD DS environment can be hosted in on-premises machines or in Azure VMs. SMB access to Files is supported using AD DS credentials from domain joined machines, either on-premises or in Azure. You can use a combination of Azure RBAC for share level access control and NTFS DACLs for directory/file level permission enforcement. For more information about Azure Files authentication using domain services, see the [overview](../files/storage-files-active-directory-overview.md). -- **anonymous read access** for blob data is supported, but not recommended. When anonymous access is configured, clients can read blob data without authorization. We recommend that you disable anonymous access for all of your storage accounts. For more information, see [Overview: Remediating anonymous read access for blob data](../blobs/anonymous-read-access-overview.md).
+- **Anonymous read access** for blob data is supported, but not recommended. When anonymous access is configured, clients can read blob data without authorization. We recommend that you disable anonymous access for all of your storage accounts. For more information, see [Overview: Remediating anonymous read access for blob data](../blobs/anonymous-read-access-overview.md).
- **Storage Local Users** can be used to access blobs with SFTP or files with SMB. Storage Local Users support container level permissions for authorization. See [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](../blobs/secure-file-transfer-protocol-support-how-to.md) for more information on how Storage Local Users can be used with SFTP.
storage Storage Explorer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-security.md
This section describes the two Microsoft Entra ID-based technologies that can be
[Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) give you fine-grained access control over your Azure resources. Azure roles and permissions can be managed from the Azure portal.
-Storage Explorer supports Azure RBAC access to Storage Accounts, Blobs, and Queues. If you need access to File Shares or Tables, you'll need to assign Azure roles that grant permission to list storage account keys.
+Storage Explorer supports Azure RBAC access to Storage Accounts, Blobs, Queues, and Tables. If you need access to File Shares, you'll need to assign Azure roles that grant permission to list storage account keys.
#### Access control lists (ACLs)
storage Storage Explorer Support Policy Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-support-policy-lifecycle.md
This table describes the release date and the end of support date for each relea
| Storage Explorer version | Release date | End of support date | |:-:|::|:-:|
+| v1.32.1 | November 15, 2023 | November 1, 2024 |
+| v1.32.0 | November 1, 2023 | November 1, 2024 |
| v1.31.2 | October 3, 2023 | August 11, 2024 | | v1.31.1 | August 22, 2023 | August 11, 2024 | | v1.31.0 | August 11, 2023 | August 11, 2024 |
storage Storage Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-plan-manage-costs.md
Previously updated : 10/03/2023 Last updated : 11/27/2023
Use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculato
4. Modify the remaining options to see their effect on your estimate.
- > [!TIP]
- > - To view an Excel template which can help you to itemize the amount of storage and number of operations required by your workloads, see [Estimating Pricing for Azure Block Blob Deployments](https://azure.github.io/Storage/docs/application-and-user-data/code-samples/estimate-block-blob/).
- >
- > You can use that information as input to the Azure pricing calculator.
- >
- > - For more information about how to estimate the cost of archiving data that is rarely used, see [Estimate the cost of archiving data](../blobs/archive-cost-estimation.md).
+### Supportive tools and guides
+
+The following resources can also help you forecast the cost of using Azure Blob Storage:
+
+- [Estimating Pricing for Azure Block Blob Deployments](https://azure.github.io/Storage/docs/application-and-user-data/code-samples/estimate-block-blob/)
+
+- [Estimate the cost of archiving data](../blobs/archive-cost-estimation.md)
+
+- [Estimate the cost of using AzCopy to transfer blobs](../blobs/azcopy-cost-estimation.md)
+
+- [Map each REST operation to a price](../blobs/map-rest-apis-transaction-categories.md)
## Understand the full billing model for Azure Blob Storage
As you use Azure resources with Azure Storage, you incur costs. Resource usage u
When you use cost analysis, you can view Azure Storage costs in graphs and tables for different time intervals. Some examples are by day, current and prior month, and year. You can also view costs against budgets and forecasted costs. Switching to longer views over time can help you identify spending trends and see where overspending might have occurred. If you've created budgets, you can also easily see where they exceeded. > [!NOTE]
-> Cost analysis supports different kinds of Azure account types. To view the full list of supported account types, see [Understand Cost Management data](../../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for your Azure account. For information about assigning access to Azure Cost Management data, see [Assign access to data](../../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
+> Cost analysis supports different kinds of Azure account types. To view the full list of supported account types, see [Understand Cost Management data](../../cost-management-billing/costs/understand-cost-mgt-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). To view cost data, you need at least read access for your Azure account. For information about assigning access to Microsoft Cost Management data, see [Assign access to data](../../cost-management-billing/costs/assign-access-acm-data.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
To view Azure Storage costs in cost analysis:
storage Storage Files Identity Multiple Forests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-multiple-forests.md
To use this method, complete the following steps:
1. Select the node named after your domain (for example, **onpremad1.com**) and right-click **New Alias (CNAME)**. 1. For the alias name, enter your storage account name. 1. For the fully qualified domain name (FQDN), enter **`<storage-account-name>`.`<domain-name>`**, such as **mystorageaccount.onpremad1.com**.
- 1. If you're using a private endpoint (PrivateLink) for the storage account, add an additional CNAME entry to map to the private endpoint name, for example **mystorageaccount.privatelink.onpremad1.com**.
1. For the target host FQDN, enter **`<storage-account-name>`.file.core.windows.net** 1. Select **OK**.
storage Storage Blobs Container Calculate Size Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-powershell.md
Title: Calculate size of a blob container with PowerShell
+ Title: Calculate the size of blob containers with PowerShell
-description: Calculate the size of a container in Azure Blob Storage by totaling the size of each of its blobs.
+description: Calculate the size of all Azure Blob Storage containers in a storage account.
-+ ms.devlang: powershell Previously updated : 12/04/2019- Last updated : 11/21/2023+
-# Calculate the size of a blob container with PowerShell
+# Calculate the size of blob containers with PowerShell
-This script calculates the size of a container in Azure Blob Storage. It first displays the total number of bytes used by the blobs within the container, then displays their individual names and lengths.
+This script calculates the size of all Azure Blob Storage containers in a storage account.
[!INCLUDE [sample-powershell-install](../../../includes/sample-powershell-install-no-ssh-az.md)] [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] > [!IMPORTANT]
-> This PowerShell script provides an estimated size for the container and should not be used for billing calculations. For a script that calculates container size for billing purposes, see [Calculate the size of a Blob storage container for billing purposes](../scripts/storage-blobs-container-calculate-billing-size-powershell.md).
+> This PowerShell script provides an estimated size for the containers in an account and should not be used for billing calculations. For a script that calculates container size for billing purposes, see [Calculate the size of a Blob storage container for billing purposes](../scripts/storage-blobs-container-calculate-billing-size-powershell.md).
## Sample script
-[!code-powershell[main](../../../powershell_scripts/storage/calculate-container-size/calculate-container-size.ps1 "Calculate container size")]
+```powershell
+# This script will show how to get the total size of the blobs in all containers in a storage account.
+# Before running this, you need to create a storage account, at least one container,
+# and upload some blobs into that container.
+# note: this retrieves all of the blobs in each container in one command.
+# Run the Connect-AzAccount cmdlet to connect to Azure.
+# Requests that are sent as part of this tool will incur transactional costs.
+#
+
+$containerstats = @()
+
+# Provide the name of your storage account and resource group
+$storage_account_name = "<name-of-your-storage-account>"
+$resource_group = "<name-of-your-resource-group"
+
+# Get a reference to the storage account and the context.
+$storageAccount = Get-AzStorageAccount `
+ -ResourceGroupName $resource_group `
+ -Name $storage_account_name
+$Ctx = $storageAccount.Context
+
+$container_continuation_token = $null
+do {
+ $containers = Get-AzStorageContainer -Context $Ctx -MaxCount 5000 -ContinuationToken $container_continuation_token
+ $container_continuation_token = $null;
+
+ if ($containers -ne $null)
+ {
+ $container_continuation_token = $containers[$containers.Count - 1].ContinuationToken
+
+ for ([int] $c = 0; $c -lt $containers.Count; $c++)
+ {
+ $container = $containers[$c].Name
+ Write-Verbose "Processing container : $container"
+ $total_usage = 0
+ $total_blob_count = 0
+ $soft_delete_usage = 0
+ $soft_delete_count = 0
+ $version_usage = 0
+ $version_count = 0
+ $snapshot_count = 0
+ $snapshot_usage = 0
+ $blob_continuation_token = $null
+
+ do {
+ $blobs = Get-AzStorageBlob -Context $Ctx -IncludeDeleted -IncludeVersion -Container $container -ConcurrentTaskCount 100 -MaxCount 5000 -ContinuationToken $blob_continuation_token
+ $blob_continuation_token = $null;
+
+ if ($blobs -ne $null)
+ {
+ $blob_continuation_token = $blobs[$blobs.Count - 1].ContinuationToken
+
+ for ([int] $b = 0; $b -lt $blobs.Count; $b++)
+ {
+ $total_blob_count++
+ $total_usage += $blobs[$b].Length
+
+ if ($blobs[$b].IsDeleted)
+ {
+ $soft_delete_count++
+ $soft_delete_usage += $blobs[$b].Length
+ }
+
+ if ($blobs[$b].SnapshotTime -ne $null)
+ {
+ $snapshot_count++
+ $snapshot_usage+= $blobs[$b].Length
+ }
+
+ if ($blobs[$b].VersionId -ne $null)
+ {
+ $version_count++
+ $version_usage += $blobs[$b].Length
+ }
+ }
+
+ If ($blob_continuation_token -ne $null)
+ {
+ Write-Verbose "Blob listing continuation token = {0}".Replace("{0}",$blob_continuation_token.NextMarker)
+ }
+ }
+ } while ($blob_continuation_token -ne $null)
+
+ Write-Verbose "Calculated size of $container = $total_usage with soft_delete usage of $soft_delete_usage"
+ $containerstats += [PSCustomObject] @{
+ Name = $container
+ TotalBlobCount = $total_blob_count
+ TotalBlobUsageinGB = $total_usage/1GB
+ SoftDeletedBlobCount = $soft_delete_count
+ SoftDeletedBlobUsageinGB = $soft_delete_usage/1GB
+ SnapshotCount = $snapshot_count
+ SnapshotUsageinGB = $snapshot_usage/1GB
+ VersionCount = $version_count
+ VersionUsageinGB = $version_usage/1GB
+ }
+ }
+ }
+
+ If ($container_continuation_token -ne $null)
+ {
+ Write-Verbose "Container listing continuation token = {0}".Replace("{0}",$container_continuation_token.NextMarker)
+ }
+} while ($container_continuation_token -ne $null)
+
+Write-Host "Total container stats"
+$containerstats | Format-Table -AutoSize
+```
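+
+If you save the script to a file, one possible way to run it is shown below. The file name is a placeholder, and `Connect-AzAccount` is required first, as the script's comments note.
+
+```powershell
+# Hypothetical invocation of the script above, saved as Get-ContainerSize.ps1.
+Connect-AzAccount                  # sign in to Azure
+$VerbosePreference = 'Continue'    # surface the script's Write-Verbose progress messages
+.\Get-ContainerSize.ps1            # placeholder file name; edit the account variables first
+```
+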
## Clean up deployment
This script uses the following commands to calculate the size of the Blob storag
| Command | Notes |
| --- | --- |
| [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) | Gets a specified Storage account or all of the Storage accounts in a resource group or the subscription. |
+| [Get-AzStorageContainer](/powershell/module/az.storage/get-azstoragecontainer) | Lists the storage containers. |
| [Get-AzStorageBlob](/powershell/module/az.storage/Get-AzStorageBlob) | Lists blobs in a container. |

## Next steps
update-manager Configure Wu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/configure-wu-agent.md
The Windows update client on Windows servers can get their patches from either o
> [!NOTE] > For the application of patches, you can choose the update client at the time of installation, or later using Group policy or by directly editing the registry.
-> To get the non-operating system Microsoft patches or to install only the OS patches, we recommend you to change the patch repository as this is an operating system setting and not an option that you can configure within Update management center (preview).
+> To get the non-operating system Microsoft patches, or to install only the OS patches, we recommend that you change the patch repository. This is an operating system setting, not an option that you can configure within Azure Update Manager.
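For illustration only (not part of the linked article), a minimal PowerShell sketch that opts the Windows Update client into Microsoft Update so non-operating system Microsoft patches are offered; the GUID is the well-known Microsoft Update service ID:

```powershell
# Sketch: register the Microsoft Update service so the client is offered
# non-OS Microsoft patches in addition to Windows updates.
$serviceManager = New-Object -ComObject 'Microsoft.Update.ServiceManager'
# Flags 7 = allow pending/online registration and register the service with Automatic Updates.
$serviceManager.AddService2('7971f918-a847-4430-9279-4a52d1efe18d', 7, '') | Out-Null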
### Edit the registry
-If scheduled patching is configured on your machine using the Update management center (preview), the Auto update on the client is disabled. To edit the registry and configure the setting, see [First party updates on Windows](support-matrix.md#first-party-updates-on-windows).
+If scheduled patching is configured on your machine by using Azure Update Manager, Automatic Updates on the client is disabled. To edit the registry and configure the setting, see [First party updates on Windows](support-matrix.md#first-party-updates-on-windows).
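As a hedged sketch (the path below is the standard Windows Update policy key, not something specific to the linked article), you can inspect, and optionally set, the Automatic Updates policy value from PowerShell:

```powershell
# Sketch: check whether Automatic Updates is disabled by policy.
$auKey = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'
Get-ItemProperty -Path $auKey -Name NoAutoUpdate -ErrorAction SilentlyContinue

# To disable Automatic Updates explicitly (1 = disabled), uncomment:
# New-Item -Path $auKey -Force | Out-Null
# Set-ItemProperty -Path $auKey -Name NoAutoUpdate -Value 1 -Type DWord
```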
### Patching using group policy on Azure Update management
update-manager Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-update-settings.md
You can schedule updates from **Overview** or **Machines** on the **Update Manag
1. Select the checkbox of your machine from the list and select **Update settings**. 1. Select **Update Settings** to proceed with the type of update for your machine. 1. On the **Change update settings** pane, select **Add machine** to select the machine for which you want to change the update settings.
-1. On the **Select resources** pane, select the machine and select **Add**. Follow the procedure from step 5 listed in **From Overview pane** of [Configure settings on a single VM](#configure-settings-on-a-single-vm).
+1. On the **Select resources** pane, select the machine and select **Add**. Follow the procedure from step 5 listed in **From Overview blade** of [Configure settings on a single VM](#configure-settings-on-a-single-vm).
# [From a selected VM](#tab/singlevm-schedule-home)
update-manager Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md
Last updated 11/13/2023
## November 2023
-## Alerting (preview)
+### Alerting (preview)
Azure Update Manager allows you to enable alerts to address events as captured in updates data.
-## Azure Stack HCI patching (preview)
+### Azure Stack HCI patching (preview)
Azure Update Manager allows you to patch Azure Stack HCI cluster. [Learn more](/azure-stack/hci/update/azure-update-manager-23h2?toc=/azure/update-manager/toc.json&bc=/azure/update-manager/breadcrumb/toc.json)
virtual-desktop Whats New Webrtc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-webrtc.md
Title: What's new in the Remote Desktop WebRTC Redirector Service?
description: New features and product updates for the Remote Desktop WebRTC Redirector Service for Azure Virtual Desktop. Previously updated : 11/15/2023 Last updated : 11/27/2023
Download: [MSI Installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/bin
Date published: June 20, 2022
-Download: [MSI installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4YM8L)
- - Fixed an issue that made the WebRTC redirector service disconnect from Teams on Azure Virtual Desktop. - Added keyboard shortcut detection for Shift+Ctrl+; that lets users turn on a diagnostic overlay during calls on Teams for Azure Virtual Desktop. This feature is supported in version 1.2.3313 or later of the Windows Desktop client. - Added further stability and reliability improvements to the service.
Download: [MSI installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/bin
Date published: December 2, 2021
-Download: [MSI installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWQ1UW)
- - Fixed a mute notification problem. - Multiple z-ordering fixes in Teams on Azure Virtual Desktop and Teams on Microsoft 365. - Removed timeout that prevented the WebRTC redirector service from starting when the user connects.
virtual-machines Network Watcher Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-linux.md
description: Deploy the Network Watcher Agent virtual machine extension on Linux
- + Last updated 06/29/2023-+ # Network Watcher Agent virtual machine extension for Linux
The Network Watcher Agent extension can be configured for the following Linux di
| Distribution | Version | |||
-| Ubuntu | 16+ |
+| AlmaLinux | 9.2 |
+| Azure Linux | 2.0 |
+| CentOS | 6.10 and 7 |
| Debian | 7 and 8 |
-| Red Hat | 6.10, 7 and 8+ |
-| Oracle Linux | 6.10, 7 and 8+ |
-| SUSE Linux Enterprise Server | 12 and 15 |
| OpenSUSE Leap | 42.3+ |
-| CentOS | 6.10 and 7 |
-| Azure Linux | 2.0 |
+| Oracle Linux | 6.10, 7 and 8+ |
+| Red Hat Enterprise Linux (RHEL) | 6.10, 7, 8 and 9.2 |
+| Rocky Linux | 9.1 |
+| SUSE Linux Enterprise Server (SLES) | 12 and 15 (SP2, SP3 and SP4) |
+| Ubuntu | 16+ |
> [!NOTE]
-> - Red Hat Enterprise Linux (RHEL) 6.X and Oracle Linux 6.x have reached their end-of-life (EOL). RHEL 6.10 has available [extended life cycle (ELS) support](https://www.redhat.com/en/resources/els-datasheet) through [June 30, 2024]( https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
+> - Red Hat Enterprise Linux 6.X and Oracle Linux 6.x have reached their end-of-life (EOL). RHEL 6.10 has available [extended life cycle (ELS) support](https://www.redhat.com/en/resources/els-datasheet) through [June 30, 2024]( https://access.redhat.com/product-life-cycles/?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204).
> - Oracle Linux version 6.10 has available [ELS support](https://www.oracle.com/a/ocom/docs/linux/oracle-linux-extended-support-ds.pdf) through [July 1, 2024](https://www.oracle.com/a/ocom/docs/elsp-lifetime-069338.pdf). ### Internet connectivity
virtual-machines Windows In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows-in-place-upgrade.md
Previously updated : 01/19/2023 Last updated : 07/05/2023 # In-place upgrade for VMs running Windows Server in Azure
-An in-place upgrade allows you to go from an older operating system to a newer one while keeping your settings, server roles, and data intact. This article teaches you how to move your Azure VMs to a later version of Windows Server using an in-place upgrade. Currently, upgrading to Windows Server 2016, Windows Server 2019 and Windows Server 2022 are supported.
+An in-place upgrade allows you to go from an older operating system to a newer one while keeping your settings, server roles, and data intact. This article teaches you how to move your Azure VMs to a later version of Windows Server using an in-place upgrade. Currently, upgrading to Windows Server 2012, Windows Server 2016, Windows Server 2019, and Windows Server 2022 is supported.
Before you begin an in-place upgrade: - Review the upgrade requirements for the target operating system:
+ - Upgrade options for Windows Server 2012 from Windows Server 2008 (64-bit) or Windows Server 2008 R2
+ - Upgrade options for Windows Server 2016 from Windows Server 2012 or Windows Server 2012 R2 - Upgrade options for Windows Server 2019 from Windows Server 2012 R2 or Windows Server 2016 - Upgrade options for Windows Server 2022 from Windows Server 2016 or Windows Server 2019 -- Verify the operating system disk has enough [free space to perform the in-place upgrade](/windows-server/get-started/hardware-requirements#storage-controller-and-disk-space-requirements). If additional space is needed [follow these steps](./windows/expand-os-disk.md) to expand the operating system disk attached to the VM.
+- Verify that the operating system disk has enough [free space to perform the in-place upgrade](/windows-server/get-started/hardware-requirements#storage-controller-and-disk-space-requirements). If more space is needed, [follow these steps](./windows/expand-os-disk.md) to expand the operating system disk attached to the VM.
- Disable antivirus and anti-spyware software and firewalls. These types of software can conflict with the upgrade process. Re-enable antivirus and anti-spyware software and firewalls after the upgrade is completed.
-## Windows versions not yet supported for in-place upgrade
-For the following versions, consider using the [workaround](#workaround) later in this article:
--- Windows Server 2008 R2 Datacenter-- Windows Server 2008 R2 Standard
-## Upgrade VM to volume license (KMS server activation)
+## Upgrade VM to volume license (KMS server activation)
-The upgrade media provided by Azure requires the VM to be configured for Windows Server volume licensing. This is the default behavior for any Windows Server VM that was installed from a generalized image in Azure. If the VM was imported into Azure, then it may need to be converted to volume licensing to use the upgrade media provided by Azure. To confirm the VM is configured for volume license activation follow these steps to [configure the appropriate KMS client setup key](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems#step-1-configure-the-appropriate-kms-client-setup-key). If the activation configuration was changed, then follow these steps to [verify connectivity to Azure KMS service](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems#step-2-verify-the-connectivity-between-the-vm-and-azure-kms-service).
+The upgrade media provided by Azure requires the VM to be configured for Windows Server volume licensing. This is the default behavior for any Windows Server VM that was installed from a generalized image in Azure. If the VM was imported into Azure, then it might need to be converted to volume licensing to use the upgrade media provided by Azure. To confirm that the VM is configured for volume license activation, follow these steps to [configure the appropriate KMS client setup key](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems#step-1-configure-the-appropriate-kms-client-setup-key). If the activation configuration was changed, follow these steps to [verify connectivity to the Azure KMS service](/troubleshoot/azure/virtual-machines/troubleshoot-activation-problems#step-2-verify-the-connectivity-between-the-vm-and-azure-kms-service).
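The linked troubleshooting article walks through these steps in detail; as a minimal sketch (you supply the generic KMS client setup key for your edition), the commands from an elevated PowerShell session on the VM look roughly like this:

```powershell
# Sketch: set the generic KMS client setup key, point activation at the
# Azure KMS endpoint, and attempt activation.
cscript //nologo "$env:windir\system32\slmgr.vbs" /ipk "<KMS-client-setup-key>"
cscript //nologo "$env:windir\system32\slmgr.vbs" /skms kms.core.windows.net:1688
cscript //nologo "$env:windir\system32\slmgr.vbs" /ato

# Verify that the VM can reach the Azure KMS service.
Test-NetConnection -ComputerName kms.core.windows.net -Port 1688
```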
-## Upgrade to Managed Disks
+## Upgrade to Managed Disks
+The in-place upgrade process requires the use of Managed Disks on the VM to be upgraded. Most VMs in Azure use Managed Disks, and the retirement of unmanaged disk support was announced in November 2022. If the VM currently uses unmanaged disks, follow these steps to [migrate to Managed Disks](./windows/migrate-to-managed-disks.md).
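A minimal Azure PowerShell sketch of the conversion, assuming a hypothetical VM named `myVM` in the `WindowsServerUpgrades` resource group (the linked article covers the full procedure):

```azurepowershell-interactive
# Sketch: the VM must be stopped (deallocated) before converting its unmanaged disks.
Stop-AzVM -ResourceGroupName "WindowsServerUpgrades" -Name "myVM" -Force
ConvertTo-AzVMManagedDisk -ResourceGroupName "WindowsServerUpgrades" -VMName "myVM"
# Start the VM again if it isn't started automatically after the conversion.
Start-AzVM -ResourceGroupName "WindowsServerUpgrades" -Name "myVM"
```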
-
-
-## Create snapshot of the operating system disk
+ ## Create snapshot of the operating system disk
We recommend that you create a snapshot of your operating system disk and any data disks before starting the in-place upgrade process. This enables you to revert to the previous state of the VM if anything fails during the in-place upgrade process. To create a snapshot on each disk, follow these steps to [create a snapshot of a disk](./snapshot-copy-managed-disk.md).
To start an in-place upgrade the upgrade media must be attached to the VM as a M
| location | Azure region where the upgrade media Managed Disk is created. This must be the same region as the VM to be upgraded. |
| zone | Azure zone in the selected region where the upgrade media Managed Disk will be created. This must be the same zone as the VM to be upgraded. For regional VMs (non-zonal) the zone parameter should be "". |
| diskName | Name of the Managed Disk that will contain the upgrade media |
-| sku | Windows Server upgrade media version. This must be either: `server2016Upgrade` or `server2019Upgrade` or `server2022Upgrade` |
+| sku | Windows Server upgrade media version. This must be either: `server2016Upgrade` or `server2019Upgrade` or `server2022Upgrade` or `server2012Upgrade` |
+
+If you have more than one subscription, run `Set-AzContext -Subscription <subscription ID>` to specify which subscription to use.
### PowerShell script ```azurepowershell-interactive #
-# Customer specific parameters
+# Customer specific parameters
+ # Resource group of the source VM $resourceGroup = "WindowsServerUpgrades"
$zone = ""
# Disk name for the disk that will be created $diskName = "WindowsServer2022UpgradeDisk"
-# Target version for the upgrade - must be either server2022Upgrade or server2019Upgrade
+# Target version for the upgrade - must be either server2022Upgrade, server2019Upgrade, server2016Upgrade or server2012Upgrade
$sku = "server2022Upgrade"
Attach the upgrade media for the target Windows Server version to the VM which w
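As a sketch (reusing `$resourceGroup` and `$diskName` from the script above and assuming a hypothetical VM name `myVM`), the upgrade media disk can also be attached with Azure PowerShell:

```azurepowershell-interactive
# Sketch: attach the upgrade media managed disk to the VM as a data disk on a free LUN.
$vm = Get-AzVM -ResourceGroupName $resourceGroup -Name "myVM"
$upgradeDisk = Get-AzDisk -ResourceGroupName $resourceGroup -DiskName $diskName
$vm = Add-AzVMDataDisk -VM $vm -Name $diskName -CreateOption Attach `
    -ManagedDiskId $upgradeDisk.Id -Lun 1
Update-AzVM -ResourceGroupName $resourceGroup -VM $vm
```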
-## Perform in-place upgrade
+## Perform in-place upgrade to Windows Server 2016, 2019, or 2022
To initiate the in-place upgrade the VM must be in the `Running` state. Once the VM is in a running state use the following steps to perform the upgrade.
To initiate the in-place upgrade the VM must be in the `Running` state. Once the
During the upgrade process the VM will automatically disconnect from the RDP session. After the VM is disconnected from the RDP session the progress of the upgrade can be monitored through the [screenshot functionality available in the Azure portal](/troubleshoot/azure/virtual-machines/boot-diagnostics#enable-boot-diagnostics-on-existing-virtual-machine).
-## Post upgrade steps
+## Perform in-place upgrade to Windows Server 2012 only
-Once the upgrade process has completed successfully the following steps should be taken to clean up any artifacts which were created during the upgrade process:
+To initiate the in-place upgrade the VM must be in the `Running` state. Once the VM is in a running state use the following steps to perform the upgrade.
-- Delete the snapshots of the OS disk and data disk(s) if they were created.
+1. Connect to the VM using [RDP](./windows/connect-rdp.md#connect-to-the-virtual-machine) or [RDP-Bastion](../bastion/bastion-connect-vm-rdp-windows.md#rdp).
-- Delete the upgrade media Managed Disk.
+1. Determine the drive letter for the upgrade disk (typically E: or F: if there are no other data disks).
-- Enable any antivirus, anti-spyware or firewall software that may have been disabled at the start of the upgrade process.
+1. Start Windows PowerShell.
-## Workaround
+1. Change directory to the only directory on the upgrade disk.
+
+1. Execute the following command to start the upgrade:
+
+ ```powershell
+ .\setup.exe
+ ```
+
+1. When Windows Setup launches, select **Install now**.
+1. For **Get important updates for Windows Setup**, select **No thanks**.
+1. Select the correct Windows Server 2012 "Upgrade to" image based on the current version and configuration of the VM using the [Windows Server upgrade matrix](/windows-server/get-started/upgrade-overview).
+1. On the **License terms** page, select **I accept the license terms** and then select **Next**.
+1. For **What type of installation do you want?**, select **Upgrade: Install Windows and keep files, settings, and applications**.
+1. Setup will produce a **Compatibility report**. You can ignore any warnings and select **Next**.
+1. When complete, the machine reboots and you're automatically disconnected from the RDP session. You can then monitor the progress of the upgrade through the [screenshot functionality available in the Azure portal](/troubleshoot/azure/virtual-machines/boot-diagnostics#enable-boot-diagnostics-on-existing-virtual-machine).
-For versions of Windows that are not currently supported, create an Azure VM that's running a supported version. And then either migrate the workload (Method 1, preferred), or download and upgrade the VHD of the VM (Method 2).
-To prevent data loss, back up the Windows 10 VM by using [Azure Backup](../backup/backup-overview.md). Or use a third-party backup solution from [Azure Marketplace Backup & Recovery](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=Backup+&exp=ubp8).
-### Method 1: Deploy a newer system and migrate the workload
-Create an Azure VM that runs a supported version of the operating system, and then migrate the workload. To do so, you'll use Windows Server migration tools. For instructions to migrate Windows Server roles and features, see [Install, use, and remove Windows Server migration tools](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012).
+## Post upgrade steps
+
+Once the upgrade process has completed successfully the following steps should be taken to clean up any artifacts which were created during the upgrade process:
+- Delete the snapshots of the OS disk and data disk(s) if they were created.
-### Method 2: Download and upgrade the VHD
-1. Do an in-place upgrade in a local Hyper-V VM
- 1. [Download the VHD](./windows/download-vhd.md) of the VM.
- 2. Attach the VHD to a local Hyper-V VM.
- 3. Start the VM.
- 4. Run the in-place upgrade.
-2. Upload the VHD to Azure. For more information, see [Upload a generalized VHD and use it to create new VMs in Azure](./windows/upload-generalized-managed.md).
+- Delete the upgrade media Managed Disk.
+
+- Enable any antivirus, anti-spyware or firewall software that may have been disabled at the start of the upgrade process.
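The snapshot and disk deletions in the preceding list can be scripted; a minimal Azure PowerShell sketch, assuming the hypothetical snapshot and disk names used earlier:

```azurepowershell-interactive
# Sketch: remove the pre-upgrade snapshot and the upgrade media disk after a successful upgrade.
Remove-AzSnapshot -ResourceGroupName "WindowsServerUpgrades" `
    -SnapshotName "myVM-os-pre-upgrade" -Force
# Detach the upgrade disk from the VM first (Remove-AzVMDataDisk + Update-AzVM), then delete it.
Remove-AzDisk -ResourceGroupName "WindowsServerUpgrades" `
    -DiskName "WindowsServer2022UpgradeDisk" -Force
```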
## Recover from failure
If the in-place upgrade process failed to complete successfully you can return t
## Next steps
-For more information, see [Perform an in-place upgrade of Windows Server](/windows-server/get-started/perform-in-place-upgrade)
+- For more information, see [Perform an in-place upgrade of Windows Server](/windows-server/get-started/perform-in-place-upgrade)
+- For information about using Azure Migrate to upgrade, see [Azure Migrate Windows Server upgrade](/azure/migrate/how-to-upgrade-windows)
virtual-network-manager Concept Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-limitations.md
This article provides an overview of the current limitations when using [Azure V
## Connected group limitations * A connected group can have up to 250 virtual networks. Virtual networks in a [mesh topology](concept-connectivity-configuration.md#mesh-network-topology) are in a [connected group](concept-connectivity-configuration.md#connected-group), therefore a mesh configuration has a limit of 250 virtual networks.
-* The current preview of connected group has a limitation where traffic from a connected group can't communicate with a private endpoint in this connected group if it has a network security group enabled on it. However, this limitation will be removed once the feature is generally available.
+* The current preview of connected group has a limitation where traffic from a connected group can't communicate with a private endpoint in this connected group. However, this limitation will be removed once the feature is generally available.
* You can have network groups with or without [direct connectivity](concept-connectivity-configuration.md#direct-connectivity) enabled in the same [hub-and-spoke configuration](concept-connectivity-configuration.md#hub-and-spoke-topology), as long as the total number of virtual networks peered to the hub **doesn't exceed 500** virtual networks. * If the network group peered with the hub **has direct connectivity enabled**, these virtual networks are in a *connected group*, therefore the network group has a limit of **250** virtual networks. * If the network group peered with the hub **doesn't have direct connectivity enabled**, the network group can have up to the total limit for a hub-and-spoke topology.
virtual-network Default Outbound Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md
If you deploy a virtual machine in Azure and it doesn't have explicit outbound c
* Loss of IP address
- * Customers don't own the default outbound access IP. This IP might changit ge, and any dependency on it could cause issues in the future.
+ * Customers don't own the default outbound access IP. This IP might change, and any dependency on it could cause issues in the future.
## How can I transition to an explicit method of public connectivity (and disable default outbound access)?
virtual-network Virtual Network Tap Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-tap-overview.md
The accounts you use to apply TAP configuration on network interfaces must be as
- [Ixia CloudLens](https://www.ixiacom.com/cloudlens/cloudlens-azure) -- [Nubeva Prisms](https://www.nubeva.com/azurevtap)
+- [cPacket Cloud Visibility](https://www.cpacket.com/solutions/cloud-visibility/)
- [Big Switch Big Monitoring Fabric](https://www.arista.com/en/bigswitch)
vpn-gateway P2s Session Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/p2s-session-management.md
description: Learn how to view and disconnect Point-to-Site VPN sessions.
Previously updated : 04/26/2021 Last updated : 11/27/2023 # Point-to-site VPN session management
-Azure virtual network gateways provide an easy way to view and disconnect current Point-to-site VPN sessions. This article helps you view and disconnect current sessions. The session status is updated every 5 minutes. It is not updated immediately.
+VPN Gateway provides an easy way to view and disconnect current point-to-site VPN sessions. This article helps you view and disconnect current sessions. The session status is updated every 5 minutes. It isn't updated immediately.
-As this feature allows the disconnection of VPN clients, Reader permissions on the VPN gateway resource are not sufficient. Contributor role is needed to visualize Point-to-site VPN sessions correctly.
+Because this feature allows the disconnection of VPN clients, Reader permissions on the VPN gateway resource aren't sufficient. The Contributor role is needed to visualize point-to-site VPN sessions correctly.
## Portal
->[!NOTE]
+> [!NOTE]
> Connection source info is provided for IKEv2 and OpenVPN connections only.
->
+>
To view and disconnect a session in the portal:
To view and disconnect a session using PowerShell:
```azurepowershell-interactive Get-AzVirtualNetworkGatewayVpnClientConnectionHealth -VirtualNetworkGatewayName <name of the gateway> -ResourceGroupName <name of the resource group> ```+ 1. Copy the **VpnConnectionId** of the session that you want to disconnect. :::image type="content" source="./media/p2s-session-management/powershell.png" alt-text="PowerShell example":::+ 1. To disconnect the session, run the following command: ```azurepowershell-interactive
To view and disconnect a session using PowerShell:
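For illustration, a hedged sketch of the disconnect step (this assumes the `Disconnect-AzVirtualNetworkGatewayVpnConnection` cmdlet available in recent Az.Network versions; replace the placeholders with your values):

```azurepowershell-interactive
# Sketch: disconnect a point-to-site session by the VpnConnectionId copied in the previous step.
Disconnect-AzVirtualNetworkGatewayVpnConnection -VirtualNetworkGatewayName "<gateway-name>" `
    -ResourceGroupName "<resource-group>" -VpnConnectionId "<VpnConnectionId>"
```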
## Next steps
-For more information about Point-to-site connections, see [About Point-to-site VPN](point-to-site-about.md).
+For more information about point-to-site connections, see [About Point-to-site VPN](point-to-site-about.md).