Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Custom Policies Series Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-overview.md | While you can use pre-made [custom policy starter pack](./tutorial-create-user-f ## Select an article -This how-to guide series consists of multiple articles. We recommend that you start this series from the fist article. For each article (except the first one), you're expected to use the custom policy you write in the preceding article. +This how-to guide series consists of multiple articles. We recommend that you start this series from the first article. For each article (except the first one), you're expected to use the custom policy you write in the preceding article. |Article | What you'll learn | ||| |
active-directory-b2c | Custom Policies Series Sign Up Or Sign In Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in-federation.md | Use the following steps to add a combined local and social account: </UserJourneys>--> ``` - In the fist step, we specify the options a user needs to choose from in their journey, local or social authentication. In the steps that follow, we use preconditions to track the option the user picked or the stage of the journey at which the user is. For example, we use the `authenticationSource` claim to differentiate between a local authentication journey and a social authentication journey. + In the first step, we specify the options a user needs to choose from in their journey, local or social authentication. In the steps that follow, we use preconditions to track the option the user picked or the stage of the journey at which the user is. For example, we use the `authenticationSource` claim to differentiate between a local authentication journey and a social authentication journey. 1. In the `RelyingParty` section, change *DefaultUserJourney's* `ReferenceId` to `LocalAndSocialSignInAndSignUp` |
active-directory | How To Integrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/how-to-integrate.md | |
active-directory | Quickstart V2 Dotnet Native Aspnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/quickstart-v2-dotnet-native-aspnet.md | -> 1. Ensure the TodoListService service starts first by moving it to the fist position in the list, using the up arrow. +> 1. Ensure the TodoListService service starts first by moving it to the first position in the list, using the up arrow. > > Sign in to run your TodoListClient project. > |
active-directory | Web Api Quickstart Portal Dotnet Native Aspnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-dotnet-native-aspnet.md | -> 1. Ensure the TodoListService service starts first by moving it to the fist position in the list, using the up arrow. +> 1. Ensure the TodoListService service starts first by moving it to the first position in the list, using the up arrow. > > Sign in to run your TodoListClient project. > |
aks | Open Ai Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-quickstart.md | -In this article, you will learn how to deploy an application that uses Azure OpenAI or OpenAI on AKS. With OpenAI, you can easily adapt different AI models, such as content generation, summarization, semantic search, and natural language to code generation, for your specific tasks. +In this article, you will learn how to deploy an application that uses Azure OpenAI or OpenAI on AKS. With OpenAI, you can easily adapt different AI models, such as content generation, summarization, semantic search, and natural language to code generation, for your specific tasks. You will start by deploying an AKS cluster in your Azure subscription. Then you will deploy your OpenAI service and the sample application. -This article also walks you through how to run a sample multi-container solution representative of real-world implementations. The multi-container solution is comprised of applications written in multiple languages and frameworks, including: +The sample cloud native application is representative of real-world implementations. The multi-container application is comprised of applications written in multiple languages and frameworks, including: - Golang with Gin - Rust with Actix-Web - JavaScript with Vue.js and Fastify This article also walks you through how to run a sample multi-container solution These applications provide front ends for customers and store admins, REST APIs for sending data to RabbitMQ message queue and MongoDB database, and console apps to simulate traffic. +> [!NOTE] +> We don't recommend running stateful containers, such as MongoDB and RabbitMQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus. + The codebase for [AKS Store Demo][aks-store-demo] can be found on GitHub. ## Before you begin For the [AKS Store application][aks-store-demo], this manifest includes the foll - Mongo DB: NoSQL instance for persisted data - Rabbit MQ: Message queue for an order queue -1. Create a file named `aks-store.yaml` and copy the following manifest. 
- *(removed: the full inline `aks-store.yaml` manifest, consisting of Deployment and Service definitions for mongodb, rabbitmq, order-service, makeline-service, product-service, store-front, store-admin, virtual-customer, and virtual-worker)* +> [!NOTE] +> We don't recommend running stateful containers, such as MongoDB and RabbitMQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure Cosmos DB or Azure Service Bus. ++1. Review the [YAML manifest](https://github.com/Azure-Samples/aks-store-demo/blob/main/aks-store-all-in-one.yaml) for the application. You will see a series of deployments and services that make up the entire application. 1. 
Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your yaml manifest. ```bash- kubectl apply -f aks-store.yaml + kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/aks-store-demo/main/aks-store-all-in-one.yaml ``` The following example resembles output showing successfully created deployments and services. You can either use Azure OpenAI or OpenAI and run your application on AKS. 1. Select the Azure OpenAI instance you created. 1. Select **Keys and Endpoints** to generate a key. 1. Select **Model Deployments** > **Managed Deployments** to open the [Azure OpenAI studio][aoai-studio].-1. Create a new deployment using the **text-davinci-003** model. +1. Create a new deployment using the **gpt-35-turbo** model. For more information on how to create a deployment in Azure OpenAI, check out [Get started generating text using Azure OpenAI Service][aoai-get-started]. ### [OpenAI](#tab/openai) 1. [Generate an OpenAI key][open-ai-new-key] by selecting **Create new secret key** and save the key. You will need this key in the [next step](#deploy-the-ai-service). 1. [Start a paid plan][openai-paid] to use OpenAI API.++ ## Deploy the AI service Now that the application is deployed, you can deploy the Python-based microservi deployment.apps/ai-service created service/ai-service created ```+ ### [OpenAI](#tab/openai) 1. Create a file named `ai-service.yaml` and copy the following manifest into it. ```yaml Now that the application is deployed, you can deploy the Python-based microservi deployment.apps/ai-service created service/ai-service created ```++ > [!NOTE] Now that the application is deployed, you can deploy the Python-based microservi 1. Open a web browser and browse to the external IP address of your service. In the example shown here, open 40.64.86.161 to see Store Admin in the browser. Repeat the same step for Store Front. 1. In store admin, click on the products tab, then select **Add Products**. 1. When the ai-service is running successfully, you should see the Ask OpenAI button next to the description field. Fill in the name, price, and keywords, then click Ask OpenAI to generate a product description. Then click save product. See the picture for an example of adding a new product. + :::image type="content" source="media/ai-walkthrough/ai-generate-description.png" alt-text="Screenshot of how to use openAI to generate a product description."::: 1. You can now see the new product you created on Store Admin used by sellers. In the picture, you can see Jungle Monkey Chew Toy is added.+ :::image type="content" source="media/ai-walkthrough/new-product-store-admin.png" alt-text="Screenshot viewing the new product in the store admin page."::: 1. You can also see the new product you created on Store Front used by buyers. In the picture, you can see Jungle Monkey Chew Toy is added. 
Remember to get the IP address of store front by using [kubectl get service][kubectl-get].+ :::image type="content" source="media/ai-walkthrough/new-product-store-front.png" alt-text="Screenshot viewing the new product in the store front page."::: ## Next steps Now that you've seen how to add OpenAI functionality to an AKS application, lear <!-- Links external --> [aks-store-demo]: https://github.com/Azure-Samples/aks-store-demo+ [kubectl]: https://kubernetes.io/docs/reference/kubectl/+ [kubeconfig-file]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/+ [kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get+ [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply+ [aoai-studio]: https://oai.azure.com/portal/+ [open-ai-landing]: https://openai.com/+ [open-ai-new-key]: https://platform.openai.com/account/api-keys+ [open-ai-org-id]: https://platform.openai.com/account/org-settings+ [aoai-access]: https://aka.ms/oai/access+ [openai-paid]: https://platform.openai.com/account/billing/overview+ [openai-platform]: https://platform.openai.com/+ [miyagi]: https://github.com/Azure-Samples/miyagi <!-- Links internal --> [azure-resource-group]: ../azure-resource-manager/management/overview.md + [az-group-create]: /cli/azure/group#az-group-create+ [az-aks-create]: /cli/azure/aks#az-aks-create+ [az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli+ [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials+ [aoai-get-started]: ../cognitive-services/openai/quickstart.md+ [managed-identity]: /azure/cognitive-services/openai/how-to/managed-identity#authorize-access-to-managed-identities+ [key-vault]: csi-secrets-store-driver.md+ [aoai]: ../cognitive-services/openai/index.yml-[learn-aoai]: /training/modules/explore-azure-openai ++[learn-aoai]: /training/modules/explore-azure-openai + |
azure-arc | Pod Scheduling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/pod-scheduling.md | + + Title: Arc SQL Managed Instance pod scheduling +description: Describes how pods are scheduled for Azure Arc-enabled data services, and how you may configure them. ++++++ Last updated : 07/07/2023++++# Arc SQL Managed Instance pod scheduling ++By default, SQL pods are scheduled with a preferred pod anti-affinity between each other. This setting prefers that the pods are scheduled on different nodes, but does not require it. In a scenario where there are not enough nodes to place each pod on a distinct node, multiple pods are scheduled on a single node. Kubernetes does not reevaluate this decision until a pod is rescheduled. ++This default behavior can be overridden using the scheduling options. Arc SQL Managed Instance has three controls for scheduling, which are located at `$.spec.scheduling`. ++## NodeSelector ++The simplest control is the node selector, which specifies a label that the target nodes for an instance must have. The path of nodeSelector is `$.spec.scheduling.nodeSelector` and functions the same as any other Kubernetes nodeSelector property. (see: [Assign Pods to Nodes | Kubernetes](https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/#create-a-pod-that-gets-scheduled-to-your-chosen-node)) ++## Affinity ++Affinity is a feature in Kubernetes that allows fine-grained control over how pods are scheduled onto nodes within a cluster. There are many ways to use affinity in Kubernetes (see: [Assigning Pods to Nodes | Kubernetes](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity)). The same rules for applying affinities to traditional StatefulSets in Kubernetes apply in SQL MI. The exact same object model is used. ++++The path of affinity in a deployment is `$.spec.template.spec.affinity`, whereas the path of affinity in SQL MI is `$.spec.scheduling.affinity`. ++Here is a sample spec for a required pod anti-affinity between replicas of a single SQL MI instance. The labels chosen in the labelSelector of the affinity term are automatically applied by the dataController based on the resource type and name, but the labelSelector could be changed to use any labels provided. *(An illustrative Python sketch of this affinity object model appears after this table.)* 
+++```yaml +apiVersion: sql.arcdata.microsoft.com/v13 +kind: SqlManagedInstance +metadata: + labels: + management.azure.com/resourceProvider: Microsoft.AzureArcData + name: sql1 + namespace: test +spec: + backup: + retentionPeriodInDays: 7 + dev: false + licenseType: LicenseIncluded + orchestratorReplicas: 1 + preferredPrimaryReplicaSpec: + preferredPrimaryReplica: any + primaryReplicaFailoverInterval: 600 + readableSecondaries: 1 + replicas: 3 + scheduling: + affinity: + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchLabels: + arc-resource: sqlmanagedinstance + controller: sql1 + topologyKey: kubernetes.io/hostname + default: + resources: + limits: + cpu: "4" + requests: + cpu: "4" + memory: 4Gi + + primary: + type: NodePort + readableSecondaries: + type: NodePort + storage: + data: + volumes: + - accessMode: ReadWriteOnce + className: local-storage + size: 5Gi + logs: + volumes: + - accessMode: ReadWriteOnce + className: local-storage + size: 5Gi + syncSecondaryToCommit: -1 + tier: BusinessCritical +``` ++## TopologySpreadConstraints ++Pod topology spread constraints control rules around how pods are spread across different groupings of nodes in a Kubernetes cluster. A cluster may have different node topology domains defined such as regions, zones, node pools, etc. A standard Kubernetes topology spread constraint can be applied at `$.spec.scheduling.topologySpreadConstraints` (see: [Pod Topology Spread Constraints | Kubernetes](https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/)). ++For instance: +++```yaml +apiVersion: sql.arcdata.microsoft.com/v13 +kind: SqlManagedInstance +metadata: + labels: + management.azure.com/resourceProvider: Microsoft.AzureArcData + name: sql1 + namespace: test +spec: + backup: + retentionPeriodInDays: 7 + dev: false + licenseType: LicenseIncluded + orchestratorReplicas: 1 + preferredPrimaryReplicaSpec: + preferredPrimaryReplica: any + primaryReplicaFailoverInterval: 600 + readableSecondaries: 1 + replicas: 3 + scheduling: + topologySpreadConstraints: + - maxSkew: 1 + topologyKey: kubernetes.io/hostname + whenUnsatisfiable: DoNotSchedule + labelSelector: + matchLabels: + name: sql1 +``` |
azure-cache-for-redis | Cache Moving Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-moving-resources.md | After geo-replication is configured, the following restrictions apply to your li ### Move -1. To link two caches together for geo-replication, fist select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from **Geo-replication** on the left. +1. To link two caches together for geo-replication, first select **Geo-replication** from the Resource menu of the cache that you intend to be the primary linked cache. Next, select **Add cache replication link** from **Geo-replication** on the left. :::image type="content" source="media/cache-how-to-geo-replication/cache-geo-location-menu.png" alt-text="Screenshot showing the cache's Geo-replication menu."::: |
azure-cache-for-redis | Cache Tutorial Functions Getting Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-functions-getting-started.md | To connect to your cache, add a `ConnectionStrings` section in the `local.settin { "IsEncrypted": false, "Values": {+ "AzureWebJobsStorage": "", "FUNCTIONS_WORKER_RUNTIME": "dotnet",- }, - "ConnectionStrings": { "redisConnectionString": "<your-connection-string>" } } To connect to your cache, add a `ConnectionStrings` section in the `local.settin <!--  --> +> [!IMPORTANT] +> This example is simplified for the tutorial. For production use, we recommend that you use [Azure Key Vault](../service-connector/tutorial-portal-key-vault.md) to store connection string information. +> + ### Build and run the code locally Switch to the **Run and debug** tab in VS Code and select the green arrow to debug the code locally. If you don't have Azure Functions core tools installed, you're prompted to do so. In that case, you'll need to restart VS Code after installing. The app builds and starts deploying. You can track progress in the **Output Wind ### Add connection string information -Navigate to your new Function App in the Azure portal and select the **Configuration** blade from the Resource menu. You'll notice that your application settings have automatically been added to the Function App. For security, however, the connection string information in your `local.settings.json` file is not automatically added. Select **New connection string** and enter `redisConnectionString` as the Name, and your connection string as the Value. Set Type to _Custom_, and select **Ok** to close the menu and then **Save** on the Configuration page to confirm. The functions app will restart with the new connection string information. +Navigate to your new Function App in the Azure portal and select the **Configuration** blade from the Resource menu. Select **New application setting** and enter `redisConnectionString` as the Name, with your connection string as the Value. Set Type to _Custom_, and select **Ok** to close the menu and then **Save** on the Configuration page to confirm. The functions app will restart with the new connection string information. ### Test your triggers |
azure-cache-for-redis | Cache Tutorial Write Behind | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-write-behind.md | In this example, we’re going to use the [pub/sub trigger](cache-how-to-functio 1. If so, update the value of that key. 1. If not, write a new row with the key and its value. -To do this, copy and paste the following code in redisfunction.cs, replacing the existing code. +First, import the `System.Data.SqlClient` NuGet package to enable communication with the SQL Database instance. Go to the VS Code terminal and use the following command: ++```dos +dotnet add package System.Data.SqlClient +``` ++Next, copy and paste the following code in redisfunction.cs, replacing the existing code. ```csharp-using System.Text.Json; using Microsoft.Extensions.Logging; using StackExchange.Redis; using System; using System.Data.SqlClient; -namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples +namespace Microsoft.Azure.WebJobs.Extensions.Redis {- public static class RedisSamples + public static class WriteBehind {- public const string cacheAddress = "<YourRedisConnectionString>"; - public const string SQLAddress = "<YourSQLConnectionString>"; + public const string connectionString = "redisConnectionString"; + public const string SQLAddress = "SQLConnectionString"; [FunctionName("KeyeventTrigger")] public static void KeyeventTrigger(- [RedisPubSubTrigger(ConnectionString = cacheAddress, Channel = "__keyevent@0__:set")] RedisMessageModel model, + [RedisPubSubTrigger(connectionString, "__keyevent@0__:set")] string message, ILogger logger) {- // connect to a Redis cache instance - var redisConnection = ConnectionMultiplexer.Connect(cacheAddress); + // retrieve Redis connection string from environment variables + var redisConnectionString = System.Environment.GetEnvironmentVariable(connectionString); + + // connect to a Redis cache instance + var redisConnection = ConnectionMultiplexer.Connect(redisConnectionString); var cache = redisConnection.GetDatabase(); // get the key that was set and its value- var key = model.Message; + var key = message; var value = (double)cache.StringGet(key); logger.LogInformation($"Key {key} was set to {value}"); - // connect to a SQL database - String SQLConnectionString = SQLAddress; + // retrieve SQL connection string from environment variables + String SQLConnectionString = System.Environment.GetEnvironmentVariable(SQLAddress); // Define the name of the table you created and the column names String tableName = "dbo.inventory"; namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples //Log the time the function was executed logger.LogInformation($"C# Redis trigger function executed at: {DateTime.Now}"); }-- } } ``` namespace Microsoft.Azure.WebJobs.Extensions.Redis.Samples > This example is simplified for the tutorial. For production use, we recommend that you use parameterized SQL queries to prevent SQL injection attacks. > -You need to update the `cacheAddress` and `SQLAddress` variables with the connection strings for your cache instance and your SQL database. You need to manually enter the password for your SQL database connection string, because the password isn't pasted automatically. You can find the Redis connection string in the **Access Keys** of the Resource menu of the Azure Cache for Redis resource. You can find the SQL database connection string under the **ADO.NET** tab in **Connection strings** on the Resource menu in the SQL database resource. 
+### Configure connection strings +You need to update the 'local.settings.json' file to include the connection string for your SQL Database instance. Add an entry in the `Values` section for `SQLConnectionString`. Your file should look like this: -You see errors in some of the SQL classes. You need to import the `System.Data.SqlClient` NuGet package to resolve these. Go to the VS Code terminal and use the following command: --```dos -dotnet add package System.Data.SqlClient +```json +{ + "IsEncrypted": false, + "Values": { + "AzureWebJobsStorage": "", + "FUNCTIONS_WORKER_RUNTIME": "dotnet", + "redisConnectionString": "<redis-connection-string>", + "SQLConnectionString": "<sql-connection-string>" + } +} ```+You need to manually enter the password for your SQL database connection string, because the password isn't pasted automatically. You can find the Redis connection string in the **Access Keys** of the Resource menu of the Azure Cache for Redis resource. You can find the SQL database connection string under the **ADO.NET** tab in **Connection strings** on the Resource menu in the SQL database resource. ++> [!IMPORTANT] +> This example is simplified for the tutorial. For production use, we recommend that you use [Azure Key Vault](../service-connector/tutorial-portal-key-vault.md) to store connection string information. +> ### Build and run the project Go to the **Run and debug tab** in VS Code and run the project. Navigate back to your Azure Cache for Redis instance in the Azure portal and select the **Console** button to enter the Redis Console. Try using some set commands: +- SET apple 5.25 +- SET bread 2.25 +- SET apple 4.50 + Back in VS Code, you should see the triggers being registered: To validate that the triggers are working, go to the SQL database instance in the Azure portal. Then, select **Query editor** from the Resource menu. Create a **New Query** with the following SQL to view the top 100 items in the inventory table: You should see the items written to your Azure Cache for Redis instance show up The only thing left is to deploy the code to the actual Azure Function app. As before, go to the Azure tab in VS Code, find your subscription, expand it, find the Function App section, and expand that. Select and hold (or right-click) your Azure Function app. Then, select **Deploy to Function App…** +### Add connection string information ++Navigate to your Function App in the Azure portal and select the **Configuration** blade from the Resource menu. Select **New application setting** and enter `SQLConnectionString` as the Name, with your connection string as the Value. Set Type to _Custom_, and select **Ok** to close the menu and then **Save** on the Configuration page to confirm. The functions app will restart with the new connection string information. ++## Verify deployment Once the deployment has finished, go back to your Azure Cache for Redis instance and use SET commands to write more values. You should see these show up in your Azure SQL database as well. If you’d like to confirm that your Azure Function app is working properly, go to the app in the portal and select the **Log stream** from the Resource menu. You should see the triggers executing there, and the corresponding updates being made to your SQL database. |
azure-monitor | Autoscale Custom Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-custom-metric.md | This article describes how to set up autoscale for a web app by using a custom m Autoscale allows you to add and remove resources to handle increases and decreases in load. In this article, we'll show you how to set up autoscale for a web app by using one of the Application Insights metrics to scale the web app in and out. > [!NOTE]-> Autoscaling on custom metrics in Application Insights is supported only for metrics published to **Standard** and **Azure.ApplicationInsights** namespaces. If any other namespaces are used for custom metrics in Application Insights, it will return **Unsupported Metric** error. +> Autoscaling on custom metrics in Application Insights is supported only for metrics published to **Standard** and **Azure.ApplicationInsights** namespaces. If any other namespaces are used for custom metrics in Application Insights, it returns an **Unsupported Metric** error. Azure Monitor autoscale applies to: Set up a scale-out rule so that Azure spins up another instance of the web app w ### Set up a scale-in rule -Set up a scale-in rule so that Azure spins down one of the instances when the number of sessions your web app is handling is less than 60 per instance. Azure will reduce the number of instances each time this rule is run until the minimum number of instances is reached. +Set up a scale-in rule so that Azure spins down one of the instances when the number of sessions your web app is handling is less than 60 per instance. Azure reduces the number of instances each time this rule is run until the minimum number of instances is reached. 1. In the **Rules** section of the default scale condition, select **Add a rule**. 1. From the **Metric source** dropdown, select **Other resource**. If you're not going to continue to use this application, delete resources. To learn more about autoscale, see the following articles: -- [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)-- [Overview of autoscale](./autoscale-overview.md)-- [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)-- [Best practices for Azure Monitor autoscale](./autoscale-best-practices.md)-- [Autoscale REST API](/rest/api/monitor/autoscalesettings)++ [Use autoscale actions to send email and webhook alert notifications](./autoscale-webhook-email.md)++ [Overview of autoscale](./autoscale-overview.md)++ [Azure Monitor autoscale common metrics](./autoscale-common-metrics.md)++ [Best practices for Azure Monitor autoscale](./autoscale-best-practices.md)++ [Autoscale REST API](/rest/api/monitor/autoscalesettings) |
azure-monitor | Autoscale Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-diagnostics.md | |
azure-vmware | Deploy Disaster Recovery Using Vmware Hcx | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-disaster-recovery-using-vmware-hcx.md | Title: Deploy disaster recovery using VMware HCX description: Learn how to deploy disaster recovery of your virtual machines (VMs) with VMware HCX Disaster Recovery. Also learn how to use Azure VMware Solution as the recovery or target site. Previously updated : 10/26/2022 Last updated : 7/8/2023 +The diagram shows the deployment of VMware HCX from on-premises VMware vSphere to Azure VMware Solution private cloud disaster recovery scenario. ++ :::image type="content" source="./media/disaster-recovery-virtual-machines/hcx-disaster-recovery-scenario-1-diagram.png" alt-text="Diagram showing the VMware HCX manual disaster recovery solution in Azure VMware Solution with on-premises VMware vSphere." border="true" lightbox="./media/disaster-recovery-virtual-machines/hcx-disaster-recovery-scenario-1-diagram.png"::: + >[!IMPORTANT]->Although part of HCX, VMware HCX Disaster Recovery (DR) is not recommended for large deployments. The disaster recovery orchestration is 100% manual, and Azure VMware Solution currently doesn't have runbooks or features to support manual HCX DR failover. For enterprise-class disaster recovery, refer to VMware Site Recovery Manager (SRM) or VMware business continuity and disaster recovery (BCDR) solutions. +>Although part of VMware HCX, VMware HCX Disaster Recovery (DR) is not recommended for large deployments. The disaster recovery orchestration is 100% manual, and Azure VMware Solution currently doesn't have runbooks or features to support manual VMware HCX DR failover. For enterprise-class disaster recovery, refer to VMware Site Recovery Manager (SRM) or VMware business continuity and disaster recovery (BCDR) solutions. VMware HCX provides various operations that provide fine control and granularity in replication policies. Available Operations include: This guide covers the following replication scenarios: 1. Log into **vSphere Client** on the source site and access **HCX plugin**. - :::image type="content" source="./media/disaster-recovery-virtual-machines/hcx-vsphere.png" alt-text="Screenshot showing the HCX option in the vSphere Web Client." border="true"::: + :::image type="content" source="./media/disaster-recovery-virtual-machines/hcx-vsphere.png" alt-text="Screenshot showing the VMware HCX option in the vSphere Client." border="true"::: 1. Enter the **Disaster Recovery** area and select **PROTECT VMS**. - :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machine.png" alt-text="Screenshot showing the Disaster Recovery dashboard in the vSphere Web Client." border="true" lightbox="./media/disaster-recovery-virtual-machines/protect-virtual-machine.png"::: + :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machine.png" alt-text="Screenshot showing the Disaster Recovery dashboard in the vSphere Client." border="true" lightbox="./media/disaster-recovery-virtual-machines/protect-virtual-machine.png"::: 1. Select the Source and the Remote sites. The Remote site in this case should be the Azure VMware Solution private cloud. - :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machines.png" alt-text="Screenshot showing the HCX: Protected Virtual Machines window." 
border="true"::: + :::image type="content" source="./media/disaster-recovery-virtual-machines/protect-virtual-machines.png" alt-text="Screenshot showing the VMware HCX: Protected Virtual Machines window." border="true"::: 1. If needed, select the **Default replication** options: |
azure-vmware | Disaster Recovery Using Vmware Site Recovery Manager | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md | VMware vSphere Replication is VMware's hypervisor-based replication technology f In this article, you'll implement disaster recovery for on-premises VMware vSphere virtual machines (VMs) or Azure VMware Solution-based VMs. +> [!NOTE] +> The current version of VMware Site Recovery Manager (SRM) in Azure VMware Solution is 8.5.0.3. ## Supported scenarios VMware SRM helps you plan, test, and run the recovery of VMs between a protected VMware vCenter Server site and a recovery VMware vCenter Server site. You can use VMware SRM with Azure VMware Solution with the following two DR scenarios: Make sure you've explicitly provided the remote user the VMware VRM administrato 1. In your on-premises data center, install VMware SRM and vSphere Replication. - >[!NOTE] - >Use the [Two-site Topology with one vCenter Server instance per PSC](https://docs.vmware.com/en/Site-Recovery-Manager/8.4/com.vmware.srm.install_config.doc/GUID-F474543A-88C5-4030-BB86-F7CC51DADE22.html) deployment model. Also, make sure that the [required vSphere Replication Network ports](https://kb.VMware.com/s/article/2087769) are opened. -+ > [!NOTE] + > Use the [Two-site Topology with one vCenter Server instance per PSC](https://docs.vmware.com/en/Site-Recovery-Manager/8.4/com.vmware.srm.install_config.doc/GUID-F474543A-88C5-4030-BB86-F7CC51DADE22.html) deployment model. Also, make sure that the [required vSphere Replication Network ports](https://kb.VMware.com/s/article/2087769) are opened. 1. In your Azure VMware Solution private cloud, under **Manage**, select **Add-ons** > **Disaster recovery**. - The default CloudAdmin user in the Azure VMware Solution private cloud doesn't have sufficient privileges to install VMware SRM or vSphere Replication. The installation process involves multiple steps outlined in the [Prerequisites](#prerequisites) section. Instead, you can install VMware SRM with vSphere Replication as an add-on service from your Azure VMware Solution private cloud. +1. The default CloudAdmin user in the Azure VMware Solution private cloud doesn't have sufficient privileges to install VMware SRM or vSphere Replication. The installation process involves multiple steps outlined in the [Prerequisites](#prerequisites) section. Instead, you can install VMware SRM with vSphere Replication as an add-on service from your Azure VMware Solution private cloud. + - :::image type="content" source="media/VMware-srm-vsphere-replication/disaster-recovery-add-ons.png" alt-text="Screenshot of Azure VMware Solution private cloud to install VMware SRM with vSphere Replication as an add-on" border="true" lightbox="media/VMware-srm-vsphere-replication/disaster-recovery-add-ons.png"::: +1. :::image type="content" source="media/VMware-srm-vsphere-replication/disaster-recovery-add-ons.png" alt-text="Screenshot of Azure VMware Solution private cloud to install VMware SRM with vSphere Replication as an add-on" border="true" lightbox="media/VMware-srm-vsphere-replication/disaster-recovery-add-ons.png"::: ++> [!NOTE] +> The current version of VMware Site Recovery Manager (SRM) in Azure VMware Solution is 8.5.0.3. 1. From the **Disaster Recovery Solution** drop-down, select **VMware Site Recovery Manager (SRM) – vSphere Replication**. 
:::image type="content" source="media/VMware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png" alt-text="Screenshot showing the Disaster recovery tab under Add-ons with VMware Site Recovery Manager (SRM) - vSphere replication selected." border="true" lightbox="media/VMware-srm-vsphere-replication/disaster-recovery-solution-srm-add-on.png"::: VMware and Microsoft support teams will engage each other as needed to troublesh - [Pre-requisites and Best Practices for SRM installation](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-BB0C03E4-72BE-4C74-96C3-97AC6911B6B8.html) - [Network ports for SRM](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-499D3C83-B8FD-4D4C-AE3D-19F518A13C98.html) - [Network ports for vSphere Replication](https://kb.vmware.com/s/article/2087769)++ |
cloud-services | Cloud Services Guestos Msrc Releases | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md | The following tables show the Microsoft Security Response Center (MSRC) updates ## June 2023 Guest OS ->[!NOTE] -->The June Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the June Guest OS. This list is subject to change. | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |-| Rel 23-06 OOB | [5028623] | Latest Cumulative Update(LCU) | 5.82 | Jun 23, 2023 | -| Rel 23-06 | [5027225] | Latest Cumulative Update(LCU) | 7.26 | Jun 13, 2023 | -| Rel 23-06 | [5027222] | Latest Cumulative Update(LCU) | 6.58 | Jun 13, 2023 | -| Rel 23-06 | [5027140] | .NET Framework 3.5 Security and Quality Rollup | 2.138 | Jun 13, 2023 | -| Rel 23-06 | [5027134] | .NET Framework 4.6.2 Security and Quality Rollup | 2.138 | Jun 13, 2023 | -| Rel 23-06 OOB | [5028591] | .NET Framework Rollup 4.6 – 4.7.2/ .NET standalone update | 2.138 | Jun 22, 2023 | -| Rel 23-06 | [5027141] | .NET Framework 3.5 Security and Quality Rollup | 4.118 | Jun 13, 2023 | -| Rel 23-06 | [5027133] | .NET Framework 4.6.2 Security and Quality Rollup | 4.118 | Jun 13, 2023 | -| Rel 23-06 OOB | [5028590] | .NET Framework Rollup 4.6 – 4.7.2/ .NET standalone update | 4.118 | Jun 22, 2023 | -| Rel 23-06 | [5027138] | .NET Framework 3.5 Security and Quality Rollup | 3.126 | Jun 13, 2023 | -| Rel 23-06 | [5027132] | .NET Framework 4.6.2 Security and Quality Rollup | 3.126 | Jun 13, 2023 | -| Rel 23-06 OOB | [5028589] | .NET Framework Rollup 4.6 – 4.7.2/ .NET standalone update | 3.126 | Jun 22, 2023 | -| Rel 23-06 | [5027123] | .NET Framework 4.8 Security and Quality Rollup  | 5.82 | Jun 13, 2023 | -| Rel 23-06 OOB | [5028580] | .NET Framework Rollup 4.8 /.NET Standalone Update  | 5.82 | Jun 22, 2023 | -| Rel 23-06 | [5027131] | . 
NET Framework 4.7.2 Cumulative Update | 6.58 | Jun 13, 2023 | -| Rel 23-06 OOB | [5028588] | .NET Framework Rollup - 4.6-4.7.2 / .NET Standalone Update | 6.58 | Jun 22, 2023 | -| Rel 23-06 | [5027127] | .NET Framework 4.8 Security and Quality Rollup | 7.26 | Jun 13, 2023 | -| Rel 23-06 OOB | [5028584] | .NET Framework Rollup - 4.8 / .NET Standalone Update | 7.26 | Jun 22, 2023 | -| Rel 23-06 | [5027275] | Monthly Rollup | 2.138 | Jun 13, 2023 | -| Rel 23-06 | [5027283] | Monthly Rollup | 3.126 | Jun 13, 2023 | -| Rel 23-06 | [5027271] | Monthly Rollup | 4.118 | Jun 13, 2023 | -| Rel 23-06 | [5027575] | Servicing Stack Update | 3.126 | Jun 13, 2023 | -| Rel 23-06 | [5027574] | Servicing Stack Update | 4.118 | Jun 13, 2022 | -| Rel 23-06 | [4578013] | OOB Standalone Security Update | 4.118 | Aug 19, 2020 | -| Rel 23-06 | [5023788] | Servicing Stack Update LKG | 5.82 | Mar 14, 2023 | -| Rel 23-06 | [5017397] | Servicing Stack Update LKG | 2.138 | Sep 13, 2022 | -| Rel 23-06 | [4494175] | Microcode | 5.82 | Sep 1, 2020 | -| Rel 23-06 | [4494174] | Microcode | 6.58 | Sep 1, 2020 | -| Rel 23-06 | [5027396] | Servicing Stack Update | 7.26 | | -| Rel 23-06 | [5023789] | Servicing Stack Update | 6.58 | | +| Rel 23-06 OOB | [5028623] | Latest Cumulative Update(LCU) | [5.83] | Jun 23, 2023 | +| Rel 23-06 | [5027225] | Latest Cumulative Update(LCU) | [7.27] | Jun 13, 2023 | +| Rel 23-06 | [5027222] | Latest Cumulative Update(LCU) | [6.59] | Jun 13, 2023 | +| Rel 23-06 | [5027140] | .NET Framework 3.5 Security and Quality Rollup | [2.139] | Jun 13, 2023 | +| Rel 23-06 | [5027134] | .NET Framework 4.6.2 Security and Quality Rollup | [2.139] | Jun 13, 2023 | +| Rel 23-06 OOB | [5028591] | .NET Framework Rollup 4.6 – 4.7.2/ .NET standalone update | [2.139] | Jun 22, 2023 | +| Rel 23-06 | [5027141] | .NET Framework 3.5 Security and Quality Rollup | [4.119] | Jun 13, 2023 | +| Rel 23-06 | [5027133] | .NET Framework 4.6.2 Security and Quality Rollup | [4.119] | Jun 13, 2023 | +| Rel 23-06 OOB | [5028590] | .NET Framework Rollup 4.6 – 4.7.2/ .NET standalone update | [4.119] | Jun 22, 2023 | +| Rel 23-06 | [5027138] | .NET Framework 3.5 Security and Quality Rollup | [3.127] | Jun 13, 2023 | +| Rel 23-06 | [5027132] | .NET Framework 4.6.2 Security and Quality Rollup | [3.127] | Jun 13, 2023 | +| Rel 23-06 OOB | [5028589] | .NET Framework Rollup 4.6 – 4.7.2/ .NET standalone update | [3.127] | Jun 22, 2023 | +| Rel 23-06 | [5027123] | .NET Framework 4.8 Security and Quality Rollup  | [5.83] | Jun 13, 2023 | +| Rel 23-06 OOB | [5028580] | .NET Framework Rollup 4.8 /.NET Standalone Update  | [5.83] | Jun 22, 2023 | +| Rel 23-06 | [5027131] | . 
NET Framework 4.7.2 Cumulative Update | [6.59] | Jun 13, 2023 | +| Rel 23-06 OOB | [5028588] | .NET Framework Rollup - 4.6-4.7.2 / .NET Standalone Update | [6.59] | Jun 22, 2023 | +| Rel 23-06 | [5027127] | .NET Framework 4.8 Security and Quality Rollup | [7.27] | Jun 13, 2023 | +| Rel 23-06 OOB | [5028584] | .NET Framework Rollup - 4.8 / .NET Standalone Update | [7.27] | Jun 22, 2023 | +| Rel 23-06 | [5027275] | Monthly Rollup | [2.139] | Jun 13, 2023 | +| Rel 23-06 | [5027283] | Monthly Rollup | [3.127] | Jun 13, 2023 | +| Rel 23-06 | [5027271] | Monthly Rollup | [4.119] | Jun 13, 2023 | +| Rel 23-06 | [5027575] | Servicing Stack Update | [3.127] | Jun 13, 2023 | +| Rel 23-06 | [5027574] | Servicing Stack Update | [4.119] | Jun 13, 2022 | +| Rel 23-06 | [4578013] | OOB Standalone Security Update | [4.119] | Aug 19, 2020 | +| Rel 23-06 | [5023788] | Servicing Stack Update LKG | [5.83] | Mar 14, 2023 | +| Rel 23-06 | [5017397] | Servicing Stack Update LKG | [2.139] | Sep 13, 2022 | +| Rel 23-06 | [4494175] | Microcode | [5.83] | Sep 1, 2020 | +| Rel 23-06 | [4494174] | Microcode | [6.59] | Sep 1, 2020 | +| Rel 23-06 | 5027396 | Servicing Stack Update | [7.27] | | +| Rel 23-06 | 5023789 | Servicing Stack Update | [6.59] | | [5028623]: https://support.microsoft.com/kb/5028623 [5027225]: https://support.microsoft.com/kb/5027225 The following tables show the Microsoft Security Response Center (MSRC) updates [5017397]: https://support.microsoft.com/kb/5017397 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174+[2.139]: ./cloud-services-guestos-update-matrix.md#family-2-releases +[3.127]: ./cloud-services-guestos-update-matrix.md#family-3-releases +[4.119]: ./cloud-services-guestos-update-matrix.md#family-4-releases +[5.83]: ./cloud-services-guestos-update-matrix.md#family-5-releases +[6.59]: ./cloud-services-guestos-update-matrix.md#family-6-releases +[7.27]: ./cloud-services-guestos-update-matrix.md#family-7-releases ## May 2023 Guest OS |
cloud-services | Cloud Services Guestos Update Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md | Unsure about how to update your Guest OS? Check [this][cloud updates] out. ## News updates +###### **July 8, 2023** +The June Guest OS has released. + ###### **May 19, 2023** The May Guest OS has released. The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |-| WA-GUEST-OS-7.25_202305-01 | May 19, 2023 | Post 7.27 | -| WA-GUEST-OS-7.24_202304-01 | April 27, 2023 | Post 7.26 | +| WA-GUEST-OS-7.27_202306-02 | July 8, 2023 | Post 7.29 | +| WA-GUEST-OS-7.25_202305-01 | May 19, 2023 | Post 7.28 | +|~~WA-GUEST-OS-7.24_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-7.23_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-7.22_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-7.21_202301-01~~| January 31, 2023 | March 28, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |-| WA-GUEST-OS-6.57_202305-01 | May 19, 2023 | Post 6.59 | -| WA-GUEST-OS-6.56_202304-01 | April 27, 2023 | Post 6.58 | +| WA-GUEST-OS-6.59_202306-02 | July 8, 2023 | Post 6.61 | +| WA-GUEST-OS-6.57_202305-01 | May 19, 2023 | Post 6.60 | +|~~WA-GUEST-OS-6.56_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-6.55_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-6.54_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-6.53_202301-01~~| January 31, 2023 | March 28, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |-| WA-GUEST-OS-5.81_202305-01 | May 19, 2023 | Post 5.83 | -| WA-GUEST-OS-5.80_202304-01 | April 27, 2023 | Post 5.82 | +| WA-GUEST-OS-5.83_202306-02 | July 8, 2023 | Post 5.85 | +| WA-GUEST-OS-5.81_202305-01 | May 19, 2023 | Post 5.84 | +|~~WA-GUEST-OS-5.80_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-5.79_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-5.78_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-5.77_202301-01~~| January 31, 2023 | March 28, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |-| WA-GUEST-OS-4.117_202305-01 | May 19, 2023 | Post 4.119 | -| WA-GUEST-OS-4.116_202304-01 | April 27, 2023 | Post 4.118 | +| WA-GUEST-OS-4.119_202306-02 | July 8, 2023 | Post 4.121 | +| WA-GUEST-OS-4.117_202305-01 | May 19, 2023 | Post 4.120 | +|~~WA-GUEST-OS-4.116_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-4.115_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-4.114_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-4.113_202301-01~~| January 31, 2023 | March 28, 2023 | The September Guest OS has released. | Configuration string | Release date | Disable date | | | | |-| WA-GUEST-OS-3.125_202305-01 | May 19, 2023 | Post 3.127 | -| WA-GUEST-OS-3.124_202304-02 | April 27, 2023 | Post 3.126 | +| WA-GUEST-OS-3.127_202306-02 | July 8, 2023 | Post 3.129 | +| WA-GUEST-OS-3.125_202305-01 | May 19, 2023 | Post 3.128 | +|~~WA-GUEST-OS-3.124_202304-02~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-3.122_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-3.121_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-3.120_202301-01~~| January 31, 2023 | March 28, 2023 | The September Guest OS has released. 
| Configuration string | Release date | Disable date | | | | |-| WA-GUEST-OS-2.137_202305-01 | May 19, 2023 | Post 2.139 | -| WA-GUEST-OS-2.136_202304-01 | April 27, 2023 | Post 2.138 | +| WA-GUEST-OS-2.139_202306-02 | July 8, 2023 | Post 2.141 | +| WA-GUEST-OS-2.137_202305-01 | May 19, 2023 | Post 2.140 | +|~~WA-GUEST-OS-2.136_202304-01~~| April 27, 2023 | July 8, 2023 | |~~WA-GUEST-OS-2.135_202303-01~~| March 28, 2023 | May 19, 2023 | |~~WA-GUEST-OS-2.134_202302-01~~| March 1, 2023 | April 27, 2023 | |~~WA-GUEST-OS-2.133_202301-01~~| January 31, 2023 | March 28, 2023 | |
cognitive-services | Custom Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-neural-voice.md | the recording samples of human voices. For more information, see [this Microsoft If you're using the old version of Custom Voice (which is scheduled to be retired in February 2024), see [How to migrate to Custom Neural Voice](how-to-migrate-to-custom-neural-voice.md). -## Responsible use of AI +## Responsible AI -To learn how to use Custom Neural Voice responsibly, check the following articles. +An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems. * [Transparency note and use cases for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) * [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) |
cognitive-services | Custom Speech Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/custom-speech-overview.md | Here's more information about the sequence of steps shown in the previous diagra > [!TIP] > A hosted deployment endpoint isn't required to use Custom Speech with the [Batch transcription API](batch-transcription.md). You can conserve resources if the custom speech model is only used for batch transcription. For more information, see [Speech service pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). +## Responsible AI ++An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems. ++* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context) +* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context) +* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context) +* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context) + ## Next steps * [Create a project](how-to-custom-speech-create-project.md) |
cognitive-services | How To Migrate To Custom Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md | +> +> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Custom voice (non-neural training) is referred to as **Custom**. The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users will benefit from the latest Text to speech technology, in a responsible way. |
cognitive-services | How To Migrate To Prebuilt Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md | +> +> The pricing for prebuilt standard voice is different from prebuilt neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Prebuilt standard voice (retired) is referred to as **Standard**. The prebuilt neural voice provides more natural-sounding speech output and, thus, a better end-user experience. |
cognitive-services | Language Support | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md | With the cross-lingual feature (preview), you can transfer your custom neural vo # [Pronunciation assessment](#tab/pronunciation-assessment) -The table in this section summarizes the locales supported for Pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). You should specify the language that you're learning or practicing to improve pronunciation. The default language is set as `en-US`. If you know your target learning language, set the locale accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. +The table in this section summarizes the 19 locales supported for pronunciation assessment, and each language is available on all [Speech to text regions](regions.md#speech-service). The latest update extends support from English to 18 additional languages and brings quality enhancements to existing features, including accuracy, fluency, and miscue assessment. You should specify the language that you're learning or practicing to improve your pronunciation. The default language is set as `en-US`. If you know your target learning language, [set the locale](how-to-pronunciation-assessment.md#get-pronunciation-assessment-results) accordingly. For example, if you're learning British English, you should specify the language as `en-GB`. If you're teaching a broader language, such as Spanish, and are uncertain about which locale to select, you can run various accent models (`es-ES`, `es-MX`) to determine the one that achieves the highest score to suit your specific scenario. [!INCLUDE [Language support include](includes/language-support/pronunciation-assessment.md)] |
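For the accent-model comparison described in the pronunciation assessment row above, a minimal sketch with the Speech SDK for Python (`azure-cognitiveservices-speech`) might look like the following; the key, region, reference text, and audio file are placeholder assumptions, and the loop simply rescores the same recording under each candidate locale:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
pa_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Buenos días",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,
)

# Score the same recording under each candidate accent model and compare.
for locale in ("es-ES", "es-MX"):
    audio_config = speechsdk.audio.AudioConfig(filename="sample.wav")
    recognizer = speechsdk.SpeechRecognizer(
        speech_config=speech_config, language=locale, audio_config=audio_config
    )
    pa_config.apply_to(recognizer)
    result = speechsdk.PronunciationAssessmentResult(recognizer.recognize_once())
    print(locale, result.pronunciation_score)
```

Whichever locale yields the consistently higher score is the better fit for that speaker population.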
cognitive-services | Migration Overview Neural Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migration-overview-neural-voice.md | We're retiring two features from [Text to speech](index-text-to-speech.yml) capa > [!IMPORTANT] > We are retiring the standard non-neural training tier of custom voice from March 1, 2021 through February 29, 2024. If you used a non-neural custom voice with your Speech resource prior to March 1, 2021 then you can continue to do so until February 29, 2024. All other Speech resources can only use custom neural voice. After February 29, 2024, the non-neural custom voices won't be supported with any Speech resource. +> +> The pricing for custom voice is different from custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Custom voice (non-neural training) is referred to as **Custom**. Go to [this article](how-to-migrate-to-custom-neural-voice.md) to learn how to migrate to custom neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-s > [!IMPORTANT] > We are retiring the standard voices from September 1, 2021 through August 31, 2024. If you used a standard voice with your Speech resource that was created prior to September 1, 2021 then you can continue to do so until August 31, 2024. All other Speech resources can only use prebuilt neural voices. You can choose from the supported [neural voice names](language-support.md?tabs=tts). After August 31, 2024 the standard voices won't be supported with any Speech resource.+> +> The pricing for prebuilt standard voice is different from prebuilt neural voice. Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details in the collapsible "Deprecated" section. Prebuilt standard voice (retired) is referred to as **Standard**. Go to [this article](how-to-migrate-to-prebuilt-neural-voice.md) to learn how to migrate to prebuilt neural voice. |
cognitive-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md | Sample code for the Speech service is available on GitHub. These samples cover c - [Text to speech samples (REST)](https://github.com/Azure-Samples/Cognitive-Speech-TTS) - [Voice assistant samples (SDK)](https://aka.ms/csspeech/samples) +## Responsible AI ++An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems. ++### Speech to text ++* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context) +* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context) +* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context) +* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context) ++### Pronunciation Assessment ++* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context) +* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context) ++### Custom Neural Voice ++* [Transparency note and use cases](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) +* [Characteristics and limitations](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) +* [Limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) +* [Responsible deployment of synthetic speech](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context) +* [Disclosure of voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context) +* [Disclosure of design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context) +* [Disclosure of design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context) +* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context) +* [Data, privacy, and 
security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) ++### Speaker Recognition ++* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) +* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) +* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) +* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) +* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) + ## Next steps * [Get started with speech to text](get-started-speech-to-text.md) |
cognitive-services | Pronunciation Assessment Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/pronunciation-assessment-tool.md | After you press the stop button, you can see **Pronunciation score**, **Accuracy :::image type="content" source="media/pronunciation-assessment/pa-after-recording-display-score.png" alt-text="Screenshot of overall assessment scores after recording." lightbox="media/pronunciation-assessment/pa-after-recording-display-score.png"::: +## Responsible AI ++An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems. ++* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context) +* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/cognitive-services/speech-service/context/context) + ## Next steps - Use [pronunciation assessment with the Speech SDK](how-to-pronunciation-assessment.md) |
cognitive-services | Speaker Recognition Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speaker-recognition-overview.md | As with all of the Cognitive Services resources, developers who use the speaker | Can you enroll one speaker multiple times? | Yes, for text-dependent verification, you can enroll a speaker up to 50 times. For text-independent verification or speaker identification, you can enroll with up to 300 seconds of audio. | | What data is stored in Azure? | Enrollment audio is stored in the service until the voice profile is [deleted](./get-started-speaker-recognition.md#delete-voice-profile-enrollments). Recognition audio samples aren't retained or stored. | +## Responsible AI ++An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems. ++* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) +* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) +* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) +* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) +* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/cognitive-services/speech-service/context/context) + ## Next steps > [!div class="nextstepaction"] |
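As a companion to the speaker recognition FAQ above, here's a hedged text-independent verification sketch with the Speech SDK for Python; the key, region, and audio files are placeholders, and the class names reflect the SDK at the time of writing, so verify them against your installed version:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
client = speechsdk.VoiceProfileClient(speech_config=speech_config)

# Create and enroll a profile (text-independent verification accepts up to
# 300 seconds of enrollment audio, per the FAQ above).
profile = client.create_profile_async(
    speechsdk.VoiceProfileType.TextIndependentVerification, "en-us"
).get()
enroll_audio = speechsdk.audio.AudioConfig(filename="enrollment.wav")
client.enroll_profile_async(profile, enroll_audio).get()

# Verify a new sample against the enrolled profile.
model = speechsdk.SpeakerVerificationModel.from_profile(profile)
attempt_audio = speechsdk.audio.AudioConfig(filename="attempt.wav")
recognizer = speechsdk.SpeakerRecognizer(speech_config, attempt_audio)
result = recognizer.recognize_once_async(model).get()
print(result.reason, result.score)

# Enrollment audio is stored in the service until the profile is deleted.
client.delete_profile_async(profile).get()
```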
cognitive-services | Speech Synthesis Markup Voice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup-voice.md | The optional `emphasis` element is used to add or remove word-level stress for t > [!NOTE] > The word-level emphasis tuning is only available for these neural voices: `en-US-GuyNeural`, `en-US-DavisNeural`, and `en-US-JaneNeural`.+> +> For words that have low pitch and short duration, the pitch might not be raised enough to be noticed. Usage of the `emphasis` element's attributes is described in the following table. |
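To hear the word-level emphasis tuning from the SSML row above, a short sketch that synthesizes SSML through the Speech SDK for Python could look like this; the key, region, and output file are placeholders, and the `level` values (`reduced`, `none`, `moderate`, `strong`) are assumed from the W3C SSML specification:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
audio_config = speechsdk.audio.AudioOutputConfig(filename="emphasis-demo.wav")
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)

# en-US-GuyNeural is one of the voices listed as supporting word-level emphasis.
ssml = """<speak version='1.0' xmlns='http://www.w3.org/2001/10/synthesis' xml:lang='en-US'>
  <voice name='en-US-GuyNeural'>
    I can <emphasis level='strong'>absolutely</emphasis> help you with that.
  </voice>
</speak>"""

result = synthesizer.speak_ssml_async(ssml).get()
print(result.reason)
```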
cognitive-services | Speech To Text | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-to-text.md | A custom model can be used to augment the base model to improve recognition of d Customization options vary by language or locale. To verify support, see [Language and voice support for the Speech service](./language-support.md?tabs=stt). +## Responsible AI ++An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems. ++* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/cognitive-services/speech-service/context/context) +* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/cognitive-services/speech-service/context/context) +* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/cognitive-services/speech-service/context/context) +* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/cognitive-services/speech-service/context/context) + ## Next steps - [Get started with speech to text](get-started-speech-to-text.md) |
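As a minimal illustration of the customization described in the speech to text row above, the Speech SDK for Python routes recognition through a deployed custom model via `endpoint_id`; the key, region, deployment ID, and audio file below are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.endpoint_id = "<your-custom-speech-deployment-id>"  # custom model endpoint

audio_config = speechsdk.audio.AudioConfig(filename="domain-terms.wav")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
print(recognizer.recognize_once().text)
```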
cognitive-services | Text To Speech | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md | Text to speech includes the following features: | Feature | Summary | Demo | | | | |-| Prebuilt neural voice (called *Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices. Create an Azure account and Speech service subscription, and then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs. | +| Prebuilt neural voice (called *Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices. Create an Azure account and Speech service subscription, and then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs. | | Custom Neural Voice (called *Custom Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Easy-to-use self-service for creating a natural brand voice, with limited access for responsible use. Create an Azure account and Speech service subscription (with the S0 tier), and [apply](https://aka.ms/customneural) to use the custom neural feature. After you've been granted access, visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select **Custom Voice** to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://aka.ms/customvoice). | ### More about neural text to speech features Custom Neural Voice training and hosting are both calculated by hour and billed Custom Neural Voice (CNV) training time is measured by 'compute hour' (a unit to measure machine running time). Typically, when training a voice model, two computing tasks are running in parallel. So, the calculated compute hours will be longer than the actual training time. On average, it takes less than one compute hour to train a CNV Lite voice; while for CNV Pro, it usually takes 20 to 40 compute hours to train a single-style voice, and around 90 compute hours to train a multi-style voice. The CNV training time is billed with a cap of 96 compute hours. So in the case that a voice model is trained in 98 compute hours, you will only be charged for 96 compute hours. -Custom Neural Voice (CNV) endpoint hosting is measured by the actual time (hour). The hosting time (hours) for each endpoint is calculated at 00:00 UTC every day for the previous 24 hours. For example, if the endpoint has been active for 24 hours on day one, it will be billed for 24 hours at 00:00 UTC the second day. 
If the endpoint is newly created or has been suspended during the day, it will be billed for its acumulated running time until 00:00 UTC the second day. If the endpoint is not currently hosted, it will not be billed. In addition to the daily calculation at 00:00 UTC each day, the billing is also triggered immediately when an endpoint is deleted or suspended. For example, for an endpoint created at 08:00 UTC on December 1, the hosting hour will be calculated to 16 hours at 00:00 UTC on December 2 and 24 hours at 00:00 UTC on December 3. If the user suspends hosting the endpoint at 16:30 UTC on December 3, the duration (16.5 hours) from 00:00 to 16:30 UTC on December 3 will be calculated for billing. -+Custom Neural Voice (CNV) endpoint hosting is measured by the actual time (hour). The hosting time (hours) for each endpoint is calculated at 00:00 UTC every day for the previous 24 hours. For example, if the endpoint has been active for 24 hours on day one, it will be billed for 24 hours at 00:00 UTC the second day. If the endpoint is newly created or has been suspended during the day, it will be billed for its accumulated running time until 00:00 UTC the second day. If the endpoint is not currently hosted, it will not be billed. In addition to the daily calculation at 00:00 UTC each day, the billing is also triggered immediately when an endpoint is deleted or suspended. For example, for an endpoint created at 08:00 UTC on December 1, the hosting hour will be calculated to 16 hours at 00:00 UTC on December 2 and 24 hours at 00:00 UTC on December 3. If the user suspends hosting the endpoint at 16:30 UTC on December 3, the duration (16.5 hours) from 00:00 to 16:30 UTC on December 3 will be calculated for billing. ## Reference docs * [Speech SDK](speech-sdk.md) * [REST API: Text to speech](rest-text-to-speech.md) +## Responsible AI ++An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the transparency notes to learn about responsible AI use and deployment in your systems. 
++* [Transparency note and use cases for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) +* [Characteristics and limitations for using Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) +* [Limited access to Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) +* [Guidelines for responsible deployment of synthetic voice technology](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/cognitive-services/speech-service/context/context) +* [Disclosure for voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/cognitive-services/speech-service/context/context) +* [Disclosure design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/cognitive-services/speech-service/context/context) +* [Disclosure design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/cognitive-services/speech-service/context/context) +* [Code of Conduct for Text to speech integrations](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/cognitive-services/speech-service/context/context) +* [Data, privacy, and security for Custom Neural Voice](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/cognitive-services/speech-service/context/context) + ## Next steps * [Text to speech quickstart](get-started-text-to-speech.md) |
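To make the endpoint-hosting billing walkthrough in the text to speech row above concrete, here's a small illustrative calculation (not an official billing tool) that reproduces the December 1 to 3 example:

```python
from datetime import datetime, timezone

def billed_hours(start: datetime, end: datetime) -> float:
    """Hours of endpoint running time billed for one interval within a UTC day."""
    return (end - start).total_seconds() / 3600

utc = timezone.utc
# Endpoint created 08:00 UTC Dec 1; hosting suspended 16:30 UTC Dec 3.
print(billed_hours(datetime(2023, 12, 1, 8, 0, tzinfo=utc),
                   datetime(2023, 12, 2, 0, 0, tzinfo=utc)))    # 16.0 hours
print(billed_hours(datetime(2023, 12, 2, 0, 0, tzinfo=utc),
                   datetime(2023, 12, 3, 0, 0, tzinfo=utc)))    # 24.0 hours
print(billed_hours(datetime(2023, 12, 3, 0, 0, tzinfo=utc),
                   datetime(2023, 12, 3, 16, 30, tzinfo=utc)))  # 16.5 hours
```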
communication-services | Trial Phone Numbers Faq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/trial-phone-numbers-faq.md | + + Title: Frequently asked questions about trial phone numbers in Azure Communication Services +description: A conceptual overview plus FAQ for trial phone numbers and verified phone numbers. ++++ Last updated : 07/19/2023++++# Frequently asked questions about trial phone numbers in Azure Communication Services +++This article answers commonly asked questions about Trial Phone Numbers and Verified Phone Numbers. ++++## Trial Phone Numbers +### How many trial phone numbers can I request? +Customers are limited to one trial phone number. + +### Is my subscription eligible for a trial phone number? +Currently, trial phone numbers are only available for US subscriptions. For more information on phone numbers available for purchase in other regions, see our subscription eligibility documentation. ++### Can I use the trial phone number for SMS messaging? +No, currently the trial phone number for Azure Communication Services only supports PSTN Calling capabilities. SMS messaging capabilities for trial phone numbers will be available soon. Keep an eye on the Azure Communication Services documentation for updates on when SMS functionality will be added to the trial phone numbers. ++### Can I choose a specific phone number or area code for the trial number? +Currently, customers who can activate a trial phone number will be provided a US toll-free number. If you require a specific phone number or area code, then you must [purchase a phone number](../../quickstarts/telephony/get-phone-number.md). ++### How long can I use the trial phone number? +The trial phone number is available for 30 days. After the trial period ends, the phone number will no longer be accessible. It's recommended to upgrade to a production subscription if you require a longer-term phone number. +++### How can I make and receive calls using the trial phone number? +You can use Azure Communication Services APIs or SDKs to make and receive calls using the trial phone number. Microsoft provides comprehensive documentation and code samples to help you integrate the PSTN Calling capabilities into your applications. ++### What are the calling limitations on my trial phone number? +Trial phone numbers have 60 minutes of inbound and 60 minutes of outbound PSTN calling. The maximum duration of a phone call is 5 minutes. The trial phone number may not be used to dial emergency services such as 911, 311, 988, or any other emergency numbers. + +### Are there any costs associated with the trial phone number? +While the trial phone number itself is provided at no cost during the trial period, there may be associated costs for making and receiving calls or other PSTN Calling services. It's essential to review the pricing details for Azure Communication Services to understand the costs involved. ++## Verified Phone Numbers ++### Why do I need to verify the recipient phone number for a trial phone number? +Verifying the recipient phone number is a security measure that ensures the trial phone number can only make calls to the verified number. This helps protect against misuse and unauthorized usage of trial phone numbers. ++### How is the recipient phone number verified? +The verification process involves sending a one-time passcode via SMS to the recipient phone number. The recipient needs to enter this code in the Azure portal to complete the verification. 
++### Can I verify multiple recipient phone numbers for the same trial phone number? +Currently, the trial phone number can be verified for up to three recipient phone numbers. If you need to make calls to more numbers, then you'll need to [purchase a phone number](../../quickstarts/telephony/get-phone-number.md). ++### What happens if I enter the verification code incorrectly? +If the verification code is entered incorrectly, the verification process fails. You can request a new verification code and try again. ++### Does the recipient phone number need to be in a specific format for verification? +Yes, the recipient phone number should be entered in the correct international format, including the country code. Ensure that the phone number is accurate and free of any typos or formatting errors. ++### How long does the verification process take? +The verification code is typically sent within a few seconds after initiating the verification process. The overall process should be completed quickly, depending on the recipient's ability to receive the SMS and enter the code in the Azure portal. +++## Next steps ++> [!div class="nextstepaction"] +> [Get a trial phone number](../../quickstarts/telephony/get-trial-phone-number.md) + |
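For the "make and receive calls" FAQ above, an outbound PSTN call sketch with the `azure-communication-callautomation` Python package might look like the following; the connection string, phone numbers, and callback URL are placeholders, and the keyword names (for example, `source_caller_id_number`) reflect the 1.0 SDK at the time of writing, so treat this as an assumption to verify against your installed version:

```python
from azure.communication.callautomation import (
    CallAutomationClient,
    PhoneNumberIdentifier,
)

client = CallAutomationClient.from_connection_string("<acs-connection-string>")

# The callee must be one of the verified phone numbers attached to the trial number.
result = client.create_call(
    target_participant=PhoneNumberIdentifier("+14255550123"),       # verified callee
    callback_url="https://<your-host>/api/callbacks",               # webhook for call events
    source_caller_id_number=PhoneNumberIdentifier("+18005550199"),  # your trial number
)
print(result.call_connection_id)
```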
communication-services | Get Trial Phone Number | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/get-trial-phone-number.md | + + Title: Quickstart - get and manage trial phone numbers in Azure Communication Services +description: Learn how to get and use trial phone numbers in Azure Communication Services. ++++ Last updated : 07/19/2023++++# Quickstart: get and manage a trial phone number in Azure Communication Services +++> [!NOTE] +> Trial Phone Numbers are currently only supported by Azure subscriptions with billing addresses based in the United States. If you're outside the US, you can visit the [Subscription eligibility](../../concepts/numbers/sub-eligibility-number-capability.md) documentation to determine where you can purchase a phone number. ++Azure Communication Services provides powerful communication capabilities for developers to integrate voice, video, and SMS functionalities into their applications. One of the key features is the ability to acquire phone numbers for making and receiving calls. This quickstart guide walks you through the process of obtaining a trial phone number for Azure Communication Services. ++## Prerequisites ++- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). +- An active Communication Services resource. [Create a Communication Services resource](../create-communication-resource.md). ++## Get a trial phone number ++1. Navigate to your Communication Service resource in the [Azure portal](https://portal.azure.com). ++2. In the Communication Services resource overview, select the "Phone numbers" option in the left-hand menu. +If you don't have any phone numbers yet, you will see an empty list of phone numbers followed by this call to action for trial phone numbers. +If you already have numbers for your Communication Services resource, you can also activate a trial phone number: ++3. Select "Activate trial phone number". This immediately provisions a trial phone number to your Communication Services resource. Once the trial phone number is provisioned, you can view it on the Phone numbers page. ++## Add a verified phone number to your trial phone number +When using a trial phone number in Azure Communication Services for PSTN Calling capabilities, it is required to verify the recipient phone number. This verification process ensures that the trial phone number can only make calls to the verified number. ++1. Once your trial phone number is provisioned, select the number in the Phone Numbers page and navigate to the "Trial details" tab: +This tab shows the current limitations on the number, including the days left to use the number, the total calling minutes, and how many verified phone numbers are attached to the trial phone number. You can find more information on the trial phone number limitations [here](../../concepts/telephony/trial-phone-numbers-faq.md). ++2. Select "Manage verified phone numbers" to start adding verified phone numbers. ++3. Select "Add" or "Verify a phone number" and enter your phone number and the designated country code associated with it. This recipient phone number is verified by sending a one-time passcode (OTP) to their number either through SMS or automated voicemail. Choose which option you prefer, and then press "Next" to receive the OTP. ++4. Once the user gets the one-time passcode (OTP), enter the code into the Azure portal to verify the number. ++5. 
Once the correct OTP is entered, the phone number is verified, and it shows in the list of verified numbers that the trial number can call. ++## Conclusion +Congratulations! You have successfully obtained a trial phone number for Azure Communication Services. You can now use this phone number to add voice capabilities to your applications. Explore the documentation and resources provided by Microsoft to learn more about Azure Communication Services and integrate it into your projects. ++Remember that trial phone numbers have limitations and are intended for evaluation and development purposes. If you require production-ready phone numbers, you can upgrade your Azure Communication Services subscription and [purchase a phone number](get-phone-number.md). +++## Next steps ++In this quickstart you learned how to: ++> [!div class="checklist"] +> * Activate a trial phone number +> * View your trial phone number limits +> * Add a verified phone number ++> [!div class="nextstepaction"] +> [Make your first outbound call with Call Automation](../call-automation/quickstart-make-an-outbound-call.md) ++> [!div class="nextstepaction"] +> [Add PSTN calling in your app](pstn-call.md) + |
container-registry | Tutorial Enable Registry Cache Auth | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-registry-cache-auth.md | Follow the steps to create cache rule in the [Azure portal](https://portal.azure 5. Enter the **Rule name**. -6. Select **Source** Registry from the dropdown menu. Currently, Cache ACR only supports **Docker Hub** and **Microsoft Artifact Registry**. +6. Select **Source** Registry from the dropdown menu. 7. Enter the **Repository Path** to the artifacts you want to cache. |
container-registry | Tutorial Enable Registry Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-enable-registry-cache.md | Follow the steps to create cache rule in the [Azure portal](https://portal.azure 5. Enter the **Rule name**. -6. Select **Source** Registry from the dropdown menu. Currently, Cache for ACR only supports **Docker Hub** and **Microsoft Artifact Registry**. +6. Select **Source** Registry from the dropdown menu. 7. Enter the **Repository Path** to the artifacts you want to cache. |
container-registry | Tutorial Registry Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-registry-cache.md | Implementing Cache for ACR provides the following benefits: 1. Rule Name - The name of your cache rule. For example, `Hello-World-Cache`. - 2. Source - The name of the Source Registry. Currently, we only support **Docker Hub** and **Microsoft Artifact Registry**. + 2. Source - The name of the Source Registry. 3. Repository Path - The source path of the repository to find and retrieve artifacts you want to cache. For example, `docker.io/library/hello-world`. Implementing Cache for ACR provides the following benefits: 1. Credentials - The name of your credentials. - 2. Source registry Login Server - The login server of your source registry. Currently, we only support `docker.io`. + 2. Source registry Login Server - The login server of your source registry. 3. Source Authentication - The key vault locations to store credentials. 4. Username and Password secrets- The secrets containing the username and password. -## Preview Limitations +## Upstream support -- Quarantine functions like signing, scanning, and manual compliance approval are on the roadmap but not included in this release.+Cache for ACR currently supports the following upstream registries: -- Caching for ACR feature doesn't support Customer managed key (CMK) enabled registries.+| Upstream registries | Support | Availability | +| | | -- | +| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal | +| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | +| ECR Public | Supports unauthenticated pulls only. | Azure CLI | +| Quay.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI | +| GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI | -- Cache will only occur after at least one image pull is complete on the available container image. For every new image available, a new image pull must be complete. Cache for ACR doesn't automatically pull new tags of images when a new tag is available. It is on the roadmap but not supported in this release. +## Preview Limitations -- Cache for ACR only supports Docker Hub and Microsoft Artifact Registry. Multiple other registries including self-hosted registries are on the roadmap but aren't included in this release.+- Cache for ACR feature doesn't support Customer managed key (CMK) enabled registries. -- Cache for ACR only supports 50 cache rules.+- Cache will only occur after at least one image pull is complete on the available container image. For every new image available, a new image pull must be complete. Cache for ACR doesn't automatically pull new tags of images when a new tag is available. It is on the roadmap but not supported in this release. -- Cache for ACR is only available by using the Azure portal and Azure CLI. +- Cache for ACR only supports 50 cache rules. ## Next steps |
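Beyond the portal flow in the cache rows above, a cache rule is an ARM resource, so a rough sketch with `azure-identity` and `requests` can create one; the `api-version` and the `sourceRepository`/`targetRepository` property names are assumptions based on the preview and should be checked against the current REST reference before relying on them:

```python
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    "https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.ContainerRegistry/registries/<registry>"
    "/cacheRules/hello-world-cache"
)
body = {
    "properties": {
        "sourceRepository": "docker.io/library/hello-world",  # upstream path
        "targetRepository": "hello-world",                    # path in your registry
    }
}
resp = requests.put(
    url,
    params={"api-version": "2023-01-01-preview"},  # assumed preview api-version
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
print(resp.status_code)
```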
container-registry | Tutorial Troubleshoot Registry Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-troubleshoot-registry-cache.md | Title: Troubleshoot Registry Cache -description: Learn how to troubleshoot the most common problems for a registry enabled with the caching for ACR feature. +description: Learn how to troubleshoot the most common problems for a registry enabled with the Cache for ACR feature. Last updated 04/19/2022 -# Troubleshoot guide for Caching for ACR (Preview) +# Troubleshoot guide for Cache for ACR (Preview) -This article is part six in a six-part tutorial series. [Part one](tutorial-registry-cache.md) provides an overview of Caching for ACR, its features, benefits, and preview limitations. In [part two](tutorial-enable-registry-cache.md), you learn how to enable Caching for ACR feature by using the Azure portal. In [part three](tutorial-enable-registry-cache-cli.md), you learn how to enable Caching for ACR feature by using the Azure CLI. In [part four](tutorial-enable-registry-cache-auth.md), you learn how to enable Caching for ACR feature with authentication by using Azure portal. In [part five](tutorial-enable-registry-cache-auth-cli.md), you learn how to enable Caching for ACR feature with authentication by using Azure CLI. +This article is part six in a six-part tutorial series. [Part one](tutorial-registry-cache.md) provides an overview of Cache for ACR, its features, benefits, and preview limitations. In [part two](tutorial-enable-registry-cache.md), you learn how to enable Cache for ACR feature by using the Azure portal. In [part three](tutorial-enable-registry-cache-cli.md), you learn how to enable Cache for ACR feature by using the Azure CLI. In [part four](tutorial-enable-registry-cache-auth.md), you learn how to enable Cache for ACR feature with authentication by using Azure portal. In [part five](tutorial-enable-registry-cache-auth-cli.md), you learn how to enable Cache for ACR feature with authentication by using Azure CLI. -This article helps you troubleshoot problems you might encounter when attempting to use Caching for ACR (preview). +This article helps you troubleshoot problems you might encounter when attempting to use Cache for ACR (preview). ## Symptoms and Causes May include one or more of the following issues: - [Unhealthy Credential Set](tutorial-troubleshoot-registry-cache.md#unhealthy-credential-set) - Unable to create a cache rule- - [Unsupported Registries](tutorial-troubleshoot-registry-cache.md#unsupported-registries) - [Cache rule Limit](tutorial-troubleshoot-registry-cache.md#cache-rule-limit) ## Potential Solutions ## Cached images don't appear in a live repository -If you're having an issue with cached images not showing up in your repository in ACR, we recommend verifying the repository path. Incorrect repository paths lead the cached images to not show up in your repository in ACR. Caching for ACR currently supports **Docker Hub** and **Microsoft Artifact Registry**. +If you're having an issue with cached images not showing up in your repository in ACR, we recommend verifying the repository path. Incorrect repository paths lead the cached images to not show up in your repository in ACR. - The Login server for Docker Hub is `docker.io`. - The Login server for Microsoft Artifact Registry is `mcr.microsoft.com`. 
Learn more about [Assigning the access to Azure Key Vault][az-keyvault-set-polic ## Unable to create a Cache rule -### Unsupported Registries +### Cache rule Limit -If you're facing issues while creating a Cache rule, we recommend verifying if you're attempting to cache repositories from an unsupported registry. ACR currently supports **Docker Hub** and **Microsoft Artifact Registry** for Caching. +If you're facing issues while creating a Cache rule, we recommend verifying if you have more than 50 cache rules created. -- The repository path for Docker is `docker.io/library`-- The repository path for Microsoft Artifact Registry is `mcr.microsoft.com/library`+We recommend deleting any unwanted cache rules to avoid hitting the limit. Learn more about the [Cache Terminology](tutorial-registry-cache.md#terminology) -### Cache rule Limit +## Upstream support -If you're facing issues while creating a Cache rule, we recommend verifying if you've more than 50 cache rules created. +Cache for ACR currently supports the following upstream registries: ++| Upstream registries | Support | Availability | +| | | -- | +| Docker | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal | +| Microsoft Artifact Registry | Supports unauthenticated pulls only. | Azure CLI, Azure portal | +| ECR Public | Supports unauthenticated pulls only. | Azure CLI | +| Quay.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI | +| GitHub Container Registry | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI | -We recommend deleting any unwanted cache rules to avoid hitting the limit. <!-- LINKS - External --> [create-and-store-keyvault-credentials]:../key-vault/secrets/quick-create-portal.md |
defender-for-cloud | Alerts Suppression Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md | You can also select the **Suppression rules** button in the Security Alerts page :::image type="content" source="media/alerts-suppression-rules/create-new-suppression-rule.png" alt-text="Screenshot of the Create suppression rule button in the Suppression rules page."::: +> [!NOTE] +> For some alerts, suppression rules aren't applicable to certain entities. If the rule isn't available, a message appears at the end of the **Create a suppression rule** process. + ## Edit a suppression rule To edit a rule you've created from the suppression rules page: |
defender-for-cloud | Concept Agentless Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-agentless-containers.md | Title: Agentless Container Posture for Microsoft Defender for Cloud description: Learn how Agentless Container Posture offers discovery, visibility, and vulnerability assessment for Containers without installing an agent on your machines. Previously updated : 06/21/2023 Last updated : 07/03/2023 Container vulnerability assessment powered by MDVM (Microsoft Defender Vulnerabi - **Query vulnerability information via the Azure Resource Graph** - Ability to query vulnerability information via the [Azure Resource Graph](/azure/governance/resource-graph/overview#how-resource-graph-complements-azure-resource-manager). Learn how to [query recommendations via the ARG](review-security-recommendations.md#review-recommendation-data-in-azure-resource-graph-arg). - **Query vulnerability information via sub-assessment API** - You can get scan results via REST API. See the [subassessment list](/rest/api/defenderforcloud/sub-assessments/get?tabs=HTTP). -- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](how-to-enable-agentless-containers.md#support-for-exemptions). +- **Support for exemptions** - Learn how to [create exemption rules for a management group, resource group, or subscription](how-to-enable-agentless-containers.md#support-for-exemptions). +- **Support for disabling vulnerability findings** - Learn how to [disable vulnerability assessment findings on Container registry images](disable-vulnerability-findings-containers.md). ### Scan Triggers Container registry vulnerability assessment scans container images stored in you 1. When you enable the vulnerability assessment extension in Defender CSPM, you authorize Defender CSPM to scan container images in your Azure Container registries. 1. Defender CSPM automatically discovers all container registries, repositories, and images (created before or after enabling the plan). -1. Once a day, all discovered images are pulled and an inventory is created for each image that is discovered. -1. Vulnerability reports for known vulnerabilities (CVEs) are generated for each software that is present on an image inventory. -1. Vulnerability reports are refreshed daily for any image pushed during the last 90 days to a registry or currently running on a Kubernetes cluster monitored by Defender CSPM Agentless discovery and visibility for Kubernetes, or monitored by the Defender for Containers agent (profile or extension). - +1. Once a day: ++ 1. All newly discovered images are pulled, and an inventory is created for each image. Image inventory is kept to avoid further image pulls, unless required by new scanner capabilities. ++ 1. Using the inventory, vulnerability reports are generated for new images, and updated for previously scanned images that were either pushed to a registry in the last 90 days or are currently running. ++> [!NOTE] +> To determine if an image is currently running, Agentless Vulnerability Assessment uses [Agentless Discovery and Visibility within Kubernetes components](/azure/defender-for-cloud/concept-agentless-containers). ### If I remove an image from my registry, how long before vulnerability reports on that image are removed? It currently takes 3 days to remove findings for a deleted image. We are working 
It currently takes 3 days to remove findings for a deleted image. We are working ## Next steps - Learn about [support and prerequisites for agentless containers posture](support-agentless-containers-posture.md)+ - Learn how to [enable agentless containers](how-to-enable-agentless-containers.md)++ |
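Because the agentless container posture row above calls out querying findings through Azure Resource Graph, here's a hedged sketch with the `azure-mgmt-resourcegraph` package; the subscription ID is a placeholder, and the `securityresources` sub-assessment filter is a common query shape rather than the only one:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())
request = QueryRequest(
    subscriptions=["<subscription-id>"],
    query=(
        "securityresources "
        "| where type == 'microsoft.security/assessments/subassessments' "
        "| take 10"
    ),
)
response = client.resources(request)
for row in response.data:  # rows come back as dicts
    print(row["id"])
```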
defender-for-cloud | Disable Vulnerability Findings Containers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/disable-vulnerability-findings-containers.md | + + Title: Disable vulnerability assessment findings on Container registry images and running images in Microsoft Defender for Cloud +description: Microsoft Defender for Cloud includes a fully integrated agentless vulnerability assessment solution powered by MDVM (Microsoft Defender Vulnerability Management). + Last updated : 07/09/2023+++# Disable vulnerability assessment findings on container registry images ++If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise. ++When a finding matches the criteria you've defined in your disable rules, it doesn't appear in the list of findings. Typical scenario examples include: ++- Disable findings with severity below medium +- Disable findings for images that the vendor will not fix ++> [!IMPORTANT] +> To create a rule, you need permissions to edit a policy in Azure Policy. +> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy). +++You can use a combination of any of the following criteria: ++- **CVE** - Enter the CVEs of the findings you want to exclude. Ensure the CVEs are valid. Separate multiple CVEs with a semicolon. For example, CVE-2020-1347; CVE-2020-1346. +- **Image digest** - Specify images for which vulnerabilities should be excluded based on the image digest. Separate multiple digests with a semicolon, for example: sha256:9b920e938111710c2768b31699aac9d1ae80ab6284454e8a9ff42e887fa1db31;sha256:ab0ab32f75988da9b146de7a3589c47e919393ae51bbf2d8a0d55dd92542451c +- **OS version** - Specify images for which vulnerabilities should be excluded based on the image OS. Separate multiple versions with a semicolon, for example: ubuntu_linux_20.04;alpine_3.17 +- **Minimum Severity** - Select low, medium, high, or critical to exclude vulnerabilities less than or equal to the specified severity level. +- **Fix status** - Select the option to exclude vulnerabilities based on their fix status. +++Disable rules apply per recommendation. For example, to disable [CVE-2017-17512](https://github.com/advisories/GHSA-fc69-2v7r-7r95) on both the registry images and runtime images, the disable rule has to be configured in both places. ++> [!NOTE] +> The [Azure Preview Supplemental Terms](//azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ++ To create a rule: ++1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved powered by Microsoft Defender Vulnerability Management](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c0b7cfc6-3172-465a-b378-53c7ff2cc0d5) or [Running container images should have vulnerability findings resolved powered by Microsoft Defender Vulnerability Management +](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/c609cf0f-71ab-41e9-a3c6-9a1f7fe1b8d5), select **Disable rule**. ++1. Select the relevant scope. ++1. Define your criteria. You can use any of the following criteria: + + - **CVE** - Enter the CVEs of the findings you want to exclude. 
Ensure the CVEs are valid. Separate multiple CVEs with a semicolon. For example, CVE-2020-1347; CVE-2020-1346. + - **Image digest** - Specify images for which vulnerabilities should be excluded based on the image digest. Separate multiple digests with a semicolon, for example: sha256:9b920e938111710c2768b31699aac9d1ae80ab6284454e8a9ff42e887fa1db31;sha256:ab0ab32f75988da9b146de7a3589c47e919393ae51bbf2d8a0d55dd92542451c + - **OS version** - Specify images for which vulnerabilities should be excluded based on the image OS. Separate multiple versions with a semicolon, for example: ubuntu_linux_20.04;alpine_3.17 + - **Minimum Severity** - Select low, medium, high, or critical to exclude vulnerabilities less than or equal to the specified severity level. + - **Fix status** - Select the option to exclude vulnerabilities based on their fix status. ++1. In the justification text box, add your justification for why a specific vulnerability was disabled. This provides clarity and understanding for anyone reviewing the rule. + +1. Select **Apply rule**. ++ :::image type="content" source="./media/disable-vulnerability-findings-containers/disable-rules.png" alt-text="Screenshot showing where to create a disable rule for vulnerability findings on registry images." lightbox="media/disable-vulnerability-findings-containers/disable-rules.png"::: ++ > [!IMPORTANT] + > Changes might take up to 24 hours to take effect. ++**To view, override, or delete a rule:** ++1. From the recommendations detail page, select **Disable rule**. +1. From the scope list, subscriptions with active rules show as **Rule applied**. +1. To view or delete the rule, select the ellipsis menu ("..."). +1. Do one of the following: + - To view or override a disable rule - select **View rule**, make any changes you want, and select **Override rule**. + - To delete a disable rule - select **Delete rule**. ++ :::image type="content" source="./media/disable-vulnerability-findings-containers/override-rules.png" alt-text="Screenshot showing where to view, delete or override a rule for vulnerability findings on registry images." lightbox="media/disable-vulnerability-findings-containers/override-rules.png"::: +++## Next steps ++- Learn how to [view and remediate vulnerability assessment findings for registry images](view-and-remediate-vulnerability-assessment-findings.md). +- Learn about [agentless container posture](concept-agentless-containers.md). + |
defender-for-cloud | Other Threat Protections | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/other-threat-protections.md | Title: Other threat protections from Microsoft Defender for Cloud description: Learn about the threat protections available from Microsoft Defender for Cloud Previously updated : 05/01/2023 Last updated : 05/22/2023 # Other threat protections in Microsoft Defender for Cloud In addition to its built-in [advanced protection plans](defender-for-cloud-intro <a name="network-layer"></a> ## Threat protection for Azure network layer+ Defender for Cloud's network-layer analytics are based on sample [IPFIX data](https://en.wikipedia.org/wiki/IP_Flow_Information_Export), which are packet headers collected by Azure core routers. Based on this data feed, Defender for Cloud uses machine learning models to identify and flag malicious traffic activities. Defender for Cloud also uses the Microsoft Threat Intelligence database to enrich IP addresses. Some network configurations restrict Defender for Cloud from generating alerts on suspicious network activity. For Defender for Cloud to generate network alerts, ensure that: If you have created [WAF Security solution](partner-integration.md#add-data-sour > [!NOTE] > Only WAF v1 is supported and will work with Microsoft Defender for Cloud. +To deploy Azure's Application Gateway WAF, do the following: ++1. From the Azure portal, open **Defender for Cloud**. ++1. From Defender for Cloud's menu, select **Security solutions**. ++1. In the **Add data sources** section, select **Add** for Azure's Application Gateway WAF. ++ :::image type="content" source="media/other-threat-protections/deploy-azure-waf.png" alt-text="Screenshot showing where to select add to deploy WAF." lightbox="media/other-threat-protections/deploy-azure-waf.png"::: ++ <a name="azure-ddos"></a> ### Display Azure DDoS Protection alerts in Defender for Cloud |
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | Title: Release notes for Microsoft Defender for Cloud description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 06/28/2023 Last updated : 07/09/2023 # What's new in Microsoft Defender for Cloud? Updates in July include: |Date |Update | |||+|July 9 | [Support for disabling specific vulnerability findings](#support-for-disabling-specific-vulnerability-findings) | July 1 | [Data Aware Security Posture is now Generally Available](#data-aware-security-posture-is-now-generally-available) | +### Support for disabling specific vulnerability findings ++July 9, 2023 ++Release of support for disabling vulnerability findings for your container registry images or running images as part of agentless container posture. If you have an organizational need to ignore a vulnerability finding on your container registry image, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise. ++Learn how to [disable vulnerability assessment findings on Container registry images](disable-vulnerability-findings-containers.md). ++ ### Data Aware Security Posture is now Generally Available July 1, 2023 Updates in June include: June 26, 2023 -Defender for Cloud have improved the onboarding experience to include a new streamlined user interface and instructions in addition to new capabilities that allow you to onboard your AWS and GCP environments while providing access to advanced onboarding features. +Defender for Cloud has improved the onboarding experience to include a new streamlined user interface and instructions in addition to new capabilities that allow you to onboard your AWS and GCP environments while providing access to advanced onboarding features. For organizations that have adopted Hashicorp Terraform for automation, Defender for Cloud now includes the ability to use Terraform as the deployment method alongside AWS CloudFormation or GCP Cloud Shell. You can now customize the required role names when creating the integration. You can also select between: |
logic-apps | Edit App Settings Host Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md | These settings affect the throughput and capacity for single-tenant Azure Logic ### Trigger concurrency +The following settings work only for workflows that start with a recurrence-based trigger for [built-in, service provider-based connectors](/azure/logic-apps/connectors/built-in/reference/). For a workflow that starts with a function-based trigger, you might try to [set up batching where supported](logic-apps-batch-process-send-receive-messages.md). However, batching isn't always the correct solution. For example, with Azure Service Bus triggers, a batch might hold onto messages beyond the lock duration. As a result, any action, such as complete or abandon, fails on such messages. + | Setting | Default value | Description | |||-| | `Runtime.Trigger.MaximumRunConcurrency` | `100` runs | Sets the maximum number of concurrent runs that a trigger can start. This value appears in the trigger's concurrency definition. | |
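For orientation on where a setting like `Runtime.Trigger.MaximumRunConcurrency` lives, single-tenant Azure Logic Apps host settings go in the host.json file's `extensions.workflow.settings` object; a minimal sketch, with `50` as an example value only:

```json
{
  "version": "2.0",
  "extensions": {
    "workflow": {
      "settings": {
        "Runtime.Trigger.MaximumRunConcurrency": "50"
      }
    }
  }
}
```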
machine-learning | How To Deploy For Real Time Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference.md | You can continue to configure the endpoint in wizard, as the endpoint creation w If the checkbox is selected, the first row of your input data will be used as sample input data for testing the endpoint later. +### Outputs ++In this step, you can view all flow outputs, and specify which outputs will be included in the response of the endpoint you deploy. ++ ### Connections In this step, you can view all connections within your flow, and change connections used by the endpoint when it performs inference later. |
machine-learning | Faiss Index Lookup Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/faiss-index-lookup-tool.md | Last updated 06/30/2023 Faiss Index Lookup is a tool tailored for querying within a user-provided Faiss-based vector store. In combination with our Large Language Model (LLM) tool, it empowers users to extract contextually relevant information from a domain knowledge base. -> [!IMPORTANT] -> Prompt flow is currently in public preview. This preview is provided without a service-level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. -> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). - ## Requirements--- embeddingstore==0.0.93026209 --extra-index-url https://azuremlsdktestpypi.azureedge.net/embeddingstore+- embeddingstore --extra-index-url https://azuremlsdktestpypi.azureedge.net/embeddingstore ## Prerequisites- - Prepare an accessible path on Azure Blob Storage. Here's the guide if a new storage account needs to be created: [Azure Storage Account](../../../storage/common/storage-account-create.md).-- Create related Faiss-based index files on Azure Blob Storage. We support the LangChain format (index.faiss + index.pkl) for the index files, which can be prepared either by employing our EmbeddingStore SDK or following the quick guide from [LangChain documentation](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss). Refer to the instructions of `Vector DB QnA Step 1` for building index using EmbeddingStore SDK.-- The identity used by the Prompt flow runtime should be granted with certain roles to access the vector store, based on which kind of path you provide. Refer to [Steps to assign an Azure role](../../../role-based-access-control/role-assignments-steps.md):- - Workspace relative path, blob url on workspace default storage, and AML Datastore url: `AzureML Data Scientist` - - Other blob urls: `Storage Blob Data Reader role` +- Create related Faiss-based index files on Azure Blob Storage. We support the LangChain format (index.faiss + index.pkl) for the index files, which can be prepared either by employing our EmbeddingStore SDK or following the quick guide from [LangChain documentation](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss) (a minimal LangChain sketch follows this row). Please refer to [the sample notebook for creating Faiss index](https://aka.ms/pf-sample-build-faiss-index) for building an index using the EmbeddingStore SDK. +- Based on where you put your own index files, the identity used by the promptflow runtime should be granted certain roles. 
Please refer to [Steps to assign an Azure role](../../../role-based-access-control/role-assignments-steps.md): ++ | Location | Role | + | - | - | + | workspace datastores or workspace default blob | AzureML Data Scientist | + | other blobs | Storage Blob Data Reader | ## Inputs The tool accepts the following inputs: | Name | Type | Description | Required | | - | - | -- | -- |-| path | string | blob/datastore url or workspace relative path for the vector store.<br><br>blob url format:<br>https://`<account_name>`.blob.core.windows.net/`<container_name>`/`<path_and_folder_name>`.<br><br>AML datastore url format:<br>azureml://subscriptions/`<your_subscription>`/resourcegroups/`<your_resource_group>`/workspaces/`<your_workspace>`/data/`<data_path>`<br><br>relative path to workspace datastore `workspaceblobstore`:<br>`<path_and_folder_name>` | Yes | +| path | string | URL or path for the vector store.<br><br>blob URL format:<br>https://`<account_name>`.blob.core.windows.net/`<container_name>`/`<path_and_folder_name>`.<br><br>AML datastore URL format:<br>azureml://subscriptions/`<your_subscription>`/resourcegroups/`<your_resource_group>`/workspaces/`<your_workspace>`/data/`<data_path>`<br><br>relative path to workspace datastore `workspaceblobstore`:<br>`<path_and_folder_name>`<br><br> public http/https URL (for public demonstration):<br>http(s)://`<path_and_folder_name>` | Yes | | vector | list[float] | The target vector to be queried, which can be generated by the LLM tool. | Yes | | top_k | integer | The count of top-scored entities to return. Default value is 3. | No | The following is an example of the JSON-format response returned by the tool, which | score | float | Distance between the entity and the query vector | | metadata | dict | Customized key-value pairs provided by user when creating the index | - ```json [ { |
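A hedged illustration for the row above: the LangChain route mentioned in the prerequisites can produce the expected LangChain-format pair (index.faiss + index.pkl) with a few lines of Python. The embedding model, sample texts, and output folder name are assumptions for this sketch, not part of the original article.

```python
# Minimal sketch, assuming the langchain and faiss-cpu packages are installed
# and an OpenAI API key is available in the OPENAI_API_KEY environment variable.
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

texts = [
    "Azure Cognitive Search supports vector queries.",
    "Prompt flow tools can look up context from a vector store.",
]

# Build the in-memory Faiss store from raw texts and their embeddings.
db = FAISS.from_texts(texts, OpenAIEmbeddings())

# save_local writes the LangChain-format pair (index.faiss + index.pkl);
# upload the resulting folder to Azure Blob Storage for the tool to use.
db.save_local("faiss_index")
```

Any LangChain-supported embedding class works in place of `OpenAIEmbeddings`; the tool only depends on the saved file format.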
machine-learning | Vector Db Lookup Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/vector-db-lookup-tool.md | This tool adds support for more vector databases, including Pinecone, Weaviate, ## Requirements -- embeddingstore==0.0.93026209 --extra-index-url https://azuremlsdktestpypi.azureedge.net/embeddingstore+- embeddingstore --extra-index-url https://azuremlsdktestpypi.azureedge.net/embeddingstore ## Prerequisites The tool searches data from a third-party vector database. To use it, you should - **Azure Cognitive Search:** - Create resource [Azure Cognitive Search](../../../search/search-create-service-portal.md).- - Add "CognitiveSearchConnection" connection. Fill "API key" field with "Primary admin key" from "Keys" section of created resource, and fill "Api Base" field with the Url, the Url format is `https://{your_serive_name}.search.windows.net`. + - Add "CognitiveSearchConnection" connection. Fill the "API key" field with the "Primary admin key" from the "Keys" section of the created resource, and fill the "Api Base" field with the URL; the URL format is `https://{your_service_name}.search.windows.net`. ## Inputs |
machine-learning | Vector Index Lookup Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/vector-index-lookup-tool.md | Vector index lookup is a tool tailored for querying within an Azure Machine Lear ## Requirements -- embeddingstore==0.0.93026209 --extra-index-url https://azuremlsdktestpypi.azureedge.net/embeddingstore+- embeddingstore --extra-index-url https://azuremlsdktestpypi.azureedge.net/embeddingstore ## Prerequisites - Follow the instructions from sample flow `Bring your own Data QnA` to prepare a Vector Index as an input.-- The identity used by the Prompt flow runtime should be granted with certain roles to access the Vector Index, based on which kind of path you provide. Refer to [Steps to assign an Azure role](../../../role-based-access-control/role-assignments-steps.md):- - blob url on workspace default storage, AML asset url, and AML Datastore url: `AzureML Data Scientist` - - other blob urls: `Storage Blob Data Reader role` +- Based on where you put your Vector Index, the identity used by the promptflow runtime should be granted certain roles. Please refer to [Steps to assign an Azure role](../../../role-based-access-control/role-assignments-steps.md): ++ | Location | Role | + | - | - | + | workspace datastores or workspace default blob | AzureML Data Scientist | + | other blobs | Storage Blob Data Reader | ## Inputs The tool accepts the following inputs: | Name | Type | Description | Required | | - | - | -- | -- |-| path | string | blob/AML asset/datastore url for the VectorIndex.<br><br>blob url format:<br>https://`<account_name>`.blob.core.windows.net/`<container_name>`/`<path_and_folder_name>`.<br><br>AML asset url format:<br>azureml://subscriptions/`<your_subscription>`/resourcegroups/`<your_resource_group>>`/workspaces/`<your_workspace>`/datastores/`<your_datastore>`/paths/`<asset_name and optional version/label>`<br><br>AML datastore url format:<br>azureml://subscriptions/`<your_subscription>`/resourcegroups/`<your_resource_group>`/workspaces/`<your_workspace>`/data/`<data_path>` | Yes | +| path | string | blob/AML asset/datastore URL for the VectorIndex.<br><br>blob URL format:<br>https://`<account_name>`.blob.core.windows.net/`<container_name>`/`<path_and_folder_name>`.<br><br>AML asset URL format:<br>azureml://subscriptions/`<your_subscription>`/resourcegroups/`<your_resource_group>`/workspaces/`<your_workspace>`/datastores/`<your_datastore>`/paths/`<asset_name and optional version/label>`<br><br>AML datastore URL format:<br>azureml://subscriptions/`<your_subscription>`/resourcegroups/`<your_resource_group>`/workspaces/`<your_workspace>`/data/`<data_path>` | Yes | | query | string, list[float] | The text to be queried.<br>or<br>The target vector to be queried, which can be generated by the LLM tool. | Yes | | top_k | integer | The count of top-scored entities to return. Default value is 3. | No | |
search | Search Indexer Howto Access Private | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md | Title: Connect through a private endpoint + Title: Connect through a shared private link -description: Configure indexer connections to access content from other Azure resources that are protected through a private endpoint. +description: Configure indexer connections to access content from other Azure resources that are protected through a shared private link. --++ - Previously updated : 04/18/2023 Last updated : 07/07/2023 -# Make outbound connections through a private link +# Make outbound connections through a shared private link This article explains how to configure private, outbound calls from Azure Cognitive Search to Azure PaaS resources that run within a virtual network. -Setting up a private connection allows Azure Cognitive Search to connect to Azure PaaS through a virtual network IP address instead of a port that's open to the internet. The object created for the connection is called a *shared private link*. On the connection, Search uses the shared private link internally to reach an Azure PaaS resource inside the network boundary. +Setting up a private connection allows Azure Cognitive Search to connect to Azure PaaS through a virtual network IP address instead of a port that's open to the internet. The object created for the connection is called a *shared private link*. On the connection, Search uses the shared private link internally to reach an Azure PaaS resource inside the network boundary. ++Shared private link is a premium feature that's billed by usage. The costs of reading from a data source through the private endpoint are billed to your Azure subscription. As the indexer reads data from the data source, network egress charges are billed at the ["inbound data processed"](https://azure.microsoft.com/pricing/details/private-link/) rate. > [!NOTE] > If you're setting up a private indexer connection to a SQL Managed Instance, see [this article](search-indexer-how-to-access-private-sql.md) instead. When evaluating shared private links for your scenario, remember these constraints: + You should have a minimum of Contributor permissions on both Azure Cognitive Search and the Azure PaaS resource for which you're creating the shared private link. -> [!NOTE] -> Azure Private Link is used internally, at no charge, to set up the shared private link. - <a name="group-ids"></a> ### Supported resource types A `202 Accepted` response is returned on success (a hedged Python sketch of this REST call follows this row). The process of creating an out ## 2 - Approve the private endpoint connection -The resource owner must approve the connection request you created. This section assumes the portal for this step, but you can also use the REST APIs of the Azure PaaS resource. [Private Endpoint Connections (Storage Resource Provider)](/rest/api/storagerp/privateendpointconnections) and [Private Endpoint Connections (Cosmos DB Resource Provider)](/rest/api/cosmos-db-resource-provider/2023-03-15/private-endpoint-connections) are two examples. 1. 
In the Azure portal, open the **Networking** page of the Azure PaaS resource. |
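For the REST path this row alludes to, creating a shared private link is a management-plane `PUT` that returns `202 Accepted`. The following Python sketch is illustrative only: the api-version, placeholder resource IDs, and the token value are assumptions, so verify them against the current Azure Cognitive Search management REST reference before use.

```python
# Illustrative sketch only; all <...> values and the api-version are assumptions.
import requests

token = "<bearer-token>"  # e.g., obtained via azure-identity's DefaultAzureCredential
url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.Search"
    "/searchServices/<search-service>/sharedPrivateLinkResources/<link-name>"
    "?api-version=2020-08-01"
)
body = {
    "properties": {
        # The protected Azure PaaS resource the indexer should reach privately.
        "privateLinkResourceId": (
            "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
            "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
        ),
        "groupId": "blob",
        "requestMessage": "Please approve this shared private link.",
    }
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)  # 202 means the asynchronous creation has started
```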
search | Search Sku Manage Costs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-manage-costs.md | Billing is based on capacity (SUs) and the costs of running premium features, su |-|| | Image extraction (AI enrichment) <sup>1, 2</sup> | Per 1000 images. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing). | | Custom Entity Lookup skill (AI enrichment) <sup>1</sup> | Per 1000 text records. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing) |-| Built-in skills (AI enrichment) <sup>1</sup> | Number of transactions, billed at the same rate as if you had performed the task by calling Cognitive Services directly. You can process 20 documents per indexer per day for free. Larger or more frequent workloads require a multi-resource Cognitive Services key. | -| Semantic Search <sup>1</sup> | Number of queries of "queryType=semantic", billed at a progressive rate. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing). | -| Private Endpoints <sup>1</sup> | Billed as long as the endpoint exists, and billed for bandwidth. | +| [Built-in skills](cognitive-search-predefined-skills.md) (AI enrichment) <sup>1</sup> | Number of transactions, billed at the same rate as if you had performed the task by calling Cognitive Services directly. You can process 20 documents per indexer per day for free. Larger or more frequent workloads require a multi-resource Cognitive Services key. | +| [Semantic search](semantic-search-overview.md) <sup>1</sup> | Number of queries of "queryType=semantic", billed at a progressive rate. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing). | +| [Shared private link](search-indexer-howto-access-private.md) <sup>1</sup> | [Billed for bandwidth](https://azure.microsoft.com/pricing/details/private-link/) as long as the shared private link exists and is used. | <sup>1</sup> Applies only if you use or enable the feature. -<sup>2</sup> In an [indexer configuration](/rest/api/searchservice/create-indexer#indexer-parameters), "imageAction" is the parameter that triggers image extraction. If "imageAction" is set to "none" (the default), you won't be charged for image extraction. +<sup>2</sup> In an [indexer configuration](/rest/api/searchservice/create-indexer#indexer-parameters), "imageAction" is the parameter that triggers image extraction. If "imageAction" is set to "none" (the default), you won't be charged for image extraction. Costs are incurred when the "imageAction" parameter is set *and* you include OCR, Image Analysis, or Document Extraction in a skillset. There is no meter on the number of queries, query responses, or documents ingested, although [service limits](search-limits-quotas-capacity.md) do apply at each tier. |
search | Vector Search Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-overview.md | In order to create effective embeddings for vector search, it's important to tak For example, documents that talk about different species of dogs would be clustered close together in the embedding space. Documents about cats would be close together, but farther from the dogs cluster while still being in the neighborhood for animals. Dissimilar concepts such as cloud computing would be much farther away. In practice, these embedding spaces are very abstract and don't have well-defined, human-interpretable meanings, but the core idea stays the same. -Popular vector similarity metrics include: `euclidean distance`, `cosine similarity`, and `dot product`. +Popular vector similarity metrics include the following, which are all supported by Azure Cognitive Search (a short numeric sketch of these metrics follows this row). +++ `euclidean`: Also known as _l2-norm_, this measures the length of the vector difference between two vectors.++ `cosine`: This measures the angle between two vectors, and is not affected by differing vector lengths.++ `dotProduct`: This measures both the lengths of the two vectors and the angle between them. For normalized vectors, this is identical to `cosine` similarity, but slightly more performant. ### Approximate Nearest Neighbors |
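To make the three metrics in the row above concrete, here's a small self-contained NumPy sketch (the vectors are arbitrary examples). It also demonstrates the row's claim that `dotProduct` matches `cosine` similarity once the vectors are normalized.

```python
import numpy as np

a = np.array([1.0, 2.0, 2.0])
b = np.array([2.0, 0.0, 1.0])

euclidean = np.linalg.norm(a - b)                           # length of the difference vector
cosine = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))  # angle only; length-invariant
dot_product = a @ b                                         # reflects both lengths and angle

print(f"euclidean={euclidean:.4f} cosine={cosine:.4f} dotProduct={dot_product:.4f}")

# After normalizing to unit length, the dot product equals cosine similarity.
a_hat = a / np.linalg.norm(a)
b_hat = b / np.linalg.norm(b)
assert np.isclose(a_hat @ b_hat, cosine)
```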
sentinel | Connect Cef Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-cef-ama.md | This example collects events for: 1. To capture messages sent from a logger or a connected device, run this command in the background: ```- tcpdump -I any port 514 -A vv & + tcpdump -i any port 514 -A vv & ``` 1. After you complete the validation, we recommend that you stop the `tcpdump`: Type `fg` and then select <kbd>Ctrl</kbd>+<kbd>C</kbd>. 1. To send demo messages, do one of the following: |
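If no logger or connected device is available for the validation step above, one possible way to produce a demo message yourself is the following Python sketch; the CEF payload fields, host, and port are assumptions for illustration, not the article's own test procedure.

```python
# Sends one CEF-over-syslog demo message via UDP so the tcpdump capture
# above has something to show. All payload fields are illustrative.
import socket

cef = (
    "CEF:0|Contoso|DemoProduct|1.0|100|Demo signature|5|"
    "src=10.0.0.1 dst=10.0.0.2 msg=Test message for AMA validation"
)
syslog_line = f"<134>{cef}"  # priority 134 = facility local0, severity informational

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.sendto(syslog_line.encode("utf-8"), ("127.0.0.1", 514))
```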
sentinel | Enroll Simplified Pricing Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/enroll-simplified-pricing-tier.md | The following sample template sets Microsoft Sentinel to the classic pricing tie "properties": { "workspaceResourceId": "/subscriptions/{SubscriptionId}/resourcegroups/{ResourceGroup}/providers/microsoft.operationalinsights/workspaces/{YourWorkspaceName}", "sku": { - "name": "pergb2018", + "name": "PerGB", "capacityReservationLevel": } } It's possible your Microsoft account team has negotiated a discounted price for - Learn more about managing costs with [cost analysis](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Learn about how to [prevent unexpected costs](../cost-management-billing/understand/analyze-unexpected-charges.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn). - Take the [Cost Management](/training/paths/control-spending-manage-bills?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn) guided learning course.-- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md).+- For more tips on reducing Log Analytics data volume, see [Azure Monitor best practices - Cost management](../azure-monitor/best-practices-cost.md). |
sentinel | Monitor Sap System Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-sap-system-health.md | This article describes how to use the following features, which allow you to per 1. From the Microsoft Sentinel portal, select **Data connectors**. 1. In the search bar, type *Microsoft Sentinel for SAP*. 1. Select the **Microsoft Sentinel for SAP** connector and select **Open connector**.-1. In the **Configuration > System Health** area, you can view information on the health of your SAP systems. +1. In the **Configuration > Configure an SAP system and assign it to a collector agent** area, you can view information on the health of your SAP systems. Learn how to [add new SAP systems](sap/deploy-data-connector-agent-container.md). - :::image type="content" source="media/monitor-sap-system-health/health-status.png" alt-text="Screenshot of the Configuration area showing the status of the connected SAP systems." lightbox="media/monitor-sap-system-health/health-status.png"::: + The following table describes the different fields in the **Configure an SAP system and assign it to a collector agent** area. ++### System health status and details |Field |Description |Values |Notes | |||||-|Agent name |Unique ID of the installed data connector agent. | | | |SID |The name of the connected SAP system ID (SID). | | |-|Health |Indicates whether the SID is healthy. To troubleshoot health issues, [review the container execution logs](sap/sap-deploy-troubleshoot.md#view-all-container-execution-logs) and review other [troubleshooting steps](sap/sap-deploy-troubleshoot.md). |The **System healthy** status indicates that Microsoft Sentinel identified both logs and a heartbeat from the system. Other statuses, like **System unreachable for over 1 day**, indicate the connectivity status. | | |System role |Indicates whether the system is productive or not. The data connector agent retrieves the value by reading the SAP T000 table. This value also impacts billing. To change the role, an SAP admin needs to change the configuration in the SAP system. |• **Production**. The system is defined by the SAP admin as a production system.<br>• **Unknown (Production)**. Microsoft Sentinel couldn't retrieve the system status. Microsoft Sentinel regards this type of system as a production system for both security and billing purposes.<br>• **Non production**. Indicates roles like developing, testing, and customizing.<br>• **Agent update available**. Displayed in addition to the health status to indicate that a newer SAP connector version exists. In this case, we recommended that you [update the connector](sap/update-sap-data-connector.md). | If the system role is **Production (unknown)**, check the Microsoft Sentinel role definitions and permissions on the SAP system, and validate that the system allows Microsoft Sentinel to read the content of the T000 table. Next, consider [updating the SAP connector](sap/update-sap-data-connector.md) to the latest version. |+|Agent name |Unique ID of the installed data connector agent. | | | +|Health |Indicates whether the SID is healthy. To troubleshoot health issues, [review the container execution logs](sap/sap-deploy-troubleshoot.md#view-all-container-execution-logs) and review other [troubleshooting steps](sap/sap-deploy-troubleshoot.md). 
|• **System healthy** (green icon): Indicates that Microsoft Sentinel identified both logs and a heartbeat from the system.<br>• **System Connected – unauthorized to collect role, production assumed** (yellow icon): Microsoft Sentinel doesn't have sufficient permissions to define whether the system is a production system. In this case, Microsoft Sentinel defines the system as a production system. To allow Microsoft Sentinel to receive the system status, review the Notes column.<br>• **Connected with errors** (yellow icon): Microsoft Sentinel detected errors when fetching the system role. In this case, Microsoft Sentinel received data regarding whether the system is or isn't a production system.<br>• **System not connected**: Microsoft Sentinel was unable to connect to the SAP system, and cannot fetch the system role. In this case, Microsoft Sentinel didn't receive data regarding whether the system is or isn't a production system.<br><br>Other statuses, like **System unreachable for over 1 day**, indicate the connectivity status. |If the system health status is **System Connected – unauthorized to collect role, production assumed**, check the Microsoft Sentinel role definitions and permissions on the SAP system, and validate that the system allows Microsoft Sentinel to read the content of the T000 table. Next, consider [updating the SAP connector](sap/update-sap-data-connector.md) to the latest version. | ## Use an alert rule template |
sentinel | Configure Snc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/configure-snc.md | This section explains how to import a certificate so that it's trusted by your A 1. Ensure **Entry for RFC activated** and **Entry for certificate activated** checkboxes are marked, then select **Save**. +### Map users of the ABAP service provider to external user IDs ++1. Run the **SM30** transaction. ++1. In the **Table/View** field, type **VUSREXTID**, then select **Maintain**. ++1. In the **Determine Work Area: Entry** page, select the **DN** ID type as the **Work Area**. ++1. Type these details: ++ - **External ID**: *CN=Sentinel*, *C=US* + - **Seq. No**: *000* + - **User**: *SENTINEL* ++1. Select **Save** and **Enter**. ++ :::image type="content" source="media/configure-snc/vusrextid-table-configuration.png" alt-text="Screenshot of configuring the SAP VUSREXTID table."::: + ### Set up the container > [!NOTE] |
sentinel | Deploy Data Connector Agent Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deploy-data-connector-agent-container.md | Last updated 01/18/2023 This article shows you how to deploy the container that hosts the SAP data connector agent. You do this to ingest SAP data into Microsoft Sentinel, as part of the Microsoft Sentinel solution for SAP® applications. -This article shows you how to deploy the container and create SAP systems via the UI. Alternatively, you can [deploy the data connector agent using other methods](deploy-data-connector-agent-container-other-methods.md): Managed identity, a registered application, a configuration file, or directly on the VM. +This article shows you how to deploy the container and create SAP systems via the UI. For a video walkthrough of the agent deployment process, see [this video](https://www.youtube.com/watch?v=bg0vmUvcQ5Q). ++Alternatively, you can [deploy the data connector agent using other methods](deploy-data-connector-agent-container-other-methods.md): Managed identity, a registered application, a configuration file, or directly on the VM. > [!IMPORTANT] > Deploying the container and creating SAP systems via the UI is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. In this section, you deploy the data connector agent. After you deploy the agent 1. Copy the **appId**, **tenant**, and **password** from the output. You'll need these for assigning the key vault access policy and running the deployment script in the coming steps. -1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`). If you'll be using an existing key vault, ignore this step : +1. Run the following commands to **create a key vault** (substitute actual names for the `<placeholders>`). If you'll be using an existing key vault, ignore this step: ```azurecli az keyvault create \ |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | See these [important announcements](#announcements) about recent changes to feat ## July 2023 +- Announcement: [Changes to Microsoft Defender for Office 365 connector alerts that apply when disconnecting and reconnecting](#changes-to-microsoft-defender-for-office-365-connector-alerts-that-apply-when-disconnecting-and-reconnecting) - [Content Hub generally available and centralization changes released](#content-hub-generally-available-and-centralization-changes-released) - [Deploy incident response playbooks for SAP](#deploy-incident-response-playbooks-for-sap) - [Microsoft Sentinel solution for D365 Finance and Operations (Preview)](#microsoft-sentinel-solution-for-d365-finance-and-operations-preview) See these [important announcements](#announcements) about recent changes to feat ### Content Hub generally available and centralization changes released -Content hub is now generally availabile (GA)! The [content hub centralization changes announced in February](#out-of-the-box-content-centralization-changes) have also been released. For more information on these changes and their impact, including more details about the tool provided to reinstate **IN USE** gallery templates, see [Out-of-the-box (OOTB) content centralization changes](sentinel-content-centralize.md). +Content hub is now generally available (GA)! The [content hub centralization changes announced in February](#out-of-the-box-content-centralization-changes) have also been released. For more information on these changes and their impact, including more details about the tool provided to reinstate **IN USE** gallery templates, see [Out-of-the-box (OOTB) content centralization changes](sentinel-content-centralize.md). As part of the deployment for GA, the default view of the content hub is now the **List view**. The install process is streamlined as well. When selecting **Install** or **Install/Update**, the experience behaves like bulk installation. To ensure that Microsoft Sentinel's threat detection provides complete coverage - [Classic alert automation due for deprecation](#classic-alert-automation-due-for-deprecation) (see Announcements) - [Microsoft Sentinel solution for SAP® applications: new systemconfig.json file](#microsoft-sentinel-solution-for-sap-applications-new-systemconfigjson-file) - ### Windows Forwarded Events connector is now generally available The Windows Forwarded Events connector is now generally available. The connector is available in both the Azure Commercial and Azure Government clouds. Review the [connector information](data-connectors/windows-forwarded-events.md). Learn more about [Microsoft Sentinel workspace manager](workspace-manager.md). ## Announcements +- [Changes to Microsoft Defender for Office 365 connector alerts that apply when disconnecting and reconnecting](#changes-to-microsoft-defender-for-office-365-connector-alerts-that-apply-when-disconnecting-and-reconnecting) - [Simplified pricing tiers](#simplified-pricing-tiers) - [Classic alert automation due for deprecation](#classic-alert-automation-due-for-deprecation) - [When disconnecting and connecting the MDI alerts connector - UniqueExternalId field is not populated (use the AlertName field)](#when-disconnecting-and-connecting-the-mdi-alerts-connectoruniqueexternalid-field-is-not-populated-use-the-alertname-field) Learn more about [Microsoft Sentinel workspace manager](workspace-manager.md). 
- [Account enrichment fields removed from Azure AD Identity Protection connector](#account-enrichment-fields-removed-from-azure-ad-identity-protection-connector) - [Name fields removed from UEBA UserPeerAnalytics table](#name-fields-removed-from-ueba-userpeeranalytics-table) ++### Changes to Microsoft Defender for Office 365 connector alerts that apply when disconnecting and reconnecting ++To improve the overall experience for Microsoft Defender for Office 365 alerts, we've enhanced the synchronization between alerts and incidents and increased the number of alerts that flow through the connector. ++To benefit from this change, disconnect and reconnect the Microsoft Defender for Office 365 connector. However, after you take this action, some fields are no longer populated: ++- `ExtendedProperties["InvestigationName"]` +- `ExtendedProperties["Status"]` +- `ExtendedLinks` +- `AdditionalActionsAndResults` located inside the `Entities` field ++To find the information these fields previously provided, in the Microsoft 365 Defender portal, select **Alerts** on the left, and locate the following information: ++- `ExtendedProperties["InvestigationName"]`: Under **Investigation ID**: + + :::image type="content" source="medio-connector-fields-investigation-id.png"::: ++- `ExtendedProperties["Status"]`: Under **Investigation status**: + + :::image type="content" source="medio-connector-fields-investigation-status.png" alt-text="Screenshot showing the Microsoft Defender for Office 365 alerts Investigation status field in the Microsoft 365 Defender portal."::: ++- `ExtendedLinks`: Select **Investigation ID**, which opens the relevant **Investigation** page. + ### Simplified pricing tiers Microsoft Sentinel is billed for the volume of data *analyzed* in Microsoft Sentinel and *stored* in Azure Monitor Log Analytics. So far, there have been two sets of pricing tiers, one for each product. Two things are happening: |
storage | Monitor Queue Storage Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage-reference.md | |
storage | Monitor Queue Storage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/monitor-queue-storage.md | |
storage | Queues Storage Monitoring Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/queues-storage-monitoring-scenarios.md | Title: Best practices for monitoring Azure Queue Storage description: Learn best practice guidelines and how to them when using metrics and logs to monitor your Azure Queue Storage. --+ Last updated 08/24/2021 |
storage | Storage Queues Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-queues-introduction.md | |
storage | Storage Quickstart Queues Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-portal.md | |
storage | Storage Tutorial Queues | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-tutorial-queues.md | |
synapse-analytics | Apache Spark Secure Credentials With Tokenlibrary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary.md | Title: Secure access credentials with Linked Services in Apache Spark for Azure Synapse Analytics description: This article provides concepts on how to securely integrate Apache Spark for Azure Synapse Analytics with other services using linked services and token library -+ -Synapse uses Azure Active Directory (Azure AD) passthrough by default for authentication between resources. If you need to connect to a resource using other credentials, use the mssparkutils directly. The mssparkutils simplifies the process of retrieving SAS tokens, Azure AD tokens, connection strings, and secrets stored in a linked service or from an Azure Key Vault. +Azure Synapse Analytics uses Azure Active Directory (Azure AD) passthrough by default for authentication between resources. If you need to connect to a resource using other credentials, use the mssparkutils directly. The mssparkutils simplifies the process of retrieving SAS tokens, Azure AD tokens, connection strings, and secrets stored in a linked service or from an Azure Key Vault. Azure AD passthrough uses permissions assigned to you as a user in Azure AD, rather than permissions assigned to Synapse or a separate service principal. For example, if you want to use Azure AD passthrough to access a blob in a storage account, then you should go to that storage account and assign blob contributor role to yourself. Get result: #### ADLS Gen2 Primary Storage -Accessing files from the primary Azure Data Lake Storage uses Azure Active Directory passthrough for authentication by default and doesn't require the explicit use of the mssparkutils. The identity used in the passthrough authentication differs based on a few factors. By default, interactive notebooks are executed using the user's identity, but they can be changed to the workspace MSI. Batch jobs and non-interactive executions of the notebook use the Workspace MSI identity. +Accessing files from the primary Azure Data Lake Storage uses Azure Active Directory passthrough for authentication by default and doesn't require the explicit use of the mssparkutils. The identity used in the passthrough authentication differs based on a few factors. By default, interactive notebooks are executed using the user's identity, but it can be changed to the workspace managed service identity (MSI). Batch jobs and non-interactive executions of the notebook use the Workspace MSI. ::: zone pivot = "programming-language-scala" display(df.limit(10)) #### ADLS Gen2 storage with linked services -Synapse provides an integrated linked services experience when connecting to Azure Data Lake Storage Gen2. Linked Services can be configured to authenticate using an **Account Key**, **Service Principal**, **Managed Identity**, or **Credential**. +Azure Synapse Analytics provides an integrated linked services experience when connecting to Azure Data Lake Storage Gen2. Linked services can be configured to authenticate using an **Account Key**, **Service Principal**, **Managed Identity**, or **Credential**. When the linked service authentication method is set to **Account Key**, the linked service will authenticate using the provided storage account key, request a SAS key, and automatically apply it to the storage request using the **LinkedServiceBasedSASProvider**. 
Synapse allows users to set the linked service for a particular storage account. ```scala val sc = spark.sparkContext val source_full_storage_account_name = "teststorage.dfs.core.windows.net"-spark.conf.set(f"spark.storage.synapse.{source_full_storage_account_name}.linkedServiceName", "<LINKED SERVICE NAME>") -sc.hadoopConfiguration.set(f"fs.azure.account.auth.type.{source_full_storage_account_name}", "SAS") -sc.hadoopConfiguration.set(f"fs.azure.sas.token.provider.type.{source_full_storage_account_name}", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedSASProvider") +spark.conf.set(s"spark.storage.synapse.$source_full_storage_account_name.linkedServiceName", "<LINKED SERVICE NAME>") +sc.hadoopConfiguration.set(s"fs.azure.account.auth.type.$source_full_storage_account_name", "SAS") +sc.hadoopConfiguration.set(s"fs.azure.sas.token.provider.type.$source_full_storage_account_name", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedSASProvider") val df = spark.read.csv("abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<FILE PATH>") When the linked service authentication method is set to **Managed Identity** or ```scala val sc = spark.sparkContext val source_full_storage_account_name = "teststorage.dfs.core.windows.net"-spark.conf.set(f"spark.storage.synapse.{source_full_storage_account_name}.linkedServiceName", "<LINKED SERVICE NAME>") -sc.hadoopConfiguration.set(f"fs.azure.account.oauth.provider.type.{source_full_storage_account_name}", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedTokenProvider") +spark.conf.set(s"spark.storage.synapse.$source_full_storage_account_name.linkedServiceName", "<LINKED SERVICE NAME>") +sc.hadoopConfiguration.set(s"fs.azure.account.oauth.provider.type.$source_full_storage_account_name", "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedTokenProvider") val df = spark.read.csv("abfss://<CONTAINER>@<ACCOUNT>.dfs.core.windows.net/<FILE PATH>") display(df.limit(10)) import json json.loads(mssparkutils.credentials.getPropertiesAll("<LINKED SERVICE NAME>")) ``` The output will look like-```` +``` { 'AuthType': 'Key', 'AuthKey': '[REDACTED]', The output will look like 'Endpoint': 'https://storageaccount.blob.core.windows.net/', 'Database': None }-```` +``` #### GetSecret() To retrieve a secret stored from Azure Key Vault, we recommend that you create a linked service to Azure Key Vault within the Synapse workspace. The Synapse workspace managed service identity will need to be granted **GET** Secrets permission to the Azure Key Vault. The linked service will use the managed service identity to connect to Azure Key Vault service to retrieve the secret. Otherwise, connecting directly to Azure Key Vault will use the user's Azure Active Directory (Azure AD) credential. In this case, the user will need to be granted the Get Secret permissions in Azure Key Vault. -In national clouds, please provide the fully qualified domain name of the keyvault. +In government clouds, please provide the fully qualified domain name of the keyvault. `mssparkutils.credentials.getSecret("<AZURE KEY VAULT NAME>", "<SECRET KEY>" [, <LINKED SERVICE NAME>])` Console.WriteLine(connectionString); ::: zone-end -#### Linked service connections supported from the Spark runtime (notebook or batch jobs) +#### Linked service connections supported from the Spark runtime ++While Azure Synapse Analytics supports a variety of linked service connections (from pipelines and other Azure products), not all of them are supported from the Spark runtime. 
Here is the list of supported linked services: -The Azure Synapse Analytics supports a variety of linked service connections (from pipelines and other places), but not all of them are supported from the Spark runtime. Here is the list of supported linked services. - Azure Blob Storage+ - Azure Cognitive Services - Azure Cosmos DB - Azure Data Explorer+ - Azure Database for MySQL + - Azure Database for PostgreSQL + - Azure Data Lake Store (Gen1) + - Azure Key Vault - Azure Machine Learning - Azure Purview+ - Azure SQL Database + - Azure SQL Data Warehouse (Dedicated and Serverless) + - Azure Storage - #### The following methods of accessing the linked services are not supported from the Spark runtime + #### mssparkutils.credentials.getToken() + When you need an OAuth bearer token to access services directly, you can use the `getToken` method (a short usage sketch follows this row). The following resources are supported: ++| Service Name | String literal to be used in API call | +|-|| +| Azure Storage | `Storage` | +| Azure Key Vault | `Vault` | +| Azure Management | `AzureManagement` | +| Azure SQL Data Warehouse (Dedicated and Serverless) | `DW` | +| Azure Synapse | `Synapse` | +| Azure Data Lake Store | `DataLakeStore` | +| Azure Data Factory | `ADF` | +| Azure Data Explorer | `AzureDataExplorer` | +| Azure Database for MySQL | `AzureOSSDB` | +| Azure Database for MariaDB | `AzureOSSDB` | +| Azure Database for PostgreSQL | `AzureOSSDB` | +#### Unsupported linked service access from the Spark runtime ++The following methods of accessing the linked services are not supported from the Spark runtime: - Passing arguments to parameterized linked service- - Connections that use User assigned managed identities (UAMI) - -From a notebook or a spark job, when the request to get token/secret using Linked Service fails, if the error message indicates BadRequest, then this indicates the user error. The error message currently doesn't require all the details of the failure. Please reach out to our support to debug the issue. + - Connections with User assigned managed identities (UAMI) ++While running a notebook or a Spark job, requests to get a token or secret using a linked service may fail with an error message that indicates 'BadRequest'. This usually points to a configuration issue with the linked service, so check the linked service's configuration first. If you have any questions, please contact Microsoft Azure Support at https://portal.azure.com. ## Next steps |
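A quick usage sketch for the `getToken` method documented in the row above, runnable in a Synapse notebook (the audience literal comes from the row's table; truncating the printed token is just a precaution):

```python
# Minimal sketch for a Synapse Spark notebook; mssparkutils ships with the
# Synapse runtime, so no extra installation is needed.
from notebookutils import mssparkutils

# "Storage" is one of the supported string literals listed in the table above.
token = mssparkutils.credentials.getToken("Storage")
print(token[:12] + "...")  # avoid printing a full bearer token
```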
virtual-machines | Configure Oracle Dataguard | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/configure-oracle-dataguard.md | $ mkdir -p /u01/app/oracle/admin/cdb1/adump Create a password file: + ```bash-$ orapwd file=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/orapwcdb1 password=OracleLab123 entries=10 format= +$ orapwd file=/u01/app/oracle/product/19.0.0/dbhome_1/dbs/orapwcdb1 password=OracleLab123 entries=10 force=y ``` Start the database on `OracleVM2`: az group delete --name $RESOURCE_GROUP - [Tutorial: Create highly available virtual machines](../../linux/create-cli-complete.md) - [Explore Azure CLI samples for VM deployment](https://github.com/Azure-Samples/azure-cli-samples/tree/master/virtual-machine)+ |
virtual-network | Manage Custom Ip Address Prefix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md | The operation is asynchronous. You can check the status by reviewing the **Commi To fully remove a custom IP prefix, it must be deprovisioned and then deleted. > [!NOTE]-> If there is a requirement to migrate an provisioned range from one region to the other, the original custom IP prefix must be fully removed from the fist region before a new custom IP prefix with the same address range can be created in another region. +> If there is a requirement to migrate a provisioned range from one region to another, the original custom IP prefix must be fully removed from the first region before a new custom IP prefix with the same address range can be created in another region. > > The estimated time to complete the deprovisioning process is anywhere from 30 to 60 minutes. |