Updates from: 01/23/2023 02:06:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 01/13/2023 Last updated : 01/20/2023
To create the registry key that overrides push notifications:
Value = TRUE
1. Restart the NPS Service.
-If you're using Remote Desktop Gateway and the user is registered for OTP code along with Microsoft Authenticator push notifications, the user won't be able to meet the Azure AD MFA challenge and Remote Desktop Gateway sign-in will fail. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_TOP = FALSE to fall back to push notifications with Microsoft Authenticator.
+If you're using Remote Desktop Gateway and the user is registered for OTP code along with Microsoft Authenticator push notifications, the user won't be able to meet the Azure AD MFA challenge and Remote Desktop Gateway sign-in will fail. In this case, you can set OVERRIDE_NUMBER_MATCHING_WITH_OTP = FALSE to fall back to push notifications with Microsoft Authenticator.
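
A minimal sketch of the override, assuming the NPS extension's default registry path of `HKLM:\SOFTWARE\Microsoft\AzureMfa` (verify it matches your deployment), run from an elevated PowerShell session:

```powershell
# Assumed registry path for the NPS extension; adjust if your deployment differs.
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\AzureMfa" `
    -Name "OVERRIDE_NUMBER_MATCHING_WITH_OTP" `
    -Value "FALSE" -PropertyType String -Force

# Restart the NPS service so the new value takes effect ("IAS" is the NPS service name).
Restart-Service -Name IAS
```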
### Apple Watch supported for Microsoft Authenticator
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
Previously updated : 05/18/2022 Last updated : 01/20/2023
#Customer intent: As a cluster operator or developer, I want to use TLS with an ingress controller to handle the flow of incoming traffic and secure my apps using my own certificates or automatically generated certificates.
The transport layer security (TLS) protocol uses certificates to provide securit
You can bring your own certificates and integrate them with the Secrets Store CSI driver. Alternatively, you can use [cert-manager][cert-manager], which automatically generates and configures [Let's Encrypt][lets-encrypt] certificates. Two applications run in the AKS cluster, each of which is accessible over a single IP address.

> [!NOTE]
-> There are two open source ingress controllers for Kubernetes based on Nginx: one is maintained by the Kubernetes community ([kubernetes/ingress-nginx][nginx-ingress]), and one is maintained by NGINX, Inc. ([nginxinc/kubernetes-ingress]). This article uses the Kubernetes community ingress controller.
+> There are two open source ingress controllers for Kubernetes based on Nginx: one is maintained by the Kubernetes community ([kubernetes/ingress-nginx][nginx-ingress]), and one is maintained by NGINX, Inc. ([nginxinc/kubernetes-ingress]). This article uses the *Kubernetes community ingress controller*.
## Before you begin
You can bring your own certificates and integrate them with the Secrets Store CS
* This article uses [Helm 3][helm] to install the NGINX ingress controller on a [supported version of Kubernetes][aks-supported versions]. Make sure you're using the latest release of Helm and have access to the `ingress-nginx` and `jetstack` Helm repositories. The steps outlined in this article may not be compatible with previous versions of the Helm chart, NGINX ingress controller, or Kubernetes.
- * For more information on configuring and using Helm, see [Install applications with Helm in Azure Kubernetes Service (AKS)][use-helm]. For upgrade instructions, see the [Helm install docs][helm-install].
+ * For more information on configuring and using Helm, see [Install applications with Helm in AKS][use-helm]. For upgrade instructions, see the [Helm install docs][helm-install].
-* This article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with Azure Container Registry from Azure Kubernetes Service][aks-integrated-acr].
+* This article assumes you have an existing AKS cluster with an integrated Azure Container Registry (ACR). For more information on creating an AKS cluster with an integrated ACR, see [Authenticate with ACR from AKS][aks-integrated-acr].
* If you're using Azure CLI, this article requires that you're running the Azure CLI version 2.0.64 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
You can bring your own certificates and integrate them with the Secrets Store CS
## Use TLS with your own certificates with Secrets Store CSI Driver
-To use TLS with your own certificates with Secrets Store CSI Driver, you need an AKS cluster with the Secrets Store CSI Driver configured and an Azure Key Vault instance. For more information, see [Set up Secrets Store CSI Driver to enable NGINX Ingress Controller with TLS][aks-nginx-tls-secrets-store].
+To use TLS with your own certificates with Secrets Store CSI Driver, you need an AKS cluster with the Secrets Store CSI Driver configured and an Azure Key Vault instance.
+
+For more information, see [Set up Secrets Store CSI Driver to enable NGINX Ingress Controller with TLS][aks-nginx-tls-secrets-store].
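
If the driver isn't configured on your cluster yet, a minimal sketch of enabling it, assuming the `azure-keyvault-secrets-provider` add-on and the cluster names used elsewhere in this article:

```azurecli-interactive
# Enable the Secrets Store CSI Driver through the Azure Key Vault provider add-on.
az aks enable-addons --addons azure-keyvault-secrets-provider --resource-group myResourceGroup --name myAKSCluster
```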
## Use TLS with Let's Encrypt certificates
Import-AzContainerRegistryImage -ResourceGroupName $ResourceGroup -RegistryName
> [!NOTE]
-> In addition to importing container images into your ACR, you can import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an Azure Container Registry][acr-helm].
+> You can also import Helm charts into your ACR. For more information, see [Push and pull Helm charts to an ACR][acr-helm].
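
As a brief sketch of that workflow, assuming Helm 3.8 or later (which supports OCI registries), a hypothetical registry named *myregistry*, and an illustrative chart package:

```bash
# Authenticate to the registry, then push a packaged chart as an OCI artifact.
helm registry login myregistry.azurecr.io
helm push ingress-demo-0.1.0.tgz oci://myregistry.azurecr.io/helm
```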
## Ingress controller configuration options
-An NGINX ingress controller is created with a new public IP address assignment by default. This public IP address is only static for the lifespan of the ingress controller. If you delete the ingress controller, the public IP address assignment will be lost. If you create another ingress controller, a new public IP address will be assigned.
-
-You can configure your ingress controller using one of the following methods:
+You can configure your NGINX ingress controller using either a static public IP address or a dynamic public IP address. If you're using a custom domain, you need to add an A record to your DNS zone. If you're not using a custom domain, you can configure a fully qualified domain name (FQDN) for the ingress controller IP address.
-* Using a dynamic public IP address.
-* Using a static public IP address.
+### Create a static or dynamic public IP address
-## Use a static public IP address
+#### Use a static public IP address
-A common configuration requirement is to provide the NGINX ingress controller an existing static public IP address. The static public IP address remains if the ingress controller is deleted.
+You can configure your ingress controller with a static public IP address. The static public IP address remains if you delete your ingress controller. The IP address *doesn't* remain if you delete your AKS cluster.
-Follow the commands below to create an IP address that will be deleted if you delete your AKS cluster.
+When you upgrade your ingress controller, you must pass a parameter to the Helm release to ensure the ingress controller service is made aware of the load balancer that will be allocated to it. For the HTTPS certificates to work correctly, you use a DNS label to configure an FQDN for the ingress controller IP address.
### [Azure CLI](#tab/azure-cli)
-Get the resource group name of the AKS cluster with the [az aks show][az-aks-show] command.
+1. Get the resource group name of the AKS cluster with the [`az aks show`][az-aks-show] command.
```azurecli-interactive
az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
```
-Next, create a public IP address with the *static* allocation method using the [az network public-ip create][az-network-public-ip-create] command. The following example creates a public IP address named *myAKSPublicIP* in the AKS cluster resource group obtained in the previous step.
+2. Create a public IP address with the *static* allocation method using the [`az network public-ip create`][az-network-public-ip-create] command. The following example creates a public IP address named *myAKSPublicIP* in the AKS cluster resource group obtained in the previous step.
```azurecli-interactive
az network public-ip create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv
```
+> [!NOTE]
+> Alternatively, you can create an IP address in a different resource group, which you can manage separately from your AKS cluster. If you create an IP address in a different resource group, ensure the following are true:
+>
+> * The cluster identity used by the AKS cluster has delegated permissions to the resource group, such as *Network Contributor*.
+> * Add the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"="<RESOURCE_GROUP>"` parameter. Replace `<RESOURCE_GROUP>` with the name of the resource group where the IP address resides.
+
+3. Add the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="<DNS_LABEL>"` parameter. The DNS label can be set either when the ingress controller is first deployed, or it can be configured later.
+
+4. Add the `--set controller.service.loadBalancerIP="<STATIC_IP>"` parameter. Specify your own public IP address that was created in the previous step.
+
+```azurecli-interactive
+DNS_LABEL="<DNS_LABEL>"
+NAMESPACE="ingress-basic"
+STATIC_IP=<STATIC_IP>
+
+helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
+ --namespace $NAMESPACE \
+ --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \
+ --set controller.service.loadBalancerIP=$STATIC_IP
+```
+ ### [Azure PowerShell](#tab/azure-powershell)
-Get the resource group name of the AKS cluster with the [Get-AzAksCluster][get-az-aks-cluster] command:
+1. Get the resource group name of the AKS cluster with the [`Get-AzAksCluster`][get-az-aks-cluster] command.
```azurepowershell-interactive
(Get-AzAksCluster -ResourceGroupName $ResourceGroup -Name myAKSCluster).NodeResourceGroup
```
-Next, create a public IP address with the *static* allocation method using the [New-AzPublicIpAddress][new-az-public-ip-address] command. The following example creates a public IP address named *myAKSPublicIP* in the AKS cluster resource group obtained in the previous step:
+2. Create a public IP address with the *static* allocation method using the [`New-AzPublicIpAddress`][new-az-public-ip-address] command. The following example creates a public IP address named *myAKSPublicIP* in the AKS cluster resource group obtained in the previous step.
```azurepowershell-interactive
(New-AzPublicIpAddress -ResourceGroupName MC_myResourceGroup_myAKSCluster_eastus -Name myAKSPublicIP -Sku Standard -AllocationMethod Static -Location eastus).IpAddress
```

---

> [!NOTE]
-> Alternatively, you can create an IP address in a different resource group, which can be managed separately from your AKS cluster. If you create an IP address in a different resource group, ensure the following are true:
+> Alternatively, you can create an IP address in a different resource group, which you can manage separately from your AKS cluster. If you create an IP address in a different resource group, ensure the following are true:
>
> * The cluster identity used by the AKS cluster has delegated permissions to the resource group, such as *Network Contributor*.
> * Add the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-resource-group"="<RESOURCE_GROUP>"` parameter. Replace `<RESOURCE_GROUP>` with the name of the resource group where the IP address resides.
->
-You must pass a parameter to the Helm release when you upgrade the ingress controller. This ensures that the ingress controller service is made aware of the load balancer that will be allocated to it. For the HTTPS certificates to work correctly, a DNS name label is used to configure a fully qualified domain name (FQDN) for the ingress controller IP address.
+3. Add the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="<DNS_LABEL>"` parameter. The DNS label can be set either when the ingress controller is first deployed, or it can be configured later.
-1. Add the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"="<DNS_LABEL>"` parameter. The DNS label can be set either when the ingress controller is first deployed, or it can be configured later.
-2. Add the `--set controller.service.loadBalancerIP="<STATIC_IP>"` parameter. Specify your own public IP address that was created in the previous step.
+4. Add the `--set controller.service.loadBalancerIP="<STATIC_IP>"` parameter. Specify your own public IP address that was created in the previous step.
-### [Azure CLI](#tab/azure-cli)
-
-```azurecli
-DNS_LABEL="demo-aks-ingress"
-NAMESPACE="ingress-basic"
-STATIC_IP=<STATIC_IP>
-
-helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
- --namespace $NAMESPACE \
- --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL \
- --set controller.service.loadBalancerIP=$STATIC_IP
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-```azurepowershell
-$DnsLabel = "demo-aks-ingress"
+```azurepowershell-interactive
+$DnsLabel = "<DNS_LABEL>"
$Namespace = "ingress-basic" $StaticIP = "<STATIC_IP>"
-helm upgrade nginx-ingress ingress-nginx/ingress-nginx `
+helm upgrade ingress-nginx ingress-nginx/ingress-nginx `
    --namespace $Namespace `
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel `
    --set controller.service.loadBalancerIP=$StaticIP
helm upgrade nginx-ingress ingress-nginx/ingress-nginx `
For more information, see [Use a static public IP address and DNS label with the AKS load balancer][aks-static-ip].
-## Use a dynamic IP address
+#### Use a dynamic public IP address
-An Azure public IP address is created for the ingress controller upon creation. This public IP address is static for the lifespan of the ingress controller. If you delete the ingress controller, the public IP address assignment will be lost. If you create another ingress controller, a new public IP address will be assigned.
+An Azure public IP address is created for your ingress controller upon creation. The public IP address is static for the lifespan of your ingress controller. The public IP address *doesn't* remain if you delete your ingress controller. If you create a new ingress controller, it will be assigned a new public IP address.
-To get the public IP address, use the `kubectl get service` command.
+Use the `kubectl get service` command to get the public IP address for your ingress controller.
```console
kubectl --namespace ingress-basic get services -o wide -w nginx-ingress-ingress-nginx-controller
```
-The example output shows the details about the ingress controller.
+Your output should look similar to the following example output:
```console
NAME                                     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE   SELECTOR
nginx-ingress-ingress-nginx-controller LoadBalancer 10.0.74.133 EXTERNAL_I
### Add an A record to your DNS zone
-If you're using a custom domain, you need to add an A record to your DNS zone. Otherwise, you need to configure the public IP address with an FQDN.
+If you're using a custom domain, you need to add an *A* record to your DNS zone. If you're not using a custom domain, you can configure the public IP address with an FQDN.
### [Azure CLI](#tab/azure-cli)
-Add an *A* record to your DNS zone with the external IP address of the NGINX service using [az network dns record-set a add-record][az-network-dns-record-set-a-add-record].
+Add an *A* record to your DNS zone with the external IP address of the NGINX service using [`az network dns record-set a add-record`][az-network-dns-record-set-a-add-record].
```azurecli
az network dns record-set a add-record \
az network dns record-set a add-record \
### [Azure PowerShell](#tab/azure-powershell)
-Add an *A* record to your DNS zone with the external IP address of the NGINX service using [New-AzDnsRecordSet][new-az-dns-recordset-create-a-record].
+Add an *A* record to your DNS zone with the external IP address of the NGINX service using [`New-AzDnsRecordSet`][new-az-dns-recordset-create-a-record].
```azurepowershell
$Records = @()
New-AzDnsRecordSet -Name "*" `
-### Configure an FQDN for the ingress controller
+### Configure an FQDN for your ingress controller
-Optionally, you can configure an FQDN for the ingress controller IP address instead of a custom domain. Your FQDN will be of the form `<CUSTOM LABEL>.<AZURE REGION NAME>.cloudapp.azure.com`. You can configure it using one of the following methods:
+Optionally, you can configure an FQDN for the ingress controller IP address instead of a custom domain by setting a DNS label. Your FQDN should follow this form: `<CUSTOM LABEL>.<AZURE REGION NAME>.cloudapp.azure.com`. Your DNS label must be unique within its Azure location.
-* Setting the DNS label using the Azure CLI or Azure PowerShell
-* Setting the DNS label using Helm chart settings
+You can configure your FQDN using one of the following methods:
-#### Method 1: Set the DNS label using the Azure CLI or Azure PowerShell
+* Set the DNS label using Azure CLI or Azure PowerShell.
+* Set the DNS label using Helm chart settings.
+
+For more information, see [Public IP address DNS name labels](../virtual-network/ip-services/public-ip-addresses.md#dns-name-label).
+
+#### Set the DNS label using Azure CLI or Azure PowerShell
### [Azure CLI](#tab/azure-cli)
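
A minimal sketch of this method, assuming the public IP address name and node resource group created earlier and a placeholder DNS label:

```azurecli-interactive
# Set a DNS label on the existing public IP address (names assumed from the earlier steps).
az network public-ip update \
    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
    --name myAKSPublicIP \
    --dns-name <DNS_LABEL>
```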
Write-Output $UpdatedPublicIp.DnsSettings.Fqdn
-#### Method 2: Set the DNS label using Helm chart settings
+#### Set the DNS label using Helm chart settings
-You can pass an annotation setting to your Helm chart configuration by using the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"` parameter. This parameter can be set either when the ingress controller is first deployed, or it can be configured later.
+You can pass an annotation setting to your Helm chart configuration using the `--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"` parameter. This parameter can be set when the ingress controller is first deployed, or it can be configured later.
The following example shows how to update this setting after the controller has been deployed.

### [Azure CLI](#tab/azure-cli)

```bash
-DNS_LABEL="demo-aks-ingress"
+DNS_LABEL="<DNS_LABEL>"
NAMESPACE="ingress-basic"
-helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
+helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
    --namespace $NAMESPACE \
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DNS_LABEL
```
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
### [Azure PowerShell](#tab/azure-powershell)

```azurepowershell
-$DnsLabel = "demo-aks-ingress"
+$DnsLabel = "<DNS_LABEL>"
$Namespace = "ingress-basic"
-helm upgrade nginx-ingress ingress-nginx/ingress-nginx `
+helm upgrade ingress-nginx ingress-nginx/ingress-nginx `
    --namespace $Namespace `
    --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-dns-label-name"=$DnsLabel
```
For more information on cert-manager configuration, see the [cert-manager projec
## Create a CA cluster issuer
-Before certificates can be issued, cert-manager requires one of the following:
+Before certificates can be issued, cert-manager requires one of the following issuers:
* An [Issuer][cert-manager-issuer], which works in a single namespace.
* A [ClusterIssuer][cert-manager-cluster-issuer] resource, which works across all namespaces.
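
For reference, a minimal `ClusterIssuer` manifest for Let's Encrypt might look like the following sketch; the issuer name and email address are placeholders to replace:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: MY_EMAIL_ADDRESS   # Placeholder: replace with a valid email address.
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx   # Assumes the NGINX ingress controller installed earlier.
```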
In the following example, traffic is routed as such:
> [!NOTE]
> If you configured an FQDN for the ingress controller IP address instead of a custom domain, use the FQDN instead of *hello-world-ingress.MY_CUSTOM_DOMAIN*.
->
+>
> For example, if your FQDN is *demo-aks-ingress.eastus.cloudapp.azure.com*, replace *hello-world-ingress.MY_CUSTOM_DOMAIN* with *demo-aks-ingress.eastus.cloudapp.azure.com* in `hello-world-ingress.yaml`.
>
kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
## Verify a certificate object has been created
-Next, a certificate resource must be created. The certificate resource defines the desired X.509 certificate. For more information, see [cert-manager certificates][cert-manager-certificates]. Cert-manager automatically creates a certificate object for you using ingress-shim, which is automatically deployed with cert-manager since v0.2.2. For more information, see the [ingress-shim documentation][ingress-shim].
+Next, a certificate resource must be created. The certificate resource defines the desired X.509 certificate. For more information, see [cert-manager certificates][cert-manager-certificates].
-To verify that the certificate was created successfully, use the `kubectl get certificate --namespace ingress-basic` command and verify *READY* is *True*. This may take several minutes.
+Cert-manager automatically creates a certificate object for you using ingress-shim, which is automatically deployed with cert-manager since v0.2.2. For more information, see the [ingress-shim documentation][ingress-shim].
+
+To verify that the certificate was created successfully, use the `kubectl get certificate --namespace ingress-basic` command and verify *READY* is *True*. It may take several minutes to get the output.
```console
kubectl get certificate --namespace ingress-basic
```
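
Your output should look similar to the following sketch; the certificate name *tls-secret* is an assumed example that depends on your ingress configuration:

```console
NAME         READY   SECRET       AGE
tls-secret   True    tls-secret   11m
```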
This article used Helm to install the ingress components, certificates, and samp
### Delete the sample namespace and all resources
-To delete the entire sample namespace, use the `kubectl delete` command and specify your namespace name. All the resources in the namespace are deleted.
+To delete the entire sample namespace, use the `kubectl delete` command and specify your namespace name. All the resources in the namespace will be deleted.
```console
kubectl delete namespace ingress-basic
```
kubectl delete namespace ingress-basic
### Delete resources individually
-Alternatively, you can delete the resource individually. First, remove the cluster issuer resources.
+Alternatively, you can delete the resource individually.
+
+First, remove the cluster issuer resources.
```console
kubectl delete -f cluster-issuer.yaml --namespace ingress-basic
```
kubectl delete namespace ingress-basic
This article included some external components to AKS. To learn more about these components, see the following project pages:
-- [Helm CLI][helm-cli]
-- [NGINX ingress controller][nginx-ingress]
-- [cert-manager][cert-manager]
+* [Helm CLI][helm-cli]
+* [NGINX ingress controller][nginx-ingress]
+* [cert-manager][cert-manager]
You can also:
-- [Enable the HTTP application routing add-on][aks-http-app-routing]
+* [Enable the HTTP application routing add-on][aks-http-app-routing]
<!-- LINKS - external -->
[az-network-dns-record-set-a-add-record]: /cli/azure/network/dns/record-set/#az-network-dns-record-set-a-add-record
aks Planned Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md
description: Learn how to use Planned Maintenance in Azure Kubernetes Service (AKS). Previously updated : 03/03/2021 Last updated : 01/17/2023
-
# Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster (preview)
-Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows that will update your control plane as well as your kube-system pods on a VMSS instance, and minimize workload impact. Once scheduled, all your maintenance will occur during the window you selected. You can schedule one or more weekly windows on your cluster by specifying a day or time range on a specific day. Maintenance Windows are configured using the Azure CLI.
+Your AKS cluster has regular maintenance performed on it automatically. By default, this work can happen at any time. Planned Maintenance allows you to schedule weekly maintenance windows to perform updates and minimize workload impact. Once scheduled, maintenance will occur only during the window you selected.
## Before you begin
When you use Planned Maintenance, the following restrictions apply:
### Install aks-preview CLI extension
-You also need the *aks-preview* Azure CLI extension version 0.5.4 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
+You also need the *aks-preview* Azure CLI extension version 0.5.124 or later. Install the *aks-preview* Azure CLI extension by using the [az extension add][az-extension-add] command. Or install any available updates by using the [az extension update][az-extension-update] command.
```azurecli-interactive
# Install the aks-preview extension
az extension add --name aks-preview
az extension update --name aks-preview
```
-## Allow maintenance on every Monday at 1:00am to 2:00am
+## Understanding maintenance window configuration types
+
+There are currently two available configuration types: `default` and `aksManagedAutoUpgradeSchedule`:
+
+- `default` corresponds to a basic configuration that will update your control plane and your kube-system pods on a virtual machine scale sets instance. It is a legacy configuration that is mostly suitable for basic scheduling of [weekly releases][release-tracker].
+
+- `aksManagedAutoUpgradeSchedule` is a more complex configuration that controls when upgrades scheduled by your designated auto-upgrade channel are performed. More finely controlled cadence and recurrence settings are possible. For more information on cluster auto-upgrade, see [Automatically upgrade an Azure Kubernetes Service (AKS) cluster][auto-upgrade].
+
+### Choosing between configuration types
+
+We recommend using `aksManagedAutoUpgradeSchedule` for all maintenance and upgrade scenarios, while `default` is meant exclusively for weekly releases. You can port `default` configurations to `aksManagedAutoUpgradeSchedule` configurations via the `az aks maintenanceconfiguration update` command.
+
+> [!NOTE]
+> When using auto-upgrade, to ensure proper functionality, use a maintenance window with a duration of four hours or more.
+
+## Creating a maintenance window
+
+To create a maintenance window, use the `az aks maintenanceconfiguration add` command with `--name` set to `default` or `aksManagedAutoUpgradeSchedule`. The name value should reflect the desired configuration type; using any other name will cause your maintenance window not to run.
+
+Planned Maintenance windows are specified in Coordinated Universal Time (UTC).
+
+A `default` maintenance window has the following properties:
-To add a maintenance window, you can use the `az aks maintenanceconfiguration add` command.
+|Name|Description|Default value|
+|--|--|--|
+|`timeInWeek`|In a `default` configuration, this property contains the `day` and `hourSlots` values defining a maintenance window|N/A|
+|`timeInWeek.day`|The day of the week to perform maintenance in a `default` configuration|N/A|
+|`timeInWeek.hourSlots`|A list of hour-long time slots to perform maintenance on a given day in a `default` configuration|N/A|
+|`notAllowedTime`|Specifies a range of dates that maintenance cannot run, determined by `start` and `end` child properties. Only applicable when creating the maintenance window using a config file|N/A|
+
+An `aksManagedAutoUpgradeSchedule` has the following properties:
+
+|Name|Description|Default value|
+|--|--|--|
+|`utcOffset`|Used to determine the timezone for cluster maintenance|`+00:00`|
+|`startDate`|The date on which the maintenance window will begin to take effect|The current date at creation time|
+|`startTime`|The time for maintenance to begin, based on the timezone determined by `utcOffset`|N/A|
+|`schedule`|Used to determine frequency. Three types are available: `Weekly`, `AbsoluteMonthly`, and `RelativeMonthly`|N/A|
+|`intervalWeeks`|The interval in weeks for maintenance runs|N/A|
+|`intervalMonths`|The interval in months for maintenance runs|N/A|
+|`dayOfWeek`|The specified day of the week for maintenance to begin|N/A|
+|`durationHours`|The duration of the window for maintenance to run|N/A|
+|`notAllowedDates`|Specifies a range of dates that maintenance cannot run, determined by `start` and `end` child properties. Only applicable when creating the maintenance window using a config file|N/A|
+
+### Understanding schedule types
+
+There are currently three available schedule types: `Weekly`, `AbsoluteMonthly`, and `RelativeMonthly`. These schedule types are only applicable to `aksManagedAutoUpgradeSchedule` configurations.
+
+#### Weekly schedule
+
+A `Weekly` schedule may look like *"every two weeks on Friday"*:
+
+```json
+"schedule": {
+ "weekly": {
+ "intervalWeeks": 2,
+ "dayOfWeek": "Friday"
+ }
+}
+```
-> [!IMPORTANT]
-> At this time, you must set `default` as the value for `--name`. Using any other name will cause your maintenance window to not run.
->
-> Planned Maintenance windows are specified in Coordinated Universal Time (UTC).
+#### AbsoluteMonthly schedule
+
+An `AbsoluteMonthly` schedule may look like *"every three months, on the first day of the month"*:
+
+```json
+"schedule": {
+ "absoluteMonthly": {
+ "intervalMonths": 3,
+ "dayOfMonth": 1
+ }
+}
+```
+
+#### RelativeMonthly schedule
+
+A `RelativeMonthly` schedule may look like *"every two months, on the last Monday"*:
+
+```json
+"schedule": {
+ "relativeMonthly": {
+ "intervalMonths": 2,
+ "dayOfWeek": "Monday",
+ "weekIndex": "Last"
+ }
+}
+```
+
+## Add a maintenance window configuration with Azure CLI
+
+The following example shows a command to add a new `default` configuration that schedules maintenance to run from 1:00am to 2:00am every Monday:
```azurecli-interactive
-az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Monday --start-hour 1
+az aks maintenanceconfiguration add -g myResourceGroup --cluster-name myAKSCluster --name default --weekday Monday --start-hour 1
```
-The following example output shows the maintenance window from 1:00am to 2:00am every Monday.
+> [!NOTE]
+> When using a `default` configuration type, to allow maintenance anytime during a day, omit the `--start-hour` parameter.
+
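For example, the following command sets the maintenance window for the full day every Monday:

```azurecli-interactive
az aks maintenanceconfiguration add -g myResourceGroup --cluster-name myAKSCluster --name default --weekday Monday
```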
+The following example shows a command to add a new `aksManagedAutoUpgradeSchedule` configuration that schedules maintenance to run every third Friday between 12:00 AM and 8:00 AM in the `UTC+5:30` timezone:
+
+```azurecli-interactive
+az aks maintenanceconfiguration add -g myResourceGroup --cluster-name myAKSCluster -n aksManagedAutoUpgradeSchedule --schedule-type Weekly --day-of-week Friday --interval-weeks 3 --duration 8 --utc-offset +05:30 --start-time 00:00
+```
+
+## Add a maintenance window configuration with a JSON file
+
+You can also use a JSON file to create a maintenance configuration instead of using parameters. This method has the added benefit of allowing maintenance to be prevented during a range of dates, specified by `notAllowedTime` for `default` configurations and `notAllowedDates` for `aksManagedAutoUpgradeSchedule` configurations.
+
+Create a `default.json` file with the following contents:
```json
{
- "id": "/subscriptions/<subscriptionID>/resourcegroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/maintenanceConfigurations/default",
- "name": "default",
- "notAllowedTime": null,
- "resourceGroup": "MyResourceGroup",
- "systemData": null,
"timeInWeek": [ {
- "day": "Monday",
- "hourSlots": [
- 1
+ "day": "Tuesday",
+ "hour_slots": [
+ 1,
+ 2
+ ]
+ },
+ {
+ "day": "Wednesday",
+ "hour_slots": [
+ 1,
+ 6
      ]
    }
  ],
- "type": null
+ "notAllowedTime": [
+ {
+ "start": "2021-05-26T03:00:00Z",
+ "end": "2021-05-30T12:00:00Z"
+ }
+ ]
}
```
-To allow maintenance anytime during a day, omit the *start-hour* parameter. For example, the following command sets the maintenance window for the full day every Monday:
-
-```azurecli-interactive
-az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Monday
-```
-
-## Add a maintenance configuration with a JSON file
+The above JSON file specifies maintenance windows every Tuesday at 1:00am - 3:00am and every Wednesday at 1:00am - 2:00am and at 6:00am - 7:00am in the `UTC` timezone. There is also an exception from *2021-05-26T03:00:00Z* to *2021-05-30T12:00:00Z* where maintenance isn't allowed even if it overlaps with a maintenance window.
-You can also use a JSON file create a maintenance window instead of using parameters. Create a `test.json` file with the following contents:
+Create an `autoUpgradeWindow.json` file with the following contents:
```json
- {
- "timeInWeek": [
- {
- "day": "Tuesday",
- "hour_slots": [
- 1,
- 2
- ]
- },
- {
- "day": "Wednesday",
- "hour_slots": [
- 1,
- 6
- ]
- }
- ],
- "notAllowedTime": [
- {
- "start": "2021-05-26T03:00:00Z",
- "end": "2021-05-30T12:00:00Z"
- }
+{
+ "properties": {
+ "maintenanceWindow": {
+ "schedule": {
+ "absoluteMonthly": {
+ "intervalMonths": 3,
+ "dayOfMonth": 1
+ }
+ },
+ "durationHours": 4,
+ "utcOffset": "-08:00",
+ "startTime": "09:00",
+ "notAllowedDates": [
+ {
+ "start": "2023-12-23",
+ "end": "2024-01-05"
+ }
]
+ }
+ }
}
```
-The above JSON file specifies maintenance windows every Tuesday at 1:00am - 3:00am and every Wednesday at 1:00am - 2:00am and at 6:00am - 7:00am. There is also an exception from *2021-05-26T03:00:00Z* to *2021-05-30T12:00:00Z* where maintenance isn't allowed even if it overlaps with a maintenance window. The following command adds the maintenance windows from `test.json`.
+The above JSON file specifies maintenance windows every three months on the first of the month between 9:00 AM - 1:00 PM in the `UTC-08` timezone. There is also an exception from *2023-12-23* to *2024-01-05* where maintenance isn't allowed even if it overlaps with a maintenance window.
+
+The following command adds the maintenance windows from `default.json` and `autoUpgradeWindow.json`:
```azurecli-interactive
-az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSCluster --name default --config-file ./test.json
+az aks maintenanceconfiguration add -g myResourceGroup --cluster-name myAKSCluster --name default --config-file ./default.json
+
+az aks maintenanceconfiguration add -g myResourceGroup --cluster-name myAKSCluster --name aksManagedAutoUpgradeSchedule --config-file ./autoUpgradeWindow.json
```

## Update an existing maintenance window
az aks maintenanceconfiguration add -g MyResourceGroup --cluster-name myAKSClust
To update an existing maintenance configuration, use the `az aks maintenanceconfiguration update` command.

```azurecli-interactive
-az aks maintenanceconfiguration update -g MyResourceGroup --cluster-name myAKSCluster --name default --weekday Monday --start-hour 1
+az aks maintenanceconfiguration update -g myResourceGroup --cluster-name myAKSCluster --name default --weekday Monday --start-hour 2
```

## List all maintenance windows in an existing cluster
az aks maintenanceconfiguration update -g MyResourceGroup --cluster-name myAKSCl
To see all current maintenance configuration windows in your AKS cluster, use the `az aks maintenanceconfiguration list` command.

```azurecli-interactive
-az aks maintenanceconfiguration list -g MyResourceGroup --cluster-name myAKSCluster
-```
-
-In the output below, you can see that there are two maintenance windows configured for myAKSCluster. One window is on Mondays at 1:00am and another window is on Friday at 4:00am.
-
-```json
-[
- {
- "id": "/subscriptions/<subscriptionID>/resourcegroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/maintenanceConfigurations/default",
- "name": "default",
- "notAllowedTime": null,
- "resourceGroup": "MyResourceGroup",
- "systemData": null,
- "timeInWeek": [
- {
- "day": "Monday",
- "hourSlots": [
- 1
- ]
- }
- ],
- "type": null
- },
- {
- "id": "/subscriptions/<subscriptionID>/resourcegroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/maintenanceConfigurations/testConfiguration",
- "name": "testConfiguration",
- "notAllowedTime": null,
- "resourceGroup": "MyResourceGroup",
- "systemData": null,
- "timeInWeek": [
- {
- "day": "Friday",
- "hourSlots": [
- 4
- ]
- }
- ],
- "type": null
- }
-]
+az aks maintenanceconfiguration list -g myResourceGroup --cluster-name myAKSCluster
```

## Show a specific maintenance configuration window in an AKS cluster
In the output below, you can see that there are two maintenance windows configur
To see a specific maintenance configuration window in your AKS cluster, use the `az aks maintenanceconfiguration show` command.

```azurecli-interactive
-az aks maintenanceconfiguration show -g MyResourceGroup --cluster-name myAKSCluster --name default
+az aks maintenanceconfiguration show -g myResourceGroup --cluster-name myAKSCluster --name aksManagedAutoUpgradeSchedule
```
-The following example output shows the maintenance window for *default*:
+The following example output shows the maintenance window for *aksManagedAutoUpgradeSchedule*:
```json
{
- "id": "/subscriptions/<subscriptionID>/resourcegroups/MyResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/maintenanceConfigurations/default",
- "name": "default",
+ "id": "/subscriptions/<subscription>/resourceGroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster/maintenanceConfigurations/aksManagedAutoUpgradeSchedule",
+ "maintenanceWindow": {
+ "durationHours": 4,
+ "notAllowedDates": [
+ {
+ "end": "2024-01-05",
+ "start": "2023-12-23"
+ }
+ ],
+ "schedule": {
+ "absoluteMonthly": {
+ "dayOfMonth": 1,
+ "intervalMonths": 3
+ },
+ "daily": null,
+ "relativeMonthly": null,
+ "weekly": null
+ },
+ "startDate": "2023-01-20",
+ "startTime": "09:00",
+ "utcOffset": "-08:00"
+ },
+ "name": "aksManagedAutoUpgradeSchedule",
"notAllowedTime": null,
- "resourceGroup": "MyResourceGroup",
+ "resourceGroup": "myResourceGroup",
"systemData": null,
- "timeInWeek": [
- {
- "day": "Monday",
- "hourSlots": [
- 1
- ]
- }
- ],
+ "timeInWeek": null,
"type": null } ```
The following example output shows the maintenance window for *default*:
To delete a certain maintenance configuration window in your AKS cluster, use the `az aks maintenanceconfiguration delete` command.

```azurecli-interactive
-az aks maintenanceconfiguration delete -g MyResourceGroup --cluster-name myAKSCluster --name default
+az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCluster --name aksManagedAutoUpgradeSchedule
```
-## Using Planned Maintenance with Cluster Auto-Upgrade
-
-Planned Maintenance will detect if you are using Cluster Auto-Upgrade and schedule your upgrades during your maintenance window automatically. For more details on about Cluster Auto-Upgrade, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
-
-> [!NOTE]
-> To ensure proper functionality, use a maintenance window of four hours or more.
-
## Next steps

- To get started with upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade]
Planned Maintenance will detect if you are using Cluster Auto-Upgrade and schedu
[az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli
[az-provider-register]: /cli/azure/provider#az_provider_register
[aks-upgrade]: upgrade-cluster.md
+[release-tracker]: release-tracker.md
+[auto-upgrade]: auto-upgrade-cluster.md
azure-functions Create First Function Cli Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-java.md
In this article, you use command-line tools to create a Java function that respo
If Maven isn't your preferred development tool, check out our similar tutorials for Java developers:

+ [Gradle](./functions-create-first-java-gradle.md)
-+ [IntelliJ IDEA](/azure/developer/java/toolkit-for-intellij/quickstart-functions)
++ [IntelliJ IDEA](functions-create-maven-intellij.md)
+ [Visual Studio Code](create-first-function-vs-code-java.md)

Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account.
azure-functions Functions Create First Quarkus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-first-quarkus.md
+
+ Title: Deploy Serverless Java Apps with Quarkus on Azure Functions
+description: Deploy Serverless Java Apps with Quarkus on Azure Functions
+Last updated : 01/10/2023
+ms.devlang: java
+
+# Deploy Serverless Java Apps with Quarkus on Azure Functions
+
+In this article, you'll develop, build, and deploy a serverless Java app with Quarkus on Azure Functions. This article uses Quarkus Funqy and its built-in support for the Azure Functions HTTP trigger for Java. Using Quarkus with Azure Functions gives you the power of the Quarkus programming model with the scale and flexibility of Azure Functions. When you're finished, you'll run serverless [Quarkus](https://quarkus.io) applications on Azure Functions and continue to monitor the application on Azure.
+
+## Prerequisites
+
+* [Azure CLI](/cli/azure/overview), installed on your own computer.
+* [An Azure Account](https://azure.microsoft.com/)
+* [Java JDK 17](/azure/developer/java/fundamentals/java-support-on-azure) with JAVA_HOME configured appropriately. This article was written with Java 17 in mind, but Azure Functions and Quarkus support older versions of Java as well.
+* [Apache Maven 3.8.1+](https://maven.apache.org)
+
+## A first look at the sample application
+
+Clone the sample code for this guide. The sample is on [GitHub](https://github.com/Azure-Samples/quarkus-azure).
+
+```bash
+git clone https://github.com/Azure-Samples/quarkus-azure
+```
+
+Explore the sample function. Open the file *functions-quarkus/src/main/java/io/quarkus/GreetingFunction.java*. The `@Funq` annotation makes your method (for example, `funqyHello`) a serverless function. Azure Functions Java has its own set of Azure-specific annotations, but these annotations aren't necessary when using Quarkus on Azure Functions in a simple capacity as we're doing here. For more information about the Azure Functions Java annotations, see the [Azure Functions Java developer guide](/azure/azure-functions/functions-reference-java).
+
+```java
+@Funq
+public String funqyHello() {
+ return "hello funqy";
+}
+```
+
+Unless you specify otherwise, the function's name is taken to be the same as the method name. You can also define the function name with a parameter to the annotation, as shown here.
+
+```java
+@Funq("alternateName")
+public String funqyHello() {
+ return "hello funqy";
+}
+```
+
+The name is important: the name becomes a part of the REST URI to invoke the function, as shown later in the article.
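
For instance, with the hypothetical alternate name above, the function would be invoked at `/api/alternateName` instead of `/api/funqyHello`:

```bash
curl localhost:8080/api/alternateName
```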
+
+## Test the Serverless Function locally
+
+Use `mvn` to run Quarkus dev mode on your local terminal. Running Quarkus in this way enables live reload with background compilation. When you modify your Java or resource files and refresh your browser, the changes automatically take effect.
+
+A browser refresh triggers a scan of the workspace. If any changes are detected, the Java files are recompiled and the application is redeployed. Your redeployed application then services the request. If there are any issues with compilation or deployment, an error page lets you know.
+
+Replace `yourResourceGroupName` with a resource group name. Function app names must be globally unique across all of Azure. Resource group names must be globally unique within a subscription. This article achieves the necessary uniqueness by prepending the resource group name to the function name. For this reason, consider prepending some unique identifier to any names you create that must be unique. A useful technique is to use your initials followed by today's date in `mmdd` format. The resource group isn't necessary for this part of the instructions, but it's required later. For simplicity, the Maven project requires the property to be defined.
+
+1. Invoke Quarkus dev mode.
+
+ ```bash
+ cd functions-azure
+ mvn -DskipTests -DresourceGroup=<yourResourceGroupName> quarkus:dev
+ ```
+
+ The output should look like this.
+
+ ```output
+ ...
+ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
+ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
+ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/
+ INFO [io.quarkus] (Quarkus Main Thread) quarkus-azure-function 1.0-SNAPSHOT on JVM (powered by Quarkus xx.xx.xx.) started in 1.290s. Listening on: http://localhost:8080
+
+ INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated.
+ INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, funqy-http, smallrye-context-propagation, vertx]
+
+ --
+ Tests paused
+ Press [r] to resume testing, [o] Toggle test output, [:] for the terminal, [h] for more options>
+ ```
+
+1. Access the function using the `curl` command on your local terminal.
+
+ ```bash
+ curl localhost:8080/api/funqyHello
+ ```
+
+ The output should look like this.
+
+ ```output
+ "hello funqy"
+ ```
+
+### Add Dependency injection to function
+
+Dependency injection in Quarkus is provided by the open standard technology Jakarta EE Contexts and Dependency Injection (CDI). For a high-level overview of injection in general, and CDI in particular, see the [Jakarta EE tutorial](https://eclipse-ee4j.github.io/jakartaee-tutorial/#injection).
+
+1. Add a new function that uses dependency injection
+
+ Create a *GreetingService.java* file in the *functions-quarkus/src/main/java/io/quarkus* directory. Give the file the following contents.
+
+ ```java
+ package io.quarkus;
+
+ import javax.enterprise.context.ApplicationScoped;
+
+ @ApplicationScoped
+ public class GreetingService {
+
+ public String greeting(String name) {
+ return "Welcome to build Serverless Java with Quarkus on Azure Functions, " + name;
+ }
+
+ }
+ ```
+
+ Save the file.
+
+ `GreetingService` is an injectable bean that implements a `greeting()` method returning a string `Welcome...` message with a parameter `name`.
+
+1. Open the existing *functions-quarkus/src/main/java/io/quarkus/GreetingFunction.java* file. Replace the class with the following code to add a new field `gService` and method `greeting`.
+
+ ```java
+ package io.quarkus;
+
+ import javax.inject.Inject;
+ import io.quarkus.funqy.Funq;
+
+ public class GreetingFunction {
+
+ @Inject
+ GreetingService gService;
+
+ @Funq
+ public String greeting(String name) {
+ return gService.greeting(name);
+ }
+
+ @Funq
+ public String funqyHello() {
+ return "hello funqy";
+ }
+
+ }
+ ```
+
+ Save the file.
+
+1. Access the new function `greeting` using the `curl` command on your local terminal.
+
+ ```bash
+ curl -d '"Dan"' -X POST localhost:8080/api/greeting
+ ```
+
+ The output should look like this.
+
+ ```output
+ "Welcome to build Serverless Java with Quarkus on Azure Functions, Dan"
+ ```
+
+ > [!IMPORTANT]
+ > `Live Coding` (also referred to as dev mode) allows you to run the app and make changes on the fly. Quarkus will automatically re-compile and reload the app when changes are made. This is a powerful and efficient style of developing that you'll use throughout the tutorial.
+
+ Before moving forward to the next step, stop Quarkus Dev Mode by pressing `CTRL-C`.
+
+## Deploy the Serverless App to Azure Functions
+
+1. If you haven't already, sign in to your Azure subscription by using the [az login](/cli/azure/reference-index) command and follow the on-screen directions.
+
+ ```azurecli
+ az login
+ ```
+
+ > [!NOTE]
+ > If you have multiple Azure tenants associated with your Azure credentials, you must specify which tenant you want to sign in to. You can do this with the `--tenant` option. For example, `az login --tenant contoso.onmicrosoft.com`.
+ > Continue the process in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`.
+
+ Once you've signed in successfully, the output on your local terminal should look similar to the following.
+
+ ```output
+ xxxxxxx-xxxxx-xxxx-xxxxx-xxxxxxxxx 'Microsoft'
+ [
+ {
+ "cloudName": "AzureCloud",
+ "homeTenantId": "xxxxxx-xxxx-xxxx-xxxx-xxxxxxx",
+ "id": "xxxxxx-xxxx-xxxx-xxxx-xxxxxxxx",
+ "isDefault": true,
+ "managedByTenants": [],
+ "name": "Contoso account services",
+ "state": "Enabled",
+ "tenantId": "xxxxxxx-xxxx-xxxx-xxxxx-xxxxxxxxxx",
+ "user": {
+ "name": "user@contoso.com",
+ "type": "user"
+ }
+ }
+ ]
+ ```
+
+1. Build and deploy the functions to Azure
+
+ The *pom.xml* in the sample uses the `azure-functions-maven-plugin`. Running `mvn install` generates config files and a staging directory required by the `azure-functions-maven-plugin`. For `yourResourceGroupName`, use the value you used previously.
+
+ ```bash
+ mvn clean install -DskipTests -DtenantId=<your tenantId from shown previously> -DresourceGroup=<yourResourceGroupName> azure-functions:deploy
+ ```
+
+1. During deployment, sign in to Azure. The `azure-functions-maven-plugin` is configured to prompt for Azure sign in each time the project is deployed. Examine the build output. During the build, you'll see output similar to the following.
+
+ ```output
+ [INFO] Auth type: DEVICE_CODE
+ To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code AXCWTLGMP to authenticate.
+ ```
+
+ Do as the output says and authenticate to Azure using the browser and provided device code. Many other authentication and configuration options are available. The complete reference documentation for `azure-functions-maven-plugin` is available at [Azure Functions: Configuration Details](https://github.com/microsoft/azure-maven-plugins/wiki/Azure-Functions:-Configuration-Details).
+
+1. After authenticating, the build should continue and complete. The output should include `BUILD SUCCESS` near the end.
+
+ ```output
+ Successfully deployed the artifact to https://quarkus-demo-123451234.azurewebsites.net
+ ```
+
+ You can also find the `URL` to trigger your function on Azure in the output log.
+
+ ```output
+ [INFO] HTTP Trigger Urls:
+ [INFO] quarkus : https://quarkus-azure-functions-http-archetype-20220629204040017.azurewebsites.net/api/{*path}
+ ```
+
+ It will take a while for the deployment to complete. In the meantime, let's explore Azure Functions in the portal.
+
+## Access and Monitor the Serverless Function on Azure
+
+Sign in to the Portal and ensure you've selected the same tenant and subscription used in the Azure CLI. You can visit the portal at [https://aka.ms/publicportal](https://aka.ms/publicportal).
+
+1. Type `Function App` in the search bar at the top of the Azure portal and press Enter. Your function should be deployed and show up with the name `<yourResourceGroupName>-function-quarkus`.
+
+ :::image type="content" source="media/functions-create-first-quarkus/azure-function-app.png" alt-text="The function app in the portal":::
+
+ Select the function name. You'll see the function app's detail information such as **Location**, **Subscription**, **URL**, **Metrics**, and **App Service Plan**.
+
+1. In the detail page, select the `URL`.
+
+ :::image type="content" source="media/functions-create-first-quarkus/azure-function-app-detail.png" alt-text="The function app detail page in the portal":::
+
+ Then, you'll see that your function is up and running.
+
+ :::image type="content" source="media/functions-create-first-quarkus/azure-function-app-ready.png" alt-text="The function welcome page":::
+
+1. Invoke the `greeting` function using the `curl` command on your local terminal.
+
+ > [!IMPORTANT]
+ > Replace `YOUR_HTTP_TRIGGER_URL` with your own function URL that you find in Azure portal or output.
+
+ ```bash
+ curl -d '"Dan on Azure"' -X POST https://YOUR_HTTP_TRIGGER_URL/api/greeting
+ ```
+
+ The output should look similar to the following.
+
+ ```output
+ "Welcome to build Serverless Java with Quarkus on Azure Functions, Dan on Azure"
+ ```
+
+ You can also access the other function (`funqyHello`).
+
+ ```bash
+ curl https://YOUR_HTTP_TRIGGER_URL/api/funqyHello
+ ```
+
+ The output should be the same as you observed above.
+
+ ```output
+ "hello funqy"
+ ```
+
+ If you want to exercise the basic metrics capability in the Azure portal, try invoking the function within a shell for loop, as shown here.
+
+ ```bash
+ for i in {1..100}; do curl -d '"Dan on Azure"' -X POST https://YOUR_HTTP_TRIGGER_URL/api/greeting; done
+ ```
+
+ After a while, you'll see some metrics data in the portal, as shown next.
+
+ :::image type="content" source="media/functions-create-first-quarkus/portal-metrics.png" alt-text="Function metrics in the portal":::
+
+ Now that you've opened your Azure function in the portal, here are some more features accessible from the portal.
+
+ * Monitor the performance of your Azure function. For more information, see [Monitoring Azure Functions](/azure/azure-functions/monitor-functions).
+ * Explore telemetry. For more information, see [Analyze Azure Functions telemetry in Application Insights](/azure/azure-functions/analyze-telemetry-data).
+ * Set up logging. For more information, see [Enable streaming execution logs in Azure Functions](/azure/azure-functions/streaming-logs).
+
+## Clean up resources
+
+If you don't need these resources, you can delete them by running the following command in the Cloud Shell or on your local terminal:
+
+```azurecli
+az group delete --name <yourResourceGroupName> --yes
+```
+
+## Next steps
+
+In this guide, you learned how to:
+> [!div class="checklist"]
+>
+> * Run Quarkus dev mode
+> * Deploy a Funqy app to Azure functions using the `azure-functions-maven-plugin`
+> * Examine the performance of the function in the portal
+
+To learn more about Azure Functions and Quarkus, see the following articles and references.
+
+* [Azure Functions Java developer guide](/azure/azure-functions/functions-reference-java)
+* [Quickstart: Create a Java function in Azure using Visual Studio Code](/azure/azure-functions/create-first-function-vs-code-java)
+* [Azure Functions documentation](/azure/azure-functions/)
+* [Quarkus guide to deploying on Azure](https://quarkus.io/guides/deploying-to-azure-cloud)
azure-functions Functions Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-get-started.md
zone_pivot_groups: programming-languages-set-functions-lang-workers
## Introduction
-[Azure Functions](./functions-overview.md) allows you to implement your system's logic as event-driven, readily-available blocks of code. These code blocks are called "functions".
+[Azure Functions](./functions-overview.md) allows you to implement your system's logic as event-driven, readily available blocks of code. These code blocks are called "functions".
Use the following resources to get started.
Use the following resources to get started.
::: zone pivot="programming-language-java" | Action | Resources | | | |
-| **Create your first function** | Using one of the following tools:<br><br><li>[Visual Studio Code](./create-first-function-vs-code-java.md)<li>[Jav) |
+| **Create your first function** | Using one of the following tools:<br><br><li>[Eclipse](./functions-create-maven-eclipse.md)<li>[Gradle](./functions-create-first-java-gradle.md)<li>[IntelliJ IDEA](./functions-create-maven-intellij.md)<li>[Maven with terminal/command prompt](./create-first-function-cli-java.md)<li>[Spring Cloud](/azure/developer/jav) |
| **See a function running** | <li>[Azure Samples Browser](/samples/browse/?expanded=azure&languages=java&products=azure-functions)<li>[Azure Community Library](https://www.serverlesslibrary.net/?technology=Functions%202.x&language=Java) |
| **Explore an interactive tutorial** | <li>[Choose the best Azure serverless technology for your business scenario](/training/modules/serverless-fundamentals/)<li>[Well-Architected Framework - Performance efficiency](/training/modules/azure-well-architected-performance-efficiency/)<li>[Develop an App using the Maven Plugin for Azure Functions](/training/modules/develop-azure-functions-app-with-maven-plugin/) <br><br>See a [full listing of interactive tutorials](/training/browse/?expanded=azure&products=azure-functions). |
| **Review best practices** | <li>[Performance and reliability](./functions-best-practices.md)<li>[Manage connections](./manage-connections.md)<li>[Error handling and function retries](./functions-bindings-error-pages.md?tabs=java)<li>[Security](./security-concepts.md) |
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 01/09/2023 Last updated : 01/20/2023 # Compare Azure Government and global Azure
For feature variations and limitations, see [Cloud feature availability for US G
This section outlines variations and considerations when using Storage services in the Azure Government environment. For service availability, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=hpc-cache,managed-disks,storsimple,backup,storage&regions=usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true).
-### [Azure managed disks](../virtual-machines/managed-disks-overview.md)
-
-The following Azure managed disks **features aren't currently available** in Azure Government:
--- Zone-redundant storage (ZRS)-

### [Azure NetApp Files](../azure-netapp-files/index.yml)

For Azure NetApp Files feature availability in Azure Government and how to access the Azure NetApp Files service within Azure Government, see [Azure NetApp Files for Azure Government](../azure-netapp-files/azure-government.md).
azure-government Documentation Government Stig Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-linux-vm.md
recommendations: false Previously updated : 08/25/2022 Last updated : 01/20/2023 # Deploy STIG-compliant Linux Virtual Machines (Preview)
Azure -> Virtual Machine running Linux -> Cannot create a VM -> Troubleshoot my
:::image type="content" source="./media/stig-linux-support.png" alt-text="New support request for Linux STIG solution template":::
+## Frequently asked questions
+
+**When will STIG-compliant VMs reach general availability (GA)?** </br>
+The Azure STIG-compliant VM offering is expected to remain in Preview instead of reaching GA because of the release cadence for DISA STIGs. Every quarter, the offering is upgraded with the latest guidance, and this cadence is expected to continue. See the previous section for support options that most customers require for production workloads, including creating support tickets.
+
+**Can Azure Update Management be used with STIG images?** </br>
+Yes, [Update Management](../automation/update-management/overview.md) in Azure Automation supports STIG images.
+ ## Next steps This quickstart showed you how to deploy a STIG-compliant Linux virtual machine (Preview) on Azure or Azure Government. For more information about creating virtual machines in:
azure-government Documentation Government Stig Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-windows-vm.md
recommendations: false Previously updated : 08/25/2022 Last updated : 01/20/2023 # Deploy STIG-compliant Windows Virtual Machines (Preview)
Azure -> Virtual Machine running Windows -> Cannot create a VM -> Troubleshoot m
:::image type="content" source="./media/stig-windows-support.png" alt-text="New support request for Windows STIG solution template":::
+## Frequently asked questions
+
+**When will STIG-compliant VMs reach general availability (GA)?** </br>
+The Azure STIG-compliant VM offering is expected to remain in Preview instead of reaching GA because of the release cadence for DISA STIGs. Every quarter, the offering is upgraded with the latest guidance, and this cadence is expected to continue. See the previous section for support options that most customers require for production workloads, including creating support tickets.
+
+**Can Azure Update Management be used with STIG images?** </br>
+Yes, [Update Management](../automation/update-management/overview.md) in Azure Automation supports STIG images.
+ ## Next steps This quickstart showed you how to deploy a STIG-compliant Windows virtual machine (Preview) on Azure or Azure Government. For more information about creating virtual machines in:
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
description: This article describes the version details for the Azure Monitor ag
Previously updated : 1/5/2023 Last updated : 1/20/2023
We strongly recommend updating to the latest version at all times, or opt in
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
+| Nov-Dec 2022 | <ul><li>Support for air-gapped clouds added for [Windows MSI installer for clients](./azure-monitor-agent-windows-client.md) </li><li>Reliability improvements for using AMA with Custom Metrics destination</li><li>Performance and internal logging improvements</li></ul> | 1.11.0.0 | None |
| Oct 2022 | **Windows** <ul><li>Increased default retry timeout for data upload from 4 to 8 hours</li><li>Data quality improvements</li></ul> **Linux** <ul><li>Support for `http_proxy` and `https_proxy` environment variables for [network proxy configurations](./azure-monitor-agent-data-collection-endpoint.md#proxy-configuration) for the agent</li><li>[Text logs](./data-collection-text-log.md) <ul><li>Network proxy support enabled</li><li>Fixed missing `_ResourceId`</li><li>Increased maximum line size support to 1 MB</li></ul></li><li>Support ingestion of syslog events whose timestamp is in the future</li><li>Performance improvements</li><li>Fixed `diskio` metrics instance name dimension to use the disk mount path(s) instead of the device name(s)</li><li>Fixed world writable file issue to lock down write access to certain agent logs and configuration files stored locally on the machine</li></ul> | 1.10.0.0 | 1.24.2 | | Sep 2022 | Reliability improvements | 1.9.0.0 | None | | August 2022 | **Common updates** <ul><li>Improved resiliency: Default lookback (retry) time updated to last 3 days (72 hours) up from 60 minutes, for agent to collect data post interruption. This is subject to the default offline cache size of 10 gigabytes</li><li>Fixes the preview custom text log feature that was incorrectly removing the *TimeGenerated* field from the raw data of each event. All events are now additionally stamped with agent (local) upload time</li><li>Reliability and supportability improvements</li></ul> **Windows** <ul><li>Fixed datetime format to UTC</li><li>Fix to use default location for firewall log collection, if not provided</li><li>Reliability and supportability improvements</li></ul> **Linux** <ul><li>Support for OpenSuse 15, Debian 11 ARM64</li><li>Support for coexistence of Azure Monitor agent with legacy Azure Diagnostic extension for Linux (LAD)</li><li>Increased max-size of UDP payload for Telegraf output to prevent dimension truncation</li><li>Prevent unconfigured upload to Azure Monitor Metrics destination</li><li>Fix for disk metrics wherein *instance name* dimension will use the disk mount path(s) instead of the device name(s), to provide parity with legacy agent</li><li>Fixed *disk free MB* metric to report megabytes instead of bytes</li></ul> | 1.8.0.0 | 1.22.2 |
azure-monitor Alerts Common Schema Test Action Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-common-schema-test-action-definitions.md
You can also use `LinkToFilteredSearchResultsAPI` or `LinkToSearchResultsAPI` to
} ```
+#### monitoringService = Actual Cost Budget
+
+**Sample values**
+```json
+{
+ "schemaId": "azureMonitorCommonAlertSchema",
+ "data": {
+ "essentials": {
+ "monitoringService": "CostAlerts",
+ "firedDateTime": "2022-12-07T21:13:20.645Z",
+ "description": "Your spend for budget Test_actual_cost_budget is now $11,111.00 exceeding your specified threshold $25.00.",
+ "essentialsVersion": "1.0",
+ "alertContextVersion": "1.0",
+ "alertId": "/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.CostManagement/alerts/Test_Alert",
+ "alertRule": null,
+ "severity": null,
+ "signalType": null,
+ "monitorCondition": null,
+ "alertTargetIDs": null,
+ "configurationItems": ["budgets"],
+ "originAlertId": null
+ },
+ "alertContext": {
+ "AlertCategory": "budgets",
+ "AlertData": {
+ "Scope": "/subscriptions/11111111-1111-1111-1111-111111111111/",
+ "ThresholdType": "Actual",
+ "BudgetType": "Cost",
+ "BudgetThreshold": "$50.00",
+ "NotificationThresholdAmount": "$25.00",
+ "BudgetName": "Test_actual_cost_budget",
+ "BudgetId": "/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.Consumption/budgets/Test_actual_cost_budget",
+ "BudgetStartDate": "2022-11-01",
+ "BudgetCreator": "test@sample.test",
+ "Unit": "USD",
+ "SpentAmount": "$11,111.00"
+ }
+ }
+ }
+}
+```
+#### monitoringService = Forecasted Budget
+
+**Sample values**
+```json
+{
+ "schemaId": "azureMonitorCommonAlertSchema",
+ "data": {
+ "essentials": {
+ "monitoringService": "CostAlerts",
+ "firedDateTime": "2022-12-07T21:13:29.576Z",
+ "description": "The total spend for your budget, Test_forcasted_budget, is forecasted to reach $1111.11 before the end of the period. This amount exceeds your specified budget threshold of $50.00.",
+ "essentialsVersion": "1.0",
+ "alertContextVersion": "1.0",
+ "alertId": "/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.CostManagement/alerts/Test_Alert",
+ "alertRule": null,
+ "severity": null,
+ "signalType": null,
+ "monitorCondition": null,
+ "alertTargetIDs": null,
+ "configurationItems": ["budgets"],
+ "originAlertId": null
+ },
+ "alertContext": {
+ "AlertCategory": "budgets",
+ "AlertData": {
+ "Scope": "/subscriptions/11111111-1111-1111-1111-111111111111/",
+ "ThresholdType": "Forecasted",
+ "BudgetType": "Cost",
+ "BudgetThreshold": "$50.00",
+ "NotificationThresholdAmount": "$50.00",
+ "BudgetName": "Test_forcasted_budget",
+ "BudgetId": "/subscriptions/11111111-1111-1111-1111-111111111111/providers/Microsoft.Consumption/budgets/Test_forcasted_budget",
+ "BudgetStartDate": "2022-11-01",
+ "BudgetCreator": "test@sample.test",
+ "Unit": "USD",
+ "SpentAmount": "$999.99",
+ "ForecastedTotalForPeriod": "$1111.11"
+ }
+ }
+ }
+}
+```
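Where these payloads land in a webhook or Azure Function, the budget fields can be read straight from the JSON body. Here's a minimal Python sketch for illustration only; the handler name and print statements are assumptions, while the field names come from the samples above:

```python
import json

def handle_cost_alert(body: str) -> None:
    # Field names below match the Actual Cost / Forecasted Budget samples above
    payload = json.loads(body)
    essentials = payload["data"]["essentials"]
    alert_data = payload["data"]["alertContext"]["AlertData"]

    if essentials["monitoringService"] != "CostAlerts":
        return

    print(f"Budget: {alert_data['BudgetName']} ({alert_data['ThresholdType']})")
    print(f"Spent: {alert_data['SpentAmount']} of threshold {alert_data['NotificationThresholdAmount']}")

    # Only forecasted-budget alerts carry this field
    forecast = alert_data.get("ForecastedTotalForPeriod")
    if forecast is not None:
        print(f"Forecasted total for period: {forecast}")
```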
+ #### monitoringService = Smart Alert **Sample values**
azure-monitor Alerts Non Common Schema Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-non-common-schema-definitions.md
The non-common alert schema lets you customize the consumption experience for al
} } ```
+#### `monitoringService` = `Actual Cost Budget` or `Forecasted Budget`
+
+**Sample values**
+```json
+{
+ "schemaId": "AIP Budget Notification",
+ "data": {
+ "SubscriptionName": "test-subscription",
+ "SubscriptionId": "11111111-1111-1111-1111-111111111111",
+ "EnrollmentNumber": "",
+ "DepartmentName": "test-budgetDepartmentName",
+ "AccountName": "test-budgetAccountName",
+ "BillingAccountId": "",
+ "BillingProfileId": "",
+ "InvoiceSectionId": "",
+ "ResourceGroup": "test-RG",
+ "SpendingAmount": "1111.32",
+ "BudgetStartDate": "2023-01-20T23:49:40.216Z",
+ "Budget": "10000",
+ "Unit": "USD",
+ "BudgetCreator": "email@domain.com",
+ "BudgetName": "test-budgetName",
+ "BudgetType": "Cost",
+ "NotificationThresholdAmount": "8000.0"
+ }
+}
+```
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Legacy table: customMetrics
|user_AuthenticatedId|string|UserAuthenticatedId|string| |user_Id|string|UserId|string| |value|real|(removed)||
-|valueCount|int|ValueCount|int|
+|valueCount|int|ItemCount|int|
|valueMax|real|ValueMax|real| |valueMin|real|ValueMin|real| |valueSum|real|ValueSum|real|
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Each event is stored in the PT1H.json file with the following format. This forma
If you're collecting activity logs using the legacy collection method, we recommend you [export activity logs to your Log Analytics workspace](#send-to-log-analytics-workspace) and disable the legacy collection using the [Data Sources - Delete API](/rest/api/loganalytics/data-sources/delete?tabs=HTTP) as follows:
-1. List all data sources connected to the workspace using the [Data Sources - List By Workspace API](/rest/api/loganalytics/data-sources/list-by-workspace?tabs=HTTP#code-try-0) and filter for activity logs by setting `filter=kind='AzureActivityLog'`.
+1. List all data sources connected to the workspace using the [Data Sources - List By Workspace API](/rest/api/loganalytics/data-sources/list-by-workspace?tabs=HTTP#code-try-0) and filter for activity logs by setting `kind eq 'AzureActivityLog'`, as sketched after the screenshot below.
:::image type="content" source="media/activity-log/data-sources-list-by-workspace-api.png" alt-text="Screenshot showing the configuration of the Data Sources - List By Workspace API." lightbox="media/activity-log/data-sources-list-by-workspace-api.png":::
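As an illustration of step 1, here's a minimal Python sketch that calls the same REST API and filters on the data source kind. The `api-version` value is an assumption; check the API reference for the current one:

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder values -- substitute your own
subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
workspace_name = "<workspace-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.OperationalInsights"
    f"/workspaces/{workspace_name}/dataSources"
)
params = {"$filter": "kind eq 'AzureActivityLog'", "api-version": "2020-08-01"}
response = requests.get(url, params=params, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()

# Each returned data source ID can then be passed to the Data Sources - Delete API
for data_source in response.json().get("value", []):
    print(data_source["id"])
```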
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
The only requirement to enable Azure Monitor managed service for Prometheus is t
The primary method for visualizing Prometheus metrics is [Azure Managed Grafana](../../managed-grafan#link-a-grafana-workspace). Connect your Azure Monitor workspace to a Grafana workspace so that it can be used as a data source in a Grafana dashboard. You then have access to multiple prebuilt dashboards that use Prometheus metrics and the ability to create any number of custom dashboards.

## Rules and alerts
-Azure Monitor managed service for Prometheus supports recording rules and alert rules using PromQL queries. Metrics recorded by recording rules are stored back in the Azure Monitor workspace and can be queried by dashboard or by other rules. Alerts fired by alert rules can trigger actions or notifications, as defined in the [action groups](../alerts/action-groups.md) configured for the alert rule. You can also view fired and resolved Prometheus alerts in the Azure portal along with other alert types. For your AKS cluster, a set of [predefined Prometheus alert rules](../containers/container-insights-metric-alerts.md) and [recording rules ](./prometheus-metrics-scrape-default.md#recording-rules)is provided to allow easy quick start.
+Azure Monitor managed service for Prometheus supports recording rules and alert rules using PromQL queries. Metrics recorded by recording rules are stored back in the Azure Monitor workspace and can be queried by dashboard or by other rules. Alert rules and recording rules can be created and managed using [Azure Managed Prometheus rule groups](prometheus-rule-groups.md). For your AKS cluster, a set of [predefined Prometheus alert rules](../containers/container-insights-metric-alerts.md) and [recording rules](./prometheus-metrics-scrape-default.md#recording-rules) is provided to help you get started quickly.
+
+Alerts fired by alert rules can trigger actions or notifications, as defined in the [action groups](../alerts/action-groups.md) configured for the alert rule. You can also view fired and resolved Prometheus alerts in the Azure portal along with other alert types.
## Limitations See [Azure Monitor service limits](../service-limits.md#prometheus-metrics) for performance related service limits for Azure Monitor workspaces.
Following are links to Prometheus documentation.
- [Enable Azure Monitor managed service for Prometheus](prometheus-metrics-enable.md). - [Collect Prometheus metrics for your AKS cluster](../containers/container-insights-prometheus-metrics-addon.md). - [Configure Prometheus alerting and recording rules groups](prometheus-rule-groups.md).-- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
+- [Customize scraping of Prometheus metrics](prometheus-metrics-scrape-configuration.md).
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md
Now that you have resources connected to your AMPLS, create a private endpoint t
5. On the Virtual Network tab: 1. Choose the **virtual network** and **subnet** that you want to connect to your Azure Monitor resources.
+ 1. For **Network policy for private endpoints**, select **edit** if you want to apply network security groups and/or route tables to the subnet that contains the private endpoint. In **Edit subnet network policy**, select the checkboxes next to **Network security groups** and **Route Tables**, and then select **Save**.
+
+ For more information, see [Manage network policies for private endpoints](../../private-link/disable-private-endpoint-network-policy.md).
+
+ 1. For Private IP configuration, by default, **Dynamically allocate IP address** is selected. If you want to assign a static IP address, select **Statically allocate IP address** and then enter a **Name** and **Private IP**.
+ 1. Optionally, you can select or create an **Application security group**. Application security groups allow you to group virtual machines and define network security policies based on those groups.
1. Select **Next: DNS >**. :::image type="content" source="./media/private-link-security/ampls-select-private-endpoint-create-5.png" alt-text="Screenshot of the Create a private endpoint page in the Azure portal with the Virtual Network tab selected." lightbox="./media/private-link-security/ampls-select-private-endpoint-create-5.png":::
azure-vmware Concepts Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-identity.md
The following permissions are assigned to the **cloudadmin** user in Azure VMwar
> [!NOTE] > **VMware NSX-T Data Center cloudadmin user** on Azure VMware Solution is not the same as the **cloudadmin user** mentioned in the VMware product documentation.
+> The following permissions apply to the NSX-T Policy API. Manager API functionality may be limited.
| Category | Type | Operation | Permission | |--|--|-||
cognitive-services Gaming Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/gaming-concepts.md
+
+ Title: Game development with Azure Cognitive Service for Speech - Speech service
+
+description: Concepts for game development with Azure Cognitive Service for Speech.
+ Last updated : 01/20/2023
+# Game development with Azure Cognitive Service for Speech
+
+Azure Cognitive Services for Speech can be used to improve various gaming scenarios, both in- and out-of-game.
+
+Here are a few Speech features to consider for flexible and interactive game experiences:
+
+- Bring everyone into the conversation by synthesizing audio from text, or by displaying text from audio.
+- Make the game more accessible for players who are unable to read text in a particular language, including young players who haven't learned to read and write. Players can listen to storylines and instructions in their preferred language.
+- Create game avatars and non-playable characters (NPC) that can initiate or participate in a conversation in-game.
+- Prebuilt neural voice provides highly natural out-of-the-box speech across a large portfolio of languages and voices.
+- Custom neural voice for creating a voice that stays on-brand with consistent quality and speaking style. You can add emotions, accents, nuances, laughter, and other paralinguistic sounds and expressions.
+- Use game dialogue prototyping to shorten the time and money spent in production and get the game to market sooner. You can rapidly swap lines of dialog and listen to variations in real time to iterate the game content.
+
+You can use the [Speech SDK](speech-sdk.md) or [Speech CLI](spx-overview.md) for real-time low latency speech-to-text, text-to-speech, language identification, and speech translation. You can also use the [Batch transcription API](batch-transcription.md) to transcribe pre-recorded speech to text. To synthesize a large volume of text input (long and short) to speech, use the [Batch synthesis API](batch-synthesis.md).
+
+For information about locale and regional availability, see [Language and voice support](language-support.md) and [Region support](regions.md).
+
+## Text-to-speech
+
+Help bring everyone into the conversation by converting text messages to audio using [Text-to-Speech](text-to-speech.md) for scenarios such as game dialogue prototyping, greater accessibility, or non-playable character (NPC) voices. Text-to-Speech includes [prebuilt neural voice](language-support.md?tabs=tts#prebuilt-neural-voices) and [custom neural voice](language-support.md?tabs=tts#custom-neural-voice) features. Prebuilt neural voice provides highly natural out-of-the-box speech across a large portfolio of languages and voices. Custom neural voice is an easy-to-use self-service capability for creating a highly natural custom voice.
+
+When enabling this functionality in your game, keep in mind the following benefits:
+
+- Voices and languages supported - A large portfolio of [locales and voices](language-support.md?tabs=tts#supported-languages) are supported. You can also [specify multiple languages](speech-synthesis-markup-voice.md#adjust-speaking-languages) for Text-to-Speech output. For [custom neural voice](custom-neural-voice.md), you can [choose to create](how-to-custom-voice-create-voice.md?tabs=neural#choose-a-training-method) different languages from single language training data.
+- Emotional styles supported - [Emotional tones](language-support.md?tabs=tts#voice-styles-and-roles), such as cheerful, angry, sad, excited, hopeful, friendly, unfriendly, terrified, shouting, and whispering. You can [adjust the speaking style](speech-synthesis-markup-voice.md#speaking-styles-and-roles), style degree, and role at the sentence level.
+- Visemes supported - You can use visemes during real-time synthesizing to control the movement of 2D and 3D avatar models, so that the mouth movements are perfectly matched to synthetic speech. For more information, see [Get facial position with viseme](how-to-speech-synthesis-viseme.md).
+- Fine-tuning Text-to-Speech output with Speech Synthesis Markup Language (SSML) - With SSML, you can customize Text-to-Speech outputs, with richer voice tuning supports. For more information, see [Speech Synthesis Markup Language (SSML) overview](speech-synthesis-markup.md).
+- Audio outputs - Each prebuilt neural voice model is available at 24 kHz and high-fidelity 48 kHz. If you select the 48-kHz output format, the high-fidelity voice model with 48 kHz is invoked accordingly. Sample rates other than 24 kHz and 48 kHz can be obtained through upsampling or downsampling when synthesizing. For example, 44.1 kHz is downsampled from 48 kHz. Each audio format incorporates a bitrate and encoding type. For more information, see the [supported audio formats](rest-text-to-speech.md?tabs=streaming#audio-outputs). For more information on 48-kHz high-quality voices, see [this introduction blog](https://techcommunity.microsoft.com/t5/ai-cognitive-services-blog/azure-neural-tts-voices-upgraded-to-48khz-with-hifinet2-vocoder/ba-p/3665252).
+
+For an example, see the [Text-to-speech quickstart](get-started-text-to-speech.md).
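To give a flavor of what the quickstart covers, here's a minimal Speech SDK sketch in Python that speaks one line of NPC dialogue on the default speaker. The key, region, voice name, and text are placeholders, not values from this article:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- substitute your Speech resource key and region
speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"  # any prebuilt neural voice

synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Welcome back, adventurer!").get()

if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis finished; audio was played on the default speaker.")
```

For finer control over style, role, or pronunciation, the same synthesizer also accepts SSML through `speak_ssml_async`.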
+
+## Speech-to-text
+
+You can use [speech-to-text](speech-to-text.md) to display text from the spoken audio in your game. For an example, see the [Speech-to-text quickstart](get-started-speech-to-text.md).
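Here's a comparable Python sketch, again with placeholder credentials, that transcribes a single utterance from the default microphone:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Capture one utterance from the default microphone and print the transcript
result = recognizer.recognize_once_async().get()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print(result.text)
```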
+
+## Language identification
+
+With [language identification](language-identification.md), you can detect the language of the chat string submitted by the player.
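For illustration, here's a hedged Python sketch of at-start language detection; the candidate languages are assumptions for this example:

```python
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>", region="<your-region>")

# Candidate languages the service chooses between (assumed for this sketch)
auto_detect = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(
    languages=["en-US", "de-DE", "ja-JP"]
)
recognizer = speechsdk.SpeechRecognizer(
    speech_config=speech_config,
    auto_detect_source_language_config=auto_detect,
)

result = recognizer.recognize_once_async().get()
if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    detected = speechsdk.AutoDetectSourceLanguageResult(result).language
    print(f"Detected language: {detected}")
```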
+
+## Speech translation
+
+It's not unusual that players in the same game session natively speak different languages and may appreciate receiving both the original message and its translation. You can use [speech translation](speech-translation.md) to translate text between languages so players across the world can communicate with each other in their native language.
+
+For an example, see the [Speech translation quickstart](get-started-speech-translation.md).
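As a sketch of the pattern, here's a minimal Python example that recognizes English speech and prints a French translation; the languages and credentials are placeholders:

```python
import azure.cognitiveservices.speech as speechsdk

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription="<your-key>", region="<your-region>"
)
translation_config.speech_recognition_language = "en-US"
translation_config.add_target_language("fr")  # translate English speech to French

recognizer = speechsdk.translation.TranslationRecognizer(translation_config=translation_config)
result = recognizer.recognize_once_async().get()

if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print(f"Original:   {result.text}")
    print(f"Translated: {result.translations['fr']}")
```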
+
+> [!NOTE]
+> Besides the Speech service, you can also use the [Translator service](/azure/cognitive-services/translator/translator-overview). To execute text translation between supported source and target languages in real time, see [Text translation](/azure/cognitive-services/translator/text-translation-overview).
+
+## Next steps
+
+* [Text-to-speech quickstart](get-started-text-to-speech.md)
+* [Speech-to-text quickstart](get-started-speech-to-text.md)
+* [Speech translation quickstart](get-started-speech-translation.md)
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
Cost analysis is your tool for interactive analytics and insights. You've seen t
The first time you open the cost analysis preview, you'll see a list of all views. When you return, you'll see a list of the recently used views to help you get back to where you left off quicker than ever. You can pin any view or even rename or subscribe to alerts for your saved views.
-The recent and pinned views can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
+**Recent and pinned views are available by default in the cost analysis preview.** Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback.
<a name="aksnestedtable"></a>
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Title: Create custom Azure security policies in Microsoft Defender for Cloud
description: Azure custom policy definitions monitored by Microsoft Defender for Cloud. Previously updated : 07/20/2022 Last updated : 01/22/2023 zone_pivot_groups: manage-asc-initiatives
You can view your custom initiatives organized by controls, similar to the contr
:::image type="content" source="media/custom-security-policies/accessing-security-policy-page.png" alt-text="Screenshot of accessing the security policy page in Microsoft Defender for Cloud." lightbox="media/custom-security-policies/accessing-security-policy-page.png":::
-1. In the Add custom initiatives page, review the list of custom policies already created in your organization.
+1. Review the list of custom policies already created in your organization, and select **Add** to assign a policy to your subscription.
- - If you see one you want to assign to your subscription, select **Add**.
- - If there isn't an initiative in the list that meets your needs, create a new custom initiative:
+If there isn't an initiative in the list that meets your needs, you can create one.
- 1. Select **Create new**.
- 1. Enter the definition's location and name.
- 1. Select the policies to include and select **Add**.
- 1. Enter any desired parameters.
- 1. Select **Save**.
- 1. In the Add custom initiatives page, select refresh. Your new initiative will be available.
- 1. Select **Add** and assign it to your subscription.
+**To create a new custom initiative**:
+
+1. Select **Create new**.
+
+1. Enter the definition's location and custom name.
+
+ > [!NOTE]
+ > Custom initiatives shouldn't have the same name as other initiatives (custom or built-in). If you create a custom initiative with the same name, it causes a conflict in the information displayed in the dashboard.
+
+1. Select the policies to include and select **Add**.
+
+1. Enter any desired parameters.
+
+1. Select **Save**.
+
+1. In the Add custom initiatives page, select refresh. Your new initiative will be available.
+
+1. Select **Add** and assign it to your subscription.
![Create or add a policy.](media/custom-security-policies/create-or-add-custom-policy.png)
You can view your custom initiatives organized by controls, similar to the contr
> [!NOTE] > Creating new initiatives requires subscription owner credentials. For more information about Azure roles, see [Permissions in Microsoft Defender for Cloud](permissions.md).
- Your new initiative takes effect and you can see the impact in the following two ways:
+ Your new initiative takes effect and you can see the results in the following two ways:
* From the Defender for Cloud menu, select **Regulatory compliance**. The compliance dashboard opens to show your new custom initiative alongside the built-in initiatives.
The metadata should be added to the policy definition for a policy that is part
}, ```
-Below is an example of a custom policy including the metadata/securityCenter property:
+Here's another example of a custom policy including the metadata/securityCenter property:
```json {
defender-for-cloud Governance Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/governance-rules.md
If there are existing recommendations that match the definition of the governanc
> - Create and apply rules on multiple scopes at once using management scopes cross cloud. > - Check effective rules on selected scope using the scope filter.
+To view the effective rules on a specific scope, use the "scope" filter and select the desired scope.
+
+Conflicting rules are applied in priority order. For example, rules on a management scope (Azure management groups, AWS master accounts, and GCP organizations) take effect before rules on individual scopes (for example, Azure subscriptions, AWS accounts, or GCP projects).
+ ## Manually assigning owners and due dates for recommendation remediation For every resource affected by a recommendation, you can assign an owner and a due date so that you know who needs to implement the security changes to improve your security posture and when they're expected to do it by. You can also apply a grace period so that the resources that are given a due date don't impact your secure score unless they become overdue.
defender-for-iot How To Connect Sensor By Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-connect-sensor-by-proxy.md
- Title: Connect sensors with a proxy (legacy)
-description: Learn how to configure Microsoft Defender for IoT to communicate with a sensor through a proxy with no direct internet access (legacy procedure).
- Previously updated : 02/06/2022--
-# Connect Microsoft Defender for IoT sensors without direct internet access by using a proxy (version 10.x)
-
-This article describes how to connect Microsoft Defender for IoT sensors to Defender for IoT via a proxy, with no direct internet access.
-> [!NOTE]
-> This article is only relevant if you are using a OT sensor version 10.x via a private IoT Hub.
-> Starting with sensor software versions 22.1.x, updated connection methods are supported that don't require customers to have their own IoT Hub. For more information, see [Sensor connection methods](architecture-connections.md) and [Connect your sensors to Microsoft Defender for IoT](connect-sensors.md).
--
-## Overview
-
-Connect the sensor with a forwarding proxy that has HTTP tunneling, and uses the HTTP CONNECT command for connectivity. The instructions here are given uses the open-source Squid proxy, any other proxy that supports CONNECT can be used.
-
-The proxy uses an encrypted SSL tunnel to transfer data from the sensors to the service. The proxy doesn't inspect, analyze, or cache any data.
-
-The following diagram shows data going from Microsoft Defender for IoT to the IoT sensor in the OT segment to cloud via a proxy located in the IT network, and industrial DMZ.
--
-## Set up your system
-
-For this scenario we'll be installing, and configuring the latest version of [Squid](http://www.squid-cache.org/) on an Ubuntu 18 server (additional to the OT sensor).
-
-> [!Note]
-> Microsoft Defender for IoT does not offer support for configuring Squid or any other proxy server. We recommend to follow the up to date instructions as applicable to the proxy software in use on your network.
-
-**To install Squid proxy on an Ubuntu 18 server**:
-
-1. Sign in to your designated proxy Ubuntu machine.
-
-1. Launch a terminal window.
-
-1. Update your software to the latest version using the following command.
-
- ```bash
- sudo apt-get update
- ```
-
-1. Install the Squid package using the following command.
-
- ```bash
- sudo apt-get install squid
- ```
-
-1. Locate the squid configuration file that is located at `/etc/squid/squid.conf`, and `/etc/squid/conf.d/`.
-
-1. Make a backup of the original file using the following command.
-
- ```bash
- sudo cp -v /etc/squid/squid.conf{,.factory}'/etc/squid/squid.conf' -> '/etc/squid/squid.conf.factory sudo nano /etc/squid/squid.conf
- ```
-
-1. Open `/etc/squid/squid.conf` in a text editor.
-
-1. Search for `# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS`.
-
-1. Add `acl sensor1 src <sensor-ip>`, and `http_access allow sensor1` into the file.
-
- :::image type="content" source="media/how-to-connect-sensor-by-proxy/add-lines.png" alt-text="Add the following two lines into the text and save the file.":::
-
-1. (Optional) Add more sensors by adding an extra line for each sensor.
-
-1. Enable the Squid service to start at launch with the following command.
-
- ```bash
- sudo systemctl enable squid
- ```
-
-## Set up a sensor to use Squid
-
-This section describes how to set up a sensor to use Squid.
-
-**To set up a sensor to use Squid**:
-
-1. Sign in to the sensor.
-
-1. Navigate to **System settings** > **Basic**> **Sensor Network Settings**.
-
-1. Turn on the **Enable Proxy** toggle.
-
-1. Enter the proxy address.
-
-1. Enter a port. The default port is `3128`.
-
-1. (Optional) Enter a proxy user, and password.
-
-1. Select **Save**.
-
-## Next steps
-
-For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md).
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
For more information, see the [Microsoft Defender for IoT for device builders do
Defender for IoT routes all traffic from all European regions to the *West Europe* regional datacenter. It routes traffic from all remaining regions to the *East US* regional datacenter.
-If you're using Defender for IoT OT monitoring software earlier than [22.1](release-notes.md#versions-222x) and are connecting through your own IoT Hub, the IoT Hub supported regions are also relevant for your organization. For more information, see [IoT Hub supported regions](https://azure.microsoft.com/global-infrastructure/services/?products=iot-hub).
- ## Next steps > [!div class="nextstepaction"]
governance First Query Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-dotnet.md
Title: "Quickstart: Your first .NET Core query"
-description: In this quickstart, you follow the steps to enable the Resource Graph NuGet packages for .NET Core and run your first query.
Previously updated : 01/19/2023
+ Title: "Quickstart: Your first .NET query"
+description: In this quickstart, you follow the steps to enable the Resource Graph NuGet packages for .NET and run your first query.
Last updated : 01/20/2023
-# Quickstart: Run your first Resource Graph query using .NET Core
+# Quickstart: Run your first Resource Graph query using .NET
> [!NOTE] > Special thanks to [Glenn Block](https://github.com/glennblock) for contributing > the code used in this quickstart.
-The first step to using Azure Resource Graph is to check that the required packages for .NET Core
-are installed. This quickstart walks you through the process of adding the packages to your .NET
-Core installation.
+The first step to using Azure Resource Graph is to check that the required NuGet packages are installed. This quickstart walks you through the process of adding the packages to your .NET application.
-At the end of this process, you'll have added the packages to your .NET Core installation and run
-your first Resource Graph query.
+At the end of this process, you'll have added the packages to your .NET application and run your first Resource Graph query.
## Prerequisites
+- [.NET SDK 6.0 or later](https://dotnet.microsoft.com/download/dotnet)
- An Azure subscription. If you don't have an Azure subscription, create a
- [free](https://azure.microsoft.com/free/) account before you begin.
+ [free](https://azure.microsoft.com/free/dotnet/) account before you begin.
- An Azure service principal, including the _clientId_ and _clientSecret_. If you don't have a service principal for use with Resource Graph or want to create a new one, see [Azure management libraries for .NET authentication](/dotnet/azure/sdk/authentication#mgmt-auth).
- Skip the step to install the .NET Core packages as we'll do that in the next steps.
+ Skip the step to install the NuGet packages, as we'll do that in the next steps.
## Create the Resource Graph project
-To enable .NET Core to query Azure Resource Graph, create a new console application and install the
+To enable .NET to query Azure Resource Graph, create a new console application and install the
required packages.
-1. Check that the latest .NET Core is installed (at least **3.1.5**). If it isn't yet installed,
- download it at [dotnet.microsoft.com](https://dotnet.microsoft.com/download/dotnet-core).
-
-1. Initialize a new .NET Core console application named "argQuery":
+1. Create a new .NET console application named "argQuery":
```dotnetcli dotnet new console --name "argQuery" ```
-1. Change directories into the new project folder and install the required packages for Azure Resource Graph:
+1. Change directories into the new project folder. Install the packages for the Azure Resource Graph and Azure Identity client libraries:
```dotnetcli
- # Add the Resource Graph package for .NET Core
- dotnet add package Azure.ResourceManager.ResourceGraph --version 1.0.0
-
- # Add the Azure app auth package for .NET Core
- dotnet add package Microsoft.Azure.Services.AppAuthentication --version 1.5.0
+ dotnet add package Azure.ResourceManager.ResourceGraph
+ dotnet add package Azure.Identity
``` 1. Replace the default `Program.cs` with the following code and save the updated file:
-```csharp
-using System;
-using System.Collections.Generic;
-using System.Threading.Tasks;
-using Azure.Core;
-using Azure.Identity;
-using Azure.ResourceManager;
-using Azure.ResourceManager.Resources;
-using Azure.ResourceManager.ResourceGraph;
-using Azure.ResourceManager.ResourceGraph.Models;
-
-namespace argQuery
-{
- class Program
- {
- static async Task Main(string[] args)
- {
- string strTenant = args[0];
- string strClientId = args[1];
- string strClientSecret = args[2];
- string strQuery = args[3];
-
- var client = new ArmClient(new ClientSecretCredential(strTenant, strClientId, strClientSecret));
- var tenant = client.GetTenants().First();
- //Console.WriteLine($"{tenant.Id} {tenant.HasData}");
- var queryContent = new ResourceQueryContent(strQuery);
- var response = tenant.GetResources(queryContent);
- var result = response.Value;
- Console.WriteLine($"Count: {result.Data.ToString()}");
- }
- }
-}
-```
+ ```csharp
+ using Azure.Identity;
+ using Azure.ResourceManager;
+ using Azure.ResourceManager.ResourceGraph;
+ using Azure.ResourceManager.ResourceGraph.Models;
+
+ string strTenant = args[0];
+ string strClientId = args[1];
+ string strClientSecret = args[2];
+ string strQuery = args[3];
+
+ var client = new ArmClient(
+ new ClientSecretCredential(strTenant, strClientId, strClientSecret));
+ var tenant = client.GetTenants().First();
+ //Console.WriteLine($"{tenant.Id} {tenant.HasData}");
+ var queryContent = new ResourceQueryContent(strQuery);
+ var response = tenant.GetResources(queryContent);
+ var result = response.Value;
+ Console.WriteLine($"Count: {result.Data.ToString()}");
+ ```
> [!NOTE] > This code creates a tenant-based query. To limit the query to a
namespace argQuery
## Run your first Resource Graph query
-With the .NET Core console application built and published, it's time to try out a simple
+With the .NET console application built and published, it's time to try out a simple
tenant-based Resource Graph query. The query returns the first five Azure resources with the **Name** and **Resource Type** of each resource.
-In each call to `argQuery`, replace the variables with your own
-values:
+In each call to `argQuery`, replace the variables with your own values:
- `{tenantId}` - Replace with your tenant ID - `{clientId}` - Replace with the client ID of your service principal
values:
1. Change directories to the `{run-folder}` you defined with the earlier `dotnet publish` command.
-1. Run your first Azure Resource Graph query using the compiled .NET Core console application:
+1. Run your first Azure Resource Graph query using the compiled .NET console application:
```bash argQuery "{tenantId}" "{clientId}" "{clientSecret}" "Resources | project name, type | limit 5"
top five results.
## Clean up resources
-If you wish to remove the .NET Core console application and installed packages, you can do so by
+If you wish to remove the .NET console application and installed packages, you can do so by
deleting the `argQuery` project folder. ## Next steps
-In this quickstart, you've created a .NET Core console application with the required Resource Graph
+In this quickstart, you've created a .NET console application with the required Resource Graph
packages and run your first query. To learn more about the Resource Graph language, continue to the query language details page.
iot-edge Gpu Acceleration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/gpu-acceleration.md
Windows 10 users must also [install WSL](/windows/wsl/install) because some of t
## Enable GPU acceleration in your Azure IoT Edge Linux on Windows deployment Once system setup is complete, you are ready to [create your deployment of Azure IoT Edge for Linux on Windows](how-to-install-iot-edge-on-windows.md). During this process, you must [enable GPU](reference-iot-edge-for-linux-on-windows-functions.md#deploy-eflow) as part of EFLOW deployment.
-For example, the command below creates a virtual machine with an NVIDIA A2 GPU assigned.
+For example, the following commands create a GPU-enabled virtual machine with either an NVIDIA A2 GPU or Intel Iris Xe Graphics card.
```powershell
-Deploy-Eflow -gpuPassthroughType "DirectDeviceAssignment" -gpuCount 1 -gpuName "NVIDIA A2"
+#Deploys EFLOW with NVIDIA A2 assigned to the EFLOW VM
+Deploy-Eflow -gpuPassthroughType DirectDeviceAssignment -gpuCount 1 -gpuName "NVIDIA A2"
+
+#Deploys EFLOW with Intel(R) Iris(R) Xe Graphics assigned to the EFLOW VM
+Deploy-Eflow -gpuPassthroughType ParaVirtualization -gpuCount 1 -gpuName "Intel(R) Iris(R) Xe Graphics"
+```
+
+To find the name of your GPU, you can run the following command or look for Display adapters in Device Manager.
+```powershell
+(Get-WmiObject win32_VideoController).caption
``` Once installation is complete, you are ready to deploy and run GPU-accelerated Linux modules through Azure IoT Edge for Linux on Windows.
+## Configure GPU acceleration in an existing Azure IoT Edge Linux on Windows deployment
+Assigning the GPU at deployment time results in the most straightforward experience. However, to enable or disable the GPU after deployment, use the `Set-EflowVM` command. When using `Set-EflowVM`, the default value is used for any parameter not specified. For example,
+
+```powershell
+#Deploys EFLOW without a GPU assigned to the EFLOW VM
+Deploy-Eflow -cpuCount 4 -memoryInMB 16384
+
+#Assigns NVIDIA A2 GPU to the existing deployment (cpu and memory must still be specified, otherwise they will be set to the default values)
+Set-EflowVM -cpuCount 4 -memoryInMB 16384 -gpuName "NVIDIA A2" -gpuPassthroughType DirectDeviceAssignment -gpuCount 1
+
+#Reduces the cpuCount and memory (GPU must still be specified, otherwise the GPU will be removed)
+Set-EflowVM -cpuCount 2 -memoryInMB 4096 -gpuName "NVIDIA A2" -gpuPassthroughType DirectDeviceAssignment -gpuCount 1
+
+#Removes NVIDIA A2 GPU from the existing deployment
+Set-EflowVM -cpuCount 2 -memoryInMB 4096
+```
## Next steps
-* Try our [GPU-enabled sample featuring Vision on Edge](https://github.com/Azure-Samples/azure-intelligent-edge-patterns/blob/master/factory-ai-vision/Tutorial/Eflow.md), a solution template illustrating how to build your own vision-based machine learning application.
+### Get Started with Samples
+Visit our [EFLOW Samples Page](https://github.com/Azure/iotedge-eflow/tree/main/samples) to discover several GPU samples that you can try and use. These samples illustrate common manufacturing and retail scenarios such as defect detection, worker safety, and inventory management. These open-source samples can serve as a solution template for building your own vision-based machine learning application.
+
+### Learn More from our Partners
+Several GPU vendors have provided user guides on getting the most out of their hardware and software with EFLOW.
+* Learn how to run Intel OpenVINO™ applications on EFLOW by following [Intel's guide on iGPU with Azure IoT Edge for Linux on Windows (EFLOW) & OpenVINO™ Toolkit](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Witness-the-power-of-Intel-iGPU-with-Azure-IoT-Edge-for-Linux-on/post/1382405) and [reference implementations](https://www.intel.com/content/www/us/en/developer/articles/technical/deploy-reference-implementation-to-azure-iot-eflow.html).
+* Get started with deploying CUDA-accelerated applications on EFLOW by following [NVIDIA's EFLOW User Guide for GeForce/Quadro/RTX GPUs](https://docs.nvidia.com/cuda/eflow-users-guide/https://docsupdatetracker.net/index.html).
-* Discover how to run Intel OpenVINOΓäó applications on EFLOW by following [Intel's guide on iGPU with Azure IoT Edge for Linux on Windows (EFLOW) & OpenVINOΓäó Toolkit](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Witness-the-power-of-Intel-iGPU-with-Azure-IoT-Edge-for-Linux-on/post/1382405) and [reference implementations](https://www.intel.com/content/www/us/en/developer/articles/technical/deploy-reference-implementation-to-azure-iot-eflow.html).
+> [!NOTE]
+> This guide does not cover DDA-based GPUs such as NVIDIA T4 or A2.
-* Learn more about GPU passthrough technologies by visiting the [DDA documentation](/windows-server/virtualization/hyper-v/plan/plan-for-gpu-acceleration-in-windows-server#discrete-device-assignment-dda) and [GPU-PV blog post](https://devblogs.microsoft.com/directx/directx-heart-linux/#gpu-virtualization).
+### Dive into the Technology
+Learn more about GPU passthrough technologies by visiting the [DDA documentation](/windows-server/virtualization/hyper-v/plan/plan-for-gpu-acceleration-in-windows-server#discrete-device-assignment-dda) and [GPU-PV blog post](https://devblogs.microsoft.com/directx/directx-heart-linux/#gpu-virtualization).
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
You will typically select this workflow when:
Use the following steps to deploy an MLflow model with a custom scoring script.
-1. Create a scoring script:
+1. Identify the folder where your MLflow model is placed.
+
+ a. Go to [Azure Machine Learning portal](https://ml.azure.com).
+
+ b. Go to the __Models__ section.
+
+ c. Select the model you want to deploy and select the __Artifacts__ tab.
+
+ d. Take note of the folder that's displayed. This folder was specified when the model was registered.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" alt-text="Screenshot showing the folder where the model artifacts are placed.":::
+
+1. Create a scoring script. Notice how the folder name `model` that you identified earlier is included in the `init()` function.
__score.py__
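The article's full script is truncated in this digest. Purely as a hedged sketch of the pattern the step describes (not the article's verbatim code), an online-endpoint scoring script that loads an MLflow model from a folder named `model` might look like this; the `input_data` request key is an assumption:

```python
import json
import os

import mlflow
import pandas as pd

model = None

def init():
    global model
    # AZUREML_MODEL_DIR points at the registered model's root;
    # "model" is the folder name identified in the step above
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)

def run(raw_data):
    # Assumed request shape: {"input_data": [ ...records... ]}
    data = pd.DataFrame(json.loads(raw_data)["input_data"])
    return model.predict(data).tolist()
```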
machine-learning How To Mlflow Batch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-mlflow-batch.md
You will typically select this workflow when:
Use the following steps to deploy an MLflow model with a custom scoring script.
-1. Create a scoring script:
+1. Identify the folder where your MLflow model is placed.
+
+ a. Go to [Azure Machine Learning portal](https://ml.azure.com).
+
+ b. Go to the __Models__ section.
+
+ c. Select the model you want to deploy and select the __Artifacts__ tab.
+
+ d. Take note of the folder that's displayed. This folder was specified when the model was registered.
+
+ :::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/mlflow-model-folder-name.png" alt-text="Screenshot showing the folder where the model artifacts are placed.":::
+
+1. Create a scoring script. Notice how the folder name `model` that you identified earlier is included in the `init()` function.
__batch_driver.py__
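The article's full driver script is truncated in this digest. As a hedged sketch only (not the article's verbatim code), a batch driver that loads from a folder named `model` and scores a mini-batch of CSV files might look like this:

```python
import os

import mlflow
import pandas as pd

model = None

def init():
    global model
    # "model" is the folder name identified in the step above
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "model")
    model = mlflow.pyfunc.load_model(model_path)

def run(mini_batch):
    # Batch deployments pass a list of file paths; return one value per prediction
    results = []
    for file_path in mini_batch:
        data = pd.read_csv(file_path)
        predictions = model.predict(data)
        results.extend(str(p) for p in predictions)
    return results
```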
network-watcher Network Watcher Network Configuration Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-network-configuration-diagnostics-overview.md
Title: Introduction to Network Configuration Diagnostics in Azure Network Watcher | Microsoft Docs
-description: This page provides an overview of the Network Watcher - NSG Diagnostics
+ Title: Introduction to NSG Diagnostics in Azure Network Watcher
+description: Learn about Network Security Group (NSG) Diagnostics tool in Azure Network Watcher
- Previously updated : 01/04/2023
+ Last updated : 01/20/2023
# Introduction to NSG Diagnostics in Azure Network Watcher
-The NSG Diagnostics tool helps customers understand which traffic flows will be allowed or denied in your Azure Virtual Network along with detailed information for debugging. It can help you in understanding if your NSG rules are configured correctly.
+Network Security Group (NSG) Diagnostics is an Azure Network Watcher tool that helps you understand which network traffic is allowed or denied in your Azure virtual network, along with detailed information for debugging. It can help you verify that your NSG rules are configured correctly.
-## Pre-requisites
-For using NSG Diagnostics, Network Watcher must be enabled in your subscription. See [Create an Azure Network Watcher instance](./network-watcher-create.md) to enable.
+> [!NOTE]
+> To use NSG Diagnostics, Network Watcher must be enabled in your subscription. See [Create an Azure Network Watcher instance](./network-watcher-create.md) to enable.
## Background -- Your resources in Azure are connected via Virtual Networks (VNETs) and subnets. The security of these VNets and subnets can be managed using a Network Security Group (NSG).-- An NSG contains a list of security rules that allow or deny network traffic to resources it is connected to. NSGs can be associated with subnets, individual VMs, or individual network interfaces (NICs) attached to VMs.
+- Your resources in Azure are connected via [virtual networks (VNets)](../virtual-network/virtual-networks-overview.md) and subnets. The security of these VNets and subnets can be managed using [network security groups (NSGs)](../virtual-network/network-security-groups-overview.md).
+- An NSG contains a list of [security rules](../virtual-network/network-security-groups-overview.md#security-rules) that allow or deny network traffic to resources it's connected to. An NSG can be associated to a virtual network subnet or individual network interface (NIC) attached to a virtual machine (VM).
- All traffic flows in your network are evaluated using the rules in the applicable NSG.-- Rules are evaluated based on priority number from lowest to highest
+- Rules are evaluated based on priority number from lowest to highest.
## How does NSG Diagnostics work?
-For a given flow, the NSG Diagnostics tool runs a simulation of the flow and returns whether the flow would be allowed (or denied) and detailed information about rules allowing/denying the flow. Customers must provide details of a flow like source, destination, protocol, etc. The tool returns whether traffic was allowed or denied, the NSG rules that were evaluated for the specified flow and the evaluation results for every rule.
+For a given flow, after you provide details like source and destination, the NSG Diagnostics tool runs a simulation of the flow and returns whether the flow would be allowed or denied, along with detailed information about the security rule that allows or denies the flow.
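For illustration, here's a hedged Python sketch of running such a check with the `azure-mgmt-network` SDK. The operation name and parameter shapes are assumptions based on the Network Watcher network configuration diagnostic API; verify them against the current SDK reference:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Placeholder values -- substitute your own
subscription_id = "<subscription-id>"
vm_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Compute/virtualMachines/<vm>"
)

client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

# Simulate an inbound TCP flow to the VM on port 443 (values assumed)
poller = client.network_watchers.begin_get_network_configuration_diagnostic(
    "<network-watcher-rg>",
    "<network-watcher-name>",
    {
        "target_resource_id": vm_id,
        "profiles": [
            {
                "direction": "Inbound",
                "protocol": "TCP",
                "source": "10.1.0.4",
                "destination": "10.1.0.10",
                "destination_port": "443",
            }
        ],
    },
)

result = poller.result()
print(result.as_dict())  # allow/deny verdict plus the evaluated NSG rules
```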
## Next steps
security Secure Develop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/develop/secure-develop.md
Title: Develop secure applications on Microsoft Azure description: This article discusses best practices to consider during the implementation and verification phases of your web application project. Previously updated : 03/21/2021 Last updated : 01/22/2023
If the application must autogenerate passwords, ensure that the generated passwo
If your application allows [file uploads](https://owasp.org/www-community/vulnerabilities/Unrestricted_File_Upload), consider precautions that you can take for this risky activity. The first step in many attacks is to get some malicious code into a system that is under attack. Using a file upload helps the attacker accomplish this. OWASP offers solutions for validating a file to ensure that the file you're uploading is safe.
-Antimalware protection helps identify and remove viruses, spyware, and other malicious software. You can install [Microsoft Antimalware](../fundamentals/antimalware.md) or a Microsoft partner's endpoint protection solution ([Trend Micro](https://www.trendmicro.com/azure/), [Broadcom](https://www.broadcom.com/products), [McAfee](https://www.mcafee.com/us/products.aspx), [Windows Defender](/windows/security/threat-protection/windows-defender-antivirus/windows-defender-antivirus-in-windows-10), and [Endpoint Protection](/configmgr/protect/deploy-use/endpoint-protection)).
+Antimalware protection helps identify and remove viruses, spyware, and other malicious software. You can install [Microsoft Antimalware](../fundamentals/antimalware.md) or a Microsoft partner's endpoint protection solution ([Trend Micro](https://www.trendmicro.com/azure/), [Broadcom](https://www.broadcom.com/products), [McAfee](https://www.mcafee.com/us/products.aspx), [Microsoft Defender Antivirus in Windows](/windows/security/threat-protection/windows-defender-antivirus/windows-defender-antivirus-in-windows-10), and [Endpoint Protection](/configmgr/protect/deploy-use/endpoint-protection)).
[Microsoft Antimalware](../fundamentals/antimalware.md) includes features like real-time protection, scheduled scanning, malware remediation, signature updates, engine updates, samples reporting, and exclusion event collection. You can integrate Microsoft Antimalware and partner solutions with [Microsoft Defender for Cloud](../../security-center/security-center-partner-integration.md) for ease of deployment and built-in detections (alerts and incidents).
In [fuzz testing](https://www.microsoft.com/security/blog/2007/09/20/fuzz-testin
Reviewing the attack surface after code completion helps ensure that any design or implementation changes to an application or system has been considered. It helps ensure that any new attack vectors that were created as a result of the changes, including threat models, has been reviewed and mitigated.
-You can build a picture of the attack surface by scanning the application. Microsoft offers an attack surface analysis tool called [Attack Surface Analyzer](https://www.microsoft.com/download/details.aspx?id=58105). You can choose from many commercial dynamic testing and vulnerability scanning tools or services, including [OWASP Zed Attack Proxy Project](https://owasp.org/www-project-zap/), [Arachni](http://arachni-scanner.com/), [Skipfish](https://code.google.com/p/skipfish/), and [w3af](http://w3af.sourceforge.net/). These scanning tools crawl your app and map the parts of the application that are accessible over the web. You can also search the Azure Marketplace for similar [developer tools](https://azuremarketplace.microsoft.com/marketplace/apps/category/developer-tools?page=1).
+You can build a picture of the attack surface by scanning the application. Microsoft offers an attack surface analysis tool called [Attack Surface Analyzer](https://www.microsoft.com/download/details.aspx?id=58105). You can choose from many commercial dynamic testing and vulnerability scanning tools or services, including [OWASP Zed Attack Proxy Project](https://owasp.org/www-project-zap/), [Arachni](http://arachni-scanner.com/), and [w3af](http://w3af.sourceforge.net/). These scanning tools crawl your app and map the parts of the application that are accessible over the web. You can also search the Azure Marketplace for similar [developer tools](https://azuremarketplace.microsoft.com/marketplace/apps/category/developer-tools?page=1).
### Perform security penetration testing
Ensuring that your application is secure is as important as testing any other fu
### Run security verification tests
-[Secure DevOps Kit for Azure](https://github.com/azsk/AzTS-docs/#readme) (AzSK) contains SVTs for multiple services of the Azure platform. You run these SVTs periodically to ensure that your Azure subscription and the different resources that comprise your application are in a secure state. You can also automate these tests by using the continuous integration/continuous deployment (CI/CD) extensions feature of AzSK, which makes SVTs available as a Visual Studio extension.
+[Azure Tenant Security Solution (AzTS)](https://github.com/azsk/AzTS-docs/#readme) from the Secure DevOps Kit for Azure (AzSK) contains SVTs for multiple services of the Azure platform. You run these SVTs periodically to ensure that your Azure subscription and the different resources that comprise your application are in a secure state. You can also automate these tests by using the continuous integration/continuous deployment (CI/CD) extensions feature of AzSK, which makes SVTs available as a Visual Studio extension.
## Next steps
security Data Encryption Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/data-encryption-best-practices.md
documentationcenter: na ms.assetid: 17ba67ad-e5cd-4a8f-b435-5218df753ca4--++ na Previously updated : 03/09/2020 Last updated : 01/22/2023
The best practices are based on a consensus of opinion, and they work with curre
To help protect data in the cloud, you need to account for the possible states in which your data can occur, and what controls are available for that state. Best practices for Azure data security and encryption relate to the following data states: - At rest: This includes all information storage objects, containers, and types that exist statically on physical media, whether magnetic or optical disk.-- In transit: When data is being transferred between components, locations, or programs, itΓÇÖs in transit. Examples are transfer over the network, across a service bus (from on-premises to cloud and vice-versa, including hybrid connections such as ExpressRoute), or during an input/output process.
+- In transit: When data is being transferred between components, locations, or programs, it's in transit. Examples are transfer over the network, across a service bus (from on-premises to cloud and vice-versa, including hybrid connections such as ExpressRoute), or during an input/output process.
## Choose a key management solution
Azure Key Vault is designed to support application keys and secrets. Key Vault i
Following are security best practices for using Key Vault. **Best practice**: Grant access to users, groups, and applications at a specific scope.
-**Detail**: Use Azure RBAC predefined roles. For example, to grant access to a user to manage key vaults, you would assign the predefined role [Key Vault Contributor](../../role-based-access-control/built-in-roles.md) to this user at a specific scope. The scope in this case would be a subscription, a resource group, or just a specific key vault. If the predefined roles donΓÇÖt fit your needs, you can [define your own roles](../../role-based-access-control/custom-roles.md).
+**Detail**: Use Azure RBAC predefined roles. For example, to grant access to a user to manage key vaults, you would assign the predefined role [Key Vault Contributor](../../role-based-access-control/built-in-roles.md#key-vault-contributor) to this user at a specific scope. The scope in this case would be a subscription, a resource group, or just a specific key vault. If the predefined roles don't fit your needs, you can [define your own roles](../../role-based-access-control/custom-roles.md).
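As a hedged illustration of scoping, the sketch below assigns Key Vault Contributor on a single vault with the `azure-mgmt-authorization` package. The subscription, resource names, and object ID are placeholders, and the flattened `RoleAssignmentCreateParameters` model of recent package versions is assumed; verify the built-in role definition GUID for your cloud before relying on it.

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"  # placeholder
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope the assignment to one key vault rather than the whole subscription.
scope = (f"/subscriptions/{subscription_id}/resourceGroups/my-rg"
         "/providers/Microsoft.KeyVault/vaults/my-vault")

# Built-in "Key Vault Contributor" role definition (verify the GUID for your cloud).
role_definition_id = (f"/subscriptions/{subscription_id}/providers"
                      "/Microsoft.Authorization/roleDefinitions/"
                      "f25e0fa2-a7c8-4377-a976-54943a77a395")

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # role assignment names are GUIDs
    RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<user-object-id>",  # placeholder Azure AD object ID
    ),
)
```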
**Best practice**: Control what users have access to. **Detail**: Access to a key vault is controlled through two separate interfaces: management plane and data plane. The management plane and data plane access controls work independently. Use Azure RBAC to control what users have access to. For example, if you want to grant an application access to use keys in a key vault, you only need to grant data plane access permissions by using key vault access policies, and no management plane access is needed for this application. Conversely, if you want a user to be able to read vault properties and tags but not have any access to keys, secrets, or certificates, you can grant this user read access by using Azure RBAC, and no access to the data plane is required.
-**Best practice**: Store certificates in your key vault. Your certificates are of high value. In the wrong hands, your application's security or the security of your data can be compromised.
+**Best practice**: Store certificates in your key vault. Your certificates are of high value. In the wrong hands, your application's security or the security of your data can be compromised.
**Detail**: Azure Resource Manager can securely deploy certificates stored in Azure Key Vault to Azure VMs when the VMs are deployed. By setting appropriate access policies for the key vault, you also control who gets access to your certificate. Another benefit is that you manage all your certificates in one place in Azure Key Vault. See [Deploy Certificates to VMs from customer-managed Key Vault](/archive/blogs/kv/updated-deploy-certificates-to-vms-from-customer-managed-key-vault) for more information. **Best practice**: Ensure that you can recover a deletion of key vaults or key vault objects.
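One way to verify that recovery is possible is to read the vault's soft-delete and purge-protection settings. A minimal sketch with the `azure-mgmt-keyvault` package, assuming placeholder resource names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.keyvault import KeyVaultManagementClient

client = KeyVaultManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Placeholder resource group and vault names.
vault = client.vaults.get("my-rg", "my-vault")

# Both settings should be on so deleted vaults and objects can be recovered
# and cannot be permanently purged during the retention period.
print("soft delete enabled:", vault.properties.enable_soft_delete)
print("purge protection on:", vault.properties.enable_purge_protection)
```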
Use Azure RBAC to control what users have access to. For example, if you want to
> [!NOTE] > The subscription administrator or owner should use a secure access workstation or a privileged access workstation.
-Because the vast majority of attacks target the end user, the endpoint becomes one of the primary points of attack. An attacker who compromises the endpoint can use the userΓÇÖs credentials to gain access to the organizationΓÇÖs data. Most endpoint attacks take advantage of the fact that users are administrators in their local workstations.
+Because the vast majority of attacks target the end user, the endpoint becomes one of the primary points of attack. An attacker who compromises the endpoint can use the user's credentials to gain access to the organization's data. Most endpoint attacks take advantage of the fact that users are administrators in their local workstations.
**Best practice**: Use a secure management workstation to protect sensitive accounts, tasks, and data. **Detail**: Use a [privileged access workstation](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) to reduce the attack surface in workstations. These secure management workstations can help you mitigate some of these attacks and ensure that your data is safer.
Because the vast majority of attacks target the end user, the endpoint becomes o
## Protect data at rest
-[Data encryption at rest](https://www.microsoft.com/security/blog/2015/09/10/cloud-security-controls-series-encrypting-data-at-rest/) is a mandatory step toward data privacy, compliance, and data sovereignty.
+[Data encryption at rest](encryption-atrest.md) is a mandatory step toward data privacy, compliance, and data sovereignty.
**Best practice**: Apply disk encryption to help safeguard your data. **Detail**: Use [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md). Disk Encryption combines the industry-standard Linux dm-crypt or Windows BitLocker feature to provide volume encryption for the OS and the data disks.
Azure Storage and Azure SQL Database encrypt data at rest by default, and many s
**Best practice**: Use encryption to help mitigate risks related to unauthorized data access. **Detail**: Encrypt your drives before you write sensitive data to them.
-Organizations that donΓÇÖt enforce data encryption are more exposed to data-confidentiality issues. For example, unauthorized or rogue users might steal data in compromised accounts or gain unauthorized access to data coded in Clear Format. Companies also must prove that they are diligent and using correct security controls to enhance their data security in order to comply with industry regulations.
+Organizations that don't enforce data encryption are more exposed to data-confidentiality issues. For example, unauthorized or rogue users might steal data in compromised accounts or gain unauthorized access to data coded in Clear Format. Companies also must prove that they are diligent and use correct security controls to enhance their data security in order to comply with industry regulations.
## Protect data in transit Protecting data in transit should be an essential part of your data protection strategy. Because data is moving back and forth from many locations, we generally recommend that you always use SSL/TLS protocols to exchange data across different locations. In some circumstances, you might want to isolate the entire communication channel between your on-premises and cloud infrastructures by using a VPN.
-For data moving between your on-premises infrastructure and Azure, consider appropriate safeguards such as HTTPS or VPN. When sending encrypted traffic between an Azure virtual network and an on-premises location over the public internet, use [Azure VPN Gateway](../../vpn-gateway/index.yml).
+For data moving between your on-premises infrastructure and Azure, consider appropriate safeguards such as HTTPS or VPN. When sending encrypted traffic between an Azure virtual network and an on-premises location over the public internet, use [Azure VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md).
Following are best practices specific to using Azure VPN Gateway, SSL/TLS, and HTTPS.
Following are best practices specific to using Azure VPN Gateway, SSL/TLS, and H
**Detail**: Use [ExpressRoute](../../expressroute/expressroute-introduction.md). If you choose to use ExpressRoute, you can also encrypt the data at the application level by using SSL/TLS or other protocols for added protection. **Best practice**: Interact with Azure Storage through the Azure portal.
-**Detail**: All transactions occur via HTTPS. You can also use [Storage REST API](/rest/api/storageservices/) over HTTPS to interact with [Azure Storage](https://azure.microsoft.com/services/storage/).
+**Detail**: All transactions occur via HTTPS. You can also use [Storage REST API](/rest/api/storageservices/) over HTTPS to interact with [Azure Storage](../../storage/common/storage-introduction.md).
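To make sure a storage account only ever accepts encrypted transport, you can require secure transfer and set a minimum TLS version at the account level. A hedged sketch with `azure-mgmt-storage`, using placeholder names:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import StorageAccountUpdateParameters

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Reject any request that arrives over plain HTTP and cap the TLS floor.
client.storage_accounts.update(
    "my-rg",          # placeholder resource group
    "mystorageacct",  # placeholder storage account name
    StorageAccountUpdateParameters(
        enable_https_traffic_only=True,
        minimum_tls_version="TLS1_2",
    ),
)
```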
Organizations that fail to protect data in transit are more susceptible to [man-in-the-middle attacks](/previous-versions/office/skype-server-2010/gg195821(v=ocs.14)), [eavesdropping](/previous-versions/office/skype-server-2010/gg195641(v=ocs.14)), and session hijacking. These attacks can be the first step in gaining access to confidential data. ## Secure email, documents, and sensitive data
-You want to control and secure email, documents, and sensitive data that you share outside your company. [Azure Information Protection](/azure/information-protection/) is a cloud-based solution that helps an organization to classify, label, and protect its documents and emails. This can be done automatically by administrators who define rules and conditions, manually by users, or a combination where users get recommendations.
+You want to control and secure email, documents, and sensitive data that you share outside your company. [Azure Information Protection](/azure/information-protection/what-is-information-protection) is a cloud-based solution that helps an organization to classify, label, and protect its documents and emails. This can be done automatically by administrators who define rules and conditions, manually by users, or a combination where users get recommendations.
-Classification is identifiable at all times, regardless of where the data is stored or with whom itΓÇÖs shared. The labels include visual markings such as a header, footer, or watermark. Metadata is added to files and email headers in clear text. The clear text ensures that other services, such as solutions to prevent data loss, can identify the classification and take appropriate action.
+Classification is identifiable at all times, regardless of where the data is stored or with whom it's shared. The labels include visual markings such as a header, footer, or watermark. Metadata is added to files and email headers in clear text. The clear text ensures that other services, such as solutions to prevent data loss, can identify the classification and take appropriate action.
-The protection technology uses Azure Rights Management (Azure RMS). This technology is integrated with other Microsoft cloud services and applications, such as Microsoft 365 and Azure Active Directory. This protection technology uses encryption, identity, and authorization policies. Protection that is applied through Azure RMS stays with the documents and emails, independently of the locationΓÇöinside or outside your organization, networks, file servers, and applications.
+The protection technology uses Azure Rights Management (Azure RMS). This technology is integrated with other Microsoft cloud services and applications, such as Microsoft 365 and Azure Active Directory. This protection technology uses encryption, identity, and authorization policies. Protection that is applied through Azure RMS stays with the documents and emails, independently of the location, inside or outside your organization, networks, file servers, and applications.
-This information protection solution keeps you in control of your data, even when itΓÇÖs shared with other people. You can also use Azure RMS with your own line-of-business applications and information protection solutions from software vendors, whether these applications and solutions are on-premises or in the cloud.
+This information protection solution keeps you in control of your data, even when it's shared with other people. You can also use Azure RMS with your own line-of-business applications and information protection solutions from software vendors, whether these applications and solutions are on-premises or in the cloud.
We recommend that you: - [Deploy Azure Information Protection](/azure/information-protection/deployment-roadmap) for your organization.-- Apply labels that reflect your business requirements. For example: Apply a label named ΓÇ£highly confidentialΓÇ¥ to all documents and emails that contain top-secret data, to classify and protect this data. Then, only authorized users can access this data, with any restrictions that you specify.
+- Apply labels that reflect your business requirements. For example: Apply a label named "highly confidential" to all documents and emails that contain top-secret data, to classify and protect this data. Then, only authorized users can access this data, with any restrictions that you specify.
- Configure [usage logging for Azure RMS](/azure/information-protection/log-analyze-usage) so that you can monitor how your organization is using the protection service. Organizations that are weak on [data classification](https://download.microsoft.com/download/0/A/3/0A3BE969-85C5-4DD2-83B6-366AA71D1FE3/Data-Classification-for-Cloud-Readiness.pdf) and file protection might be more susceptible to data leakage or data misuse. With proper file protection, you can analyze data flows to gain insight into your business, detect risky behaviors and take corrective measures, track access to documents, and so on. ## Next steps
-See [Azure security best practices and patterns](best-practices-and-patterns.md) for more security best practices to use when youΓÇÖre designing, deploying, and managing your cloud solutions by using Azure.
+See [Azure security best practices and patterns](best-practices-and-patterns.md) for more security best practices to use when you're designing, deploying, and managing your cloud solutions by using Azure.
The following resources are available to provide more general information about Azure security and related Microsoft * [Azure Security Team Blog](/archive/blogs/azuresecurity/) - for up to date information on the latest in Azure Security
security Infrastructure Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/infrastructure-availability.md
description: This article provides information about what Microsoft does to secu
documentationcenter: na -+ ms.assetid: 61e95a87-39c5-48f5-aee6-6f90ddcd336e--++ na Previously updated : 04/28/2019 Last updated : 01/20/2023
High-speed and robust fiber optic networks connect datacenters with other major
Microsoft ensures high availability through advanced monitoring and incident response, service support, and backup failover capability. Geographically distributed Microsoft operations centers operate 24/7/365. The Azure network is one of the largest in the world. The fiber optic and content distribution network connects datacenters and edge nodes to ensure high performance and reliability. ## Disaster recovery
-Azure keeps your data durable in two locations. You can choose the location of the backup site. In both locations, Azure constantly maintains three healthy replicas of your data.
+Azure keeps your data durable in two locations. You can choose the location of the backup site. In the primary location, Azure constantly maintains three healthy replicas of your data.
## Database availability Azure ensures that a database is internet accessible through an internet gateway with sustained database availability. Monitoring assesses the health and state of the active databases at five-minute time intervals.
security Management Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/management-monitoring-overview.md
description: This article provides an overview of the security features and serv
documentationcenter: na -+ ms.assetid: 5cf2827b-6cd3-434d-9100-d7411f7ed424-++ na Previously updated : 01/24/2021 Last updated : 01/20/2023
Azure role-based access control (Azure RBAC) provides detailed access management
Learn more:
-* [Active Directory team blog on Azure RBAC](https://cloudblogs.microsoft.com/enterprisemobility/?product=azure-active-directory)
* [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md) ## Antimalware
With Azure, you can use antimalware software from major security vendors such as
Microsoft Antimalware for Azure Cloud Services and Virtual Machines offers you the ability to install an antimalware agent for both PaaS roles and virtual machines. Based on System Center Endpoint Protection, this feature brings proven on-premises security technology to the cloud.
-We also offer deep integration for TrendΓÇÖs [Deep Security](https://www.trendmicro.com/us/enterprise/cloud-solutions/deep-security/) and [SecureCloud](https://www.trendmicro.com/us/enterprise/cloud-solutions/secure-cloud/) products in the Azure platform. Deep Security is an antivirus solution, and SecureCloud is an encryption solution. Deep Security is deployed inside VMs through an extension model. By using the Azure portal UI and PowerShell, you can choose to use Deep Security inside new VMs that are being spun up, or existing VMs that are already deployed.
- Symantec Endpoint Protection (SEP) is also supported on Azure. Through portal integration, you can specify that you intend to use SEP on a VM. SEP can be installed on a new VM via the Azure portal, or it can be installed on an existing VM via PowerShell. Learn more:
-* [Deploying Antimalware Solutions on Azure Virtual Machines](https://azure.microsoft.com/blog/deploying-antimalware-solutions-on-azure-virtual-machines/)
* [Microsoft Antimalware for Azure Cloud Services and Virtual Machines](antimalware.md)
-* [How to install and configure Trend Micro Deep Security as a Service on a Windows VM](/previous-versions/azure/virtual-machines/extensions/trend)
* [How to install and configure Symantec Endpoint Protection on a Windows VM](../../virtual-machines/extensions/symantec.md) * [New Antimalware Options for Protecting Azure Virtual Machines](https://azure.microsoft.com/blog/new-antimalware-options-for-protecting-azure-virtual-machines/) ## Multi-Factor Authentication
-Azure AD Multi-Factor Authentication is a method of authentication that requires the use of more than one verification method. It adds a critical second layer of security to user sign-ins and transactions.
+Azure Active Directory Multi-Factor Authentication is a method of authentication that requires the use of more than one verification method. It adds a critical second layer of security to user sign-ins and transactions.
Multi-Factor Authentication helps safeguard access to data and applications while meeting user demand for a simple sign-in process. It delivers strong authentication via a range of verification options (phone call, text message, or mobile app notification or verification code) and third-party OATH tokens. Learn more:
-* [Multi-Factor Authentication](/azure/multi-factor-authentication/)
-* [What is Azure AD Multi-Factor Authentication?](../../active-directory/authentication/concept-mfa-howitworks.md)
+* [Multi-Factor Authentication](../../active-directory/authentication/overview-authentication.md#azure-ad-multi-factor-authentication)
* [How Azure AD Multi-Factor Authentication works](../../active-directory/authentication/concept-mfa-howitworks.md) ## ExpressRoute
Privileged Identity Management introduces the concept of a temporary admin for a
Learn more: * [Azure AD Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md)
-* [Get started with Azure AD Privileged Identity Management](../../active-directory/privileged-identity-management/pim-getting-started.md)
+* [Start using Privileged Identity Management](../../active-directory/privileged-identity-management/pim-getting-started.md)
## Identity Protection
By providing notifications and recommended remediation, Identity Protection help
Learn more:
-* [Azure Active Directory Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md)
-* Channel 9: Azure AD and Identity Show: Identity Protection Preview
+* [Azure Active Directory Identity Protection](../../active-directory/identity-protection/concept-identity-protection-security-overview.md)
## Defender for Cloud
Defender for Cloud performs continuous security assessments of your connected re
Defender for Cloud helps you optimize and monitor the security of your Azure resources by: - Enabling you to define policies for your Azure subscription resources according to:
- - Your organizationΓÇÖs security needs.
+ - Your organization's security needs.
- The type of applications or sensitivity of the data in each subscription.
- - Any industry or regulatory standards or benchmarks you apply to your subscriptions.
+ - Any industry or regulatory standards or benchmarks you apply to your subscriptions.
- Monitoring the state of your Azure virtual machines, networking, and applications. - Providing a list of prioritized security alerts, including alerts from integrated partner solutions. It also provides the information that you need to quickly investigate an attack and recommendations on how to remediate it. Learn more:
-* [Introduction to Microsoft Defender for Cloud](../../security-center/security-center-introduction.md)
-* [Improve your secure score in Microsoft Defender for Cloud](../../security-center/secure-score-security-controls.md)
-
-## Intelligent Security Graph
-
-Intelligent Security Graph provides real-time threat protection in Microsoft products and services. It uses advanced analytics that link a massive amount of threat intelligence and security data to provide insights that can strengthen organizational security. Microsoft uses advanced analyticsΓÇöprocessing more than 450 billion authentications per month, scanning 400 billion emails for malware and phishing, and updating one billion devicesΓÇöto deliver richer insights. These insights can help your organization detect and respond to attacks quickly.
-
-* [Intelligent Security Graph](https://www.microsoft.com/security/intelligence)
+* [Introduction to Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md)
+* [Improve your secure score in Microsoft Defender for Cloud](../../defender-for-cloud/secure-score-security-controls.md)
## Next Steps Learn about the [shared responsibility model](shared-responsibility.md) and which security tasks are handled by Microsoft and which tasks are handled by you.
security Technical Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/technical-capabilities.md
ms.assetid:
Previously updated : 01/06/2022 Last updated : 01/20/2023
Microsoft Azure is the only cloud computing provider that offers a secure, consi
With Microsoft Azure, you can: -- Accelerate innovation with the cloud.
+- Accelerate innovation with the cloud
-- Power business decisions & apps with insights.
+- Power business decisions & apps with insights
-- Build freely and deploy anywhere.
+- Build freely and deploy anywhere
-- Protect their business.-
-## Security technical capabilities to fulfill your responsibility
-
-Microsoft Azure provides services that help you meet your security, privacy, and compliance needs. The following picture helps explain various Azure services available for you to build a secure and compliant application infrastructure based on industry standards.
-
-![Available security technical capabilities- Big picture](./media/technical-capabilities/azure-security-technical-capabilities-fig1.png)
+- Protect your business
## Manage and control identity and user access
Azure helps you protect business and personal information by enabling you to man
### Azure Active Directory
-Microsoft identity and access management solutions help IT protect access to applications and resources across the corporate datacenter and into the cloud, enabling additional levels of validation such as multi-factor authentication and Conditional Access policies. Monitoring suspicious activity through advanced security reporting, auditing and alerting helps mitigate potential security issues. [Azure Active Directory Premium](../../active-directory/fundamentals/active-directory-whatis.md) provides single sign-on to thousands of cloud apps and access to web apps you run on-premises.
+Microsoft identity and access management solutions help IT protect access to applications and resources across the corporate datacenter and into the cloud, enabling additional levels of validation such as multi-factor authentication and Conditional Access policies. Monitoring suspicious activity through advanced security reporting, auditing and alerting helps mitigate potential security issues. [Azure Active Directory Premium](../../active-directory/fundamentals/active-directory-get-started-premium.md) provides single sign-on to thousands of cloud apps and access to web apps you run on-premises.
Security benefits of Azure Active Directory (Azure AD) include the ability to:
The following are core Azure identity management capabilities:
#### Single sign-on
-[Single sign-on (SSO)](https://azure.microsoft.com/documentation/videos/overview-of-single-sign-on/) means being able to access all the applications and resources that you need to do business, by signing in only once using a single user account. Once signed in, you can access all the applications you need without being required to authenticate (for example, type a password) a second time.
+[Single sign-on (SSO)](../../active-directory/manage-apps/what-is-single-sign-on.md) means being able to access all the applications and resources that you need to do business, by signing in only once using a single user account. Once signed in, you can access all the applications you need without being required to authenticate (for example, type a password) a second time.
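From an application's point of view, SSO shows up as silent token acquisition: once a user has signed in, cached tokens are reused instead of prompting again. A minimal sketch with the MSAL Python library, using a placeholder app registration:

```python
import msal

# Placeholder client ID and tenant from a hypothetical app registration.
app = msal.PublicClientApplication(
    "<client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
)

scopes = ["User.Read"]
accounts = app.get_accounts()

# The silent call succeeds without any prompt when a cached session exists
# (the SSO path); otherwise fall back to one interactive sign-in.
result = app.acquire_token_silent(scopes, account=accounts[0]) if accounts else None
if not result:
    result = app.acquire_token_interactive(scopes)

print("token acquired:", "access_token" in result)
```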
Many organizations rely upon software as a service (SaaS) applications such as Microsoft 365, Box, and Salesforce for end-user productivity. Historically, IT staff needed to individually create and update user accounts in each SaaS application, and users had to remember a password for each SaaS application.
-[Azure AD extends on-premises Active Directory into the cloud](../../active-directory/manage-apps/what-is-single-sign-on.md), enabling users to use their primary organizational account to not only sign in to their domain-joined devices and company resources, but also all the web and SaaS applications needed for their job.
+Azure AD extends on-premises Active Directory into the cloud, enabling users to use their primary organizational account to not only sign in to their domain-joined devices and company resources, but also all the web and SaaS applications needed for their job.
-Not only do users not have to manage multiple sets of usernames and passwords, application access can be automatically provisioned or de-provisioned based on organizational groups and their status as an employee. [Azure AD introduces security and access governance controls](../../active-directory/manage-apps/view-applications-portal.md) that enable you to centrally manage users' access across SaaS applications.
+Not only do users not have to manage multiple sets of usernames and passwords, application access can be automatically provisioned or de-provisioned based on organizational groups and their status as an employee. Azure AD introduces security and access governance controls that enable you to centrally manage users' access across SaaS applications.
#### Multi-factor authentication
-[Azure AD Multi-Factor Authentication (MFA)](../../active-directory/authentication/concept-mfa-howitworks.md) is a method of authentication that requires the use of more than one verification method and adds a critical second layer of security to user sign-ins and transactions. [MFA helps safeguard](../../active-directory/authentication/concept-mfa-howitworks.md) access to data and applications while meeting user demand for a simple sign-in process. It delivers strong authentication via a range of verification optionsΓÇöphone call, text message, or mobile app notification or verification code and third-party OAuth tokens.
+[Azure AD Multi-Factor Authentication (MFA)](../../active-directory/authentication/overview-authentication.md#azure-ad-multi-factor-authentication) is a method of authentication that requires the use of more than one verification method and adds a critical second layer of security to user sign-ins and transactions. [MFA helps safeguard](../../active-directory/authentication/concept-mfa-howitworks.md) access to data and applications while meeting user demand for a simple sign-in process. It delivers strong authentication via a range of verification options: phone call, text message, or mobile app notification or verification code, and third-party OATH tokens.
#### Security monitoring, alerts, and machine learning-based reports
In the Azure portal or through the [Azure Active Directory portal](https://aad.p
#### Consumer identity and access management
-[Azure Active Directory B2C](https://azure.microsoft.com/services/active-directory-b2c/) is a highly available, global, identity management service for consumer-facing applications that scales to hundreds of millions of identities. It can be integrated across mobile and web platforms. Your consumers can log on to all your applications through customizable experiences by using their existing social accounts or by creating new credentials.
+[Azure Active Directory B2C](../../active-directory-b2c/overview.md) is a highly available, global, identity management service for consumer-facing applications that scales to hundreds of millions of identities. It can be integrated across mobile and web platforms. Your consumers can log on to all your applications through customizable experiences by using their existing social accounts or by creating new credentials.
-In the past, application developers who wanted to [sign up and sign in consumers](../../active-directory-b2c/overview.md) into their applications would have written their own code. And they would have used on-premises databases or systems to store usernames and passwords. Azure Active Directory B2C offers your organization a better way to integrate consumer identity management into applications with the help of a secure, standards-based platform, and a large set of extensible policies.
+In the past, application developers who wanted to sign up and sign in consumers into their applications would have written their own code. And they would have used on-premises databases or systems to store usernames and passwords. Azure Active Directory B2C offers your organization a better way to integrate consumer identity management into applications with the help of a secure, standards-based platform, and a large set of extensible policies.
When you use Azure Active Directory B2C, your consumers can sign up for your applications by using their existing social accounts (Facebook, Google, Amazon, LinkedIn) or by creating new credentials (email address and password, or username and password).
Azure AD Privileged Identity Management lets you:
## Secure resource access
-Access control in Azure starts from a billing perspective. The owner of an Azure account, accessed by visiting the [Azure portal](https://portal.azure.com/#blade/Microsoft_Azure_Billing/SubscriptionsBlade), is the Account Administrator (AA). Subscriptions are a container for billing, but they also act as a security boundary: each subscription has a Service Administrator (SA) who can add, remove, and modify Azure resources in that subscription by using the Azure portal. The default SA of a new subscription is the AA, but the AA can change the SA in the Azure portal.
+Access control in Azure starts from a billing perspective. The owner of an Azure account, accessed by visiting the Azure portal, is the Account Administrator (AA). Subscriptions are a container for billing, but they also act as a security boundary: each subscription has a Service Administrator (SA) who can add, remove, and modify Azure resources in that subscription by using the Azure portal. The default SA of a new subscription is the AA, but the AA can change the SA in the Azure portal.
![Secured resource access in Azure](./media/technical-capabilities/azure-security-technical-capabilities-fig3.png)
One of the keys to data protection in the cloud is accounting for the possible s
### Encryption at rest
-Encryption at rest is discussed in detail in [Azure Data Encryption-at-Rest](encryption-atrest.md).
+Encryption at rest is discussed in detail in [Azure Data Encryption at Rest](encryption-atrest.md).
### Encryption in transit
For organizations that need to secure access from multiple workstations located
For organizations that need to secure access from one workstation located on-premises to Azure, use [Point-to-Site VPN](../../vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md).
-Larger data sets can be moved over a dedicated high-speed WAN link such as [ExpressRoute](https://azure.microsoft.com/services/expressroute/). If you choose to use ExpressRoute, you can also encrypt the data at the application-level using SSL/TLS or other protocols for added protection.
+Larger data sets can be moved over a dedicated high-speed WAN link such as [ExpressRoute](../../expressroute/expressroute-introduction.md). If you choose to use ExpressRoute, you can also encrypt the data at the application-level using SSL/TLS or other protocols for added protection.
-If you are interacting with Azure Storage through the Azure portal, all transactions occur via HTTPS. [Storage REST API](/rest/api/storageservices/) over HTTPS can also be used to interact with [Azure Storage](https://azure.microsoft.com/services/storage/) and [Azure SQL Database](https://azure.microsoft.com/services/sql-database/).
-
-Organizations that fail to protect data in transit are more susceptible for [man-in-the-middle attacks](/previous-versions/office/skype-server-2010/gg195821(v=ocs.14)), [eavesdropping](/previous-versions/office/skype-server-2010/gg195641(v=ocs.14)), and session hijacking. These attacks can be the first step in gaining access to confidential data.
+If you are interacting with Azure Storage through the Azure portal, all transactions occur via HTTPS. [Storage REST API](/rest/api/storageservices/) over HTTPS can also be used to interact with [Azure Storage](../../storage/index.yml) and [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
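The Azure SDKs follow the same rule: point them at an `https://` endpoint and every call is protected by TLS. A small sketch with `azure-storage-blob`, using placeholder account, container, and blob names:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# The https:// endpoint means every request below travels over TLS.
service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",  # placeholder account
    credential=DefaultAzureCredential(),
)

container = service.get_container_client("backups")      # placeholder container
data = container.download_blob("report.csv").readall()   # placeholder blob
print(len(data), "bytes downloaded over HTTPS")
```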
You can learn more about Azure VPN options by reading the article [Planning and design for VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md). ### Enforce file level data encryption
-[Azure RMS](/azure/information-protection/what-is-azure-rms) uses encryption, identity, and authorization policies to help secure your files and email. Azure RMS works across multiple devicesΓÇöphones, tablets, and PCs by protecting both within your organization and outside your organization. This capability is possible because Azure RMS adds a level of protection that remains with the data, even when it leaves your organizationΓÇÖs boundaries.
-
-When you use Azure RMS to protect your files, you are using industry-standard cryptography with full support of [FIPS 140-2](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.140-2.pdf). When you use Azure RMS for data protection, you have the assurance that the protection stays with the file, even if it is copied to storage that is not under the control of IT, such as a cloud storage service. The same occurs for files shared via e-mail, the file is protected as an attachment to an email message, with instructions how to open the protected attachment.
-When planning for Azure RMS adoption we recommend the following:
--- Install the [RMS sharing app](/azure/information-protection/rms-client/sharing-app-windows). This app integrates with Office applications by installing an Office add-in so that users can easily protect files directly.--- Configure applications and services to support Azure RMS--- Create [custom templates](/azure/information-protection/configure-policy-templates) that reflect your business requirements. For example: a template for top secret data that should be applied in all top secret related emails.-
-Organizations that are weak on [data classification](https://download.microsoft.com/download/0/A/3/0A3BE969-85C5-4DD2-83B6-366AA71D1FE3/Data-Classification-for-Cloud-Readiness.pdf) and file protection may be more susceptible to data leakage. Without proper file protection, organizations wonΓÇÖt be able to obtain business insights, monitor for abuse and prevent malicious access to files.
-
-> [!Note]
-> You can learn more about Azure RMS by reading the article [Getting Started with Azure Rights Management](/azure/information-protection/requirements).
+[Azure Rights Management](/azure/information-protection/what-is-azure-rms) (Azure RMS) uses encryption, identity, and authorization policies to help secure your files and email. Azure RMS works across multiple devices (phones, tablets, and PCs), protecting data both within and outside your organization. This capability is possible because Azure RMS adds a level of protection that remains with the data, even when it leaves your organization's boundaries.
## Secure your application While Azure is responsible for securing the infrastructure and platform that your application runs on, it is your responsibility to secure your application itself. In other words, you need to develop, deploy, and manage your application code and content in a secure way. Without this, your application code or content can still be vulnerable to threats.
While Azure is responsible for securing the infrastructure and platform that you
### Web application firewall [Web application firewall (WAF)](../../web-application-firewall/ag/ag-overview.md) is a feature of [Application Gateway](../../application-gateway/overview.md) that provides centralized protection of your web applications from common exploits and vulnerabilities.
-Web application firewall is based on rules from the [OWASP core rule sets](https://owasp.org/www-project-modsecurity-core-rule-set/) 3.0 or 2.2.9. Web applications are increasingly targets of malicious attacks that exploit common known vulnerabilities. Common among these exploits are SQL injection attacks, cross site scripting attacks to name a few. Preventing such attacks in application code can be challenging and may require rigorous maintenance, patching and monitoring at multiple layers of the application topology. A centralized web application firewall helps make security management much simpler and gives better assurance to application administrators against threats or intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location versus securing each of individual web applications. Existing application gateways can be converted to a web application firewall enabled application gateway easily.
+Web application firewall is based on rules from the [OWASP core rule sets](https://owasp.org/www-project-modsecurity-core-rule-set/). Web applications are increasingly targets of malicious attacks that exploit common known vulnerabilities; SQL injection and cross-site scripting are among the most common exploits. Preventing such attacks in application code can be challenging and may require rigorous maintenance, patching, and monitoring at multiple layers of the application topology. A centralized web application firewall makes security management much simpler and gives application administrators better assurance against threats or intrusions. A WAF solution can also react to a security threat faster by patching a known vulnerability at a central location, rather than securing each individual web application. Existing application gateways can easily be converted to web application firewall enabled application gateways.
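As a rough illustration of that conversion, the sketch below turns on WAF prevention mode on an existing application gateway with `azure-mgmt-network`. The resource names are placeholders, and the gateway must already use a WAF-capable SKU; treat this as an assumption-laden sketch rather than a complete procedure.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    ApplicationGatewayWebApplicationFirewallConfiguration,
)

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Placeholder resource names; the gateway must use a WAF-capable SKU.
appgw = client.application_gateways.get("my-rg", "my-appgw")

appgw.web_application_firewall_configuration = (
    ApplicationGatewayWebApplicationFirewallConfiguration(
        enabled=True,
        firewall_mode="Prevention",  # block matching requests, not just log them
        rule_set_type="OWASP",
        rule_set_version="3.0",
    )
)

client.application_gateways.begin_create_or_update(
    "my-rg", "my-appgw", appgw
).result()
```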
Some of the common web vulnerabilities that web application firewall protects against include:
Some of the common web vulnerabilities which web application firewall protects a
- Detection of common application misconfigurations (for example, Apache, IIS) > [!Note]
-> For a more detailed list of rules and their protections see the following [Core rule sets](../../web-application-firewall/ag/ag-overview.md):
-
-Azure also provides several easy-to-use features to help secure both inbound and outbound traffic for your app. Azure also helps customers secure their application code by providing externally provided functionality to scan your web application for vulnerabilities.
--- [Setup Azure Active Directory authentication for your app](https://azure.microsoft.com/blog/azure-websites-authentication-authorization/)--- [Secure traffic to your app by enabling Transport Layer Security (TLS/SSL) - HTTPS](../../app-service/configure-ssl-bindings.md)-
- - Force all incoming traffic over HTTPS connection
-
- - Enable Strict Transport Security (HSTS)
--- Restrict access to your app by client's IP address--- Restrict access to your app by client's behavior - request frequency and concurrency--- [Configure TLS mutual authentication to require client certificates to connect to your web app](../../app-service/app-service-web-configure-tls-mutual-auth.md)--- [Configure a client certificate for use from your app to securely connect to external resources](https://azure.microsoft.com/blog/using-certificates-in-azure-websites-applications/)--- [Remove standard server headers to avoid tools from fingerprinting your app](https://azure.microsoft.com/blog/removing-standard-server-headers-on-windows-azure-web-sites/)--- [Securely connect your app with resources in a private network using Point-To-Site VPN](../../app-service/overview-vnet-integration.md)
+> For a more detailed list of rules and their protections see the following [Core rule sets](../../web-application-firewall/ag/ag-overview.md).
-- [Securely connect your app with resources in a private network using Hybrid Connections](../../app-service/app-service-hybrid-connections.md)
+Azure provides several easy-to-use features to help secure both inbound and outbound traffic for your app. Azure helps customers secure their application code by providing externally provided functionality to scan your web application for vulnerabilities. See [Azure App Services](../../app-service/overview.md) to learn more.
Azure App Service uses the same Antimalware solution used by Azure Cloud Services and Virtual Machines. To learn more, refer to our [Antimalware documentation](antimalware.md). ## Secure your network Microsoft Azure includes a robust networking infrastructure to support your application and service connectivity requirements. Network connectivity is possible between resources located in Azure, between on-premises and Azure hosted resources, and to and from the Internet and Azure.
-The [Azure network infrastructure](/previous-versions/azure/virtual-machines/windows/infrastructure-example) enables you to securely connect Azure resources to each other with [virtual networks (VNets)](../../virtual-network/virtual-networks-overview.md). A VNet is a representation of your own network in the cloud. A VNet is a logical isolation of the Azure cloud network dedicated to your subscription. You can connect VNets to your on-premises networks.
+The Azure network infrastructure enables you to securely connect Azure resources to each other with [virtual networks (VNets)](../../virtual-network/virtual-networks-overview.md). A VNet is a representation of your own network in the cloud. A VNet is a logical isolation of the Azure cloud network dedicated to your subscription. You can connect VNets to your on-premises networks.
![Secure your network (protect)](./media/technical-capabilities/azure-security-technical-capabilities-fig6.png)
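For instance, creating an isolated VNet with a single subnet takes only a few calls with `azure-mgmt-network`. The names and address ranges below are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import AddressSpace, Subnet, VirtualNetwork

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A VNet is a logically isolated address space dedicated to your subscription.
vnet = VirtualNetwork(
    location="eastus",
    address_space=AddressSpace(address_prefixes=["10.0.0.0/16"]),
    subnets=[Subnet(name="app", address_prefix="10.0.1.0/24")],
)

client.virtual_networks.begin_create_or_update("my-rg", "my-vnet", vnet).result()
```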
-If you need basic network level access control (based on IP address and the TCP or UDP protocols), then you can use [Network Security Groups](../../virtual-network/virtual-network-vnet-plan-design-arm.md). A Network Security Group (NSG) is a basic stateful packet filtering firewall and it enables you to control access based on a [5-tuple](https://www.techopedia.com/definition/28190/5-tuple).
+If you need basic network level access control (based on IP address and the TCP or UDP protocols), then you can use [Network Security Groups](../../virtual-network/network-security-groups-overview.md). A Network Security Group (NSG) is a basic stateful packet filtering firewall that enables you to control access.
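A hedged sketch of that kind of control, adding an inbound allow rule for TCP port 443 to an existing NSG with `azure-mgmt-network` (placeholder names):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

# The classic 5-tuple: protocol, source address/port, destination address/port.
rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="*",
    source_port_range="*",
    destination_address_prefix="*",
    destination_port_range="443",
    access="Allow",
    direction="Inbound",
    priority=100,  # lower numbers are evaluated first
)

client.security_rules.begin_create_or_update(
    "my-rg", "my-nsg", "allow-https-inbound", rule
).result()
```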
[Azure Firewall](../../firewall/overview.md) is a cloud-native and intelligent network firewall security service that provides threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability. It provides both east-west and north-south traffic inspection.
This method allows you to consolidate data from a variety of sources, so you can
### Microsoft Defender for Cloud
-[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) helps you prevent, detect, and respond to threats with increased visibility into and control over the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.
+[Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) helps you prevent, detect, and respond to threats with increased visibility into and control over the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with a broad ecosystem of security solutions.
Defender for Cloud analyzes the security state of your Azure resources to identify potential security vulnerabilities. A list of recommendations guides you through the process of configuring needed controls.
security Threat Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/threat-detection.md
documentationcenter: na ms.assetid:--++ na Previously updated : 02/03/2021 Last updated : 01/20/2023
Azure provides a wide array of options to configure and customize security to me
## Azure Active Directory Identity Protection
-[Azure AD Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md) is an [Azure Active Directory Premium P2](../../active-directory/fundamentals/active-directory-whatis.md) edition feature that provides an overview of the risk detections and potential vulnerabilities that can affect your organizationΓÇÖs identities. Identity Protection uses existing Azure AD anomaly-detection capabilities that are available through [Azure AD Anomalous Activity Reports](../../active-directory/reports-monitoring/overview-reports.md), and introduces new risk detection types that can detect real time anomalies.
+[Azure AD Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md) is an [Azure Active Directory Premium P2](../../active-directory/fundamentals/active-directory-whatis.md#what-are-the-azure-ad-licenses) edition feature that provides an overview of the risk detections and potential vulnerabilities that can affect your organization's identities. Identity Protection uses existing Azure AD anomaly-detection capabilities that are available through [Azure AD Anomalous Activity Reports](../../active-directory/reports-monitoring/overview-reports.md), and introduces new risk detection types that can detect real-time anomalies.
![Azure AD Identity Protection diagram](./media/threat-detection/azure-threat-detection-fig1.png)
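Risk detections surfaced by Identity Protection can also be pulled programmatically through the Microsoft Graph `riskDetections` endpoint. A hedged sketch using a hypothetical app registration that has been granted the `IdentityRiskEvent.Read.All` application permission:

```python
import requests
from azure.identity import ClientSecretCredential

# Hypothetical app registration with IdentityRiskEvent.Read.All granted.
credential = ClientSecretCredential("<tenant-id>", "<client-id>", "<client-secret>")
token = credential.get_token("https://graph.microsoft.com/.default").token

resp = requests.get(
    "https://graph.microsoft.com/v1.0/identityProtection/riskDetections?$top=5",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

for detection in resp.json().get("value", []):
    print(detection["riskEventType"], detection["riskLevel"],
          detection["detectedDateTime"])
```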
PIM helps you:
## Azure Monitor logs
-[Azure Monitor logs](../../azure-monitor/index.yml) is a Microsoft cloud-based IT management solution that helps you manage and protect your on-premises and cloud infrastructure. Because Azure Monitor logs is implemented as a cloud-based service, you can have it up and running quickly with minimal investment in infrastructure services. New security features are delivered automatically, saving ongoing maintenance and upgrade costs.
-
-In addition to providing valuable services on its own, Azure Monitor logs can integrate with System Center components, such as [System Center Operations Manager](/archive/blogs/cbernier/monitoring-windows-azure-with-system-center-operations-manager-2012-get-me-started), to extend your existing security management investments into the cloud. System Center and Azure Monitor logs can work together to provide a full hybrid management experience.
+[Azure Monitor logs](../../azure-monitor/logs/data-platform-logs.md) is a Microsoft cloud-based IT management solution that helps you manage and protect your on-premises and cloud infrastructure. Because Azure Monitor logs is implemented as a cloud-based service, you can have it up and running quickly with minimal investment in infrastructure services. New security features are delivered automatically, saving ongoing maintenance and upgrade costs.
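Collected logs can be queried programmatically as well. A minimal sketch with the `azure-monitor-query` package, assuming a placeholder workspace ID and that security events are being collected into the workspace:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL query over the last day; the SecurityEvent table assumes security
# event collection is enabled for this workspace.
response = client.query_workspace(
    "<workspace-id>",  # placeholder Log Analytics workspace ID
    "SecurityEvent | summarize count() by Activity | top 5 by count_",
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)
```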
### Holistic security and compliance posture
-[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) provides a comprehensive view into your organizationΓÇÖs IT security posture, with built-in search queries for notable issues that require your attention. It provides high-level insight into the security state of your computers. You can also view all events from the past 24 hours, 7 days, or any other custom time-frame.
+[Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) provides a comprehensive view into your organization's IT security posture, with built-in search queries for notable issues that require your attention. It provides high-level insight into the security state of your computers. You can also view all events from the past 24 hours, 7 days, or any other custom time-frame.
Azure Monitor logs help you quickly and easily understand the overall security posture of any environment, all within the context of IT Operations, including software update assessment, antimalware assessment, and configuration baselines. Security log data is readily accessible to streamline the security and compliance audit processes.
You can create and manage DSC resources that are hosted in Azure and apply them
## Microsoft Defender for Cloud
-Microsoft Defender for Cloud helps protect your hybrid cloud environment. By performing continuous security assessments of your connected resources, it's able to provide detailed security recommendations for the discovered vulnerabilities.
+[Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) helps protect your hybrid cloud environment. By performing continuous security assessments of your connected resources, it's able to provide detailed security recommendations for the discovered vulnerabilities.
Defender for Cloud's recommendations are based on the [Microsoft cloud security benchmark](/security/benchmark/azure/introduction) - the Microsoft-authored, Azure-specific set of guidelines for security and compliance best practices based on common compliance frameworks. This widely respected benchmark builds on the controls from the [Center for Internet Security (CIS)](https://www.cisecurity.org/benchmark/azure/) and the [National Institute of Standards and Technology (NIST)](https://www.nist.gov/) with a focus on cloud centric security.
Microsoft Defender for Cloud operates with security research and data science te
These combined efforts culminate in new and improved detections, which you can benefit from instantly. There's no action for you to take.
+### Microsoft Defender for Storage
+
+[Microsoft Defender for Storage](../../storage/common/azure-defender-storage-configure.md) is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
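Enabling the plan at subscription scope can be scripted. The sketch below uses `azure-mgmt-security`; note that the `SecurityCenter` client signature has varied between package versions (older releases also required an ASC location argument), so treat this as an assumption-laden sketch for a recent package version:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.security import SecurityCenter
from azure.mgmt.security.models import Pricing

# Assumes a recent package version; older ones also require an ASC location.
client = SecurityCenter(DefaultAzureCredential(), "<subscription-id>")

# "Standard" turns the Defender for Storage plan on for the subscription.
client.pricings.update("StorageAccounts", Pricing(pricing_tier="Standard"))
```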
+ ## Threat protection features: Other Azure services ### Virtual machines: Microsoft antimalware
SQL Database threat detectors use one of the following detection methodologies:
### Application Gateway Web Application Firewall
-[Web Application Firewall (WAF)](../../app-service/environment/integrate-with-application-gateway.md) is a feature of [Azure Application Gateway](../../web-application-firewall/ag/ag-overview.md) that provides protection to web applications that use an application gateway for standard [application delivery control](https://kemptechnologies.com/in/application-delivery-controllers) functions. Web Application Firewall does this by protecting them against most of the [Open Web Application Security Project (OWASP) top 10 common web vulnerabilities](https://owasp.org/www-project-top-ten/).
+[Web application firewall (WAF)](../../web-application-firewall/ag/ag-overview.md) is a feature of [Application Gateway](../../application-gateway/overview.md) that provides protection to web applications that use an application gateway for standard [application delivery control](https://kemptechnologies.com/in/application-delivery-controllers) functions. Web Application Firewall does this by protecting them against most of the [Open Web Application Security Project (OWASP) top 10 common web vulnerabilities](https://owasp.org/www-project-top-ten/).
![Application Gateway Web Application Firewall diagram](./media/threat-detection/azure-threat-detection-fig13.png)
Configuring WAF at your application gateway provides the following benefits:
- Helps meet compliance requirements. Certain compliance controls require all internet-facing endpoints to be protected by a WAF solution.
-### Anomaly Detection API: Built with Azure Machine Learning
-
-The Anomaly Detection API is an API that's useful for detecting a variety of anomalous patterns in your time series data. The API assigns an anomaly score to each data point in the time series, which can be used for generating alerts, monitoring through dashboards, or connecting with your ticketing systems.
-
-The [Anomaly Detection API](/azure/architecture/data-science-process/apps-anomaly-detection-api) can detect the following types of anomalies on time series data:
-
-- **Spikes and dips**: When you're monitoring the number of login failures to a service or number of checkouts in an e-commerce site, unusual spikes or dips could indicate security attacks or service disruptions.
-
-- **Positive and negative trends**: When you're monitoring memory usage in computing, shrinking free memory size indicates a potential memory leak. For service queue length monitoring, a persistent upward trend might indicate an underlying software issue.
-
-- **Level changes and changes in dynamic range of values**: Level changes in latencies of a service after a service upgrade or lower levels of exceptions after upgrade can be interesting to monitor.
-
-The machine learning-based API enables:
-
-- **Flexible and robust detection**: The anomaly detection models allow users to configure sensitivity settings and detect anomalies among seasonal and non-seasonal data sets. Users can adjust the anomaly detection model to make the detection API less or more sensitive according to their needs. This would mean detecting the less or more visible anomalies in data with and without seasonal patterns.
-
-- **Scalable and timely detection**: The traditional way of monitoring with present thresholds set by experts' domain knowledge are costly and not scalable to millions of dynamically changing data sets. The anomaly detection models in this API are learned, and models are tuned automatically from both historical and real-time data.
-
-- **Proactive and actionable detection**: Slow trend and level change detection can be applied for early anomaly detection. The early abnormal signals that are detected can be used to direct humans to investigate and act on the problem areas. In addition, root cause analysis models and alerting tools can be developed on top of this anomaly-detection API service.
-
-The anomaly-detection API is an effective and efficient solution for a wide range of scenarios, such as service health and KPI monitoring, IoT, performance monitoring, and network traffic monitoring. Here are some popular scenarios where this API can be useful:
-
-- IT departments need tools to track events, error code, usage log, and performance (CPU, memory, and so on) in a timely manner.
-
-- Online commerce sites want to track customer activities, page views, clicks, and so on.
-
-- Utility companies want to track consumption of water, gas, electricity, and other resources.
-
-- Facility or building management services want to monitor temperature, moisture, traffic, and so on.
-
-- IoT/manufacturers want to use sensor data in time series to monitor work flow, quality, and so on.
-
-- Service providers, such as call centers, need to monitor service demand trend, incident volume, wait queue length, and so on.
-
-- Business analytics groups want to monitor business KPIs' (such as sales volume, customer sentiments, or pricing) abnormal movement in real time.
-
### Defender for Cloud Apps
[Defender for Cloud Apps](/cloud-app-security/what-is-cloud-app-security) is a critical component of the Microsoft Cloud Security stack. It's a comprehensive solution that can help your organization as you move to take full advantage of the promise of cloud applications. It keeps you in control, through improved visibility into activity. It also helps increase the protection of critical data across cloud applications.
With tools that help uncover shadow IT, assess risk, enforce policies, investiga
| Protect | Use Defender for Cloud Apps to sanction or prohibit applications, enforce data loss prevention, control permissions and sharing, and generate custom reports and alerts. |
| Control | Mitigate risk by setting policies and alerts to achieve maximum control over network cloud traffic. Use Defender for Cloud Apps to migrate your users to safe, sanctioned cloud app alternatives. |
-
![Defender for Cloud Apps diagram](./media/threat-detection/azure-threat-detection-fig14.png)
Defender for Cloud Apps integrates visibility with your cloud by:
Web Application Firewall provides the following benefits:
- Accelerates the delivery of web application content, using capabilities such as caching, compression, and other traffic optimizations.
-For examples of web application firewalls that are available in the Azure Marketplace, see [Barracuda WAF, Brocade virtual web application firewall (vWAF), Imperva SecureSphere, and the ThreatSTOP IP firewall](https://azuremarketplace.microsoft.com/marketplace/apps/barracudanetworks.waf).
-
-## Next steps
+For examples of web application firewalls that are available in the Azure Marketplace, see [Barracuda WAF, Brocade virtual web application firewall (vWAF), Imperva SecureSphere, and the ThreatSTOP IP firewall](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/category/networking?page=1).
-- [Responding to today's threats](../../security-center/security-center-managing-and-responding-alerts.md): Helps identify active threats that target your Azure resources and provides the insights you need to respond quickly.
+## Next step
-- [Azure SQL Database Threat Detection](https://azure.microsoft.com/blog/azure-sql-database-threat-detection-your-built-in-security-expert/): Helps address your concerns about potential threats to your databases.
+- [Responding to today's threats](../../defender-for-cloud/managing-and-responding-alerts.md): Helps identify active threats that target your Azure resources and provides the insights you need to respond quickly.
sentinel Health Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/health-audit.md
Health data is collected in the *SentinelHealth* table in your Log Analytics wor
[Is the data connector receiving data](./monitor-data-connector-health.md)? For example, if you've instructed Microsoft Sentinel to run a query every 5 minutes, you want to check whether that query is being performed, how it's performing, and whether there are any risks or vulnerabilities related to the query.
+**Are my SAP systems running correctly?**
+
+[Are the SAP systems managed by your organization running correctly](monitor-sap-system-health.md)? Are the systems up and running, or are they unreachable? Does Microsoft Sentinel identify these systems as production systems?
+
**Did an automation rule run as expected?**
[Did my automation rule run when it was supposed to](./monitor-automation-health.md) - that is, when its conditions were met? Did all the actions in the automation rule run successfully?
sentinel Monitor Data Connector Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-data-connector-health.md
To ensure complete and uninterrupted data ingestion in your Microsoft Sentinel service, keep track of your data connectors' health, connectivity, and performance.
-This article describes how to use the following features, which allow you to perform this monitoring from within Microsoft Sentinel:
+The following features allow you to perform this monitoring from within Microsoft Sentinel:
-- **Data connectors health monitoring workbook:** This workbook provides additional monitors, detects anomalies, and gives insight regarding the workspace's data ingestion status. You can use the workbook's logic to monitor the general health of the ingested data, and to build custom views and rule-based alerts.
+- **Data connectors health monitoring workbook**: This workbook provides additional monitors, detects anomalies, and gives insight regarding the workspace's data ingestion status. You can use the workbook's logic to monitor the general health of the ingested data, and to build custom views and rule-based alerts.
-- ***SentinelHealth* data table (Preview):** Querying this table provides insights on health drifts, such as latest failure events per connector, or connectors with changes from success to failure states, which you can use to create alerts and other automated actions. The *SentinelHealth* data table is currently supported only for [selected data connectors](#supported-data-connectors).
+- ***SentinelHealth* data table (Preview)**: Querying this table provides insights on health drifts, such as latest failure events per connector, or connectors with changes from success to failure states, which you can use to create alerts and other automated actions. The *SentinelHealth* data table is currently supported only for [selected data connectors](#supported-data-connectors).
> [!IMPORTANT]
>
> The *SentinelHealth* data table is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+- [**View the health and status of your connected SAP systems**](monitor-sap-system-health.md): Review health information for your SAP systems under the SAP data connector, and use an alert rule template to get information about the health of the SAP agent's data collection.
+
## Use the health monitoring workbook
1. From the Microsoft Sentinel portal, select **Workbooks** from the **Threat management** menu.
sentinel Monitor Sap System Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/monitor-sap-system-health.md
+
+ Title: Monitor the health and role of your Microsoft Sentinel SAP systems
+description: Use the SAP connector page and a dedicated alert rule template to keep track of your SAP systems' connectivity and performance.
+++ Last updated : 11/09/2022+++
+# Monitor the health and role of your SAP systems
+
+After you [deploy the SAP solution](sap/deployment-overview.md), you want to ensure proper functioning of your SAP systems, and to keep track of their health, connectivity, and performance.
+
+This article describes how to use the following features, which allow you to perform this monitoring from within Microsoft Sentinel:
+
+- [**Use the SAP data connector page**](#use-the-sap-data-connector). Review the **System Health** area under the Microsoft Sentinel for SAP connector to get information on the health of your connected SAP systems.
+- [**Use the Data collection health check alert rule**](#use-an-alert-rule-template). Get proactive alerts on the health of the SAP agent's data collection.
+
+> [!IMPORTANT]
+> Monitoring the health of your SAP systems is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+## Use the SAP data connector
+
+1. From the Microsoft Sentinel portal, select **Data connectors**.
+1. In the search bar, type *Microsoft Sentinel for SAP*.
+1. Select the **Microsoft Sentinel for SAP** connector and select **Open connector**.
+1. In the **Configuration > System Health** area, you can view information on the health of your SAP systems.
++
+|Field |Description |Values |Notes |
+|---|---|---|---|
+|Agent name |Unique ID of the installed data connector agent. | | |
+|SID |The name of the connected SAP system ID (SID). | | |
+|Health |Indicates whether the SID is healthy. To troubleshoot health issues, [review the container execution logs](sap/sap-deploy-troubleshoot.md#view-all-container-execution-logs) and review other [troubleshooting steps](sap/sap-deploy-troubleshoot.md). |The **System healthy** status indicates that Microsoft Sentinel identified both logs and a heartbeat from the system. Other statuses, like **System unreachable for over 1 day**, indicate the connectivity status. | |
+|System role |Indicates whether the system is productive or not. The data connector agent retrieves the value by reading the SAP T000 table. This value also impacts billing. To change the role, an SAP admin needs to change the configuration in the SAP system. |• **Production**. The system is defined by the SAP admin as a production system.<br>• **Unknown (Production)**. Microsoft Sentinel couldn't retrieve the system status. Microsoft Sentinel regards this type of system as a production system for both security and billing purposes.<br>• **Non production**. Indicates roles like developing, testing, and customizing.<br>• **Agent update available**. Displayed in addition to the health status to indicate that a newer SAP connector version exists. In this case, we recommend that you [update the connector](sap/update-sap-data-connector.md). | If the system role is **Unknown (Production)**, check the Microsoft Sentinel role definitions and permissions on the SAP system, and validate that the system allows Microsoft Sentinel to read the content of the T000 table. Next, consider [updating the SAP connector](sap/update-sap-data-connector.md) to the latest version. |
+
+## Use an alert rule template
+
+The Microsoft Sentinel for SAP solution includes an alert rule template designed to give you insight into the health of your SAP agent's data collection.
+
+To turn on the analytics rule:
+1. From the Microsoft Sentinel portal, select **Analytics**.
+1. Under **Rule templates**, locate the *SAP - Data collection health check* alert rule.
+
+The analytics rule:
+
+- Evaluates signals sent from the agent.
+- Evaluates telemetry data.
+- Alerts on log continuation and other system connectivity issues, if any are found.
+- Learns the log ingestion history, and therefore works better with time.
+
+The rule needs at least seven days of loading history to detect the different seasonality patterns. We recommend a value of 14 days for the alert rule **Look back** parameter to allow detection of weekly activity profiles.
+
+Once activated, the rule evaluates the recent telemetry and log volume observed on the workspace against the learned history. The rule then alerts on potential issues, dynamically assigning severities according to the scope of the problem.
+
+This screenshot shows an example of an alert generated by the *SAP - Data collection health check* alert rule.
++
+## Next steps
+
+- Learn what [health monitoring in Microsoft Sentinel](health-audit.md) can do for you.
+- [Turn on health monitoring](enable-monitoring.md) in Microsoft Sentinel.
+- Monitor the health of your [automation rules and playbooks](monitor-automation-health.md).
+- See more information about the [*SentinelHealth* table schema](health-table-reference.md).
+
+
+
sentinel Deployment Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sap/deployment-overview.md
Last updated 04/12/2022
# Deploy Microsoft Sentinel Solution for SAP
-This article introduces you to the process of deploying the Microsoft Sentinel Solution for SAP. The full process is detailed in a whole set of articles linked under [Deployment milestones](#deployment-milestones) below.
+This article introduces you to the process of deploying the Microsoft Sentinel Solution for SAP. The full process is detailed in a whole set of articles linked under [Deployment milestones](#deployment-milestones).
> [!NOTE]
> If needed, you can [update an existing Microsoft Sentinel for SAP data connector](update-sap-data-connector.md) to its latest version.
This article introduces you to the process of deploying the Microsoft Sentinel S
>
> - The additional hourly charge applies to connected production systems only.
> - Microsoft Sentinel identifies a production system by looking at the configuration on the SAP system. To do this, Microsoft Sentinel searches for a production entry in the T000 table.
+> - [View the roles of your connected production systems](../monitor-sap-system-health.md).
-The Microsoft Sentinel for SAP data connector is an agent, installed on a VM or a physical server, that collects application logs from across the entire SAP system landscape. It then sends those logs to your Log Analytics workspace in Microsoft Sentinel. You can then use the other content in the Threat Monitoring for SAP solution – the analytics rules, workbooks, and watchlists – to gain insight into your organization's SAP environment and to detect and respond to security threats.
+The Microsoft Sentinel for SAP data connector is an agent installed on a VM or a physical server that collects application logs from across the entire SAP system landscape. It then sends those logs to your Log Analytics workspace in Microsoft Sentinel. You can then use the other content in the Threat Monitoring for SAP solution – the analytics rules, workbooks, and watchlists – to gain insight into your organization's SAP environment and to detect and respond to security threats.
## Deployment milestones
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## January 2023
+- [Monitor SAP system health (Preview)](#monitor-sap-system-health-and-role-preview)
- [New incident investigation experience (Preview)](#new-incident-investigation-experience-preview)
- [Microsoft Purview Information Protection connector (Preview)](#microsoft-purview-information-protection-connector-preview)
+### Monitor SAP system health and role (Preview)
+
+To ensure proper functioning and performance of your SAP systems, you can now use the SAP data connector page to [monitor information about the health of your SAP systems](monitor-sap-system-health.md) and the status of the SAP roles for the system. You can also use an alert rule template to get information about the health of the SAP agent's data collection.
+
### New incident investigation experience (Preview)
SOC analysts need to understand the full scope of an attack as fast as possible to respond effectively.
Learn how to [add a condition based on a custom detail](create-manage-use-automa
### Add advanced "Or" conditions to automation rules (Preview)
-You can now add OR conditions to automation rules. Also known as condition groups, these allow you to combine several rules with identical actions into a single rule, greatly increasing your SOC's efficiency.
+You can now add OR conditions or condition groups to automation rules. These conditions allow you to combine several rules with identical actions into a single rule, greatly increasing your SOC's efficiency.
For more information, see [Add advanced conditions to Microsoft Sentinel automation rules](add-advanced-conditions-to-automation-rules.md).
Microsoft Sentinel **incidents** have two main sources:
- They are ingested directly from other connected Microsoft security services (such as [Microsoft 365 Defender](microsoft-365-defender-sentinel-integration.md)) that created them.
-There can, however, be data from sources *not ingested into Microsoft Sentinel*, or events not recorded in any log, that justify launching an investigation. For this reason, Microsoft Sentinel now allows security analysts to manually create incidents from scratch for any type of event, regardless of its source or associated data, in order to manage and document the investigation.
+However, in some cases, data from sources *not ingested into Microsoft Sentinel*, or events not recorded in any log, may justify launching an investigation. For this reason, Microsoft Sentinel now allows security analysts to manually create incidents from scratch for any type of event, regardless of its source or associated data, in order to manage and document the investigation.
Since this capability raises the possibility that you'll create an incident in error, Microsoft Sentinel also allows you to delete incidents right from the portal.
storage Access Tiers Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/access-tiers-best-practices.md
+
+ Title: Best practices for using blob access tiers
+
+description: Learn about best practice guidelines that help you use access tiers to optimize performance and reduce costs.
+++ Last updated : 01/20/2023+++++
+# Best practices for using blob access tiers
+
+This article provides best practice guidelines that help you use access tiers to optimize performance and reduce costs. To learn more about access tiers, see [Hot, cool, and archive access tiers for blob data](access-tiers-overview.md?tabs=azure-portal).
+
+## Choose the most cost-efficient access tiers
+
+You can reduce costs by placing blob data into the most cost-efficient access tiers. Choose from three tiers that are designed to optimize your costs around data use. For example, the hot tier has a higher storage cost but a lower read cost. Therefore, if you plan to access data frequently, the hot tier might be the most cost-efficient choice. If you plan to read data less frequently, the cool or archive tier might make the most sense, because those tiers raise the cost of reading data while reducing the cost of storing it.
+
+To identify the optimal access tier, try to estimate what percentage of the data will be read on a monthly basis. The following chart shows the impact on monthly spending given various read percentages.
+
+> [!div class="mx-imgBorder"]
+> ![Chart that shows a bar for each tier which represents the monthly cost based on percentage read pattern](./media/access-tiers-best-practices/read-pattern-access-tiers.png)
+
+To model and analyze the cost of using cool versus archive storage, see [Archive versus cool](archive-cost-estimation.md#archive-versus-cool). You can apply similar modeling techniques to compare the cost of hot to cool or archive.
+
+## Migrate data directly to the most cost-efficient access tiers
+
+Choosing the optimal tier up front can reduce costs. If you change the tier of a block blob that you've already uploaded, then you'll pay the cost of writing to the initial tier when you first upload the blob, and then pay the cost of writing to the desired tier. If you change tiers by using a lifecycle management policy, then that policy will require a day to take effect and a day to complete execution. You'll also incur the capacity cost of storing data in the initial tier prior to the tier change. A brief example of a direct upload appears after the following links.
+
+- For guidance about how to upload to a specific access tier, see [Set a blob's access tier](access-tiers-online-manage.md).
+
+- For offline data movement to the desired tier, see [Azure Data Box](https://azure.microsoft.com/products/databox/).
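+
+As a minimal sketch, the following Azure CLI command uploads a file directly into the cool tier so that no later tier change is needed. The account, container, and file names are placeholders for illustration.
+
+```azurecli
+# Upload a blob directly into the cool tier (names are placeholders).
+az storage blob upload \
+    --account-name <storage-account> \
+    --container-name <container> \
+    --name backup-2023-01.tar.gz \
+    --file ./backup-2023-01.tar.gz \
+    --tier Cool \
+    --auth-mode login
+```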
+
+## Move data into the most cost-efficient access tiers
+
+After data is uploaded, you should periodically analyze your containers and blobs to understand how they are stored, organized, and used in production. Then, use lifecycle management policies to move data to the most cost-efficient tiers. For example, data that has not been accessed for more than 30 days might be more cost efficient if placed into the cool tier. Consider archiving data that has not been accessed for over 180 days.
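+
+As a rough sketch, a lifecycle management policy along the following lines implements that guidance: it moves block blobs to the cool tier after 30 days without access and archives them after 180 days. The rule name, file name, and thresholds are illustrative, and the last-access conditions assume that last access time tracking is enabled, as described in the next paragraph.
+
+```azurecli
+# Illustrative lifecycle policy: cool after 30 days without access, archive after 180.
+cat > policy.json <<'EOF'
+{
+  "rules": [
+    {
+      "enabled": true,
+      "name": "move-to-cooler-tiers",
+      "type": "Lifecycle",
+      "definition": {
+        "actions": {
+          "baseBlob": {
+            "tierToCool": { "daysAfterLastAccessTimeGreaterThan": 30 },
+            "tierToArchive": { "daysAfterLastAccessTimeGreaterThan": 180 }
+          }
+        },
+        "filters": { "blobTypes": [ "blockBlob" ] }
+      }
+    }
+  ]
+}
+EOF
+
+az storage account management-policy create \
+    --account-name <storage-account> \
+    --resource-group <resource-group> \
+    --policy @policy.json
+```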
+
+To gather telemetry, enable [blob inventory reports](blob-inventory.md) and enable [last access time tracking](lifecycle-management-policy-configure.md#optionally-enable-access-time-tracking). Analyze use patterns based on the last access time by using tools such as Azure Synapse or Azure Databricks. To learn about ways to analyze your data, see any of these articles:
+
+- [Tutorial: Analyze blob inventory reports](storage-blob-inventory-report-analytics.md)
+
+- [Calculate blob count and total size per container using Azure Storage inventory](calculate-blob-count-size.md)
+
+- [How to calculate Container Level Statistics in Azure Blob Storage with Azure Databricks](https://techcommunity.microsoft.com/t5/azure-paas-blog/how-to-calculate-container-level-statistics-in-azure-blob/ba-p/3614650)
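+
+If last access time tracking isn't already enabled on the account, a command along these lines turns it on (a sketch; the account and resource group names are placeholders):
+
+```azurecli
+# Enable last access time tracking so lifecycle rules can act on access patterns.
+az storage account blob-service-properties update \
+    --account-name <storage-account> \
+    --resource-group <resource-group> \
+    --enable-last-access-tracking true
+```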
+
+## Tier append and page blobs
+
+Your analysis might reveal append or page blobs that are not actively used. For example, you might have log files (append blobs) that are no longer being read or written to, but you'd like to store them for compliance reasons. Similarly, you might want to back up disks or disk snapshots (page blobs). You can move these blobs into cooler tiers as well. However, you must first convert them to block blobs.
+
+For information about how to convert append and page blobs to block blobs, see [Convert append blobs and page blobs to block blobs](convert-append-and-page-blobs-to-block-blobs.md).
+
+## Pack small files before moving data to cooler tiers
+
+Each read or write operation incurs a cost. To reduce the cost of reading and writing data, consider packing small files into larger ones by using file formats such as TAR or ZIP. Fewer files reduce the number of operations required to transfer data.
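+
+As an illustration, you might bundle a month of small log files into a single TAR archive before uploading it; the paths, names, and tier choice below are hypothetical.
+
+```azurecli
+# Pack many small files into one archive so the transfer costs one write instead of many.
+tar -czf logs-2023-01.tar.gz ./logs/2023-01/
+
+az storage blob upload \
+    --account-name <storage-account> \
+    --container-name logs-archive \
+    --name logs-2023-01.tar.gz \
+    --file logs-2023-01.tar.gz \
+    --tier Archive \
+    --auth-mode login
+```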
+
+The following chart shows the relative impact of packing files for the cool tier. The read cost assumes a monthly read percentage of 30%.
+
+> [!div class="mx-imgBorder"]
+> ![Chart that shows the impact on costs when you pack small files before uploading to the cool access tier.](./media/access-tiers-best-practices/packing-impact-cool.png)
+
+The following chart shows the relative impact of packing files for the archive tier. The read cost assumes a monthly read percentage of 30%.
+
+> [!div class="mx-imgBorder"]
+> ![Chart that shows the impact on costs when you pack small files before uploading to the archive access tier.](./media/access-tiers-best-practices/packing-impact-archive.png)
+
+> [!TIP]
+> To facilitate search and read scenarios, consider creating an index that maps packed file paths with original file paths, and then storing these indexes as block blobs in the hot tier.
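+
+One way to build such an index, sketched here with hypothetical paths and names, is a small CSV that maps each original file path to the archive that contains it, stored as a block blob in the hot tier:
+
+```azurecli
+# Hypothetical index that maps original file paths to the packed archive holding them.
+cat > packing-index.csv <<'EOF'
+original_path,packed_archive
+logs/2023-01/app-01.log,logs-archive/logs-2023-01.tar.gz
+logs/2023-01/app-02.log,logs-archive/logs-2023-01.tar.gz
+EOF
+
+az storage blob upload \
+    --account-name <storage-account> \
+    --container-name indexes \
+    --name packing-index.csv \
+    --file packing-index.csv \
+    --tier Hot \
+    --auth-mode login
+```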
+
+## Next steps
+
+- [Set a blob's access tier](access-tiers-online-manage.md)
+- [Archive a blob](archive-blob.md)
+- [Optimize costs by automatically managing the data lifecycle](lifecycle-management-overview.md)
+- [Estimate the cost of archiving data](archive-cost-estimation.md)
storage Convert Append And Page Blobs To Block Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/convert-append-and-page-blobs-to-block-blobs.md
+
+ Title: Convert append and page blobs into block blobs (Azure Storage)
+
+description: Learn how to convert an append blob or a page blob into a block blob in Azure Blob Storage.
++++ Last updated : 01/20/2023+++
+ms.devlang: powershell, azurecli
+++
+# Convert append blobs and page blobs into block blobs
+
+To convert blobs, copy them to a new location by using PowerShell, Azure CLI, or AzCopy. You'll use command parameters to ensure that the destination blob is a block blob. All metadata from the source blob is copied to the destination blob.
+
+## Convert append and page blobs
+
+### [PowerShell](#tab/azure-powershell)
+
+1. Open a Windows PowerShell command window.
+
+2. Sign in to your Azure subscription with the [Connect-AzAccount](/powershell/module/az.accounts/connect-azaccount) command and follow the on-screen directions.
+
+ ```powershell
+ Connect-AzAccount
+ ```
+
+3. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the storage account that contains the append or page blobs.
+
+ ```powershell
+ $context = Get-AzSubscription -SubscriptionId '<subscription-id>'
+ Set-AzContext $context
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+4. Create the storage account context by using the [New-AzStorageContext](/powershell/module/az.storage/new-azstoragecontext) command. Include the `-UseConnectedAccount` parameter so that data operations will be performed using your Azure Active Directory (Azure AD) credentials.
+
+ ```powershell
+ $ctx = New-AzStorageContext -StorageAccountName '<storage account name>' -UseConnectedAccount
+ ```
+
+5. Use the [Copy-AzStorageBlob](/powershell/module/az.storage/copy-azstorageblob) command and set the `-DestBlobType` parameter to `Block`.
+
+ ```powershell
+ $containerName = '<source container name>'
+ $srcblobName = '<source append or page blob name>'
+ $destcontainerName = '<destination container name>'
+ $destblobName = '<destination block blob name>'
+ $destTier = '<destination block blob tier>'
+
+ Copy-AzStorageBlob -SrcContainer $containerName -SrcBlob $srcblobName -Context $ctx -DestContainer $destcontainerName -DestBlob $destblobName -DestContext $ctx -DestBlobType Block -StandardBlobTier $destTier
+ ```
+
+ > [!TIP]
+ > The `-StandardBlobTier` parameter is optional. If you omit that parameter, then the destination blob infers its tier from the [default account access tier setting](access-tiers-overview.md#default-account-access-tier-setting). To change the tier after you've created a block blob, see [Change a blob's tier](access-tiers-online-manage.md#change-a-blobs-tier).
++
+### [Azure CLI](#tab/azure-cli)
+
+1. First, open the [Azure Cloud Shell](../../cloud-shell/overview.md), or if you've [installed](/cli/azure/install-azure-cli) the Azure CLI locally, open a command console application such as Windows PowerShell.
+
+ > [!NOTE]
+ > If you're using a locally installed version of the Azure CLI, ensure that you are using version 2.44.0 or later.
+
+2. If your identity is associated with more than one subscription, then set your active subscription to the subscription of the storage account that contains the append or page blobs.
+
+ ```azurecli-interactive
+ az account set --subscription <subscription-id>
+ ```
+
+ Replace the `<subscription-id>` placeholder value with the ID of your subscription.
+
+3. Use the [az storage blob copy start](/cli/azure/storage/blob/copy#az-storage-blob-copy-start) command and set the `--destination-blob-type` parameter to `BlockBlob`.
+
+ ```azurecli
+ accountName='<storage account name>'
+ containerName='<source container name>'
+ srcblobName='<source append or page blob name>'
+ destcontainerName='<destination container name>'
+ destblobName='<destination block blob name>'
+ destTier='<destination block blob tier>'
+
+ az storage blob copy start --account-name $accountName --destination-blob $destblobName --destination-container $destcontainerName --destination-blob-type BlockBlob --source-blob $srcblobName --source-container $containerName --tier $destTier
+ ```
+
+ > [!TIP]
+ > The `--tier` parameter is optional. If you omit that parameter, then the destination blob infers its tier from the [default account access tier setting](access-tiers-overview.md#default-account-access-tier-setting). To change the tier after you've created a block blob, see [Change a blob's tier](access-tiers-online-manage.md#change-a-blobs-tier).
+
+ > [!WARNING]
+ > The optional `--metadata` parameter overwrites any existing metadata. Therefore, if you specify metadata by using this parameter, then none of the original metadata from the source blob will be copied to the destination blob.
++
+### [AzCopy](#tab/azcopy)
+
Use the [azcopy copy](../common/storage-ref-azcopy-copy.md) command. Specify the source and destination paths. Set the `--blob-type` parameter to `BlockBlob`.
+
+```azcopy
+azcopy copy 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<append-or-page-blob-name>' 'https://<storage-account-name>.<blob or dfs>.core.windows.net/<container-name>/<name-of-new-block-blob>' --blob-type BlockBlob --block-blob-tier <destination-tier>
+```
+
+> [!TIP]
+> The `--block-blob-tier` parameter is optional. If you omit that parameter, then the destination blob infers its tier from the [default account access tier setting](access-tiers-overview.md#default-account-access-tier-setting). To change the tier after you've created a block blob, see [Change a blob's tier](access-tiers-online-manage.md#change-a-blobs-tier).
+
+> [!WARNING]
+> The optional `--metadata` parameter overwrites any existing metadata. Therefore, if you specify metadata by using this parameter, then none of the original metadata from the source blob will be copied to the destination blob.
+++
+## See also
+
+- [Hot, Cool, and Archive access tiers for blob data](access-tiers-overview.md)
+- [Set a blob's access tier](access-tiers-online-manage.md)
+- [Best practices for using blob access tiers](access-tiers-best-practices.md)
stream-analytics Visual Studio Code Custom Deserializer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/visual-studio-code-custom-deserializer.md
Title: Tutorial - Create custom .NET deserializers for Azure Stream Analytics cloud jobs using Visual Studio Code
+ Title: Tutorial - Create custom .NET deserializers for Azure Stream Analytics cloud jobs using Visual Studio Code (Preview)
description: This tutorial demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio Code. Previously updated : 12/27/2022 Last updated : 01/21/2023
-# Tutorial: Custom .NET deserializers for Azure Stream Analytics in Visual Studio Code
+# Tutorial: Custom .NET deserializers for Azure Stream Analytics in Visual Studio Code (Preview)
Azure Stream Analytics has built-in support for three data formats: JSON, CSV, and Avro, as described in [this article](stream-analytics-parsing-json.md). With custom .NET deserializers, you can process data in other formats such as [Protocol Buffer](https://developers.google.com/protocol-buffers/), [Bond](https://github.com/Microsoft/bond), and other user-defined formats for cloud jobs. This tutorial demonstrates how to create, test, and debug a custom .NET deserializer for an Azure Stream Analytics job using Visual Studio Code.
In this tutorial, you learned how to implement a custom .NET deserializer for th
> [!div class="nextstepaction"] > * [Create different .NET deserializers for Azure Stream Analytics jobs](custom-deserializer-examples.md)
-> * [Test Azure Stream Analytics jobs locally with live input using Visual Studio Code](visual-studio-code-local-run-live-input.md)
+> * [Test Azure Stream Analytics jobs locally with live input using Visual Studio Code](visual-studio-code-local-run-live-input.md)
virtual-machines Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/delete.md
PUT https://management.azure.com/subscriptions/subid/resourceGroups/rg1/provider
## Update the delete behavior on an existing VM
-You can change the behavior when you delete a VM. The following example updates the VM to delete the NIC, OS disk, and data disk when the VM is deleted.
+You can change the behavior when you delete a VM.
### [CLI](#tab/cli3)
az resource update --resource-group myResourceGroup --name myVM --resource-type
### [REST](#tab/rest3)
+The following example updates the VM to delete the NIC, OS disk, and data disk when the VM is deleted.
+ ```rest PATCH https://management.azure.com/subscriptions/subID/resourceGroups/resourcegroup/providers/Microsoft.Compute/virtualMachines/testvm?api-version=2021-07-01
You can use the Azure REST API to apply force delete to your virtual machines. U
-## Force Delete for virtual machine scale sets
+## Force Delete for scale sets
-Force delete allows you to forcefully delete your **Uniform** virtual machine scale sets, reducing delete latency and immediately freeing up attached resources. Force Delete will not immediately free the MAC address associated with a VM, as this is a physical resource that may take up to 10 minutes to free. If you need to immediately re-use the MAC address on a new VM, Force Delete is not recommended. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through Portal, CLI, PowerShell, and REST API.
+Force delete allows you to forcefully delete your **Uniform** Virtual Machine Scale Set, reducing delete latency and immediately freeing up attached resources. Force Delete will not immediately free the MAC address associated with a VM, as this is a physical resource that may take up to 10 minutes to free. If you need to immediately re-use the MAC address on a new VM, Force Delete is not recommended. Force delete should only be used when you are not intending to re-use virtual hard disks. You can use force delete through the Portal, CLI, PowerShell, and REST API.
### [Portal](#tab/portal5)
-When you go to delete an existing virtual machine scale set, you will find an option to apply force delete in the delete pane.
+When you go to delete an existing scale set, you will find an option to apply force delete in the delete pane.
1. Open the [portal](https://portal.azure.com).
-1. Navigate to your virtual machine scale set.
+1. Navigate to your Virtual Machine Scale Set.
1. On the **Overview** page, select **Delete**.
-1. In the **Delete virtual machine scale set** pane, select the checkbox for **Apply force delete**.
+1. In the **Delete Virtual Machine Scale Set** pane, select the checkbox for **Apply force delete**.
1. Select **Ok**.

### [CLI](#tab/cli5)
Remove-AzVmss `
### [REST](#tab/rest5)
-You can use the Azure REST API to apply force delete to your virtual machine scale set. Use the `forceDeletion` parameter for [Virtual Machines Scale Sets - Delete](/rest/api/compute/virtual-machine-scale-sets/delete).
+You can use the Azure REST API to apply force delete to your scale set. Use the `forceDeletion` parameter for [Virtual Machine Scale Sets - Delete](/rest/api/compute/virtual-machine-scale-sets/delete).
A: This feature is supported on all managed disk types used as OS disks and Data
A: No, this feature is only available on disks and NICs associated with a VM.
-### Q: How does this feature work with Flexible virtual machine scale sets?
+### Q: How does this feature work with Flexible Virtual Machine Scale Sets?
-A: For Flexible virtual machine scale sets the disks, NICs, and PublicIPs have `deleteOption` set to `Delete` by default so these resources are automatically cleaned up when the VMs are deleted.
+A: For Flexible Virtual Machine Scale Sets, the disks, NICs, and public IP addresses have `deleteOption` set to `Delete` by default, so these resources are automatically cleaned up when the VMs are deleted.
For data disks that were explicitly created and attached to the VMs, you can modify this property to 'Detach' instead of 'Delete' if you want the disks to persist after the VM is deleted.
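
As a rough sketch (not the article's own example), you could change this property on an existing VM by using the Azure CLI's generic `--set` argument; the resource group, VM name, and disk index here are hypothetical.

```azurecli
# Hypothetical example: keep the first data disk when this VM is deleted.
az vm update \
    --resource-group myResourceGroup \
    --name myScaleSetVM \
    --set storageProfile.dataDisks[0].deleteOption=Detach
```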
virtual-machines Disks Enable Ultra Ssd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-ultra-ssd.md
description: Learn about ultra disks for Azure VMs
Previously updated : 01/17/2023 Last updated : 01/20/2023