Updates from: 08/26/2024 01:05:30
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Cognitive Services Container Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-container-support.md
Previously updated : 08/28/2023 Last updated : 08/23/2024 keywords: on-premises, Docker, container, Kubernetes #Customer intent: As a potential customer, I want to know more about how Azure AI services provides and supports Docker containers for each service.
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/conversational-language-understanding/overview.md
Previously updated : 12/19/2023 Last updated : 08/23/2024
ai-services Language Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/language-studio.md
Previously updated : 12/19/2023 Last updated : 08/23/2024
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/overview.md
Previously updated : 12/19/2023 Last updated : 08/23/2024
ai-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/text-analytics-for-health/how-to/use-containers.md
server {
#### Example Docker compose file
-The below example shows how a [docker compose](https://docs.docker.com/compose/reference/overview) file can be created to deploy NGINX and health containers:
+The below example shows how a [docker compose](https://docs.docker.com/reference/cli/docker/compose/) file can be created to deploy NGINX and health containers:
```yaml
version: "3.7"
Use the host, `http://localhost:5000`, for container APIs.
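Since the compose body above is abbreviated, here is a minimal sketch of the general shape such a file can take; the image names, ports, and environment values are placeholder assumptions, not values from this article:

```yaml
# Minimal sketch; image name, ports, and billing values are placeholders.
version: "3.7"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "5000:5000"
  health:
    image: <healthcare-container-image>   # placeholder
    environment:
      - Eula=accept
      - Billing=<endpoint-uri>            # placeholder
      - ApiKey=<api-key>                  # placeholder
```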
### Structure the API request for the container
-You can use Postman or the example cURL request below to submit a query to the container you deployed, replacing the `serverURL` variable with the appropriate value. Note the version of the API in the URL for the container is different than the hosted API.
+You can use the [Visual Studio Code REST Client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) or the example cURL request below to submit a query to the container you deployed, replacing the `serverURL` variable with the appropriate value. Note the version of the API in the URL for the container is different than the hosted API.
[!INCLUDE [Use APIs in container](../includes/container-request.md)]
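To illustrate the point that the container exposes a different API version than the hosted endpoint, a request might be composed as in the sketch below; the route and payload shape are illustrative assumptions, so check your container's own API reference for the exact values.

```python
import json

# Placeholder host for the locally deployed container.
server_url = "http://localhost:5000"
# Assumed example route; the version segment for a container can differ
# from the hosted API, so verify it against your container's reference.
url = f"{server_url}/text/analytics/v3.1/entities/health/jobs"

# Illustrative request body shape for a text-analytics-style API.
body = json.dumps({
    "documents": [
        {"id": "1", "language": "en",
         "text": "Patient was prescribed 100 mg ibuprofen."}
    ]
})
print(url)
print(body)
```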
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/whats-new.md
Previously updated : 02/26/2024 Last updated : 08/22/2024
ai-services Metadata Generateanswer Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/metadata-generateanswer-usage.md
We offer the precise answer feature only with the QnA Maker managed version.
## Next steps
-The **Publish** page also provides information to [generate an answer](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md) with Postman or cURL.
+The **Publish** page also provides information to [generate an answer](../quickstarts/get-answer-from-knowledge-base-using-url-tool.md) with the [Visual Studio Code REST Client extension](https://marketplace.visualstudio.com/items?itemName=humao.rest-client) or cURL.
> [!div class="nextstepaction"]
> [Get analytics on your knowledge base](../how-to/get-analytics-knowledge-base.md)
ai-services Add Question Metadata Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/add-question-metadata-portal.md
If you are not continuing to the next quickstart, delete the QnA Maker and Bot f
## Next steps

> [!div class="nextstepaction"]
-> [Get answer with Postman or cURL](get-answer-from-knowledge-base-using-url-tool.md)
+> [Get answer with the Visual Studio Code REST Client extension or cURL](get-answer-from-knowledge-base-using-url-tool.md)
ai-services Get Answer From Knowledge Base Using Url Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Quickstarts/get-answer-from-knowledge-base-using-url-tool.md
Title: Use URL tool to get answer from knowledge base - QnA Maker
-description: This article walks you through getting an answer from your knowledge base using a URL test tool such as cURL or Postman.
+description: This article walks you through getting an answer from your knowledge base using a URL test tool such as cURL or the Visual Studio Code REST Client extension.
#
Last updated 01/19/2024
::: zone-end
::: zone-end
azure-functions Functions Core Tools Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-core-tools-reference.md
description: Reference documentation that supports the Azure Functions Core Tool
- ignite-2023 Previously updated : 08/20/2023 Last updated : 08/22/2024 # Azure Functions Core Tools reference
Core Tools commands are organized into the following contexts, each providing a
Before using the commands in this article, you must [install the Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
-## func init
+## `func init`
Creates a new Functions project in a specific language.
When you supply `<PROJECT_FOLDER>`, the project is created in a new folder with
> [!NOTE]
> When you use either `--docker` or `--docker-only` options, Core Tools automatically create the Dockerfile for C#, JavaScript, Python, and PowerShell functions. For Java functions, you must manually create the Dockerfile. For more information, see [Creating containerized function apps](functions-how-to-custom-container.md#creating-containerized-function-apps).
-## func logs
+## `func logs`
Gets logs for functions running in a Kubernetes cluster.
The `func logs` action supports the following options:
To learn more, see [Azure Functions on Kubernetes with KEDA](functions-kubernetes-keda.md).
-## func new
+## `func new`
Creates a new function in the current project based on a template.
The `func new` action supports the following options:
To learn more, see [Create a function](functions-run-local.md#create-func).
-## func run
+## `func run`
*Version 1.x only.*
For example, to call an HTTP-triggered function and pass content body, run the f
func run MyHttpTrigger --content '{\"name\": \"Azure\"}'
```
-## func start
+## `func start`
Starts the local runtime host and loads the function project in the current folder. The specific command depends on the [runtime version](functions-versions.md).
-# [v2.x+](#tab/v2)
+### [v2.x+](#tab/v2)
```command
func start
```
With the project running, you can [verify individual function endpoints](functions-run-local.md#run-a-local-function).
-# [v1.x](#tab/v1)
+### [v1.x](#tab/v1)
```command
func host start
```
In version 1.x, you can also use the [`func run`](#func-run) command to run a sp
-## func azure functionapp fetch-app-settings
+## `func azure functionapp fetch-app-settings`
Gets settings from a specific function app.
```command
func azure functionapp fetch-app-settings <APP_NAME>
```
-`func azure functionapp fetch-app-settings` supports these optional arguments:
-
-| Option | Description |
-| | -- |
-| **`--access-token`** | Lets you use a specific access token when performing authenticated `azure` actions. |
-| **`--access-token-stdin `** | Reads a specific access token from a standard input. Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). |
-| **`--management-url`** | Sets the management URL for your cloud. Use this when running in a sovereign cloud. |
-| **`--slot`** | Optional name of a specific slot to which to publish. |
-| **`--subscription`** | Sets the default subscription to use. |
- For more information, see [Download application settings](functions-run-local.md#download-application-settings). Settings are downloaded into the local.settings.json file for the project. On-screen values are masked for security. You can protect settings in the local.settings.json file by [enabling local encryption](functions-run-local.md#encrypt-the-local-settings-file).
-## func azure functionapp list-functions
+## `func azure functionapp list-functions`
Returns a list of the functions in the specified function app.

```command
func azure functionapp list-functions <APP_NAME>
```
-`func azure functionapp list-functions` supports these optional arguments:
-
-| Option | Description |
-| | -- |
-| **`--access-token`** | Lets you use a specific access token when performing authenticated `azure` actions. |
-| **`--access-token-stdin `** | Reads a specific access token from a standard input. Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). |
-| **`--management-url`** | Sets the management URL for your cloud. Use this when running in a sovereign cloud. |
-| **`--show-keys`** | Shows HTTP function endpoint URLs that include their default access keys. These URLs can be used to access function endpoints with `function` level [HTTP authentication](functions-bindings-http-webhook-trigger.md#http-auth). |
-| **`--slot`** | Optional name of a specific slot to which to publish. |
-| **`--subscription`** | Sets the default subscription to use. |
-
-## func azure functionapp logstream
+## `func azure functionapp logstream`
Connects the local command prompt to streaming logs for the function app in Azure.
func azure functionapp logstream <APP_NAME>
The default timeout for the connection is 2 hours. You can change the timeout by adding an app setting named [SCM_LOGSTREAM_TIMEOUT](functions-app-settings.md#scm_logstream_timeout), with a timeout value in seconds. Not yet supported for Linux apps in the Consumption plan. For these apps, use the `--browser` option to view logs in the portal.
-The `func azure functionapp logstream` command supports these optional arguments:
+The `logstream` action supports the following options:
| Option | Description |
| | -- |
-| **`--access-token`** | Lets you use a specific access token when performing authenticated `azure` actions. |
-| **`--access-token-stdin `** | Reads a specific access token from a standard input. Use this when reading the token directly from a previous command such as [`az account get-access-token`](/cli/azure/account#az-account-get-access-token). |
| **`--browser`** | Open Azure Application Insights Live Stream for the function app in the default browser. |
-| **`--management-url`** | Sets the management URL for your cloud. Use this when running in a sovereign cloud. |
-| **`--slot`** | Optional name of a specific slot to which to publish. |
-| **`--subscription`** | Sets the default subscription to use. |
For more information, see [Enable streaming execution logs in Azure Functions](streaming-logs.md).
-## func azure functionapp publish
+## `func azure functionapp publish`
Deploys a Functions project to an existing function app resource in Azure.
The following publish options apply, based on version:
| **`--overwrite-settings -y`** | Suppress the prompt to overwrite app settings when `--publish-local-settings -i` is used. |
| **`--publish-local-settings -i`** | Publish settings in local.settings.json to Azure, prompting to overwrite if the setting already exists. If you're using a [local storage emulator](functions-develop-local.md#local-storage-emulator), first change the app setting to an [actual storage connection](#func-azure-storage-fetch-connection-string). |
| **`--publish-settings-only`**, **`-o`** | Only publish settings and skip the content. Default is prompt. |
-| **`--show-keys`** | Shows HTTP function endpoint URLs that include their default access keys. These URLs can be used to access function endpoints with `function` level [HTTP authentication](functions-bindings-http-webhook-trigger.md#http-auth). |
| **`--slot`** | Optional name of a specific slot to which to publish. |
| **`--subscription`** | Sets the default subscription to use. |
The following publish options apply, based on version:
-## func azure storage fetch-connection-string
+## `func azure storage fetch-connection-string`
Gets the connection string for the specified Azure Storage account.
func azure storage fetch-connection-string <STORAGE_ACCOUNT_NAME>
For more information, see [Download a storage connection string](functions-run-local.md#download-a-storage-connection-string).
-## func azurecontainerapps deploy
+## `func azurecontainerapps deploy`
Deploys a containerized function app to an Azure Container Apps environment. Both the storage account used by the function app and the environment must already exist. For more information, see [Azure Container Apps hosting of Azure Functions](functions-container-apps-hosting.md).
The following deployment options apply:
> [!IMPORTANT]
> Storage connection strings and other service credentials are important secrets. Make sure to securely store any script files using `func azurecontainerapps deploy` and don't store them in any publicly accessible source control.
-## func deploy
+## `func deploy`
The `func deploy` command is deprecated. Instead use [`func kubernetes deploy`](#func-kubernetes-deploy).
-## func durable delete-task-hub
+## `func durable delete-task-hub`
Deletes all storage artifacts in the Durable Functions task hub.
The `delete-task-hub` action supports the following options:
To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#delete-a-task-hub).
-## func durable get-history
+## `func durable get-history`
Returns the history of the specified orchestration instance.
The `get-history` action supports the following options:
To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-1).
-## func durable get-instances
+## `func durable get-instances`
Returns the status of all orchestration instances. Supports paging using the `top` parameter.
The `get-instances` action supports the following options:
To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-2).
-## func durable get-runtime-status
+## `func durable get-runtime-status`
Returns the status of the specified orchestration instance.
The `get-runtime-status` action supports the following options:
To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-1).
-## func durable purge-history
+## `func durable purge-history`
Purge orchestration instance state, history, and blob storage for orchestrations older than the specified threshold.
The `purge-history` action supports the following options:
To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-7).
-## func durable raise-event
+## `func durable raise-event`
Raises an event to the specified orchestration instance.
The `raise-event` action supports the following options:
To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-5).
-## func durable rewind
+## `func durable rewind`
Rewinds the specified orchestration instance.
The `rewind` action supports the following options:
To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-6).
-## func durable start-new
+## `func durable start-new`
Starts a new instance of the specified orchestrator function.
The `start-new` action supports the following options:
To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools).
-## func durable terminate
+## `func durable terminate`
Stops the specified orchestration instance.
The `terminate` action supports the following options:
To learn more, see the [Durable Functions documentation](./durable/durable-functions-instance-management.md#azure-functions-core-tools-4).
-## func extensions install
+## `func extensions install`
Manually installs Functions extensions in a non-.NET project or in a C# script project.
The following considerations apply when using `func extensions install`:
+ The first time you explicitly install an extension, a .NET project file named extensions.csproj is added to the root of your app project. This file defines the set of NuGet packages required by your functions. While you can work with the [NuGet package references](/nuget/consume-packages/package-references-in-project-files) in this file, Core Tools lets you install extensions without having to manually edit this C# project file.
-## func extensions sync
+## `func extensions sync`
Installs all extensions added to the function app.
The `sync` action supports the following options:
Regenerates a missing extensions.csproj file. No action is taken when an extension bundle is defined in your host.json file.
-## func kubernetes deploy
+## `func kubernetes deploy`
Deploys a Functions project as a custom docker container to a Kubernetes cluster.
Core Tools uses the local Docker CLI to build and publish the image. Make sure y
To learn more, see [Deploying a function app to Kubernetes](functions-kubernetes-keda.md#deploying-a-function-app-to-kubernetes).
-## func kubernetes install
+## `func kubernetes install`
Installs KEDA in a Kubernetes cluster.
The `install` action supports the following options:
To learn more, see [Managing KEDA and functions in Kubernetes](functions-kubernetes-keda.md#managing-keda-and-functions-in-kubernetes).
-## func kubernetes remove
+## `func kubernetes remove`
Removes KEDA from the Kubernetes cluster defined in the kubectl config file.
The `remove` action supports the following options:
To learn more, see [Uninstalling KEDA from Kubernetes](functions-kubernetes-keda.md#uninstalling-keda-from-kubernetes).
-## func settings add
+## `func settings add`
Adds a new setting to the `Values` collection in the [local.settings.json file].
The `add` action supports the following option:
| | -- |
| **`--connectionString`** | Adds the name-value pair to the `ConnectionStrings` collection instead of the `Values` collection. Only use the `ConnectionStrings` collection when required by certain frameworks. To learn more, see [local.settings.json file]. |
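For orientation, this is a minimal sketch of how the two collections sit in a local.settings.json file; the setting names and values are placeholders, not values from this article:

```json
{
  "IsEncrypted": false,
  "Values": {
    "MySetting": "my-value"
  },
  "ConnectionStrings": {
    "MyDbConnection": "<connection-string-placeholder>"
  }
}
```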
-## func settings decrypt
+## `func settings decrypt`
Decrypts previously encrypted values in the `Values` collection in the [local.settings.json file].
func settings decrypt
Connection string values in the `ConnectionStrings` collection are also decrypted. In local.settings.json, `IsEncrypted` is also set to `false`. Encrypt local settings to reduce the risk of leaking valuable information from local.settings.json. In Azure, application settings are always stored encrypted.
-## func settings delete
+## `func settings delete`
Removes an existing setting from the `Values` collection in the [local.settings.json file].
The `delete` action supports the following option:
| | -- |
| **`--connectionString`** | Removes the name-value pair from the `ConnectionStrings` collection instead of from the `Values` collection. |
-## func settings encrypt
+## `func settings encrypt`
Encrypts the values of individual items in the `Values` collection in the [local.settings.json file].
func settings encrypt
Connection string values in the `ConnectionStrings` collection are also encrypted. In local.settings.json, `IsEncrypted` is also set to `true`, which specifies that the local runtime decrypts settings before using them. Encrypt local settings to reduce the risk of leaking valuable information from local.settings.json. In Azure, application settings are always stored encrypted.
-## func settings list
+## `func settings list`
Outputs a list of settings in the `Values` collection in the [local.settings.json file].
The `list` action supports the following option:
| | -- |
| **`--showValue`** | Shows the actual unmasked values in the output. |
-## func templates list
+## `func templates list`
Lists the available function (trigger) templates.
azure-monitor Autoscale Multiprofile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md
Previously updated : 06/20/2023 Last updated : 08/25/2024 # Customer intent: As a user or dev ops administrator, I want to understand how set up autoscale with more than one profile so I can scale my resources with more flexibility.
Each time the autoscale service runs, the profiles are evaluated in the followin
1. Recurring profiles
1. Default profile
-If a profile's date and time settings match the current time, autoscale will apply that profile's rules and capacity limits. Only the first applicable profile is used.
+If a profile's date and time settings match the current time, autoscale applies that profile's rules and capacity limits. Only the first applicable profile is used.
-The example below shows an autoscale setting with a default profile and recurring profile.
+The following example shows an autoscale setting with a default profile and recurring profile.
:::image type="content" source="./media/autoscale-multiple-profiles/autoscale-default-recurring-profiles.png" lightbox="./media/autoscale-multiple-profiles/autoscale-default-recurring-profiles.png" alt-text="A screenshot showing an autoscale setting with default and recurring profile or scale condition.":::
-In the above example, on Monday after 3 AM, the recurring profile will cease to be used. If the instance count is less than 3, autoscale scales to the new minimum of three. Autoscale continues to use this profile and scales based on CPU% until Monday at 8 PM. At all other times scaling will be done according to the default profile, based on the number of requests. After 8 PM on Monday, autoscale switches to the default profile. If for example, the number of instances at the time is 12, autoscale scales in to 10, which the maximum allowed for the default profile.
+In the example above, on Monday after 3 AM, the recurring profile ceases to be used. If the instance count is less than 3, autoscale scales to the new minimum of three. Autoscale continues to use this profile and scales based on CPU% until Monday at 8 PM. At all other times scaling is done according to the default profile, based on the number of requests. After 8 PM on Monday, autoscale switches to the default profile. If, for example, the number of instances at the time is 12, autoscale scales in to 10, which is the maximum allowed for the default profile.
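The evaluation order described earlier (fixed-date profiles, then recurring profiles, then the default; first match wins) can be sketched as a small selection routine. This is a minimal illustration with simplified profile objects, not the autoscale service's actual logic:

```python
from datetime import datetime

def pick_profile(profiles, now):
    """Return the first profile whose schedule matches `now`.

    Profiles are checked in priority order: fixed-date profiles first,
    then recurring profiles, with the default profile as the fallback.
    `matches` is a stand-in for the real schedule check.
    """
    for kind in ("fixed-date", "recurring"):
        for profile in profiles:
            if profile["kind"] == kind and profile["matches"](now):
                return profile
    return next(p for p in profiles if p["kind"] == "default")

profiles = [
    {"name": "default", "kind": "default", "matches": lambda now: True},
    {"name": "weekday", "kind": "recurring",
     "matches": lambda now: now.weekday() < 5},
]

chosen = pick_profile(profiles, datetime(2024, 8, 26, 9, 0))  # a Monday
print(chosen["name"])  # weekday
```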
## Multiple contiguous profiles

Autoscale transitions between profiles based on their start times. The end time for a given profile is determined by the start time of the following profile.
-In the portal, the end time field becomes the next start time for the default profile. You can't specify the same time for the end of one profile and the start of the next. The portal will force the end time to be one minute before the start time of the following profile. During this minute, the default profile will become active. If you don't want the default profile to become active between recurring profiles, leave the end time field empty.
+In the portal, the end time field becomes the next start time for the default profile. You can't specify the same time for the end of one profile and the start of the next. The portal forces the end time to be one minute before the start time of the following profile. During this minute, the default profile becomes active. If you don't want the default profile to become active between recurring profiles, leave the end time field empty.
> [!TIP]
> To set up multiple contiguous profiles using the portal, leave the end time empty. The current profile will stop being used when the next profile becomes active. Only specify an end time when you want to revert to the default profile.
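The one-minute gap the portal enforces between profiles can be sketched as follows; this is a minimal illustration of the rule, not portal code:

```python
from datetime import datetime, timedelta

def portal_end_time(next_profile_start):
    """The portal forces a profile's end time to one minute before the
    next profile's start; the default profile is active for that minute."""
    return next_profile_start - timedelta(minutes=1)

next_start = datetime(2024, 8, 26, 8, 0)   # next profile starts at 08:00
end = portal_end_time(next_start)
print(end.strftime("%H:%M"))  # 07:59
```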
When creating multiple profiles using templates, the CLI, and PowerShell, follow
See the autoscale section of the [ARM template resource definition](/azure/templates/microsoft.insights/autoscalesettings) for a full template reference.
-There is no specification in the template for end time. A profile will remain active until the next profile's start time.
+There's no specification in the template for end time. A profile will remain active until the next profile's start time.
## Add a recurring profile using ARM templates
-The example below shows how to create two recurring profiles. One profile for weekends from 00:01 on Saturday morning and a second Weekday profile starting on Mondays at 04:00. That means that the weekend profile will start on Saturday morning at one minute passed midnight and end on Monday morning at 04:00. The Weekday profile will start at 4am on Monday and end just after midnight on Saturday morning.
+The following example shows how to create two recurring profiles: one profile for weekends, starting at 00:01 on Saturday morning, and a second Weekday profile, starting on Mondays at 04:00. That means the weekend profile starts on Saturday morning at one minute past midnight and ends on Monday morning at 04:00. The Weekday profile starts at 04:00 on Monday and ends just after midnight on Saturday morning.
Use the following command to deploy the template: `az deployment group create --name VMSS1-Autoscale-607 --resource-group rg-vmss1 --template-file VMSS1-autoscale.json`
-where *VMSS1-autoscale.json* is the file containing the JSON object below.
+where *VMSS1-autoscale.json* is the file containing the following JSON object.
```json
{
where *VMSS1-autoscale.json* is the file containing the JSON object below.
"name": "VMSS1-Autoscale-607", "enabled": true,
- "targetResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "targetResourceUri": "/subscriptions/0000aaaa-11BB-cccc-dd22-eeeeee333333/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
"profiles": [ { "name": "Weekday profile",
where *VMSS1-autoscale.json* is the file containing the JSON object below.
"metricTrigger": { "metricName": "Inbound Flows", "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "metricResourceUri": "/subscriptions/0000aaaa-11BB-cccc-dd22-eeeeee333333/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
"operator": "GreaterThan", "statistic": "Average", "threshold": 100,
where *VMSS1-autoscale.json* is the file containing the JSON object below.
"metricTrigger": { "metricName": "Inbound Flows", "metricNamespace": "microsoft.compute/virtualmachinescalesets",
- "metricResourceUri": "/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
+ "metricResourceUri": "/subscriptions/0000aaaa-11BB-cccc-dd22-eeeeee333333/resourceGroups/rg-vmss1/providers/Microsoft.Compute/virtualMachineScaleSets/VMSS1",
"operator": "LessThan", "statistic": "Average", "threshold": 60,
The following steps show how to create a recurring autoscale profile using the C
## Add a recurring profile using CLI
-The example below shows how to add a recurring autoscale profile, recurring on Thursdays between 06:00 and 22:50.
+The following example shows how to add a recurring autoscale profile, recurring on Thursdays between 06:00 and 22:50.
``` azurecli
-export autoscaleName=vmss-autoscalesetting=002
+az account set --subscription 0000aaaa-11bb-cccc-dd22-eeeeee333333
+export autoscaleName=vmss-autoscalesetting-002
export resourceGroupName=rg-vmss-001
az monitor autoscale rule create -g rg-vmss1 --autoscale-name VMSS1-Autoscale-607
PowerShell can be used to create multiple profiles in your autoscale settings.
-See the [PowerShell Az.Monitor Reference](/powershell/module/az.monitor/#monitor) for the full set of autoscale PowerShell commands.
+See the [Az.Monitor PowerShell module reference](/powershell/module/az.monitor/#monitor) for the full set of autoscale PowerShell commands.
The following steps show how to create an autoscale profile using PowerShell.
The following steps show how to create an autoscale profile using PowerShell.
## Add a recurring profile using PowerShell
-The example below shows how to create default profile and a recurring autoscale profile, recurring on Wednesdays and Fridays between 09:00 and 23:00.
+The following example shows how to create a default profile and a recurring autoscale profile, recurring on Wednesdays and Fridays between 09:00 and 23:00.
The default profile uses the `CpuIn` and `CpuOut` rules. The recurring profile uses the `BandwidthIn` and `BandwidthOut` rules.

```azurepowershell
+Set-AzContext -Subscription "0000aaaa-11BB-cccc-dd22-eeeeee333333"
$ResourceGroupName="rg-vmss-001"
-$TargetResourceId="/subscriptions/abc123456-987-f6e5-d43c-9a8d8e7f6541/resourceGroups/rg-vmss-001/providers/Microsoft.Compute/virtualMachineScaleSets/vmss-001"
+$TargetResourceId="/subscriptions/0000aaaa-11BB-cccc-dd22-eeeeee333333/resourceGroups/rg-vmss-001/providers/Microsoft.Compute/virtualMachineScaleSets/vmss-001"
$ScaleSettingName="vmss-autoscalesetting-001"
$CpuOut=New-AzAutoscaleScaleRuleObject `
azure-monitor Autoscale Webhook Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-webhook-email.md
description: Learn how to use autoscale actions to call web URLs or send email n
Previously updated : 06/21/2023 Last updated : 08/25/2024
This article shows you how to set up notifications so that you can call specific
Webhooks allow you to send HTTP requests to a specific URL endpoint (callback URL) when a certain event or trigger occurs. Using webhooks, you can automate and streamline processes by enabling the automatic exchange of information between different systems or applications. Use webhooks to trigger custom code, notifications, or other actions to run when an autoscale event occurs.

## Email
-You can send email to any valid email address when an autoscale event occurs. Administrators and co-administrators of the subscription where the rule is running are also notified.
+You can send email to any valid email address when an autoscale event occurs.
+
+> [!NOTE]
+> Starting April 3, 2024, you won't be able to add any new Co-Administrators for Azure Autoscale Notifications. Azure Classic administrators will be retired on August 31, 2024, and you won't be able to send Azure Autoscale Notifications using Administrators and Co-Administrators after August 31, 2024. For more information, see [Prepare for Co-administrators retirement](/azure/role-based-access-control/classic-administrators?WT.mc_id=Portal-Microsoft_Azure_Monitoring&tabs=azure-portal#prepare-for-co-administrators-retirement).
+
## Configure Notifications
Use the Azure portal, CLI, PowerShell, or Resource Manager templates to configur
Select the **Notify** tab on the autoscale settings page to configure notifications.
-Select the check boxes to send an email to the subscription administrator or co-administrators. You can also enter a list of email addresses to send notifications to.
+Enter a list of email addresses to send notifications to.
Enter a webhook URI to send a notification to a web service. You can also add custom headers to the webhook request. For example, you can add an authentication token in the header, query parameters, or add a custom header to identify the source of the request.
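As a minimal sketch of that idea (the header name, token value, and payload fields below are illustrative assumptions, not values from this article), a webhook request with a custom authentication header might be assembled like this:

```python
import json
import urllib.request

# Placeholders; a real setup would use your own endpoint and token.
webhook_uri = "http://myservice.com/webhook-listener-123"
auth_token = "<token-placeholder>"

# Illustrative notification payload; the real schema is defined by autoscale.
payload = {"operation": "Scale Out", "context": {"resourceName": "vmss-001"}}

# Build (but don't send) the request, attaching a custom auth header.
req = urllib.request.Request(
    webhook_uri,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-Auth-Token": auth_token},
    method="POST",
)
print(req.get_full_url())
```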
Use the `az monitor autoscale update` or the `az monitor autoscale create` comma
The following parameters are used to configure notifications:
-+ `--add-action` - The action to take when the autoscale rule is triggered. The value must be `email` or `webhook`.
-+ `--email-administrator {false, true}` - Send email to the subscription administrator.
-+ `--email-coadministrators {false, true}` - Send email to the subscription co-administrators.
++ `--add-action` - The action to take when the autoscale rule is triggered. The value must be `email` or `webhook`, followed by the email address or webhook URI.
++ `--remove-action` - Remove an action previously added by `--add-action`. The value must be `email` or `webhook`. The parameter is only relevant for the `az monitor autoscale update` command.
-For example, the following command adds an email notification and a webhook notification to and existing autoscale setting. The command also sends email to the subscription administrator.
+For example, the following command adds an email notification and a webhook notification to an existing autoscale setting.
```azurecli
az monitor autoscale update \
    --resource-group <resource group name> \
    --name <autoscale setting name> \
    --add-action email pdavis@contoso.com \
    --add-action webhook http://myservice.com/webhook-listener-123
```
The following example shows how to configure a webhook and email notification.
$notification = New-AzAutoscaleNotificationObject `
    -EmailCustomEmail "pdavis@contoso.com" `
    -EmailSendToSubscriptionAdministrator $true `
    -EmailSendToSubscriptionCoAdministrator $true `
    -Webhook $webhook
When you use the Resource Manager templates or REST API, include the `notificati
| Field | Mandatory | Description |
| --- | --- | --- |
-| operation |Yes |Value must be "Scale." |
-| sendToSubscriptionAdministrator |Yes |Value must be "true" or "false." |
-| sendToSubscriptionCoAdministrators |Yes |Value must be "true" or "false." |
-| customEmails |Yes |Value can be null [] or a string array of emails. |
-| webhooks |Yes |Value can be null or valid URI. |
-| serviceUri |Yes |Valid HTTPS URI. |
-| properties |Yes |Value must be empty {} or can contain key-value pairs. |
+| `operation` |Yes |Value must be `Scale`. |
+| `sendToSubscriptionAdministrator` |Yes |No longer supported. Value must be `false`. |
+| `sendToSubscriptionCoAdministrators` |Yes |No longer supported. Value must be `false`.|
+| `customEmails` |Yes |Value can be null, an empty array `[]`, or a string array of emails. |
+| `webhooks` |Yes |Value can be null or valid URI. |
+| `serviceUri` |Yes |Valid HTTPS URI. |
+| `properties` |Yes |Value must be empty {} or can contain key-value pairs. |
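As a sketch of how these rules fit together, the following Python snippet assembles a `notifications`-style element and checks it against the table above. The `validate_notification` helper is illustrative (not an Azure SDK), and the nesting of the email fields under an `email` object follows the common autoscale settings schema; treat the exact shape as an assumption.

```python
# Illustrative validator for an autoscale `notifications` element,
# based on the field rules in the table above.
def validate_notification(n: dict) -> list[str]:
    errors = []
    if n.get("operation") != "Scale":
        errors.append("operation must be 'Scale'")
    email = n.get("email", {})
    # Admin/co-admin email is no longer supported, so both flags must be false.
    if email.get("sendToSubscriptionAdministrator") is not False:
        errors.append("sendToSubscriptionAdministrator must be false")
    if email.get("sendToSubscriptionCoAdministrators") is not False:
        errors.append("sendToSubscriptionCoAdministrators must be false")
    if not isinstance(email.get("customEmails"), list):
        errors.append("customEmails must be a (possibly empty) array")
    return errors

notification = {
    "operation": "Scale",
    "email": {
        "sendToSubscriptionAdministrator": False,
        "sendToSubscriptionCoAdministrators": False,
        "customEmails": ["pdavis@contoso.com"],
    },
    "webhooks": [
        {"serviceUri": "https://myservice.com/webhook-listener-123", "properties": {}}
    ],
}

print(validate_notification(notification))  # → []
```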
When the autoscale notification is generated, the following metadata is included
"operation": "Scale Out", "context": { "timestamp": "2023-06-22T07:01:47.8926726Z",
- "id": "/subscriptions/123456ab-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/microsoft.insights/autoscalesettings/AutoscaleSettings-002",
+ "id": "/subscriptions/0000aaaa-11BB-cccc-dd22-eeeeee333333/resourceGroups/rg-001/providers/microsoft.insights/autoscalesettings/AutoscaleSettings-002",
"name": "AutoscaleSettings-002", "details": "Autoscale successfully started scale operation for resource 'ScaleableAppServicePlan' from capacity '1' to capacity '2'",
- "subscriptionId": "123456ab-9876-a1b2-a2b1-123a567b9f8767",
+ "subscriptionId": "0000aaaa-11BB-cccc-dd22-eeeeee333333",
"resourceGroupName": "rg-001", "resourceName": "ScaleableAppServicePlan", "resourceType": "microsoft.web/serverfarms",
- "resourceId": "/subscriptions/123456ab-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/Microsoft.Web/serverfarms/ScaleableAppServicePlan",
- "portalLink": "https://portal.azure.com/#resource/subscriptions/123456ab-9876-a1b2-a2b1-123a567b9f8767/resourceGroups/rg-001/providers/Microsoft.Web/serverfarms/ScaleableAppServicePlan",
+ "resourceId": "/subscriptions/0000aaaa-11BB-cccc-dd22-eeeeee333333/resourceGroups/rg-001/providers/Microsoft.Web/serverfarms/ScaleableAppServicePlan",
+ "portalLink": "https://portal.azure.com/#resource/subscriptions/0000aaaa-11BB-cccc-dd22-eeeeee333333/resourceGroups/rg-001/providers/Microsoft.Web/serverfarms/ScaleableAppServicePlan",
"resourceRegion": "West Central US", "oldCapacity": "1", "newCapacity": "2"
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
This article provides an overview of how Azure Monitor Logs works and explains h
> [!NOTE]
> Azure Monitor Logs is one half of the data platform that supports Azure Monitor. The other is [Azure Monitor Metrics](../essentials/data-platform-metrics.md), which stores numeric data in a time-series database.
+## How Azure Monitor Logs works
+
+Azure Monitor Logs provides you with the tools to:
+
+* **Collect any data** by using Azure Monitor data collection methods. Transform data based on your needs to optimize costs, remove personal data, and so on, and route data to tables in your Log Analytics workspace.
+* **Manage and optimize log data and costs** by configuring your Log Analytics workspace and log tables, including table schemas, table plans, data retention, data aggregation, who has access to which data, and log-related costs.
+* **Retrieve data in near-real time** by using Kusto Query Language (KQL), or KQL-based tools and features that don't require KQL knowledge, such as Simple mode in the Log Analytics user interface, prebuilt curated monitoring experiences called Insights, and predefined queries.
+* **Use data flexibly** for a range of use cases, including data analysis, troubleshooting, alerting, dashboards and reports, custom applications, and other Azure or non-Azure services.
++
+## Data collection, routing, and transformation
+
+Azure Monitor's data collection capabilities let you collect data from all of your applications and resources running in Azure, other clouds, and on-premises. A powerful ingestion pipeline enables filtering, transforming, and routing data to destination tables in your Log Analytics workspace to optimize costs, analytics capabilities, and query performance.
++
+For more information on data collection and transformation, see [Azure Monitor data sources and data collection methods](../data-sources.md) and [Data collection transformations in Azure Monitor](../essentials/data-collection-transformations.md).
+ ## Log Analytics workspace

A [Log Analytics workspace](../logs/log-analytics-workspace-overview.md) is a data store that holds tables into which you collect data.
To address the data storage and consumption needs of various personas who use a
You can also configure network isolation, replicate your workspace across regions, and [design a workspace architecture based on your business needs](../logs/workspace-design.md).
-## Kusto Query Language (KQL) and Log Analytics
-
-You retrieve data from a Log Analytics workspace using a [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) query, which is a read-only request to process data and return results. KQL is a powerful tool that can analyze millions of records quickly. Use KQL to explore your logs, transform and aggregate data, discover patterns, identify anomalies and outliers, and more.
-
-Log Analytics is a tool in the Azure portal for running log queries and analyzing their results. [Log Analytics Simple mode](log-analytics-simple-mode.md) lets any user, regardless of their knowledge of KQL, retrieve data from one or more tables with one click. A set of controls lets you explore and analyze the retrieved data using the most popular Azure Monitor Logs functionality in an intuitive, spreadsheet-like experience.
--
-If you're familiar with KQL, you can use Log Analytics KQL mode to edit and create queries, which you can then use in Azure Monitor features such as alerts and workbooks, or share with other users.
-
-For more information about Log Analytics, see [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md).
-
-## Built-in insights and custom dashboards, workbooks, and reports
-
-Many of Azure Monitor's [ready-to-use, curated Insights experiences](../insights/insights-overview.md) store data in Azure Monitor Logs, and present this data in an intuitive way so you can monitor the performance and availability of your cloud and hybrid applications and their supporting components.
--
-You can also [create your own visualizations and reports](../best-practices-analysis.md#built-in-visualization-tools) using workbooks, dashboards, and Power BI.
--
-## Data collection, routing, and transformation
-
-Azure Monitor's data collection capabilities let you collect data from all of your applications and resources running in Azure, other clouds, and on-premises. A powerful ingestion pipeline enables filtering, transforming, and routing data to destination tables in your Log Analytics workspace to optimize costs, analytics capabilities, and query performance.
--
-For more information on data collection and transformation, see [Azure Monitor data sources and data collection methods](../data-sources.md) and [Data collection transformations in Azure Monitor](../essentials/data-collection-transformations.md).
## Table plans
The diagram and table below compare the Analytics, Basic, and Auxiliary table pl
> [!NOTE]
> The Auxiliary table plan is in public preview. For current limitations and supported regions, see [Public preview limitations](create-custom-table-auxiliary.md#public-preview-limitations).<br> The Basic and Auxiliary table plans aren't available for workspaces in [legacy pricing tiers](cost-logs.md#legacy-pricing-tiers).
+## Kusto Query Language (KQL) and Log Analytics
+
+You retrieve data from a Log Analytics workspace using a [Kusto Query Language (KQL)](/azure/data-explorer/kusto/query/) query, which is a read-only request to process data and return results. KQL is a powerful tool that can analyze millions of records quickly. Use KQL to explore your logs, transform and aggregate data, discover patterns, identify anomalies and outliers, and more.
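For example, a common KQL pattern is filtering by time and aggregating with `summarize`. The sketch below builds such a query as a Python string; `Heartbeat` is a standard Azure Monitor table, but adapt the table and column names to your own workspace.

```python
from datetime import timedelta

def heartbeat_count_query(lookback: timedelta) -> str:
    """Build an illustrative KQL query counting heartbeats per computer.

    Heartbeat is a standard Azure Monitor table; adjust the table and
    columns for your own Log Analytics workspace.
    """
    hours = int(lookback.total_seconds() // 3600)
    return (
        "Heartbeat\n"
        f"| where TimeGenerated > ago({hours}h)\n"
        "| summarize HeartbeatCount = count() by Computer\n"
        "| order by HeartbeatCount desc"
    )

print(heartbeat_count_query(timedelta(days=1)))
```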
+
+Log Analytics is a tool in the Azure portal for running log queries and analyzing their results. [Log Analytics Simple mode](log-analytics-simple-mode.md) lets any user, regardless of their knowledge of KQL, retrieve data from one or more tables with one click. A set of controls lets you explore and analyze the retrieved data using the most popular Azure Monitor Logs functionality in an intuitive, spreadsheet-like experience.
++
+If you're familiar with KQL, you can use Log Analytics KQL mode to edit and create queries, which you can then use in Azure Monitor features such as alerts and workbooks, or share with other users.
+
+For more information about Log Analytics, see [Overview of Log Analytics in Azure Monitor](./log-analytics-overview.md).
+
+## Built-in insights and custom dashboards, workbooks, and reports
+
+Many of Azure Monitor's [ready-to-use, curated Insights experiences](../insights/insights-overview.md) store data in Azure Monitor Logs, and present this data in an intuitive way so you can monitor the performance and availability of your cloud and hybrid applications and their supporting components.
++
+You can also [create your own visualizations and reports](../best-practices-analysis.md#built-in-visualization-tools) using workbooks, dashboards, and Power BI.
+ ## Use cases

This table describes some of the ways that you can use the data you collect in Azure Monitor Logs to derive operational and business value.
communication-services European Union Data Boundary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/european-union-data-boundary.md
Calls and meetings can be established in various ways by various users. We defi
For EU communication resources, when the organizer, initiator, or guests join a call from the EU, processing and storage of personal data will be limited to the EU.
+## SMS
+
+Azure Communication Services guarantees that SMS data within the EUDB is stored in EUDB regions. As of today, we process and store data in the Netherlands, Ireland or Switzerland regions, ensuring no unauthorized data transfer outside the EEA.
+Also, Azure Communication Services employs advanced security measures, including encryption, to protect SMS data both at rest and in transit. Customers can select their preferred data residency within the EUDB, making sure data remains within the designated EU regions.
+
+#### SMS EUDB FAQ
+
+**What happens with SMS data in the UK?**
+
+While the UK is no longer part of the EU, Azure Communication Services processes data for the UK within the EUDB. As of today, data processing and storage occur within the Netherlands, Ireland or Switzerland regions, maintaining compliance with EU regulations.
+
+**What happens when an SMS recipient is outside the EU?**
+
+If an SMS recipient is outside the EU, the core data processing and storage remain within the EUDB (Netherlands, Ireland or Switzerland regions). However, for the SMS to be delivered, it may be routed through networks outside the EU, depending on the recipient's location and carrier, which is necessary for successful message delivery.
+
+**Can data be transferred to non-EU regions under any circumstances?**
+
+Yes, to deliver SMS to recipients outside the EU, some data routing may occur outside the EUDB, but this is strictly for message delivery purposes. Data processing and storage at rest still comply with the EUDB regulations.
++

## Messaging

All threads created from an EU resource will process and store personal data in the EU.
iot-dps How To Manage Linked Iot Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-manage-linked-iot-hubs.md
Title: How to manage linked IoT hubs with Device Provisioning Service (DPS)
description: This article shows how to link and manage IoT hubs with the Device Provisioning Service (DPS). Previously updated : 01/18/2023 Last updated : 08/23/2024
# How to link and manage IoT hubs
-Azure IoT Hub Device Provisioning Service (DPS) can provision devices across one or more IoT hubs. Before DPS can provision devices to an IoT hub, it must be linked to your DPS instance. Once linked, an IoT hub can be used in an allocation policy. Allocation policies determine how devices are assigned to IoT hubs by DPS. This article provides instruction on how to link IoT hubs and manage them in your DPS instance.
+Azure IoT Hub Device Provisioning Service (DPS) can provision devices across one or more IoT hubs. Before DPS can provision devices to an IoT hub, it must be able to write to the IoT hub device registry. This article provides instructions on how to link IoT hubs and manage them in your DPS instance. Once linked, an IoT hub can be used in an allocation policy. Allocation policies determine how devices are assigned to IoT hubs by DPS.
-## Linked IoT hubs and allocation policies
+## Linked IoT hub settings
-DPS can only provision devices to IoT hubs that have been linked to it. Linking an IoT hub to a DPS instance gives the service read/write permissions to the IoT hub's device registry. With these permissions, DPS can register a device ID and set the initial configuration in the device twin. Linked IoT hubs may be in any Azure region. You may link hubs in other subscriptions to your DPS instance.
+The Device Provisioning Service can only provision devices to IoT hubs that have been linked to it. Linking an IoT hub to a DPS instance gives the DPS instance read/write permissions to the IoT hub's device registry. With these permissions, DPS can register a device ID and set the initial configuration in the device twin. Linked IoT hubs may be in any Azure region. You may link hubs in other subscriptions to your DPS instance.
-After an IoT hub is linked to DPS, it's eligible to participate in allocation. Whether and how it will participate in allocation depends on settings in the enrollment that a device provisions through and settings on the linked IoT hub itself.
+After an IoT hub is linked to DPS, it's eligible to participate in allocation. Whether and how it participates in allocation depends on settings in the enrollment that a device provisions through and settings on the linked IoT hub itself.
The following settings control how DPS uses linked IoT hubs:
-* **Connection string**: Sets the IoT Hub connection string that DPS uses to connect to the linked IoT hub. The connection string is based on one of the IoT hub's shared access policies. DPS needs the following permissions on the IoT hub: *RegistryWrite* and *ServiceConnect*. The connection string must be for a shared access policy that has these permissions. To learn more about IoT Hub shared access policies, see [IoT Hub access control and permissions](../iot-hub/iot-hub-dev-guide-sas.md#access-control-and-permissions).
+* **Connection string**: Sets the IoT Hub connection string that DPS uses to connect to the linked IoT hub. The connection string is based on one of the IoT hub's shared access policies. DPS needs the following permissions on the IoT hub: *RegistryWrite* and *ServiceConnect*. The connection string must be for a shared access policy that has these permissions. To learn more about IoT Hub shared access policies, see [IoT Hub access control and permissions](../iot-hub/authenticate-authorize-sas.md#access-control-and-permissions).
* **Allocation weight**: Determines the likelihood of an IoT hub being selected when DPS hashes device assignment across a set of IoT hubs. The value can be between one and 1000. The default is one (or **null**). Higher values increase the IoT hub's probability of being selected.
-* **Apply allocation policy**: Sets whether the IoT hub participates in allocation policy. The default is **Yes** (true). If set to **No** (false), devices won't be assigned to the IoT hub. The IoT hub can still be selected on an enrollment, but it won't participate in allocation. You can use this setting to temporarily or permanently remove an IoT hub from participating in allocation; for example, if it's approaching the allowed number of devices.
+* **Apply allocation policy**: Sets whether the IoT hub participates in allocation policy. The default is **Yes** (true). If set to **No** (false), devices aren't assigned to the IoT hub. The IoT hub can still be selected on an enrollment, but it won't participate in allocation. You can use this setting to temporarily or permanently remove an IoT hub from participating in allocation; for example, if it's approaching the allowed number of devices.
To learn about DPS allocation policies and how linked IoT hubs participate in them, see [Manage allocation policies](how-to-use-allocation-policies.md).
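DPS doesn't document the exact hashing algorithm it uses, but the effect of allocation weight can be sketched with a weighted, rendezvous-style hash, where a higher weight makes a hub more likely to win for any given device ID. Everything below (the function names and the simplified scoring) is illustrative, not the DPS implementation.

```python
import hashlib

def assign_hub(device_id: str, hubs: dict[str, int]) -> str:
    """Pick a hub for device_id; hubs maps hub name -> allocation weight (1-1000).

    Illustrative rendezvous (highest-random-weight) hashing: each hub gets a
    deterministic pseudo-random score per device, scaled by its weight.
    """
    def score(hub: str, weight: int) -> float:
        digest = hashlib.sha256(f"{device_id}:{hub}".encode()).hexdigest()
        # Map the 256-bit hash to [0, 1), then scale by weight so heavier
        # hubs win more often. (Real weighted rendezvous hashing uses
        # -weight / log(h); this is a deliberate simplification.)
        h = int(digest, 16) / float(1 << 256)
        return weight * h

    return max(hubs, key=lambda hub: score(hub, hubs[hub]))

hubs = {"hub-west": 1, "hub-east": 1000}
assignments = [assign_hub(f"device-{i}", hubs) for i in range(200)]
# The weight-1000 hub receives the overwhelming majority of devices.
print(assignments.count("hub-east") > assignments.count("hub-west"))  # → True
```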
-## Add a linked IoT hub
+## Limitations
-When you link an IoT hub to your DPS instance, it becomes available to participate in allocation. You can add IoT hubs that are inside or outside of your subscription. When you link an IoT hub, it may or may not be available for allocations in existing enrollments:
+* There are some limitations when working with linked IoT hubs and private endpoints. For more information, see [Private endpoint limitations](virtual-network-support.md#private-endpoint-limitations).
-* For enrollments that don't explicitly set the IoT hubs to apply allocation policy to, a newly linked IoT hub immediately begins participating in allocation.
+* The linked IoT Hub must have [Connect using shared access policies](../iot-hub/iot-hub-dev-guide-azure-ad-rbac.md#azure-ad-access-and-shared-access-policies) set to **Allow**.
-* For enrollments that do explicitly set the IoT hubs to apply allocation policy to, you'll need to manually or programmatically add the new IoT hub to the enrollment settings for it to participate in allocation.
+## Add a linked IoT hub
-### Limitations
+You can add IoT hubs that are inside or outside of your subscription. When you link an IoT hub, it might or might not be available for allocations in existing enrollments:
-* There are some limitations when working with linked IoT hubs and private endpoints. For more information, see [Private endpoint limitations](virtual-network-support.md#private-endpoint-limitations).
+* For enrollments that don't explicitly set the IoT hubs to apply allocation policy to, a newly linked IoT hub immediately begins participating in allocation.
-* The linked IoT Hub must have [Connect using shared access policies](../iot-hub/iot-hub-dev-guide-azure-ad-rbac.md#azure-ad-access-and-shared-access-policies) set to **Allow**.
+* For enrollments that do explicitly set the IoT hubs to apply allocation policy to, you'll need to manually or programmatically add the new IoT hub to the enrollment settings for it to participate in allocation.
-### Use the Azure portal to link an IoT hub
+### [Azure portal](#tab/portal)
In the Azure portal, you can link an IoT hub either from the left menu of your DPS instance or from the enrollment when creating or updating an enrollment. In both cases, the IoT hub is scoped to the DPS instance (not just the enrollment).
To link an IoT hub to your DPS instance in the Azure portal:
1. Select **Save**.
-When you're creating or updating an enrollment, you can use the **Link a new IoT hub** button on the enrollment. You'll be presented with the same page and choices as above. After you save the linked hub, it will be available on your DPS instance and can be selected from your enrollment.
- > [!NOTE] >
-> In the Azure portal, you can't set the *Allocation weight* and *Apply allocation policy* settings when you add a linked IoT hub. Instead, You can update these settings after the IoT hub is linked. To learn more, see [Update a linked IoT hub](#update-a-linked-iot-hub).
+> In the Azure portal, you can't set the *Allocation weight* and *Apply allocation policy* settings when you add a linked IoT hub. Instead, update these settings after the IoT hub is linked.
-### Use the Azure CLI to link an IoT hub
+### [Azure CLI](#tab/cli)
Use the [az iot dps linked-hub create](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-create) Azure CLI command to link an IoT hub to your DPS instance.
For example, the following command links an IoT hub named *MyExampleHub* using a
az iot dps linked-hub create --dps-name MyExampleDps --resource-group MyResourceGroup --connection-string "HostName=MyExampleHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=XNBhoasdfhqRlgGnasdfhivtshcwh4bJwe7c0RIGuWsirW0=" --location westus
```

++

DPS also supports linking IoT Hubs using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).

## Update a linked IoT hub

You can update the settings on a linked IoT hub to change its allocation weight, whether it can have allocation policies applied to it, and the connection string that DPS uses to connect to it. When you update the settings for an IoT hub, the changes take effect immediately, whether the IoT hub is specified on an enrollment or used by default.
-### Use the Azure portal to update a linked IoT hub
+### [Azure portal](#tab/portal)
In the Azure portal, you can update the *Allocation weight* and *Apply allocation policy* settings.
To update the settings for a linked IoT hub using the Azure portal:
1. On the **Linked IoT hub details** page:
- :::image type="content" source="media/how-to-manage-linked-iot-hubs/set-linked-iot-hub-properties.png" alt-text="Screenshot that shows the linked IoT hub details page.":::.
+ :::image type="content" source="media/how-to-manage-linked-iot-hubs/set-linked-iot-hub-properties.png" alt-text="Screenshot that shows the linked IoT hub details page.":::
* Use the **Allocation weight** slider or text box to choose a weight between one and 1000. The default is one.
To update the settings for a linked IoT hub using the Azure portal:
> [!NOTE] >
-> You can't update the connection string that DPS uses to connect to the IoT hub from the Azure portal. Instead, you can use the Azure CLI to update the connection string, or you can delete the linked IoT hub from your DPS instance and relink it. To learn more, see [Update keys for linked IoT hubs](#update-keys-for-linked-iot-hubs).
+> You can't update the connection string that DPS uses to connect to the IoT hub from the Azure portal. Instead, use the Azure CLI to update the connection string, or delete the linked IoT hub from your DPS instance and relink it. To learn more, see the [Update keys for linked IoT hubs](#update-keys-for-linked-iot-hubs) section.
-### Use the Azure CLI to update a linked IoT hub
+### [Azure CLI](#tab/cli)
With the Azure CLI, you can update the *Allocation weight*, *Apply allocation policy*, and *Connection string* settings.
az iot dps linked-hub update --dps-name MyExampleDps --resource-group MyResource
Use the [az iot dps update](/cli/azure/iot/dps#az-iot-dps-update) command to update the connection string for a linked IoT hub. You can use the `--set` parameter along with the connection string for the IoT hub shared access policy you want to use. For details, see [Update keys for linked IoT hubs](#update-keys-for-linked-iot-hubs).

++

DPS also supports updating linked IoT Hubs using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).

## Delete a linked IoT hub
-When you delete a linked IoT hub from your DPS instance, it will no longer be available to set in future enrollments. However, it may or may not be removed from allocations in existing enrollments:
+When you delete a linked IoT hub from your DPS instance, it will no longer be available to set in future enrollments. However, it might or might not be removed from allocations in existing enrollments:
* For enrollments that don't explicitly set the IoT hubs to apply allocation policy to, a deleted linked IoT hub is no longer available for allocation.
-* For enrollments that do explicitly set the IoT hubs to apply allocation policy to, you'll need to manually or programmatically remove the IoT hub from the enrollment settings for it to be removed from participation in allocation. Failure to do so may result in an error when a device tries to provision through the enrollment.
+* For enrollments that do explicitly set the IoT hubs to apply allocation policy to, you'll need to manually or programmatically remove the IoT hub from the enrollment settings for it to be removed from participation in allocation. Failure to do so might result in an error when a device tries to provision through the enrollment.
-### Use the Azure portal to delete a linked IoT hub
+### [Azure portal](#tab/portal)
To delete a linked IoT hub from your DPS instance in the Azure portal:
To delete a linked IoT hub from your DPS instance in the Azure portal:
1. From the list of IoT hubs, select the check box next to the IoT hub or IoT hubs you want to delete. Then select **Delete** at the top of the page and confirm your choice when prompted.
-### Use the Azure CLI to delete a linked IoT hub
+### [Azure CLI](#tab/cli)
Use the [az iot dps linked-hub delete](/cli/azure/iot/dps/linked-hub#az-iot-dps-linked-hub-delete) command to remove a linked IoT hub from the DPS instance. For example, the following command removes the IoT hub named MyExampleHub:
Use the [az iot dps linked-hub delete](/cli/azure/iot/dps/linked-hub#az-iot-dps-
az iot dps linked-hub delete --dps-name MyExampleDps --resource-group MyResourceGroup --linked-hub MyExampleHub
```

++

DPS also supports deleting linked IoT Hubs from the DPS instance using the [Create or Update DPS resource](/rest/api/iot-dps/iot-dps-resource/create-or-update?tabs=HTTP) REST API, [Resource Manager templates](/azure/templates/microsoft.devices/provisioningservices?pivots=deployment-language-arm-template), and the [DPS Management SDKs](libraries-sdks.md#management-sdks).

## Update keys for linked IoT hubs
-It may become necessary to either rotate or update the symmetric keys for an IoT hub that's been linked to DPS. In this case, you'll also need to update the connection string setting in DPS for the linked IoT hub. Note that provisioning to an IoT hub will fail during the interim between updating a key on the IoT hub and updating your DPS instance with the new connections string based on that key. For this reason, we recommend [using the Azuer CLI to update your keys](#use-the-azure-cli-to-update-keys) because you can update the connnection string on the linked hub direcctly. With the Azure portal, you have to delete the IoT hub from your DPS instance and then relink it in order to update the connection string.
+It may become necessary to either rotate or update the symmetric keys for an IoT hub that's been linked to DPS. In this case, you'll also need to update the connection string setting in DPS for the linked IoT hub.
+
+Provisioning to an IoT hub will fail during the interim between updating a key on the IoT hub and updating your DPS instance with the new connection string based on that key. For this reason, we recommend using the Azure CLI to update your keys because you can update the connection string on the linked hub directly. With the Azure portal, you have to delete the IoT hub from your DPS instance and then relink it in order to update the connection string.
-### Use the Azure portal to update keys
+### [Azure portal](#tab/portal)
-You can't update the connection string setting for a linked IoT Hub when using Azure portal. Instead, you need to delete the linked IoT hub from your DPS instance and then re-add it.
+You can't update the connection string setting for a linked IoT Hub when using Azure portal. Instead, you need to delete the linked IoT hub from your DPS instance and then readd it.
To update symmetric keys for a linked IoT hub in the Azure portal:
-1. On the left menu of your DPS instance in the Azure portal, select the IoT hub that you want to update the key(s) for.
+1. On the left menu of your DPS instance in the Azure portal, select the IoT hub that you want to update one or more keys for.
-1. On the **Linked IoT hub details** page, note down the values for *Allocation weight* and *Apply allocation policy*, you'll need these values when you relink the IoT hub to your DPS instance later. Then, select **Manage Resource** to go to the IoT hub.
+1. On the **Linked IoT hub details** page, note down the values for *Allocation weight* and *Apply allocation policy*. You need these values when you relink the IoT hub to your DPS instance later. Then, select **Manage Resource** to go to the IoT hub.
1. On the left menu of the IoT hub, under **Security settings**, select **Shared access policies**.
To update symmetric keys for a linked IoT hub in the Azure portal:
1. Navigate back to your DPS instance.
-1. Follow the steps in [Delete an IoT hub](#use-the-azure-portal-to-delete-a-linked-iot-hub) to delete the IoT hub from your DPS instance.
+1. Follow the steps in the [Delete a linked IoT hub](#delete-a-linked-iot-hub) section to delete the IoT hub from your DPS instance.
-1. Follow the steps in [Link an IoT hub](#use-the-azure-portal-to-link-an-iot-hub) to relink the IoT hub to your DPS instance with the new connection string for the policy.
+1. Follow the steps in the [Add a linked IoT hub](#add-a-linked-iot-hub) section to relink the IoT hub to your DPS instance with the new connection string for the policy.
-1. If you need to restore the allocation weight and apply allocation policy settings, follow the steps in [Update a linked IoT hub](#use-the-azure-portal-to-update-a-linked-iot-hub) using the values you saved in step 2.
+1. If you need to restore the allocation weight and apply allocation policy settings, follow the steps in the [Update a linked IoT hub](#update-a-linked-iot-hub) section using the values you saved in step 2.
-### Use the Azure CLI to update keys
+### [Azure CLI](#tab/cli)
To update symmetric keys for a linked IoT hub with the Azure CLI:
```azurecli
az iot dps linked-hub list --dps-name MyExampleDps
```
- The output will show the position of the linked IoT hub you want to update the connection string for in the table of linked IoT hubs maintained by your DPS instance. In this case, it's the first IoT hub in the list, *MyExampleHub*.
+ The output shows the position of the linked IoT hub you want to update the connection string for in the table of linked IoT hubs maintained by your DPS instance. In this case, it's the first IoT hub in the list, *MyExampleHub*.
```json [
To update symmetric keys for a linked IoT hub with the Azure CLI:
```azurecli
az iot dps update --name MyExampleDps --set properties.iotHubs[0].connectionString="HostName=MyExampleHub-2.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=NewTokenValue"
```

## Next steps

* To learn more about allocation policies, see [Manage allocation policies](how-to-use-allocation-policies.md).
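Putting the CLI steps together: the sketch below builds the new connection string and shows, commented out, the `az` calls that list the linked hubs and patch the connection string in place (they need the `azure-iot` CLI extension and an authenticated session). All names and the key value are hypothetical placeholders.

```shell
# Hypothetical values; substitute your own DPS instance, hub host, policy, and key.
DPS_NAME="MyExampleDps"
HUB_HOST="MyExampleHub-2.azure-devices.net"
POLICY_NAME="iothubowner"
NEW_KEY="NewTokenValue"

# Build the connection string that carries the regenerated shared access key.
CONN_STR="HostName=${HUB_HOST};SharedAccessKeyName=${POLICY_NAME};SharedAccessKey=${NEW_KEY}"
echo "$CONN_STR"

# Find the hub's position in the linked-hub table, then patch that entry:
# az iot dps linked-hub list --dps-name "$DPS_NAME"
# az iot dps update --name "$DPS_NAME" \
#   --set properties.iotHubs[0].connectionString="$CONN_STR"
```

The index in `properties.iotHubs[0]` must match the hub's position reported by the `list` command.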
iot-dps How To Use Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-use-allocation-policies.md
To set allocation policy and select IoT hubs on an enrollment in the Azure portal:
1. Select the IoT hubs that devices can be assigned to from the drop-down list. If you select the *Static configuration* allocation policy, you're limited to selecting a single linked IoT hub. For all other allocation policies, all the linked IoT hubs are selected by default, but you can modify this selection using the drop-down. To have the enrollment automatically use linked IoT hubs as they're added to (or deleted from) the DPS instance, unselect all IoT hubs.
- 1. Optionally, you can select the **Link a new IoT hub** button to link a new IoT hub to the DPS instance and make it available in the list of IoT hubs that can be selected. For details about linking an IoT hub, see [Link an IoT Hub](how-to-manage-linked-iot-hubs.md#use-the-azure-portal-to-link-an-iot-hub).
+ 1. Optionally, you can select the **Link a new IoT hub** button to link a new IoT hub to the DPS instance and make it available in the list of IoT hubs that can be selected. For details about linking an IoT hub, see [Add a linked IoT Hub](how-to-manage-linked-iot-hubs.md#add-a-linked-iot-hub).
1. Select the allocation policy you want to apply to the enrollment. The default allocation policy for the DPS instance is preselected. For custom allocation, you also need to specify a custom allocation policy webhook in Azure Functions. For details, see the [Use custom allocation policies](tutorial-custom-allocation-policies.md) tutorial.
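The same selections can also be scripted. Below is a minimal sketch, with hypothetical names, using the `azure-iot` CLI extension's enrollment-group command (left commented because it needs a live DPS instance and an authenticated session); `--allocation-policy static` pairs with exactly one hub in `--iot-hubs`:

```shell
# Hypothetical names; substitute your own DPS instance, resource group, and hub.
DPS_NAME="MyExampleDps"
RESOURCE_GROUP="my-resource-group"
ENROLLMENT_ID="my-enrollment-group"
ALLOCATION_POLICY="static"            # static allows only a single target hub
TARGET_HUB="MyExampleHub.azure-devices.net"
echo "Enrollment ${ENROLLMENT_ID}: policy=${ALLOCATION_POLICY}, hub=${TARGET_HUB}"

# az iot dps enrollment-group create \
#   --dps-name "$DPS_NAME" --resource-group "$RESOURCE_GROUP" \
#   --enrollment-id "$ENROLLMENT_ID" \
#   --allocation-policy "$ALLOCATION_POLICY" \
#   --iot-hubs "$TARGET_HUB"
```

Omitting `--iot-hubs` leaves the enrollment tracking the DPS instance's linked hubs automatically, mirroring the "unselect all IoT hubs" behavior in the portal.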
machine-learning How To Autoscale Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-autoscale-endpoints.md
To enable autoscale for an online endpoint, you first define an autoscale profile
[!INCLUDE [cli v2](includes/machine-learning-cli-v2.md)]
+If you haven't already set the defaults for the Azure CLI, save your default settings. To avoid passing in the values for your subscription, workspace, and resource group multiple times, run this code:
+
+ ```azurecli
+ az account set --subscription <subscription ID>
+ az configure --defaults workspace=<Azure Machine Learning workspace name> group=<resource group>
+ ```
+ 1. Set the endpoint and deployment names: :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-moe-autoscale.sh" ID="set_endpoint_deployment_name" :::
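Under the hood, the autoscale profile targets the deployment as an Azure Monitor autoscale resource. A minimal sketch follows, with hypothetical names and instance counts; the `az monitor autoscale create` call is commented out because it requires a live workspace and deployment:

```shell
# Hypothetical names; the deployment is addressed by its full ARM resource ID.
ENDPOINT_NAME="my-endpoint"
DEPLOYMENT_NAME="blue"
DEPLOYMENT_RESOURCE_ID="/subscriptions/<subscription ID>/resourceGroups/<resource group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace>/onlineEndpoints/${ENDPOINT_NAME}/deployments/${DEPLOYMENT_NAME}"
echo "$DEPLOYMENT_RESOURCE_ID"

# Create a profile that keeps between 2 and 5 instances, starting at 2:
# az monitor autoscale create \
#   --name "${ENDPOINT_NAME}-autoscale" \
#   --resource "$DEPLOYMENT_RESOURCE_ID" \
#   --min-count 2 --max-count 5 --count 2
```

Scale-out and scale-in rules (for example, on CPU utilization) are then attached to this profile with `az monitor autoscale rule create`.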
machine-learning Overview What Is Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/overview-what-is-prompt-flow.md
Previously updated : 11/02/2023 Last updated : 08/25/2024 # What is Azure Machine Learning prompt flow
-Azure Machine Learning prompt flow is a development tool designed to streamline the entire development cycle of AI applications powered by Large Language Models (LLMs). As the momentum for LLM-based AI applications continues to grow across the globe, Azure Machine Learning prompt flow provides a comprehensive solution that simplifies the process of prototyping, experimenting, iterating, and deploying your AI applications.
+Azure Machine Learning prompt flow is a development tool designed to streamline the entire development cycle of AI applications powered by Large Language Models (LLMs). Prompt flow provides a comprehensive solution that simplifies the process of prototyping, experimenting, iterating, and deploying your AI applications.
-With Azure Machine Learning prompt flow, you'll be able to:
+With Azure Machine Learning prompt flow, you're able to:
- Create executable flows that link LLMs, prompts, and Python tools through a visualized graph.
- Debug, share, and iterate your flows with ease through team collaboration.
- Create prompt variants and evaluate their performance through large-scale testing.
- Deploy a real-time endpoint that unlocks the full power of LLMs for your application.
-If you're looking for a versatile and intuitive development tool that will streamline your LLM-based AI application development, then Azure Machine Learning prompt flow is the perfect solution for you. Get started today and experience the power of streamlined development with Azure Machine Learning prompt flow.
+Azure Machine Learning prompt flow offers a versatile, intuitive way to streamline your LLM-based AI development.
## Benefits of using Azure Machine Learning prompt flow
Azure Machine Learning prompt flow offers a range of benefits that help users tr
### Prompt engineering agility

-- Interactive authoring experience: Azure Machine Learning prompt flow provides a visual representation of the flow's structure, allowing users to easily understand and navigate their projects. It also offers a notebook-like coding experience for efficient flow development and debugging.
+- Interactive authoring experience: Visual representation of the flow's structure, allowing users to easily understand and navigate their projects. It also offers a notebook-like coding experience for efficient flow development and debugging.
- Variants for prompt tuning: Users can create and compare multiple prompt variants, facilitating an iterative refinement process.
- Evaluation: Built-in evaluation flows enable users to assess the quality and effectiveness of their prompts and flows.
-- Comprehensive resources: Azure Machine Learning prompt flow includes a library of built-in tools, samples, and templates that serve as a starting point for development, inspiring creativity and accelerating the process.
+- Comprehensive resources: Access a library of built-in tools, samples, and templates that serve as a starting point for development, inspiring creativity, and accelerating the process.
### Enterprise readiness for LLM-based applications

-- Collaboration: Azure Machine Learning prompt flow supports team collaboration, allowing multiple users to work together on prompt engineering projects, share knowledge, and maintain version control.
-- All-in-one platform: Azure Machine Learning prompt flow streamlines the entire prompt engineering process, from development and evaluation to deployment and monitoring. Users can effortlessly deploy their flows as Azure Machine Learning endpoints and monitor their performance in real-time, ensuring optimal operation and continuous improvement.
-- Azure Machine Learning Enterprise Readiness Solutions: Prompt flow leverages Azure Machine Learning's robust enterprise readiness solutions, providing a secure, scalable, and reliable foundation for the development, experimentation, and deployment of flows.
+- Collaboration: Supports team collaboration, allowing multiple users to work together on prompt engineering projects, share knowledge, and maintain version control.
+- All-in-one platform: Streamlines the entire prompt engineering process, from development and evaluation to deployment and monitoring. Users can effortlessly deploy their flows as Azure Machine Learning endpoints and monitor their performance in real-time, ensuring optimal operation and continuous improvement.
+- Azure Machine Learning Enterprise Readiness Solutions: Prompt flow uses Azure Machine Learning's robust enterprise readiness solutions, providing a secure, scalable, and reliable foundation for the development, experimentation, and deployment of flows.
-With Azure Machine Learning prompt flow, users can unleash their prompt engineering agility, collaborate effectively, and leverage enterprise-grade solutions for successful LLM-based application development and deployment.
+Azure Machine Learning prompt flow empowers agile prompt engineering, seamless collaboration, and robust enterprise LLM-based application development and deployment.
## LLM-based application development lifecycle
-Azure Machine Learning prompt flow offers a well-defined process that facilitates the seamless development of AI applications. By leveraging it, you can effectively progress through the stages of developing, testing, tuning, and deploying flows, ultimately resulting in the creation of fully fledged AI applications.
+Azure Machine Learning prompt flow streamlines AI application development, taking you through developing, testing, tuning, and deploying flows to build complete AI applications.
The lifecycle consists of the following stages:
- Evaluation & Refinement: Assess the flow's performance by running it against a larger dataset, evaluate the prompt's effectiveness, and refine as needed. Proceed to the next stage if the results meet the desired criteria.
- Production: Optimize the flow for efficiency and effectiveness, deploy it, monitor performance in a production environment, and gather usage data and feedback. Use this information to improve the flow and contribute to earlier stages for further iterations.
-By following this structured and methodical approach, prompt flow empowers you to develop, rigorously test, fine-tune, and deploy flows with confidence, resulting in the creation of robust and sophisticated AI applications.
+With prompt flow's methodical process, you can develop, test, refine, and deploy sophisticated AI applications confidently.
:::image type="content" source="./media/overview-what-is-prompt-flow/prompt-flow-lifecycle.png" alt-text="Diagram of the prompt flow lifecycle starting from initialization to experimentation then evaluation and refinement and finally production. " lightbox = "./media/overview-what-is-prompt-flow/prompt-flow-lifecycle.png":::
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
You write code using the Python SDK in this tutorial and learn the following tas
* [!INCLUDE [prereq-workspace](includes/prereq-workspace.md)]
-* Python 3.6 or 3.7 are supported for this feature
+* Python 3.9 or 3.10 are supported for this feature
* Download and unzip the [**odFridgeObjects.zip*](https://automlsamplenotebookdata.blob.core.windows.net/image-object-detection/odFridgeObjects.zip) data file. The dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file contains information on where its corresponding image file is located and also contains information about the bounding boxes and the object labels. In order to use this data, you first need to convert it to the required JSONL format as seen in the [Convert the downloaded data to JSONL](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/automl-standalone-jobs/automl-image-object-detection-task-fridge-items/automl-image-object-detection-task-fridge-items.ipynb) section of the notebook.
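For orientation, a single record in the JSONL file produced by that conversion looks roughly like the following; the image path and normalized box coordinates here are hypothetical, so treat the linked notebook as the authoritative format.

```shell
# Write one hypothetical annotation record; coordinates are normalized to [0, 1].
cat > sample_annotation.jsonl <<'EOF'
{"image_url": "azureml://datastores/workspaceblobstore/paths/odFridgeObjects/images/1.jpg", "image_details": {"format": "jpg", "width": "499px", "height": "666px"}, "label": [{"label": "can", "topX": 0.1, "topY": 0.2, "bottomX": 0.3, "bottomY": 0.4, "isCrowd": 0}]}
EOF
wc -l < sample_annotation.jsonl
```

Each image in the dataset contributes exactly one such line to the training or validation JSONL file.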
public-multi-access-edge-compute-mec Considerations For Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/considerations-for-deployment.md
- Title: Considerations for deployment in Azure public MEC
-description: Learn about considerations for customers to plan for before they deploy applications in an Azure public multi-access edge compute (MEC) solution.
---- Previously updated : 11/22/2022---
-# Considerations for deployment in Azure public MEC
-
-Azure public multi-access edge compute (MEC) sites are small-footprint extensions of Azure. They're placed in or near mobile operators' data centers in metro areas, and are designed to run workloads that require low latency while being attached to the mobile network. This article focuses on the considerations that customers should plan for before they deploy applications in the Azure public MEC.
-
-## Prerequisites
--- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.--- Add an allowlisted subscription to your Azure account, which allows you to deploy resources in Azure public MEC. If you don't have an active allowed subscription, contact the [Azure public MEC product team](https://aka.ms/azurepublicmec).-
-## Best practices
-
-For Azure public MEC, follow these best practices:
--- Deploy in Azure public MEC only those components of the application that are latency sensitive or need low latency compute at the Azure public MEC. Deploy in the parent Azure region those components of the application that perform control plane and management plane functionalities.--- To access VMs deployed in the Azure public MEC, deploy jump box virtual machines (VMs) or Azure Bastion in a virtual network (VNet) in the parent region.--- For compute resources in the Azure public MEC, deploy Azure Key Vault in the Azure region to provide secrets management and key management services.--- Use VNet peering between the VNets in the Azure public MEC and the VNets in the parent region. IaaS resources can communicate privately through the Microsoft network and don't need to access the public internet.-
-## Azure public MEC architecture
-
-Deploy application components that require low latencies in the Azure public MEC, and components that are non-latency sensitive in the Azure region. For more information, see [Azure public multi-access edge compute deployment](/azure/architecture/example-scenario/hybrid/public-multi-access-edge-compute-deployment).
-
-### Azure region
-
-The Azure region should run the components of the application that perform control and management plane functions and aren't latency sensitive.
-
-The following sections show some examples.
-
-#### Azure database and storage
--- Azure databases: Azure SQL, Azure Database for MySQL, and so on-- Storage accounts-- Azure Blob Storage-
-#### AI and Analytics
--- Azure Machine Learning Services-- Azure Analytics Services-- Power BI-- Azure Stream Analytics-
-#### Identity services
--- Microsoft Entra ID-
-#### Secrets management
--- Azure Key Vault-
-### Azure public MEC
-
-Azure public MEC should run components that are latency sensitive and need faster response times from compute resources. To do so, run your application on compute services such as Azure Virtual Machines and Azure Kubernetes Service in the public MEC.
-
-## Availability and resiliency
-
-Applications you deploy in the Azure public MEC can be made available and resilient by using the following methods:
--- [Deploy resources in active/standby](/azure/architecture/example-scenario/hybrid/multi-access-edge-compute-ha), with primary resources in the Azure public MEC and standby resources in the parent Azure region. If there's a failure in the Azure public MEC, the resources in the parent region become active.--- Use the [Azure backup and disaster recovery solution](/azure/architecture/framework/resiliency/backup-and-recovery), which provides [Azure Site Recovery](../site-recovery/site-recovery-overview.md) and Azure Backup features. This solution:
- - Actively replicates VMs from the Azure public MEC to the parent region and makes them available to fail over and fail back if there's an outage.
- - Backs up VMs to prevent data corruption or lost data.
-
- > [!NOTE]
- > The Azure backup and disaster recovery solution for Azure public MEC supports only Azure Virtual Machines.
-
-A trade-off exists between availability and latency. Although failing over the application from the Azure public MEC to the Azure region ensures that the application is available, it might increase the latency to the application.
-
-Architect your edge applications by utilizing the Azure Region for the components that are less latency sensitive, need to be persistent or need to be shared between public MEC sites. This will allow for the applications to be more resilient and cost effective. The public MEC can host the latency sensitive components.
-
-## Data residency
-> [!IMPORTANT]
-Azure public MEC doesn't store or process customer data outside the region you deploy the service instance in.
-
-## Next steps
-
-To deploy a virtual machine in Azure public MEC using an Azure Resource Manager (ARM) template, advance to the following article:
-
-> [!div class="nextstepaction"]
-> [Quickstart: Deploy a virtual machine in Azure public MEC using an ARM template](quickstart-create-vm-azure-resource-manager-template.md)
public-multi-access-edge-compute-mec Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/key-concepts.md
- Title: Key concepts for Azure public MEC
-description: Learn about important concepts for Azure public multi-access edge compute (MEC).
---- Previously updated : 11/22/2022---
-# Key concepts for Azure public MEC
-
-This document describes important concepts for Azure public multi-access edge compute (MEC).
-
-## ExtendedLocation field
-
-All resource providers provide an additional field named [extendedLocation](/javascript/api/@azure/arm-compute/extendedlocation), which you use to deploy resources in the Azure public MEC.
-
-## Azure Edge Zone ID
-
-Every Azure public MEC site has an Azure Edge Zone ID. This ID is one of the attributes that the `extendedLocation` field uses to differentiate sites.
-
-## Azure CLI and SDKs
-
-To support Azure public MEC, Microsoft has updated the Azure services SDKs. For information about how to use these SDKs for deployment, see:
--- [Quickstart: Deploy a virtual machine in Azure public MEC using Azure CLI](quickstart-create-vm-cli.md).-- [Tutorial: Deploy resources in Azure public MEC using the Go SDK](tutorial-create-vm-using-go-sdk.md)-- [Tutorial: Deploy a virtual machine in Azure public MEC using the Python SDK](tutorial-create-vm-using-python-sdk.md)-
-## ARM templates
-
-You can use Azure Resource Manager (ARM) templates to deploy resources in the Azure public MEC. Here's an example of how to use `extendedLocation` in an ARM template to deploy a virtual machine (VM):
-
-```json
-{
- ...
- "type": "Microsoft.Compute/virtualMachines"
- "extendedLocation": {
- "type": "EdgeZone",
- "name": <edgezoneid>,
- }
- ...
-}
-```
-
-## Parent Azure regions
-
-Every Azure public MEC site is associated with a parent Azure region. This region hosts all the control plane functions associated with the services running in the Azure public MEC. The following table lists active Azure public MEC sites, along with their Edge Zone ID and associated parent region:
-
-| Telco provider | Azure public MEC name | Edge Zone ID | Parent region |
-| -- | | | - |
-| AT&T | ATT Atlanta A | attatlanta1 | East US 2 |
-| AT&T | ATT Dallas A | attdallas1 | South Central US |
-| AT&T | ATT Detroit A | attdetroit1 | Central US |
-
-## Azure services
-
-### Azure Virtual Machines
-
-Azure public MEC supports specific compute and GPU VM SKUs. The following table lists the supported VM sizes:
-
-| Type | Series | VM size |
-| - | | - |
-| VM | D-series | Standard_DS1_v2, Standard_DS2_v2, Standard_D2s_v3, Standard_D4s_v3, Standard_D8s_v3 |
-| VM | E-series | Standard_E4s_v3, Standard_E8s_v3 |
-| GPU | NCasT4_v3-series | Standard_NC4asT4_v3, Standard_NC8asT4_v3 |
-
-### Public IP
-
-Azure public MEC allows users to create Azure public IPs that you can then associate with resources such as Azure Virtual Machines, Azure Standard Load Balancer, and Azure Kubernetes Clusters. All the Azure public MEC IPs are Standard SKU public IPs.
-
-### Azure Bastion
-
-Azure Bastion is a service you deploy that lets you connect to a VM by using your browser and the Azure portal. To access a VM deployed in the Azure public MEC, the Bastion host must be deployed in a virtual network (VNet) in the parent Azure region of the Azure public MEC site.
-
-### Azure Load Balancer
-
-The Azure public MEC supports the Azure Standard Load Balancer SKU.
-
-### Network security groups
-
-Azure network security groups that are associated with resources created in the Azure public MEC should be created in the parent Azure region.
-
-### Resource groups
-
-Resource groups that are associated with resources created in the Azure public MEC should be created in the parent Azure region.
-
-### Azure Storage services
-
-Azure public MEC supports creating Standard SSD managed disks only. All other Azure Storage services aren't supported in the public MEC.
-
-### Default outbound access
-
-Because Azure public MEC doesn't support [default outbound access](../virtual-network/ip-services/default-outbound-access.md), manage your outbound connectivity by using one of the following methods:
--- Use the frontend IP addresses of an Azure Load Balancer for outbound via outbound rules.-- Assign an Azure public IP to the VM.-
-### DNS Resolution
-
-By default, all services running in the Azure public MEC use the DNS infrastructure in the parent Azure region.
-
-## Next steps
-
-To learn about considerations for deployment in the Azure public MEC, advance to the following article:
-
-> [!div class="nextstepaction"]
-> [Considerations for deployment in the Azure public MEC](considerations-for-deployment.md)
public-multi-access-edge-compute-mec Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/overview.md
- Title: What is Azure public MEC?
-description: Learn about the benefits of Azure public multi-access edge compute (MEC) and how it works.
---- Previously updated : 11/22/2022---
-# What is Azure public MEC?
-
-Azure public multi-access edge compute (MEC) sites are small-footprint extensions of Azure. They're placed in or near mobile operators' data centers in metro areas, and are designed to run workloads that require low latency while being attached to the mobile network. Azure public MEC is offered in partnership with the operators. The placement of the infrastructure offers lower latency for applications that are accessed from mobile devices connected to the 5G mobile network.
-
-Azure public MEC provides secure, reliable, high-bandwidth connectivity between applications that run close to the user while being served by the Microsoft global network. Azure public MEC offers a set of Azure services like Azure Virtual Machines, Azure Load Balancer, and Azure Kubernetes for Edge, with the ability to leverage and connect to other Azure services available in the Azure region.
-
-Some of the industries and use cases where Azure public MEC can provide benefits are:
--- Media streaming and content delivery-- Real-time analytics and inferencing via artificial intelligence and machine learning-- Rendering for mixed reality-- Connected automobiles-- Healthcare-- Immersive gaming experiences-- Low latency applications for the retail industry--
-## Benefits of Azure public MEC
-
-Azure public MEC has the following benefits:
--- Low latency applications at the 5G network edge:
-
 - - Enterprises and developers can run low-latency applications by using the operator's public 5G network connectivity. This connectivity is architected with a direct, dedicated, and optimized connection to the operator's mobility core network.
--- Access to key Azure services and experiences:
- - Azure-managed toolset: Azure customers can provision and manage their Azure public MEC services and workloads through the Azure portal and other essential Azure tools.
- - Consistent developer experience: Developing and building applications for the public MEC utilizes the same array of features and tools that Azure uses.
--- Access to a rich partner ecosystem:
- - ISVs working on optimized and scalable applications for edge computing can use the Azure public MEC solution for building solutions. These solutions offer low latency and leverage the 5G mobility network and connected scenarios.
-
-## Service offerings for Azure public MEC
-
-Azure public MEC enables some key Azure services for customers to deploy. The control plane for these services remains in the region and the data plane is deployed at the edge, resulting in a smaller Azure footprint, fewer dependencies, and the ability to leverage other services deployed at the region.
-
-The following key services are available in Azure public MEC:
--- Azure Virtual Machines (Azure public MEC supports these [SKUs](key-concepts.md#azure-virtual-machines))-- Virtual Machine Scale Sets-- Azure Private Link-- Standard public IP-- Azure Virtual Networks-- Virtual network peering-- Azure Standard Load Balancer-- Azure Kubernetes for Edge-- Azure Bastion (must be deployed in a virtual network in the parent Azure region)-- Azure managed disks (Azure public MEC supports Standard SSD)-- Azure IoT Edge - preview-- Azure Site Recovery (ASR) - preview-
-The following diagram shows how services are deployed at the Azure public MEC location. With this capability, enterprises and developers can deploy the customer workloads closer to their users.
--
-## Partnership with operators
-
-Azure public MEC solutions are available in partnership with mobile network operators. The current operator partnerships are as follows:
--- AT&T: Atlanta, Dallas, Detroit-
-## Next steps
-
-To learn about important concepts for Azure public MEC, advance to the following article:
-
-> [!div class="nextstepaction"]
-> [Key concepts for Azure public MEC](key-concepts.md)
public-multi-access-edge-compute-mec Partner Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/partner-solutions.md
- Title: 'Partner solutions available in Public MEC'
-description: This article lists all the Partner solutions that can be deployed in Public MEC.
---- Previously updated : 11/22/2022----
-# Partner solutions for Public MEC
-
-## List of Partner solutions that can be deployed in Azure public MEC
-
-The table in this article provides information on Partner solutions that can be deployed in Public MEC.
--
->
->
--
-| **Vendor** | **Product(s) Name** | **Market Place** |
-| | | |
-| **Checkpoint** | [CloudGuard Network Security](https://www.checkpoint.com/cloudguard/cloud-network-security/) | [CloudGuard Network Security](https://azuremarketplace.microsoft.com/marketplace/apps/checkpoint.vsec?tab=Overview) |
-| **Citrix** | [Application Delivery Controller](https://www.citrix.com/products/citrix-adc/) | [Citrix ADC](https://azuremarketplace.microsoft.com/marketplace/apps/citrix.netscalervpx-1vm-3nic?tab=Overview) |
-| **Couchbase** | [Server](https://www.couchbase.com/products/server), [Sync-Gateway](https://www.couchbase.com/products/sync-gateway) | [Couchbase Server Enterprise](https://azuremarketplace.microsoft.com/en/marketplace/apps/couchbase.couchbase-enterprise?tab=Overview) [Couchbase Sync Gateway Enterprise](https://azuremarketplace.microsoft.com/en/marketplace/apps/couchbase.couchbase-sync-gateway-enterprise?tab=Overview) |
-| **Fortinet** | [FortiGate](https://www.fortinet.com/products/private-cloud-security/fortigate-virtual-appliances) |[FortiGate](https://azuremarketplace.microsoft.com/marketplace/apps/fortinet.fortinet-fortigate?tab=Overview) |
-| **Fortinet** | [FortiWeb](https://www.fortinet.com/products/web-application-firewall/fortiweb?tab=saas) | [FortiWeb](https://azuremarketplace.microsoft.com/marketplace/apps/fortinet.fortinet_waas?tab=Overview) |
-| **Multicasting.io** | [Multicasting](https://multicasting.io/) | |
-| **Net Foundry** | [Zero Trust Edge Fabric for Azure MEC](https://netfoundry.io/zero-trust-edge-fabric-azure-public-mec/) | [NetFoundry Edge Router](https://azuremarketplace.microsoft.com/marketplace/apps/netfoundryinc.ziti-edge-router?tab=Overview) |
-| **Palo Alto Networks** | [VM-Series](https://docs.paloaltonetworks.com/vm-series/9-1/vm-series-performance-capacity/vm-series-performance-capacity/vm-series-on-azure-models-and-vms) | [VM-Series Next-Generation Firewall](https://ms.portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/paloaltonetworks.vmseries-ngfw/product/%7B%22displayName%22%3A%22VM-Series%20Next-Generation%20Firewall%20from%20Palo%20Alto%20Networks%22%2C%22itemDisplayName%22%3A%22VM-Series%20Next-Generation%20Firewall%20from%20Palo%20Alto%20Networks%22%2C%22id%22%3A%22paloaltonetworks.vmseries-ngfw%22%2C%22bigId%22%3A%22DZH318Z0BP7N%22%2C%22legacyId%22%3A%22paloaltonetworks.vmseries-ngfw%22%2C%22offerId%22%3A%22vmseries-ngfw%22%2C%22publisherId%22%3A%22paloaltonetworks%22%2C%22publisherDisplayName%22%3A%22Palo%20Alto%20Networks%2C%20Inc.%22%2C%22summary%22%3A%22Looking%20to%20secure%20your%20applications%20in%20Azure%2C%20protect%20against%20threats%20and%20prevent%20data%20exfiltration%3F%22%2C%22longSummary%22%3A%22VM-Series%20next-generation%20firewall%20is%20for%20developers%2C%20architects%2C%20and%20security%20teams.%20Enable%20firewall%2C%20inline%20threat%2C%20and%20data%20theft%20preventions%20into%20your%20application%20development%20workflows%20using%20native%20Azure%20services%20and%20VM-Series%20automation%20features.%22%2C%22description%22%3A%22%3Cp%3EThe%20VM-Series%20virtualized%20next-generation%20firewall%20allows%20developers%2C%20and%20cloud%20security%20architects%20to%20automate%20and%20deploy%20inline%20firewall%20and%20threat%20prevention%20along%20with%20their%20application%20deployment%20workflows.%20Users%20can%20achieve%20%E2%80%98touchless%E2%80%99%20deployment%20of%20advanced%20firewall%2C%20threat%20prevention%20capabilities%20using%20ARM%20templates%2C%20native%20Azure%20services%2C%20and%20VM-Series%20firewall%20automation%20features%20such%20as%20bootstrapping.%20Auto-scaling%20using%20Azure%20VMSS%20and%20tag-based%20dynamic%20sec
urity%20policies%20are%20supported%20using%20the%20Panorama%20Plugin%20for%20Azure.%3C%2Fp%3E%20%5Cn%5Cn%3Cp%3EProtect%20your%20applications%20and%20data%20with%20whitelisting%20and%20segmentation%20policies.%20Policies%20update%20dynamically%20based%20on%20Azure%20tags%20assigned%20to%20application%20VMs%2C%20allowing%20you%20to%20re) |
-| **Spirent** | [Spirent](https://www.spirent.com/solutions/edge-computing-validating-services) | [Spirent for Azure public MEC](https://azuremarketplace.microsoft.com/marketplace/apps/spirentcommunications1641943316121.umetrix-mec?tab=Overview) |
-| **Summit Tech** | [Odience](https://odience.com/interactions) | |
-| **Veeam** | [Veeam Backup & Replication](https://www.veeam.com/kb4375)| [Veeam Backup & Replication](https://azuremarketplace.microsoft.com/marketplace/apps/veeam.veeam-backup-replication?tab=Overview) |
-| **VMware** | [SDWAN Edge](https://sase.vmware.com/products/component-network-edge)| [VMware SD-WAN - Virtual Edge](https://azuremarketplace.microsoft.com/marketplace/apps/vmware-inc.sol-42222-bbj?tab=Overview) |
-| | | |
--
-Currently, the solutions can be deployed at the following locations:
-
-| **MEC Location** | **Extended Location** |
-| | |
-| AT&T Atlanta | attatlanta1 |
-| AT&T Dallas | attdallas1 |
-| AT&T Detroit | attdetroit1 |
-
-## Next steps
-* For more information about Public MEC, see the [Overview](Overview.md).
-
public-multi-access-edge-compute-mec Quickstart Create Vm Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/quickstart-create-vm-azure-resource-manager-template.md
- Title: 'Quickstart: Deploy a virtual machine in Azure public MEC using an ARM template'
-description: In this quickstart, learn how to deploy a virtual machine in Azure public multi-access edge compute (MEC) by using an Azure Resource Manager template.
- Previously updated: 11/22/2022
-# Quickstart: Deploy a virtual machine in Azure public MEC using an ARM template
-
-In this quickstart, you learn how to use an Azure Resource Manager (ARM) template to deploy an Ubuntu Linux virtual machine (VM) in Azure public multi-access edge compute (MEC).
--
-## Prerequisites
-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Add an allowlisted subscription to your Azure account, which allows you to deploy resources in Azure public MEC. If you don't have an active allowed subscription, contact the [Azure public MEC product team](https://aka.ms/azurepublicmec).
- > [!NOTE]
- > Azure public MEC deployments are supported in Azure CLI versions 2.26 and later.
-
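The version requirement in the note can be sanity-checked before deploying. A minimal sketch, assuming `sort -V` is available; the `current` value is a placeholder for what `az version --query '"azure-cli"' -o tsv` reports on your machine:

```shell
# Minimum Azure CLI version for Azure public MEC deployments.
min="2.26.0"
# Placeholder; in practice: current=$(az version --query '"azure-cli"' -o tsv)
current="2.62.0"

# sort -V orders version strings numerically; if min sorts first (or equal),
# the installed version is new enough.
if [ "$(printf '%s\n%s\n' "$min" "$current" | sort -V | head -n1)" = "$min" ]; then
  echo "Azure CLI $current meets the 2.26 minimum"
else
  echo "Azure CLI $current is too old; run: az upgrade"
fi
```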
-## Review the template
-
-1. Review the following example ARM template.
-
- Every resource you deploy in Azure public MEC has an extra attribute named `extendedLocation`, which Azure adds to the resource provider. The example ARM template deploys these resources:
-
- - Virtual network
- - Public IP address
- - Network interface
- - Network security group
- - Virtual machine
-
- In this example ARM template:
- - The Azure Edge Zone ID is different from the display name of the Azure public MEC.
- - The Azure network security group has an inbound rule that allows SSH and HTTPS access from everywhere.
-
- ```json
- {
- "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
- "contentVersion": "1.0.0.0",
- "parameters": {
- "adminUsername": {
- "type": "String",
- "metadata": {
- "description": "Username for the Virtual Machine."
- }
- },
- "adminPassword": {
- "type": "SecureString",
- "metadata": {
- "description": "Password for the Virtual Machine."
- }
- },
- "dnsLabelPrefix": {
- "type": "String",
- "metadata": {
- "description": "Unique DNS Name for the Public IP used to access the Virtual Machine."
- }
- },
- "vmSize": {
- "defaultValue": "Standard_D2s_v3",
- "type": "String",
- "metadata": {
- "description": "Size of the virtual machine."
- }
- },
- "location": {
- "defaultValue": "[resourceGroup().location]",
- "type": "String",
- "metadata": {
- "description": "Location for all resources."
- }
- },
- "EdgeZone": {
- "type": "String"
- },
- "publisher": {
- "type": "string",
- "defaultValue": "Canonical",
- "metadata" : {
- "description": "Publisher for the VM Image"
- }
- },
- "offer": {
- "type": "string",
- "defaultValue": "UbuntuServer",
- "metadata" : {
- "description": "Offer for the VM Image"
- }
- },
- "sku": {
- "type": "string",
- "defaultValue": "18.04-LTS",
- "metadata" : {
- "description": "SKU for the VM Image"
- }
- },
- "osVersion": {
- "type": "string",
- "defaultValue": "latest",
- "metadata" : {
- "description": "version for the VM Image"
- }
- },
- "vmName": {
- "defaultValue": "myEdgeVM",
- "type": "String",
- "metadata": {
- "description": "VM Name."
- }
- }
- },
- "variables": {
- "nicName": "myEdgeVMNic",
- "addressPrefix": "10.0.0.0/16",
- "subnetName": "Subnet",
- "subnetPrefix": "10.0.0.0/24",
- "publicIPAddressName": "myEdgePublicIP",
- "virtualNetworkName": "MyEdgeVNET",
- "subnetRef": "[resourceId('Microsoft.Network/virtualNetworks/subnets', variables('virtualNetworkName'), variables('subnetName'))]",
- "networkSecurityGroupName": "default-NSG"
- },
- "resources": [
- {
- "type": "Microsoft.Network/publicIPAddresses",
- "apiVersion": "2018-11-01",
- "name": "[variables('publicIPAddressName')]",
- "location": "[parameters('location')]",
- "extendedLocation": {
- "type": "EdgeZone",
- "name": "[parameters('EdgeZone')]"
- },
- "sku": {
- "name": "Standard"
- },
- "properties": {
- "publicIPAllocationMethod": "Static",
- "dnsSettings": {
- "domainNameLabel": "[parameters('dnsLabelPrefix')]"
- }
- }
- },
- {
- "type": "Microsoft.Network/networkSecurityGroups",
- "apiVersion": "2019-08-01",
- "name": "[variables('networkSecurityGroupName')]",
- "location": "[parameters('location')]",
- "properties": {
- "securityRules": [
- {
- "name": "AllowHttps",
- "properties": {
- "description": "HTTPS is allowed",
- "protocol": "*",
- "sourcePortRange": "*",
- "destinationPortRange": "443",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "*",
- "access": "Allow",
- "priority": 130,
- "direction": "Inbound",
- "sourcePortRanges": [],
- "destinationPortRanges": [],
- "sourceAddressPrefixes": [],
- "destinationAddressPrefixes": []
- }
- },
- {
- "name": "AllowSSH",
- "properties": {
- "description": "SSH is allowed",
- "protocol": "*",
- "sourcePortRange": "*",
- "destinationPortRange": "22",
- "sourceAddressPrefix": "*",
- "destinationAddressPrefix": "*",
- "access": "Allow",
- "priority": 140,
- "direction": "Inbound",
- "sourcePortRanges": [],
- "destinationPortRanges": [],
- "sourceAddressPrefixes": [],
- "destinationAddressPrefixes": []
- }
- }
- ]
- }
- },
- {
- "type": "Microsoft.Network/virtualNetworks",
- "apiVersion": "2018-11-01",
- "name": "[variables('virtualNetworkName')]",
- "location": "[parameters('location')]",
- "extendedLocation": {
- "type": "EdgeZone",
- "name": "[parameters('EdgeZone')]"
- },
- "dependsOn": [
- "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
- ],
- "properties": {
- "addressSpace": {
- "addressPrefixes": [
- "[variables('addressPrefix')]"
- ]
- },
- "subnets": [
- {
- "name": "[variables('subnetName')]",
- "properties": {
- "addressPrefix": "[variables('subnetPrefix')]",
- "networkSecurityGroup": {
- "id": "[resourceId('Microsoft.Network/networkSecurityGroups', variables('networkSecurityGroupName'))]"
- }
- }
- }
- ]
- }
- },
- {
- "type": "Microsoft.Network/networkInterfaces",
- "apiVersion": "2018-11-01",
- "name": "[variables('nicName')]",
- "location": "[parameters('location')]",
- "extendedLocation": {
- "type": "EdgeZone",
- "name": "[parameters('EdgeZone')]"
- },
- "dependsOn": [
- "[resourceId('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]",
- "[resourceId('Microsoft.Network/virtualNetworks/', variables('virtualNetworkName'))]"
- ],
- "properties": {
- "ipConfigurations": [
- {
- "name": "ipconfig1",
- "properties": {
- "privateIPAllocationMethod": "Dynamic",
- "publicIPAddress": {
- "id": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]"
- },
- "subnet": {
- "id": "[variables('subnetRef')]"
- }
- }
- }
- ]
- }
- },
- {
- "type": "Microsoft.Compute/virtualMachines",
- "apiVersion": "2020-06-01",
- "name": "[parameters('vmName')]",
- "location": "[parameters('location')]",
- "extendedLocation": {
- "type": "EdgeZone",
- "name": "[parameters('EdgeZone')]"
- },
- "dependsOn": [
- "[resourceId('Microsoft.Network/networkInterfaces/', variables('nicName'))]"
- ],
- "properties": {
- "hardwareProfile": {
- "vmSize": "[parameters('vmSize')]"
- },
- "osProfile": {
- "computerName": "[parameters('vmName')]",
- "adminUsername": "[parameters('adminUsername')]",
- "adminPassword": "[parameters('adminPassword')]"
- },
- "storageProfile": {
- "imageReference": {
- "publisher": "[parameters('publisher')]",
- "offer": "[parameters('offer')]",
- "sku": "[parameters('sku')]",
- "version": "[parameters('osVersion')]"
- },
- "osDisk": {
- "createOption": "FromImage",
- "managedDisk": {
- "storageAccountType": "StandardSSD_LRS"
- }
- }
- },
- "networkProfile": {
- "networkInterfaces": [
- {
- "id": "[resourceId('Microsoft.Network/networkInterfaces',variables('nicName'))]"
- }
- ]
- }
- }
- }
- ],
- "outputs": {
- "hostname": {
- "type": "String",
- "value": "[reference(variables('publicIPAddressName')).dnsSettings.fqdn]"
- },
- "sshCommand": {
- "type": "string",
- "value": "[format('ssh {0}@{1}', parameters('adminUsername'), reference(resourceId('Microsoft.Network/publicIPAddresses', variables('publicIPAddressName'))).dnsSettings.fqdn)]"
- }
- }
- }
- ```
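A quick way to confirm which resources in the template are pinned to the Edge Zone is to search the saved file for `extendedLocation` entries; in the template above, every resource except the network security group carries one. A minimal offline sketch, using a stripped-down stand-in for the template file:

```shell
# Stand-in for the full template: one entry per resource, keeping only the
# fields relevant to the check.
cat > /tmp/mec-template-sample.json <<'EOF'
{"resources":[
  {"type":"Microsoft.Network/publicIPAddresses","extendedLocation":{"type":"EdgeZone"}},
  {"type":"Microsoft.Network/networkSecurityGroups"},
  {"type":"Microsoft.Network/virtualNetworks","extendedLocation":{"type":"EdgeZone"}},
  {"type":"Microsoft.Network/networkInterfaces","extendedLocation":{"type":"EdgeZone"}},
  {"type":"Microsoft.Compute/virtualMachines","extendedLocation":{"type":"EdgeZone"}}
]}
EOF

# Four of the five resources are deployed to the Edge Zone; the NSG is not.
grep -c '"extendedLocation"' /tmp/mec-template-sample.json
```

Running the same `grep` against your real *azurepublicmecDeploy.json* shows which resources the deployment places at the edge.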
-
-## Deploy the ARM template using the Azure CLI
-
-1. Save the contents of the sample ARM template from the previous section in a file named *azurepublicmecDeploy.json*.
-
-1. Sign in to Azure with [az login](/cli/azure/reference-index#az-login) and set your Azure subscription with the [az account set](/cli/azure/account#az-account-set) command.
-
- ```azurecli
- az login
- az account set --subscription <subscription name>
- ```
-
-1. Create an Azure resource group with the [az group create](/cli/azure/group#az-group-create) command. A resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named myResourceGroup:
-
- ```azurecli
- az group create --name myResourceGroup --location <location>
- ```
-
- > [!NOTE]
- > Each Azure public MEC site is associated with an Azure region. Based on the Azure public MEC location where the resource needs to be deployed, select the appropriate region value for the `--location` parameter. For more information, see [Key concepts for Azure public MEC](key-concepts.md).
-
-1. Deploy the ARM template in the resource group with the [az deployment group create](/cli/azure/deployment/group#az-deployment-group-create) command.
-
- ```azurecli
- az deployment group create --resource-group myResourceGroup --template-file azurepublicmecDeploy.json
- ```
-
- ```output
- Please provide string value for 'adminUsername' (? for help): <username>
- Please provide securestring value for 'adminPassword' (? for help): <password>
- Please provide string value for 'dnsLabelPrefix' (? for help): <uniqueDnsLabel>
- Please provide string value for 'EdgeZone' (? for help): <edge zone ID>
- ```
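Rather than answering the interactive prompts each time, you can put the string parameters in an ARM parameters file and pass it with `--parameters @<file>.json`. A sketch with placeholder values (file name and values are illustrative; `adminPassword` is a securestring, so it's omitted here and still prompted for):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "adminUsername": { "value": "azureuser" },
    "dnsLabelPrefix": { "value": "myedgedns" },
    "EdgeZone": { "value": "attdallas1" }
  }
}
```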
-
-1. Wait a few minutes for the deployment to run.
-
- After the command execution is complete, you can see the new resources in the myResourceGroup resource group. Here's a sample output:
-
- ```output
- {
- "id": "/subscriptions/xxxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Resources/deployments/edgeZonesDeploy",
- "location": null,
- "name": "edgeZonesDeploy",
- "properties": {
- "correlationId": "<xxxxxxxx>",
- "debugSetting": null,
- "dependencies": [
- {
- "dependsOn": [
- {
- "id": "/subscriptions/xxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Network/networkSecurityGroups/default-NSG",
- "resourceGroup": "myResourceGroup",
- "resourceName": "default-NSG",
- "resourceType": "Microsoft.Network/networkSecurityGroups"
- }
- ],
- "id": "/subscriptions/xxxxxx/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/MyEdgeTestVnet",
- "resourceGroup": "myResourceGroup",
- "resourceName": "MyEdgeTestVnet",
- "resourceType": "Microsoft.Network/virtualNetworks"
- },
- "outputs": {
- "hostname": {
- "type": "String",
- "value": "xxxxx.cloudapp.azure.com"
- },
- "sshCommand": {
- "type": "String",
- "value": "ssh <adminUsername>@<publicIPFQDN>"
- }
- },
- ...
- }
- ```
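The deployment outputs include a ready-to-use `sshCommand`. A minimal sketch pulling it out of saved output with `sed`; the JSON here is a trimmed stand-in for the real `az deployment group create` response:

```shell
# Trimmed stand-in for the deployment response (outputs section only).
cat > /tmp/mec-deploy-output.json <<'EOF'
{"properties":{"outputs":{
  "hostname":{"type":"String","value":"myedgevm.eastus.cloudapp.azure.com"},
  "sshCommand":{"type":"String","value":"ssh azureuser@myedgevm.eastus.cloudapp.azure.com"}}}}
EOF

# Extract just the sshCommand value from the saved JSON.
sed -n 's/.*"sshCommand":{"type":"String","value":"\([^"]*\)".*/\1/p' /tmp/mec-deploy-output.json
```

Against a live deployment, appending `--query properties.outputs.sshCommand.value -o tsv` to the `az deployment group create` command returns the same value directly.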
-
-## Access the virtual machine
-
-To connect to the virtual machine in Azure public MEC over SSH, the recommended approach is to deploy a jump box in an Azure parent region.
-
-1. Follow the instructions in [Create a virtual machine in a region](/azure/virtual-machines/linux/quick-create-template).
-
-1. Use SSH to connect to the jump box virtual machine deployed in the region.
-
- ```bash
- ssh <username>@<regionVM_publicIP>
- ```
-
-1. From the jump box, use SSH to connect to the virtual machine created in the Azure public MEC.
-
- ```bash
- ssh <username>@<edgezoneVM_publicIP>
- ```
-
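The two-hop connection above can also be collapsed into a single command with OpenSSH's jump-host flag (`-J`, OpenSSH 7.3 and later). The IP addresses below are placeholders for the two public IPs from the steps above; `-G` prints the resolved client configuration without connecting, which makes the effect of `-J` visible:

```shell
# Drop -G to actually open the session through the jump box.
ssh -G -J azureuser@203.0.113.10 azureuser@198.51.100.20 | grep -i proxyjump
```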
-## Clean up resources
-
-In this quickstart, you deployed an ARM template in Azure public MEC by using the Azure CLI. If you don't expect to need these resources in the future, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, VM, and all related resources. Using the `--yes` parameter deletes the resources without a confirmation prompt.
-
-```azurecli
az group delete --name myResourceGroup --yes
-```
-
-## Next steps
-
-To deploy a virtual machine in Azure public MEC using Azure CLI, advance to the following article:
-
-> [!div class="nextstepaction"]
-> [Quickstart: Deploy a virtual machine in Azure public MEC using Azure CLI](quickstart-create-vm-cli.md)
public-multi-access-edge-compute-mec Quickstart Create Vm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/quickstart-create-vm-cli.md
- Title: 'Quickstart: Deploy a virtual machine in Azure public MEC using Azure CLI'
-description: In this quickstart, learn how to deploy a virtual machine in Azure public multi-access edge (MEC) compute by using the Azure CLI.
- Previously updated: 11/22/2022
-# Quickstart: Deploy a virtual machine in Azure public MEC using Azure CLI
-
-In this quickstart, you learn how to use Azure CLI to deploy a Linux virtual machine (VM) in Azure public multi-access edge compute (MEC).
-
-## Prerequisites
-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Add an allowlisted subscription to your Azure account, which allows you to deploy resources in Azure public MEC. If you don't have an active allowed subscription, contact the [Azure public MEC product team](https://aka.ms/azurepublicmec).
- > [!NOTE]
- > Azure public MEC deployments are supported in Azure CLI versions 2.26 and later.
-
-## Sign in to Azure and set your subscription
-
-1. Sign in to Azure by using the [az login](/cli/azure/reference-index#az-login) command.
-
- ```azurecli
- az login
- ```
-
-1. Set your Azure subscription with the [az account set](/cli/azure/account#az-account-set) command.
-
- ```azurecli
- az account set --subscription <subscription name>
- ```
-
-## Create a resource group
-
-1. Create an Azure resource group with the [az group create](/cli/azure/group#az-group-create) command. A resource group is a logical container into which Azure resources are deployed and managed. The following example creates a resource group named myResourceGroup.
-
- ```azurecli
- az group create --name myResourceGroup --location <location>
- ```
-
- > [!NOTE]
- > Each Azure public MEC site is associated with an Azure region. Based on the Azure public MEC location where the resource needs to be deployed, select the appropriate region value for the `--location` parameter. For more information, see [Key concepts for Azure public MEC](key-concepts.md).
-
-## Create a VM
-
-1. Create a VM with the [az vm create](/cli/azure/vm#az-vm-create) command.
-
- The following example creates a VM named myVMEdge in Azure public MEC and adds a user account named azureuser:
-
- ```azurecli
- az vm create \
-   --resource-group myResourceGroup \
-   --name myVMEdge \
-   --image Ubuntu2204 \
-   --admin-username azureuser \
-   --admin-password <password> \
-   --edge-zone <edgezone ID> \
-   --public-ip-sku Standard
- ```
-
- The `--edge-zone` parameter determines the Azure public MEC location where the VM and its associated resources are created. Because Azure public MEC supports only standard SKU for a public IP, you must specify `Standard` for the `--public-ip-sku` parameter.
-
-1. Wait a few minutes for the VM and supporting resources to be created.
-
- The following example output shows a successful operation:
-
- ```output
- {
- "fqdns": "",
- "id": "/subscriptions/<id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVMEdge",
- "location": "<region>",
- "macAddress": "<mac_address>",
- "powerState": "VM running",
- "privateIpAddress": "10.0.0.4",
- "publicIpAddress": "<public_ip_address>",
- "resourceGroup": "myResourceGroup",
- "zones": ""
- }
- ```
-
-1. Note your `publicIpAddress` value in the output from your myVMEdge VM. Use this address to access the VM in the next sections.
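If you're scripting against this output, the `publicIpAddress` field can be captured without extra tooling. A small sketch using shell parameter expansion on a sample response (against a live deployment, appending `--query publicIpAddress -o tsv` to `az vm create` returns the value directly):

```shell
# Sample stand-in for the az vm create JSON response.
output='{"powerState":"VM running","publicIpAddress":"203.0.113.15","resourceGroup":"myResourceGroup"}'

# Strip everything up to the value, then everything after its closing quote.
ip=${output#*\"publicIpAddress\":\"}
ip=${ip%%\"*}
echo "$ip"
```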
-
-## Create a jump server in the associated region
-
-To connect to the VM in Azure public MEC over SSH, the recommended approach is to deploy a jump box in the same Azure region where you created your resource group.
-
-1. Create an Azure Virtual Network (VNet) by using the [az network vnet](/cli/azure/network/vnet) command.
-
- The following example creates a VNet named MyVnetRegion:
-
- ```azurecli
- az network vnet create --resource-group myResourceGroup --name MyVnetRegion --address-prefix 10.1.0.0/16 --subnet-name MySubnetRegion --subnet-prefix 10.1.0.0/24
- ```
-
-1. Create a VM to be deployed in the region with the [az vm create](/cli/azure/vm#az-vm-create) command.
-
- The following example creates a VM named myVMRegion in the region:
-
- ```azurecli
- az vm create --resource-group myResourceGroup --name myVMRegion --image Ubuntu2204 --admin-username azureuser --admin-password <password> --vnet-name MyVnetRegion --subnet MySubnetRegion --public-ip-sku Standard
- ```
-
-1. Note your `publicIpAddress` value in the output from the myVMRegion VM. Use this address to access the VM in the next sections.
-
-## Access the VMs
-
-1. Use SSH to connect to the jump box VM deployed in the region. Use the IP address from the myVMRegion VM you created in the previous section.
-
- ```bash
- ssh azureuser@<regionVM_publicIP>
- ```
-
-1. From the jump box, use SSH to connect to the VM you created in Azure public MEC. Use the IP address from the myVMEdge VM you created in the previous section.
-
- ```bash
- ssh azureuser@<edgeVM_publicIP>
- ```
-
-1. Ensure the Azure network security groups allow port 22 access to the VMs you create.
-
-## Clean up resources
-
-In this quickstart, you deployed a VM in Azure public MEC by using the Azure CLI. If you don't expect to need these resources in the future, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group, VM, and all related resources. Using the `--yes` parameter deletes the resources without a confirmation prompt.
-
-```azurecli
az group delete --name myResourceGroup --yes
-```
-
-## Next steps
-
-To deploy resources in Azure public MEC using the Go SDK, advance to the following article:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Deploy resources in Azure public MEC using the Go SDK](tutorial-create-vm-using-go-sdk.md)
public-multi-access-edge-compute-mec Tutorial Create Vm Using Go Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/tutorial-create-vm-using-go-sdk.md
- Title: 'Tutorial: Deploy resources in Azure public MEC using the Go SDK'
-description: In this tutorial, learn how to deploy resources in Azure public multi-access edge compute (MEC) by using the Go SDK.
- Previously updated: 11/22/2022
-# Tutorial: Deploy resources in Azure public MEC using the Go SDK
-
-In this tutorial, you learn how to use the Go SDK to deploy resources in Azure public multi-access edge compute (MEC). The tutorial provides code snippets written in Go to deploy a virtual machine and public IP resources in an Azure public MEC solution. You can use the same model and template to deploy other resources and services that are supported for Azure public MEC. This article isn't intended to be a tutorial on Go; it focuses only on the API calls required to deploy resources in Azure public MEC.
-
-For more information about Go, see [Azure for Go developers](/azure/developer/go/). For Go samples, see [Azure Go SDK samples](https://github.com/azure-samples/azure-sdk-for-go-samples).
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> - Create a virtual machine
-> - Create a public IP address
-> - Deploy a virtual network and public IP address
-
-## Prerequisites
-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Add an allowlisted subscription to your Azure account, which allows you to deploy resources in Azure public MEC. If you don't have an active allowed subscription, contact the [Azure public MEC product team](https://aka.ms/azurepublicmec).
-### Install Go
-
-Download and install the latest version of [Go](https://go.dev/doc/install); the installer replaces any existing Go installation on your machine. If you want to install multiple Go versions on the same machine, see [Managing Go installations](https://go.dev/doc/manage-install).
-
-### Authentication
-
-You must authenticate before you can use any Azure service. You can either sign in with the Azure CLI or set authentication environment variables.
-
-#### Use Azure CLI to sign in
-
-You can use `az login` in the command line to sign in to Azure via your default browser. Detailed instructions can be found in [Sign in with Azure CLI](/cli/azure/authenticate-azure-cli).
-
-#### Set environment variables
-
-You need the following values to authenticate to Azure:
-- **Subscription ID**
-- **Client ID**
-- **Client Secret**
-- **Tenant ID**
-Obtain these values from the portal by following these instructions:
-- Get Subscription ID
- 1. Sign in to your Azure account
- 2. Select **Subscriptions** in the left sidebar
- 3. Select the subscription you need
- 4. Select **Overview**
- 5. Copy the Subscription ID
-- Get Client ID / Client Secret / Tenant ID
- For information on how to get Client ID, Client Secret, and Tenant ID, see [Create a Microsoft Entra application and service principal that can access resources](/azure/active-directory/develop/howto-create-service-principal-portal).
-- Set environment variables
- After you obtain the values, you need to set the following values as your environment variables:
-
- - `AZURE_CLIENT_ID`
- - `AZURE_CLIENT_SECRET`
- - `AZURE_TENANT_ID`
- - `AZURE_SUBSCRIPTION_ID`
-
- To set these environment variables on your development system, follow the steps for your OS:
-
- **Windows** (Administrator access is required)
-
- 1. Open the Control Panel
- 2. Select **System and Security** > **System**
- 3. Select **Advanced system settings** on the left
- 4. Inside the System Properties window, select the `Environment Variables…` button.
- 5. Select the property you would like to change, then select **Edit…**. If the property name is not listed, then select **New…**.
-
- **Linux-based OS** :
-
-    ```bash
-    export AZURE_CLIENT_ID="__CLIENT_ID__"
-    export AZURE_CLIENT_SECRET="__CLIENT_SECRET__"
-    export AZURE_TENANT_ID="__TENANT_ID__"
-    export AZURE_SUBSCRIPTION_ID="__SUBSCRIPTION_ID__"
-    ```
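Before running the Go program, it's worth confirming that all four variables are actually exported; the `DefaultAzureCredential` chain reads the first three from the environment, and the program itself reads `AZURE_SUBSCRIPTION_ID`. A minimal sketch with placeholder values:

```shell
# Placeholder values so the sketch runs standalone; substitute your real IDs.
export AZURE_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_CLIENT_SECRET="placeholder-secret"
export AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
export AZURE_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"

# Report any of the four variables that is unset or empty.
for v in AZURE_CLIENT_ID AZURE_CLIENT_SECRET AZURE_TENANT_ID AZURE_SUBSCRIPTION_ID; do
  [ -n "$(printenv "$v")" ] && echo "$v is set" || echo "$v is MISSING"
done
```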
-
-## Install the package
-
-The new SDK uses Go modules for versioning and dependency management.
-
-Run the following command to install the packages for this tutorial under your project folder:
-
-```sh
-go get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5
-go get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/network/armnetwork/v3
-go get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources
-go get github.com/Azure/azure-sdk-for-go/sdk/azcore
-go get github.com/Azure/azure-sdk-for-go/sdk/azidentity
-```
--
-## Provision a virtual machine
-
-```go
-package main
-
-import (
- "context"
- "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
- "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
- "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/compute/armcompute/v5"
- "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/network/armnetwork/v3"
- "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources"
- "log"
- "os"
-)
-
-func main() {
- subscriptionId := os.Getenv("AZURE_SUBSCRIPTION_ID")
-
- cred, err := azidentity.NewDefaultAzureCredential(nil)
- if err != nil {
- log.Fatalf("authentication failure: %+v", err)
- }
-
- // client factory
- resourcesClientFactory, err := armresources.NewClientFactory(subscriptionId, cred, nil)
- if err != nil {
- log.Fatalf("cannot create client factory: %+v", err)
- }
-
- computeClientFactory, err := armcompute.NewClientFactory(subscriptionId, cred, nil)
- if err != nil {
- log.Fatalf("cannot create client factory: %+v", err)
- }
-
- networkClientFactory, err := armnetwork.NewClientFactory(subscriptionId, cred, nil)
- if err != nil {
- log.Fatalf("cannot create client factory: %+v", err)
- }
-
- // Step 1: Provision a resource group
- _, err = resourcesClientFactory.NewResourceGroupsClient().CreateOrUpdate(
- context.Background(),
- "<resourceGroupName>",
- armresources.ResourceGroup{
- Location: to.Ptr("westus"),
- },
- nil,
- )
- if err != nil {
- log.Fatal("cannot create resources group:", err)
- }
-
- // Step 2: Provision a virtual network
- virtualNetworksClientCreateOrUpdateResponsePoller, err := networkClientFactory.NewVirtualNetworksClient().BeginCreateOrUpdate(
- context.Background(),
- "<resourceGroupName>",
- "<virtualNetworkName>",
- armnetwork.VirtualNetwork{
- Location: to.Ptr("westus"),
- ExtendedLocation: &armnetwork.ExtendedLocation{
- Name: to.Ptr("<edgezoneid>"),
- Type: to.Ptr(armnetwork.ExtendedLocationTypesEdgeZone),
- },
- Properties: &armnetwork.VirtualNetworkPropertiesFormat{
- AddressSpace: &armnetwork.AddressSpace{
- AddressPrefixes: []*string{
- to.Ptr("10.0.0.0/16"),
- },
- },
- Subnets: []*armnetwork.Subnet{
- {
- Name: to.Ptr("test-1"),
- Properties: &armnetwork.SubnetPropertiesFormat{
- AddressPrefix: to.Ptr("10.0.0.0/24"),
- },
- },
- },
- },
- },
- nil,
- )
- if err != nil {
- log.Fatal("network creation failed", err)
- }
- virtualNetworksClientCreateOrUpdateResponse, err := virtualNetworksClientCreateOrUpdateResponsePoller.PollUntilDone(context.Background(), nil)
- if err != nil {
- log.Fatal("cannot create virtual network:", err)
- }
- subnetID := *virtualNetworksClientCreateOrUpdateResponse.Properties.Subnets[0].ID
-
- // Step 3: Provision an IP address
- publicIPAddressesClientCreateOrUpdateResponsePoller, err := networkClientFactory.NewPublicIPAddressesClient().BeginCreateOrUpdate(
- context.Background(),
- "<resourceGroupName>",
- "<publicIPName>",
- armnetwork.PublicIPAddress{
- Name: to.Ptr("<publicIPName>"),
- Location: to.Ptr("westus"),
- ExtendedLocation: &armnetwork.ExtendedLocation{
- Name: to.Ptr("<edgezoneid>"),
- Type: to.Ptr(armnetwork.ExtendedLocationTypesEdgeZone),
- },
- SKU: &armnetwork.PublicIPAddressSKU{
- Name: to.Ptr(armnetwork.PublicIPAddressSKUNameStandard),
- },
- Properties: &armnetwork.PublicIPAddressPropertiesFormat{
- PublicIPAllocationMethod: to.Ptr(armnetwork.IPAllocationMethodStatic),
- },
- },
- nil,
- )
- if err != nil {
- log.Fatal("public ip creation failed", err)
- }
- publicIPAddressesClientCreateOrUpdateResponse, err := publicIPAddressesClientCreateOrUpdateResponsePoller.PollUntilDone(context.Background(), nil)
- if err != nil {
- log.Fatal("cannot create public ip: ", err)
- }
-
- // Step 4: Provision the network interface client
- interfacesClientCreateOrUpdateResponsePoller, err := networkClientFactory.NewInterfacesClient().BeginCreateOrUpdate(
- context.Background(),
- "<resourceGroupName>",
- "<networkInterfaceName>",
- armnetwork.Interface{
- Location: to.Ptr("westus"),
- ExtendedLocation: &armnetwork.ExtendedLocation{
- Name: to.Ptr("<edgezoneid>"),
- Type: to.Ptr(armnetwork.ExtendedLocationTypesEdgeZone),
- },
- Properties: &armnetwork.InterfacePropertiesFormat{
- EnableAcceleratedNetworking: to.Ptr(true),
- IPConfigurations: []*armnetwork.InterfaceIPConfiguration{
- {
- Name: to.Ptr("<ipConfigurationName>"),
- Properties: &armnetwork.InterfaceIPConfigurationPropertiesFormat{
- Subnet: &armnetwork.Subnet{
- ID: to.Ptr(subnetID),
- },
- PublicIPAddress: &armnetwork.PublicIPAddress{
- ID: publicIPAddressesClientCreateOrUpdateResponse.ID,
- },
- },
- },
- },
- },
- },
- nil,
- )
- if err != nil {
- log.Fatal("interface creation failed", err)
- }
- interfacesClientCreateOrUpdateResponse, err := interfacesClientCreateOrUpdateResponsePoller.PollUntilDone(context.Background(), nil)
- if err != nil {
- log.Fatal("cannot create interface:", err)
- }
-
- // Step 5: Provision the virtual machine
- virtualMachinesClientCreateOrUpdateResponsePoller, err := computeClientFactory.NewVirtualMachinesClient().BeginCreateOrUpdate(
- context.Background(),
- "<resourceGroupName>",
- "<vmName>",
- armcompute.VirtualMachine{
- Location: to.Ptr("westus"),
- ExtendedLocation: &armcompute.ExtendedLocation{
- Name: to.Ptr("<edgezoneid>"),
- Type: to.Ptr(armcompute.ExtendedLocationTypesEdgeZone),
- },
- Properties: &armcompute.VirtualMachineProperties{
- StorageProfile: &armcompute.StorageProfile{
- ImageReference: &armcompute.ImageReference{
- Publisher: to.Ptr("<publisher>"),
- Offer: to.Ptr("<offer>"),
- SKU: to.Ptr("<sku>"),
- Version: to.Ptr("<version>"),
- },
- },
- HardwareProfile: &armcompute.HardwareProfile{
- VMSize: to.Ptr(armcompute.VirtualMachineSizeTypesStandardD2SV3),
- },
- OSProfile: &armcompute.OSProfile{
- ComputerName: to.Ptr("<computerName>"),
- AdminUsername: to.Ptr("<adminUsername>"),
- AdminPassword: to.Ptr("<adminPassword>"),
- },
- NetworkProfile: &armcompute.NetworkProfile{
- NetworkInterfaces: []*armcompute.NetworkInterfaceReference{
- {
- ID: interfacesClientCreateOrUpdateResponse.ID,
- Properties: &armcompute.NetworkInterfaceReferenceProperties{
- Primary: to.Ptr(true),
- },
- },
- },
- },
- },
- },
- nil,
- )
- if err != nil {
- log.Fatal("virtual machine creation failed", err)
- }
- _, err = virtualMachinesClientCreateOrUpdateResponsePoller.PollUntilDone(context.Background(), nil)
- if err != nil {
- log.Fatal("cannot create virtual machine:", err)
- }
-}
-```
-
-## Clean up resources
-
-In this tutorial, you created a VM in Azure public MEC by using the Go SDK. If you don't expect to need these resources in the future, use the Azure portal to delete the resource group that you created.
-
-## Next steps
-
-To deploy a virtual machine in Azure public MEC using the Python SDK, advance to the following article:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Deploy a virtual machine in Azure public MEC using the Python SDK](tutorial-create-vm-using-python-sdk.md)
public-multi-access-edge-compute-mec Tutorial Create Vm Using Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/tutorial-create-vm-using-python-sdk.md
- Title: 'Tutorial: Deploy a virtual machine in Azure public MEC using the Python SDK'
-description: This tutorial demonstrates how to use Azure SDK management libraries in a Python script to create a resource group in Azure public multi-access edge compute (MEC) that contains a Linux virtual machine.
- Previously updated: 11/22/2022
-# Tutorial: Deploy a virtual machine in Azure public MEC using the Python SDK
-
-In this tutorial, you use the Python SDK to deploy resources in Azure public multi-access edge compute (MEC). The tutorial provides Python code to deploy a virtual machine (VM) and its dependencies in Azure public MEC.
-
-For information about Python SDKs, see [Azure libraries for Python usage patterns](/azure/developer/python/sdk/azure-sdk-library-usage-patterns).
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
->
-> - Install the required Azure library packages
-> - Provision a virtual machine
-> - Run the script in your development environment
-> - Create a jump server in the associated region
-> - Access the VMs
-
-## Prerequisites
-- If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-- Add an allowlisted subscription to your Azure account, which allows you to deploy resources in Azure public MEC. If you don't have an active allowed subscription, contact the [Azure public MEC product team](https://aka.ms/azurepublicmec).
-- Set up Python in your local development environment by following the instructions at [Configure your local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment?tabs=cmd). Ensure you create a service principal for local development, and create and activate a virtual environment for this tutorial project.
-## Install the required Azure library packages
-
-1. Create a file named *requirements.txt* that lists the management libraries used in this example.
-
- ```txt
- azure-mgmt-resource
- azure-mgmt-compute
- azure-mgmt-network
- azure-identity
- azure-mgmt-extendedlocation==1.0.0b2
- ```
-
-1. Open a command prompt with the virtual environment activated and install the management libraries listed in requirements.txt.
-
- ```bash
- pip install -r requirements.txt
- ```
-
-## Provision a virtual machine
-
-1. Create a Python file named *provision_vm_edge.py* and populate it with the following Python script. The script deploys a VM and its associated dependencies in Azure public MEC. The comments in the script explain the details.
-
- ```Python
- # Import the needed credential and management objects from the libraries.
- from azure.identity import AzureCliCredential
- from azure.mgmt.resource import ResourceManagementClient
- from azure.mgmt.network import NetworkManagementClient
- from azure.mgmt.compute import ComputeManagementClient
- import os
-
- print(f"Provisioning a virtual machine...some operations might take a minute or two.")
-
- # Acquire a credential object using CLI-based authentication.
- credential = AzureCliCredential()
-
- # Retrieve subscription ID from environment variable.
- subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
-
- # Step 1: Provision a resource group
-
- # Obtain the management object for resources, using the credentials from the CLI login.
- resource_client = ResourceManagementClient(credential, subscription_id)
-
- # Constants we need in multiple places: the resource group name, the region and the public mec location
- # in which we provision resources. Populate the variables with appropriate values.
- RESOURCE_GROUP_NAME = "PythonAzureExample-VM-rg"
- LOCATION = "<region>"
- PUBLIC_MEC_LOCATION = "<edgezone id>"
- USERNAME = "azureuser"
- PASSWORD = "<password>"
- # Provision the resource group.
- rg_result = resource_client.resource_groups.create_or_update(RESOURCE_GROUP_NAME,
- {
- "location": LOCATION
- }
- )
-
- print(f"Provisioned resource group {rg_result.name} in the {rg_result.location} region")
-
- # For details on the previous code, see Example: Use the Azure libraries to provision a resource group
- # at https://learn.microsoft.com/azure/developer/python/azure-sdk-example-resource-group
-
- # Step 2: Provision a virtual network
-
- # A virtual machine requires a network interface client (NIC). A NIC requires
- # a virtual network and subnet along with an IP address. Therefore, we must provision
- # these downstream components first, then provision the NIC, after which we
- # can provision the VM.
-
- # Network and IP address names
- VNET_NAME = "python-example-vnet-edge"
- SUBNET_NAME = "python-example-subnet-edge"
- IP_NAME = "python-example-ip-edge"
- IP_CONFIG_NAME = "python-example-ip-config-edge"
- NIC_NAME = "python-example-nic-edge"
-
- # Obtain the management object for networks
- network_client = NetworkManagementClient(credential, subscription_id)
-
- # Provision the virtual network and wait for completion
- poller = network_client.virtual_networks.begin_create_or_update(RESOURCE_GROUP_NAME,
- VNET_NAME,
- {
- "location": LOCATION,
- "extendedLocation": {"type": "EdgeZone", "name": PUBLIC_MEC_LOCATION},
- "address_space": {
- "address_prefixes": ["10.1.0.0/16"]
- }
- }
- )
-
- vnet_result = poller.result()
-
- print(f"Provisioned virtual network {vnet_result.name} with address prefixes {vnet_result.address_space.address_prefixes}")
-
- # Step 3: Provision the subnet and wait for completion
- poller = network_client.subnets.begin_create_or_update(RESOURCE_GROUP_NAME,
- VNET_NAME, SUBNET_NAME,
- { "address_prefix": "10.1.0.0/24" }
- )
- subnet_result = poller.result()
-
- print(f"Provisioned virtual subnet {subnet_result.name} with address prefix {subnet_result.address_prefix}")
-
- # Step 4: Provision an IP address and wait for completion
- # Only the standard public IP SKU is supported at EdgeZones
- poller = network_client.public_ip_addresses.begin_create_or_update(RESOURCE_GROUP_NAME,
- IP_NAME,
- {
- "location": LOCATION,
- "extendedLocation": {"type": "EdgeZone", "name": PUBLIC_MEC_LOCATION},
- "sku": { "name": "Standard" },
- "public_ip_allocation_method": "Static",
- "public_ip_address_version" : "IPV4"
- }
- )
-
- ip_address_result = poller.result()
-
- print(f"Provisioned public IP address {ip_address_result.name} with address {ip_address_result.ip_address}")
-
- # Step 5: Provision the network interface client
- poller = network_client.network_interfaces.begin_create_or_update(RESOURCE_GROUP_NAME,
- NIC_NAME,
- {
- "location": LOCATION,
- "extendedLocation": {"type": "EdgeZone", "name": PUBLIC_MEC_LOCATION},
- "ip_configurations": [ {
- "name": IP_CONFIG_NAME,
- "subnet": { "id": subnet_result.id },
- "public_ip_address": {"id": ip_address_result.id }
- }]
- }
- )
-
- nic_result = poller.result()
-
- print(f"Provisioned network interface client {nic_result.name}")
-
- # Step 6: Provision the virtual machine
-
- # Obtain the management object for virtual machines
- compute_client = ComputeManagementClient(credential, subscription_id)
-
- VM_NAME = "ExampleVM-edge"
-
- print(f"Provisioning virtual machine {VM_NAME}; this operation might take a few minutes.")
-
 - # Provision the VM with an Ubuntu 18.04 LTS image on a Standard DS2_v2 size,
 - # using the NIC, public IP address, and virtual network created in the previous steps.
-
- poller = compute_client.virtual_machines.begin_create_or_update(RESOURCE_GROUP_NAME, VM_NAME,
- {
- "location": LOCATION,
- "extendedLocation": {"type": "EdgeZone", "name": PUBLIC_MEC_LOCATION},
- "storage_profile": {
- "image_reference": {
- "publisher": 'Canonical',
- "offer": "UbuntuServer",
- "sku": "18.04-LTS",
- "version": "latest"
- }
- },
- "hardware_profile": {
- "vm_size": "Standard_DS2_v2"
- },
- "os_profile": {
- "computer_name": VM_NAME,
- "admin_username": USERNAME,
- "admin_password": PASSWORD
- },
- "network_profile": {
- "network_interfaces": [{
- "id": nic_result.id,
- }]
- }
- }
- )
-
- vm_result = poller.result()
-
- print(f"Provisioned virtual machine {vm_result.name}")
- ```
-
-1. Before you run the script, populate these variables used in the step 1 section of the script:
-
- | Variable name | Description |
- | - | -- |
- | LOCATION | Azure region associated with the Azure public MEC location |
- | PUBLIC_MEC_LOCATION | Azure public MEC location identifier/edgezone ID |
- | PASSWORD | Password to use to sign in to the VM |
-
- > [!NOTE]
- > Each Azure public MEC site is associated with an Azure region. Based on the Azure public MEC location where the resource needs to be deployed, select the appropriate region value for the resource group to be created. For more information, see [Key concepts for Azure public MEC](key-concepts.md).
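Before the script makes any Azure calls, a small pre-flight check can catch placeholders that were never replaced. This helper is a hypothetical convenience, not part of the tutorial script:

```python
# Hypothetical pre-flight check: flag variables whose placeholder
# values (like "<region>") haven't been replaced yet.
def find_unpopulated(variables: dict) -> list:
    """Return the names of variables still holding <placeholder> values."""
    return [name for name, value in variables.items()
            if "<" in value or ">" in value]

# Example usage with one placeholder left unfilled:
settings = {
    "LOCATION": "westus",
    "PUBLIC_MEC_LOCATION": "<edgezone id>",   # still a placeholder
    "PASSWORD": "example-Passw0rd!",
}
print(find_unpopulated(settings))  # ['PUBLIC_MEC_LOCATION']
```

Running this check before `provision_vm_edge.py` fails fast instead of surfacing a confusing service-side error mid-deployment.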
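Each `begin_*` call in the script returns a long-running-operation poller, and calling `.result()` blocks until the service finishes. The pattern can be sketched with a stand-in poller; `FakePoller` and `begin_create_thing` are purely illustrative, not SDK types:

```python
# Illustrative stand-in for the LROPoller pattern used throughout the
# script: begin_* returns a poller, and .result() blocks until done.
class FakePoller:
    def __init__(self, value):
        self._value = value

    def result(self):
        # A real poller repeatedly polls the service until the operation
        # completes; here we just hand back the final value.
        return self._value

def begin_create_thing(name):
    return FakePoller({"name": name, "provisioning_state": "Succeeded"})

poller = begin_create_thing("python-example-vnet-edge")
outcome = poller.result()
print(outcome["provisioning_state"])  # Succeeded
```

This is why the script can safely use `vnet_result`, `subnet_result`, and so on immediately after each `poller.result()` call: the resource is fully provisioned by the time `.result()` returns.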
-
-## Run the script in your development environment
-
-1. Run the Python script you copied from the previous section.
-
- ```python
- python provision_vm_edge.py
- ```
-
-1. Wait a few minutes for the VM and supporting resources to be created.
-
- The following example output shows the VM create operation was successful.
-
- ```output
- (.venv) C:\Users >python provision_vm_edge.py
- Provisioning a virtual machine...some operations might take a minute or two.
- Provisioned resource group PythonAzureExample-VM-rg in the <region> region
- Provisioned virtual network python-example-vnet-edge with address prefixes ['10.1.0.0/16']
- Provisioned virtual subnet python-example-subnet-edge with address prefix 10.1.0.0/24
- Provisioned public IP address python-example-ip-edge with address <public ip>
- Provisioned network interface client python-example-nic-edge
- Provisioning virtual machine ExampleVM-edge; this operation might take a few minutes.
- Provisioned virtual machine ExampleVM-edge
- ```
-
-1. In the output, note the public IP address reported for python-example-ip-edge. You use this address to access the VM in a later section.
-
-## Create a jump server in the associated region
-
-To connect over SSH to the VM in Azure public MEC, the best method is to deploy a jump box VM in the Azure region associated with the resource group you deployed in the previous section.
-
-1. Follow the steps in [Use the Azure libraries to provision a virtual machine](/azure/developer/python/sdk/examples/azure-sdk-example-virtual-machines).
-
-1. In the output, note the public IP address reported for python-example-ip on the jump server VM. You use this address to access the VM in the next section.
-
-## Access the VMs
-
-1. Use SSH to connect to the jump box VM you deployed in the region, using the IP address you noted previously.
-
- ```bash
- ssh azureuser@<python-example-ip>
- ```
-
-1. From the jump box, use SSH to connect to the VM you created in Azure public MEC, using the IP address you noted previously.
-
- ```bash
- ssh azureuser@<python-example-ip-edge>
- ```
-
-1. Ensure the Azure network security groups allow port 22 access to the VMs you create.
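If port 22 is blocked, the same `NetworkManagementClient` used in the provisioning script can add an allow rule. The sketch below only builds the request body; the NSG name, rule name, and priority are hypothetical, and the commented-out call shows where `security_rules.begin_create_or_update` would be invoked:

```python
# Hypothetical inbound rule allowing SSH (TCP/22); the priority and the
# names in the commented-out call are illustrative. The dictionary mirrors
# the request shape accepted by
# network_client.security_rules.begin_create_or_update(...).
def make_ssh_rule(priority: int = 300) -> dict:
    return {
        "protocol": "Tcp",
        "source_address_prefix": "*",
        "source_port_range": "*",
        "destination_address_prefix": "*",
        "destination_port_range": "22",
        "access": "Allow",
        "direction": "Inbound",
        "priority": priority,
    }

ssh_rule = make_ssh_rule()
# network_client.security_rules.begin_create_or_update(
#     RESOURCE_GROUP_NAME, "example-nsg-edge", "Allow-SSH", ssh_rule).result()
print(ssh_rule["destination_port_range"])  # 22
```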
-
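As an alternative to two manual hops, OpenSSH `ProxyJump` can chain the connection in one step. The host aliases below are illustrative, and the placeholders are the same IP addresses you noted earlier:

```text
# ~/.ssh/config (illustrative aliases; substitute the IPs you noted)
Host mec-jump
    HostName <python-example-ip>
    User azureuser

Host mec-edge
    HostName <python-example-ip-edge>
    User azureuser
    ProxyJump mec-jump
```

With this entry, `ssh mec-edge` reaches the edge VM through the jump box; the equivalent one-liner is `ssh -J azureuser@<python-example-ip> azureuser@<python-example-ip-edge>`.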
-## Clean up resources
-
-In this tutorial, you created a VM in Azure public MEC by using the Python SDK. If you don't expect to need these resources in the future, use the [az group delete](/cli/azure/group#az-group-delete) command to remove the resource group and all related resources. Using the `--yes` parameter deletes the resources without a confirmation prompt.
-
-```azurecli
-az group delete --name PythonAzureExample-VM-rg --yes
-```
-
-## Next steps
-
-For questions about Azure public MEC, contact the product team:
-
-> [!div class="nextstepaction"]
-> [Azure public MEC product team](https://aka.ms/azurepublicmec)
search Search File Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-file-storage-integration.md
Title: Azure Files indexer (preview) description: Set up an Azure Files indexer to automate indexing of file shares in Azure AI Search.-+ - ignite-2023 Previously updated : 06/25/2024 Last updated : 08/23/2024 # Index data from Azure Files
In the [search index](search-what-is-an-index.md), add fields to accept the cont
} ```
-1. Create a document key field ("key": true). For blob content, the best candidates are metadata properties. Metadata properties often include characters, such as `/` and `-`, that are invalid for document keys. Because the indexer has a "base64EncodeKeys" property (true by default), it automatically encodes the metadata property, with no configuration or field mapping required.
+1. Create a document key field ("key": true). For blob content, the best candidates are metadata properties. Metadata properties often include characters, such as `/` and `-`, that are invalid for document keys. The indexer automatically encodes the key metadata property, with no configuration or field mapping required.
+ **`metadata_storage_path`** (default) full path to the object or file
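For context on what the automatic encoding produces, the indexer emits a URL-safe Base64 form of the metadata value as the document key. A minimal standard-library sketch of that transformation, assuming an illustrative blob path (this is not the indexer's own code, and the unpadded form is an assumption):

```python
import base64

def encode_key(value: str) -> str:
    # URL-safe Base64 without padding: "/" and "+" never appear in the
    # output, so the result is valid as a search document key.
    return base64.urlsafe_b64encode(value.encode("utf-8")).decode("ascii").rstrip("=")

# Illustrative metadata_storage_path value:
print(encode_key("https://account.blob.core.windows.net/container/dir/file.pdf"))
```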
Once the index and data source have been created, you're ready to create the ind
"batchSize": null, "maxFailedItems": null, "maxFailedItemsPerBatch": null,
- "base64EncodeKeys": null,
"configuration": { "indexedFileNameExtensions" : ".pdf,.docx", "excludedFileNameExtensions" : ".png,.jpeg"
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-azure-data-lake-storage.md
description: Set up an Azure Data Lake Storage (ADLS) Gen2 indexer to automate indexing of content and metadata for full text search in Azure AI Search. -+ - ignite-2023 Previously updated : 02/19/2024 Last updated : 08/23/2024 # Index data from Azure Data Lake Storage Gen2
In a [search index](search-what-is-an-index.md), add fields to accept the conten
+ A custom metadata property that you add to blobs. This option requires that your blob upload process adds that metadata property to all blobs. Since the key is a required property, any blobs that are missing a value will fail to be indexed. If you use a custom metadata property as a key, avoid making changes to that property. Indexers will add duplicate documents for the same blob if the key property changes.
- Metadata properties often include characters, such as `/` and `-`, that are invalid for document keys. Because the indexer has a "base64EncodeKeys" property (true by default), it automatically encodes the metadata property, with no configuration or field mapping required.
+ Metadata properties often include characters, such as `/` and `-`, that are invalid for document keys. The indexer automatically encodes the key metadata property, with no configuration or field mapping required.
1. Add a "content" field to store extracted text from each file through the blob's "content" property. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings.
Once the index and data source have been created, you're ready to create the ind
"batchSize": null, "maxFailedItems": null, "maxFailedItemsPerBatch": null,
- "base64EncodeKeys": null,
"configuration": { "indexedFileNameExtensions" : ".pdf,.docx", "excludedFileNameExtensions" : ".png,.jpeg",
search Search Howto Indexing Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-blob-storage.md
description: Set up an Azure blob indexer to automate indexing of blob content for full text search operations and knowledge mining in Azure AI Search. -+ - ignite-2023 Previously updated : 06/25/2024 Last updated : 08/23/2024 # Index data from Azure Blob Storage
In a [search index](search-what-is-an-index.md), add fields to accept the conten
+ A custom metadata property that you add to blobs. This option requires that your blob upload process adds that metadata property to all blobs. Since the key is a required property, any blobs that are missing a value will fail to be indexed. If you use a custom metadata property as a key, avoid making changes to that property. Indexers will add duplicate documents for the same blob if the key property changes.
- Metadata properties often include characters, such as `/` and `-`, which are invalid for document keys. Because the indexer has a "base64EncodeKeys" property (true by default), it automatically encodes the metadata property, with no configuration or field mapping required.
+ Metadata properties often include characters, such as `/` and `-`, which are invalid for document keys. However, the indexer automatically encodes the key metadata property, with no configuration or field mapping required.
1. Add a "content" field to store extracted text from each file through the blob's "content" property. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings.
Once the index and data source have been created, you're ready to create the ind
"batchSize": null, "maxFailedItems": null, "maxFailedItemsPerBatch": null,
- "base64EncodeKeys": null,
"configuration": { "indexedFileNameExtensions" : ".pdf,.docx", "excludedFileNameExtensions" : ".png,.jpeg",
POST https://[service name].search.windows.net/indexers?api-version=2024-07-01
"batchSize": null, "maxFailedItems": null, "maxFailedItemsPerBatch": null,
- "base64EncodeKeys": null,
"configuration": { "indexedFileNameExtensions" : ".pdf,.docx", "excludedFileNameExtensions" : ".png,.jpeg",
POST https://[service name].search.windows.net/indexers?api-version=2024-07-01
"batchSize": null, "maxFailedItems": null, "maxFailedItemsPerBatch": null,
- "base64EncodeKeys": null,
"configuration": { "indexedFileNameExtensions" : ".pdf,.docx", "excludedFileNameExtensions" : ".png,.jpeg",
search Search Howto Indexing Azure Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-indexing-azure-tables.md
Title: Azure table indexer
description: Set up a search indexer to index data stored in Azure Table Storage for full text search in Azure AI Search. -+
- ignite-2023 Previously updated : 02/22/2024 Last updated : 08/23/2024 # Index data from Azure Table Storage
Once you have an index and data source, you're ready to create the indexer. Inde
"batchSize" : null, "maxFailedItems" : null, "maxFailedItemsPerBatch" : null,
- "base64EncodeKeys" : null,
"configuration" : { } }, "fieldMappings" : [ ],
sentinel Unified Connector Custom Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/unified-connector-custom-device.md
Follow these steps to ingest log messages from Apache HTTP Server:
Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## Apache Tomcat
Follow these steps to ingest log messages from Apache Tomcat:
Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## Cisco Meraki
Follow these steps to ingest log messages from Cisco Meraki:
1. Configure and connect the Cisco Meraki device(s): follow the [instructions provided by Cisco](https://documentation.meraki.com/General_Administration/Monitoring_and_Reporting/Meraki_Device_Reporting_-_Syslog%2C_SNMP%2C_and_API) for sending syslog messages. Use the IP address or hostname of the virtual machine where the Azure Monitor Agent is installed.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## JBoss Enterprise Application Platform
Follow these steps to ingest log messages from JBoss Enterprise Application Plat
Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## JuniperIDP
Follow these steps to ingest log messages from JuniperIDP:
1. For the instructions to configure the Juniper IDP appliance to send syslog messages to an external server, see [SRX Getting Started - Configure System Logging.](https://supportportal.juniper.net/s/article/SRX-Getting-Started-Configure-System-Logging).
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## MarkLogic Audit
Follow these steps to ingest log messages from MarkLogic Audit:
1. Validate by selecting OK. 1. Refer to MarkLogic documentation for [more details and configuration options](https://docs.marklogic.com/guide/admin/auditing).
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## MongoDB Audit
Follow these steps to ingest log messages from MongoDB Audit:
1. Set the `path` parameter to `/data/db/auditlog.json`. 1. Refer to MongoDB documentation for [more parameters and details](https://www.mongodb.com/docs/manual/tutorial/configure-auditing/).
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## NGINX HTTP Server
Follow these steps to ingest log messages from NGINX HTTP Server:
Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## Oracle WebLogic Server
Follow these steps to ingest log messages from Oracle WebLogic Server:
Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## PostgreSQL Events
Follow these steps to ingest log messages from PostgreSQL Events:
1. Set `logging_collector=on` 1. Refer to PostgreSQL documentation for [more parameters and details](https://www.postgresql.org/docs/current/runtime-config-logging.html).
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## SecurityBridge Threat Detection for SAP
Follow these steps to ingest log messages from SecurityBridge Threat Detection f
Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## SquidProxy
Follow these steps to ingest log messages from SquidProxy:
Replace the {TABLE_NAME} and {LOCAL_PATH_FILE} placeholders in the [DCR template](connect-custom-logs-ama.md?tabs=arm#create-the-data-collection-rule) with the values in steps 1 and 2. Replace the other placeholders as directed.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## Ubiquiti UniFi
Follow these steps to ingest log messages from Ubiquiti UniFi:
1. Follow the [instructions provided by Ubiquiti](https://help.ui.com/hc/en-us/categories/6583256751383) to enable syslog and optionally debugging logs. 1. Select Settings > System Settings > Controller Configuration > Remote Logging and enable syslog.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## VMware vCenter
Follow these steps to ingest log messages from VMware vCenter:
1. Follow the [instructions provided by VMware](https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.monitoring.doc/GUID-9633A961-A5C3-4658-B099-B81E0512DC21.html) for sending syslog messages. 1. Use the IP address or hostname of the machine where the Azure Monitor Agent is installed.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## Zscaler Private Access (ZPA)
Follow these steps to ingest log messages from Zscaler Private Access (ZPA):
1. Log storage location: Create a log file on your external syslog server. Grant the syslog daemon write permissions to the file. Install the AMA on the external syslog server if it's not already installed. Enter this filename and path in the **File pattern** field in the connector, or in place of the `{LOCAL_PATH_FILE}` placeholder in the DCR.
-1. Configure the syslog daemon to export its vCenter log messages to a temporary text file so the AMA can collect them.
+1. Configure the syslog daemon to export its ZPA log messages to a temporary text file so the AMA can collect them.
# [rsyslog](#tab/rsyslog)
Follow these steps to ingest log messages from Zscaler Private Access (ZPA):
1. Follow the [instructions provided by ZPA](https://help.zscaler.com/zpa/configuring-log-receiver). Select JSON as the log template. 1. Select Settings > System Settings > Controller Configuration > Remote Logging and enable syslog.
-[Back to list](#specific-instructions-per-application-type) | [Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
+[Back to top](#custom-logs-via-ama-data-connectorconfigure-data-ingestion-to-microsoft-sentinel-from-specific-applications)
## Related content
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
The following table describes the fields on the **Basics** tab.
| Project details | Subscription | Required | Select the subscription for the new storage account. | | Project details | Resource group | Required | Create a new resource group for this storage account, or select an existing one. For more information, see [Resource groups](../../azure-resource-manager/management/overview.md#resource-groups). | | Instance details | Storage account name | Required | Choose a unique name for your storage account. Storage account names must be between 3 and 24 characters in length and might contain numbers and lowercase letters only. |
-| Instance details | Region | Required | Select the appropriate region for your storage account. For more information, see [Regions and Availability Zones in Azure](../../availability-zones/az-overview.md).<br /><br />Not all regions are supported for all types of storage accounts or redundancy configurations. For more information, see [Azure Storage redundancy](storage-redundancy.md).<br /><br />The choice of region can have a billing impact. For more information, see [Storage account billing](storage-account-overview.md#storage-account-billing).<br /><br />If your subscription supports Azure public multi-access edge zones (Azure MEC), you can deploy your storage account to an edge zone. For more information about edge zones, see [What is Azure public MEC?](../../public-multi-access-edge-compute-mec/overview.md). |
+| Instance details | Region | Required | Select the appropriate region for your storage account. For more information, see [Regions and Availability Zones in Azure](../../availability-zones/az-overview.md).<br /><br />Not all regions are supported for all types of storage accounts or redundancy configurations. For more information, see [Azure Storage redundancy](storage-redundancy.md).<br /><br />The choice of region can have a billing impact. For more information, see [Storage account billing](storage-account-overview.md#storage-account-billing). |
| Instance details | Performance | Required | Select **Standard** performance for general-purpose v2 storage accounts (default). This type of account is recommended by Microsoft for most scenarios. For more information, see [Types of storage accounts](storage-account-overview.md#types-of-storage-accounts).<br /><br />Select **Premium** for scenarios requiring low latency. After selecting **Premium**, select the type of premium storage account to create. The following types of premium storage accounts are available: <ul><li>[Block blobs](./storage-account-overview.md)</li><li>[File shares](../files/storage-files-planning.md#management-concepts)</li><li>[Page blobs](../blobs/storage-blob-pageblob-overview.md)</li></ul> | Instance details | Redundancy | Required | Select your desired redundancy configuration. Not all redundancy options are available for all types of storage accounts in all regions. For more information about redundancy configurations, see [Azure Storage redundancy](storage-redundancy.md).<br /><br />If you select a geo-redundant configuration (GRS or GZRS), your data is replicated to a data center in a different region. For read access to data in the secondary region, select **Make read access to data available in the event of regional unavailability**. |