Updates from: 10/11/2024 01:07:50
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
Use this solution for the following scenarios:
## Prerequisites
-To get started, you'll need:
+To get started, you need:
* An Azure subscription
TheAccessHub Admin Tool runs in the N8ID Azure subscription or the customer
6. TheAccessHub Admin Tool syncs user records with Azure AD B2C. 7. Based on TheAccessHub Admin Tool response, Azure AD B2C sends a customized welcome email to users.
-## Create a Global Administrator in your Azure AD B2C tenant
+## Create an External Identity Provider Administrator and B2C User Flow Administrator in your Azure AD B2C tenant
-TheAccessHub Admin Tool permissions act on behalf of a Global Administrator to read user information and conduct changes in your Azure AD B2C tenant. Changes to your regular administrators won't affect TheAccessHub Admin Tool interaction with the tenant.
+TheAccessHub Admin Tool permissions act on behalf of an External Identity Provider Administrator and B2C User Flow Administrator to read user information and conduct changes in your Azure AD B2C tenant. Changes to your regular administrators don't affect TheAccessHub Admin Tool interaction with the tenant.
-To create a Global Administrator:
+To create an External Identity Provider Administrator and B2C User Flow Administrator:
1. In the Azure portal, sign in to your Azure AD B2C tenant as an Administrator. 2. Go to **Microsoft Entra ID** > **Users**.
To create a Global Administrator:
* Enter the **account name**, such as TheAccessHub Service Account. 7. Select **Show Password**. 8. Copy and save the initial password.
-9. To assign the Global Administrator role, for **User**, select the user's current role.
-10. Select the **Global Administrator** record.
+9. To assign the External Identity Provider Administrator and B2C User Flow Administrator role, for **User**, select the user's current role.
+10. Select the **External Identity Provider Administrator** and **B2C User Flow Administrator** records.
11. Select **Create**. ## Connect TheAccessHub Admin Tool to your Azure AD B2C tenant
-TheAccessHub Admin Tool uses the Microsoft Graph API to read and make changes to a directory. It acts as a Global Administrator in your tenant. Use the following instructions to add needed permissions.
+TheAccessHub Admin Tool uses the Microsoft Graph API to read and make changes to a directory. It acts as an External Identity Provider Administrator and B2C User Flow Administrator in your tenant. Use the following instructions to add needed permissions.
To authorize TheAccessHub Admin Tool to access your directory: 1. Use the credentials N8 Identity provided to sign in to TheAccessHub Admin Tool. 2. Go to **System Admin** > **Azure AD B2C Config**. 3. Select **Authorize Connection**.
-4. In the new window, sign in with your Global Administrator account. When you sign in for the first time with the new service account, a prompt to reset your password can appear.
+4. In the new window, sign in with your External Identity Provider Administrator and B2C User Flow Administrator account. When you sign in for the first time with the new service account, a prompt to reset your password can appear.
5. Follow the prompts and select **Accept**. ## Configure a new CSR user with your enterprise identity
With TheAccessHub Admin Tool, you can import data from various databases, LDAPs,
* **Type**: **Database** * **Database type**: select a supported database
-* **Connection URL**: enter a JDBC connection string, such as `jdbc:postgresql://myhost.com:5432/databasename`
+* **Connection URL**: enter a Java Database Connectivity (JDBC) connection string, such as `jdbc:postgresql://myhost.com:5432/databasename`
* **Username**: username to access the database * **Password**: password to access the database * **Query**: the SQL query to extract customer details, such as `SELECT * FROM mytable;`
With TheAccessHub Admin Tool, you can import data from various databases, LDAPs,
8. Select **Next**. 9. In **Search-Mapping configuration**, identify load-record correlation with customers in TheAccessHub Admin Tool. 10. Select source identifying attributes. Match source attributes to TheAccessHub Admin Tool attributes with the same values. If there's a match, the record is overridden. Otherwise, a new customer is created.
-11. Sequence the number of checks. For example, check email first, then first and last name.
+11. Sequence the number of checks. For example, check email first, then first and family name.
12. On the left-side menu, select **Data Mapping**. 13. In **Data-Mapping configuration**, assign the TheAccessHub Admin Tool attributes to be populated from your source attributes. Unmapped attributes remain unchanged for customers. If you map the attribute `org_name` with a current organization value, created customers go in the organization. 15. Select **Next**.
If you occasionally sync TheAccessHub Admin Tool, it might not be up to date wit
For your sign-up custom policies, the following steps enable a secure certificate to notify TheAccessHub Admin Tool of new accounts.
-1. Use the credentials N8ID provided to sign in to TheAccessHub Admin Tool.
+1. To sign in to TheAccessHub Admin Tool, use the credentials N8ID provided.
2. Go to **System Admin** > **Admin Tools** > **API Security**. 3. Select **Generate**. 4. Copy the **Certificate Password**.
For your sign-up custom policies, the following steps enable a secure certificat
3. Supply your Azure AD B2C tenant domain and the two Identity Experience Framework IDs from your Identity Experience Framework configuration. 4. Select **Save**. 5. Select **Download** to get a .zip file with basic policies that add customers into TheAccessHub Admin Tool as customers sign up.
-6. Use the instructions in [Create user flows](./tutorial-create-user-flows.md?pivots=b2c-custom-policy) to design custom policies in Azure AD B2C.
+6. To design custom policies in Azure AD B2C, use the instructions in [Create user flows](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
## Next steps
active-directory-b2c Partner Trusona https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-trusona.md
In this scenario, Trusona acts as an Identity Provider (IdP) for Azure AD B2C to
| Steps | Description |
|:--|:--|
-|1. |A user attempts to sign in to the web application via their browser.|
-|2.|The web application redirects to Azure AD B2C sign-up and sign-in policy.|
-|3. |Azure AD B2C redirects the user for authentication to the Trusona Authentication Cloud OpenID Connect (OIDC) IdP.|
-|4. |The user is presented with a sign-in web page that asks for their username – typically an email address.|
-|5. |The user enters their email address and selects the **Continue** button. If the user's account isn't found in the Trusona Authentication Cloud, then a response is sent to the browser that initiates a WebAuthn registration process on the device. Otherwise a response is sent to the browser that begins a WebAuthn authentication process.|
-|6. |The user is asked to select a credential to use. The passkey is associated with the domain of the web application or a hardware security key. Once the user selects a credential, the OS requests the user to use a biometric, passcode, or PIN to confirm their identity. User approval unlocks the Secure Enclave/Trusted Execution environment, which generates an authentication assertion signed by the private key associated with the selected credential.|
-|7. |The authentication assertion is returned to the Trusona cloud service for verification.|
-|8. |Once verified, Trusona Authentication Cloud (IdP) creates an OIDC ID token and then forwards it to Azure AD B2C (Service Provider). Azure AD B2C validates the signature of the token and the issuer against the values in the Trusona's OpenID discovery document. These details were configured during IdP setup. Once verified, Azure AD B2C issues an OIDC id_token (depending on the scope) and redirects the user back to the initiating application with the token.|
-|9. |The web application (or the developer libraries it uses to implement authentication) retrieves the token and verifies the authenticity of the Azure AD B2C token. If that's the case, it extracts the claims and pass them to the web application to consume.|
-|10. |Upon verification, user is granted/denied access.|
+|1. | A user attempts to sign in to the web application via their browser.|
+|2.| The web application redirects to Azure AD B2C sign-up and sign-in policy.|
+|3. | Azure AD B2C redirects the user for authentication to the Trusona Authentication Cloud OpenID Connect (OIDC) IdP.|
+|4. | The user is presented with a sign-in web page that asks for their username – typically an email address.|
+|5. | The user enters their email address and selects the **Continue** button. If the user's account isn't found in the Trusona Authentication Cloud, then a response is sent to the browser that initiates a WebAuthn registration process on the device. Otherwise a response is sent to the browser that begins a WebAuthn authentication process.|
+|6. | The user is asked to select a credential to use. The passkey is associated with the domain of the web application or a hardware security key. Once the user selects a credential, the OS requests the user to use a biometric, passcode, or PIN to confirm their identity. This unlocks the Secure Enclave/Trusted Execution environment, which generates an authentication assertion signed by the private key associated with the selected credential.|
+|7. | The authentication assertion is returned to the Trusona cloud service for verification.|
+|8. | Once verified, Trusona Authentication Cloud (IdP) creates an OIDC ID token and then forwards it to Azure AD B2C (Service Provider). Azure AD B2C validates the signature of the token and the issuer against the values in Trusona's OpenID discovery document. These details were configured during IdP setup. Once verified, Azure AD B2C issues an OIDC id_token (depending on the scope) and redirects the user back to the initiating application with the token.|
+|9. | The web application (or the developer libraries it uses to implement authentication) retrieves the token and verifies the authenticity of the Azure AD B2C token. If the token is valid, it extracts the claims and passes them to the web application to consume (a sketch follows this table).|
+|10. | Upon verification, the user is granted or denied access. |
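+
+As a sketch of steps 9 and 10, a web application built on ASP.NET Core could validate the token issued by Azure AD B2C with the Microsoft.Identity.Web library. This is an illustration only, not part of the Trusona setup; the `AzureAdB2C` configuration section name and its contents are assumptions you'd adapt to your own app registration.
+
+```cs
+// Minimal sketch: an ASP.NET Core app validating tokens issued by Azure AD B2C.
+// Assumes an "AzureAdB2C" configuration section with placeholder values for
+// Instance, Domain, ClientId, and SignUpSignInPolicyId.
+using Microsoft.Identity.Web;
+
+var builder = WebApplication.CreateBuilder(args);
+
+// Wires up OpenID Connect sign-in and token validation against Azure AD B2C.
+builder.Services.AddMicrosoftIdentityWebAppAuthentication(builder.Configuration, "AzureAdB2C");
+builder.Services.AddAuthorization();
+
+var app = builder.Build();
+app.UseAuthentication();
+app.UseAuthorization();
+
+// Requests here succeed only with a validated Azure AD B2C sign-in (step 10).
+app.MapGet("/", () => "Signed in").RequireAuthorization();
+app.Run();
+```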
## Step 1: Onboard with Trusona Authentication Cloud
To register a web application in your Azure AD B2C tenant, use our new unified a
1. Under **Supported account types**, select **Accounts in any identity provider or organizational directory (for authenticating users with user flows)**. 1. Under **Redirect URI**, select **Web**, and then enter `https://jwt.ms` in the URL text box.
- The redirect URI is the endpoint to which the authorization server, Azure AD B2C in this case sends the user to. After completing its interaction with the user, an access token or authorization code is sent upon successful authorization. In a production application, it's typically a publicly accessible endpoint where your app is running, like `https://contoso.com/auth-response`. For testing purposes like this tutorial, you can set it to `https://jwt.ms`, a Microsoft-owned web application that displays the decoded contents of a token (the contents of the token never leave your browser). During app development, you might add the endpoint where your application listens locally, like `https://localhost:5000`. You can add and modify redirect Uniform Resource Identifiers (URI) in your registered applications at any time.
+ The redirect URI is the endpoint to which the authorization server, Azure AD B2C in this case, sends the user. After completing its interaction with the user, an access token or authorization code is sent upon successful authorization. In a production application, it's typically a publicly accessible endpoint where your app is running, like `https://contoso.com/auth-response`. For testing purposes like this tutorial, you can set it to `https://jwt.ms`, a Microsoft-owned web application that displays the decoded contents of a token (the contents of the token never leave your browser). During app development, you might add the endpoint where your application listens locally, like `https://localhost:5000`. You can add and modify redirect URIs in your registered applications at any time.
- The following restrictions apply to redirect URIs:
+ The following restrictions apply to redirect Uniform Resource Identifiers (URIs):
* The reply URL must begin with the scheme `https`, unless you use a localhost redirect URL. * The reply URL is case-sensitive. Its case must match the case of the URL path of your running application. For example, if your application includes as part of its path `.../abc/response-oidc`, don't specify `.../ABC/response-oidc` in the reply URL. Because the web browser treats paths as case-sensitive, cookies associated with `.../abc/response-oidc` might be excluded if redirected to the case-mismatched `.../ABC/response-oidc` URL.
You can enable implicit grant flow to use this app registration to [test a user
## Step 3: Configure Trusona Authentication Cloud as an IdP in Azure AD B2C
-1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with the External Identity Provider Administrator and B2C User Flow Administrator roles in your Azure AD B2C tenant.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
You can enable implicit grant flow to use this app registration to [test a user
1. Select **Map this identity provider's claims**.
-1. Fill out the form to map the IdP:
+1. To map the IdP, fill out the form:
| Property | Value |
|:--|:--|
You should now see Trusona as a **new OpenID Connect Identity Provider** listed
b. **Reply URL**: Select the redirect URL, for example, `https://jwt.ms`.
-2. Select **Run user flow**. You should be redirected to the Trusona Authentication Cloud. The user is presented with a sign-in web page that asks for their username – typically an email address. If the user's account isn't found in Trusona Authentication Cloud, then a response is sent to the browser that initiates a WebAuthn registration process on the device. Otherwise a response is sent to the browser that begins a WebAuthn authentication process. The user is asked to select a credential to use. The passkey is associated with the domain of the web application or a hardware security key. Once the user selects a credential, the OS requests the user to use a biometric, passcode, or PIN to confirm their identity. User approval unlocks the Secure Enclave/Trusted Execution environment, which generates an authentication assertion signed by the private key associated with the selected credential. Azure AD B2C validates the Trusona authentication response and issues an OIDC token. It redirects the user back to the initiating application, for example, `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+2. Select **Run user flow**. You should be redirected to the Trusona Authentication Cloud. The user is presented with a sign-in web page that asks for their username – typically an email address. If the user's account isn't found in Trusona Authentication Cloud, then a response is sent to the browser that initiates a WebAuthn registration process on the device. Otherwise a response is sent to the browser that begins a WebAuthn authentication process. The user is asked to select a credential to use. The passkey is associated with the domain of the web application or a hardware security key. Once the user selects a credential, the OS requests the user to use a biometric, passcode, or PIN to confirm their identity. This unlocks the Secure Enclave/Trusted Execution environment, which generates an authentication assertion signed by the private key associated with the selected credential. Azure AD B2C validates the Trusona authentication response and issues an OIDC token. It redirects the user back to the initiating application, for example, `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
::: zone-end ::: zone pivot="b2c-custom-policy"
Store the client secret that you previously generated in [step 1](#step-1-onboar
>[!TIP] >You should have the Azure AD B2C policy configured at this point. If not, follow the [instructions](tutorial-create-user-flows.md?pivots=b2c-custom-policy#custom-policy-starter-pack) on how to set up your Azure AD B2C tenant and configure policies.
-To enable users to sign in using Trusona Authentication Cloud, you need to define Trusona as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify a specific user authentication using a passkey or a hardware security key available on their device, proving the user's identity.
+To enable users to sign in using Trusona Authentication Cloud, you need to define Trusona as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify a specific user has authenticated using a passkey or a hardware security key available on their device, proving the user's identity.
Use the following steps to add Trusona as a claims provider:
In the following example, for the `Trusona Authentication Cloud` user journey, t
2. A sign-in screen is shown; at the bottom, there should be a button to use **Trusona Authentication Cloud** authentication.
-1. You should be redirected to Trusona Authentication Cloud. The user is presented with a sign-in web page that asks for their username – typically an email address. If the user's account isn't found in the Trusona Authentication Cloud, then a response is sent to the browser that initiates a WebAuthn registration process on the device. Otherwise a response is sent to the browser that begins a WebAuthn authentication process. The user is asked to select a credential to use. The passkey is associated with the domain of the web application or a hardware security key. Once the user selects a credential, the OS requests the user to use a biometric, passcode, or PIN to confirm their identity. User approval unlocks the Secure Enclave/Trusted Execution environment, which generates an authentication assertion signed by the private key associated with the selected credential.
+1. You should be redirected to Trusona Authentication Cloud. The user is presented with a sign-in web page that asks for their username – typically an email address. If the user's account isn't found in the Trusona Authentication Cloud, then a response is sent to the browser that initiates a WebAuthn registration process on the device. Otherwise a response is sent to the browser that begins a WebAuthn authentication process. The user is asked to select a credential to use. The passkey is associated with the domain of the web application or a hardware security key. Once the user selects a credential, the OS requests the user to use a biometric, passcode, or PIN to confirm their identity. This unlocks the Secure Enclave/Trusted Execution environment, which generates an authentication assertion signed by the private key associated with the selected credential.
1. If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
api-management Workspaces Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/workspaces-overview.md
Workspace gateways are currently available in the following regions:
* North Central US * East US 2 * UK South
-* France Central
+* France Central
+* Germany West Central
* North Europe * East Asia * Southeast Asia
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
An App Service Environment is a single-tenant deployment of Azure App Service th
Applications are hosted in App Service plans, which are created in an App Service Environment. An App Service plan is essentially a provisioning profile for an application host. As you scale out your App Service plan, you create more application hosts with all the apps in that App Service plan on each host. A single App Service Environment v3 can have up to 200 total App Service plan instances across all the App Service plans combined. A single App Service Isolated v2 (Iv2) plan can have up to 100 instances by itself.
-When you're deploying onto dedicated hardware (hosts), you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment that's deployed on dedicated hosts has 132 vCores available. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance. Only I1v2, I2v2, and I3v2 SKU sizes are available on App Service Environment deployed on dedicated hosts.
+If you have a requirement for physical isolation all the way down to the hardware level, you can deploy your App Service Environment v3 onto dedicated hardware (hosts). When you're deploying onto dedicated hosts, you're limited in scaling across all App Service plans to the number of cores in this type of environment. An App Service Environment that's deployed on dedicated hosts has 132 vCores available. I1v2 uses two vCores, I2v2 uses four vCores, and I3v2 uses eight vCores per instance. For example, the 132 available vCores can host at most 66 I1v2 instances, 33 I2v2 instances, or 16 I3v2 instances across all plans combined. Only I1v2, I2v2, and I3v2 SKU sizes are available on an App Service Environment deployed on dedicated hosts. There are extra charges associated with deployment on dedicated hosts. Isolation down to the hardware level is typically not a requirement for the majority of customers, and you should consider the limitations of dedicated host deployments before using the feature. To ensure a dedicated host deployment is right for you, review your security and compliance requirements before deployment.
## Virtual network support
app-service Webjobs Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/webjobs-create.md
Azure Functions provides another way to run programs and scripts. For a comparis
### [Windows code](#tab/windowscode) The following file types are supported:<br>
-**.cmd**, **.bat**, **.exe** (using Windows cmd)<br>**.ps1** (using PowerShell)<br>**.sh** (using Bash)<br>**.php** (using PHP)<br>**.py** (using Python)<br>**.js** (using Node.js)<br>**.jar** (using Java)<br><br>The necessary runtimes to run these file types are already installed on the web app instance.
+**.cmd**, **.bat**, **.exe** (using Windows cmd)<br>**.ps1** (using PowerShell)<br>**.sh** (using Bash)<br>**.js** (using Node.js)<br>**.jar** (using Java)<br><br>The necessary runtimes to run these file types are already installed on the web app instance.
### [Windows container](#tab/windowscontainer) > [!NOTE] > WebJobs for Windows container is in preview.
azure-app-configuration Concept Enable Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-enable-rbac.md
Requests for [data plane](../azure-resource-manager/management/control-plane-and
### Control plane access All requests for [control plane](../azure-resource-manager/management/control-plane-and-data-plane.md#control-plane) operations are sent to the Azure Resource Manager URL. These requests pertain to the App Configuration resource. -- **Contributor** or **Owner**: Use this role to manage the App Configuration resource. It grants access to the resource's access keys. While the App Configuration data can be accessed using access keys, this role doesn't grant direct access to the data using Microsoft Entra ID.-- **Reader**: Use this role to give read access to the App Configuration resource. This role doesn't grant access to the resource's access keys, nor to the data stored in App Configuration.
+- **App Configuration Contributor**: Use this role to manage only the App Configuration resource. This role doesn't grant access to manage other Azure resources. It grants access to the resource's access keys. While the App Configuration data can be accessed using access keys, this role doesn't grant direct access to the data using Microsoft Entra ID. It grants access to recover deleted App Configuration resources but not to purge them. To purge deleted App Configuration resources, use the **Contributor** role.
+- **App Configuration Reader**: Use this role to read only the App Configuration resource. This role doesn't grant access to read other Azure resources. It doesn't grant access to the resource's access keys, nor to the data stored in App Configuration.
+- **Contributor** or **Owner**: Use this role to manage the App Configuration resource while also being able to manage other Azure resources. This role is a privileged administrator role. It grants access to the resource's access keys. While the App Configuration data can be accessed using access keys, this role doesn't grant direct access to the data using Microsoft Entra ID.
+- **Reader**: Use this role to read the App Configuration resource while also being able to read other Azure resources. This role doesn't grant access to the resource's access keys, nor to the data stored in App Configuration.
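+
+As a minimal sketch of the difference these roles make (assuming the `Azure.Data.AppConfiguration` and `Azure.Identity` packages; the endpoint and connection string are placeholders), here are the two data-plane access paths the roles above gate differently:
+
+```cs
+using System;
+using Azure.Data.AppConfiguration;
+using Azure.Identity;
+
+// Path 1: access keys. Roles that can read the resource's access keys
+// (Owner, Contributor, App Configuration Contributor) can reach the data this way.
+var keyBasedClient = new ConfigurationClient("<connection-string-from-access-keys>");
+
+// Path 2: Microsoft Entra ID. None of the control-plane roles above grant this path;
+// the identity also needs a data-plane role such as App Configuration Data Reader.
+var entraIdClient = new ConfigurationClient(
+    new Uri("https://<store-name>.azconfig.io"),
+    new DefaultAzureCredential());
+```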
> [!NOTE] > After a role assignment is made for an identity, allow up to 15 minutes for the permission to propagate before accessing data stored in App Configuration using this identity.
azure-app-configuration Concept Soft Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-soft-delete.md
With Purge protection enabled, soft deleted stores can't be purged in the retent
- `Microsoft.AppConfiguration/configurationStores/write`
-To recover a deleted App Configuration store the `Microsoft.AppConfiguration/configurationStores/write` permission is needed. The built-in "Owner" and "Contributor" roles contain this permission by default. The permission can be assigned at the subscription or resource group scope.
+To recover a deleted App Configuration store, the `Microsoft.AppConfiguration/configurationStores/write` permission is needed. The built-in "App Configuration Contributor", "Owner", and "Contributor" roles contain this permission by default. The permission can be assigned at the subscription or resource group scope.
## Permissions to read and purge deleted stores * Read: `Microsoft.AppConfiguration/locations/deletedConfigurationStores/read` * Purge: `Microsoft.AppConfiguration/locations/deletedConfigurationStores/purge/action`
-To list deleted App Configuration stores, or get an individual store by name the `Microsoft.AppConfiguration/locations/deletedConfigurationStores/read` permission is needed. To purge a deleted App Configuration store the `Microsoft.AppConfiguration/locations/deletedConfigurationStores/purge/action` permission is needed. The built-in "Owner" and "Contributor" roles contain these permissions by default. Permissions for reading and purging deleted App Configuration stores must be assigned at the subscription level. This is because deleted configuration stores exist outside of individual resource groups.
+To list deleted App Configuration stores, or to get an individual store by name, the `Microsoft.AppConfiguration/locations/deletedConfigurationStores/read` permission is needed. To purge a deleted App Configuration store, the `Microsoft.AppConfiguration/locations/deletedConfigurationStores/purge/action` permission is needed. The built-in "App Configuration Contributor" and "App Configuration Reader" roles contain the permission for reading deleted App Configuration stores but not the permission for purging them. The built-in "Owner" and "Contributor" roles contain both permissions by default. Permissions for reading and purging deleted App Configuration stores must be assigned at the subscription level, because deleted configuration stores exist outside of individual resource groups.
## Billing implications
azure-cache-for-redis Cache Tutorial Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-aks-get-started.md
Last updated 10/01/2024
# Tutorial: Connect to Azure Cache for Redis from your application hosted on Azure Kubernetes Service
-In this tutorial, you adapt the [AKS sample voting application](https://github.com/Azure-Samples/azure-voting-app-redis/tree/master) to use with an Azure Cache for Redis instance instead. The original sample uses a Redis cache deployed as a container to your AKS cluster. Following some simple steps, you can configure the AKS sample voting application to connect to your Azure Cache for Redis instance.
+In this tutorial, you use this [sample](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/tutorial/connect-from-aks) to connect to an Azure Cache for Redis instance from an application hosted on Azure Kubernetes Service.
## Prerequisites
In this tutorial, you adapt the [AKS sample voting application](https://github.c
## Run sample locally
-To run this sample locally, configure your user principal as a Redis User on your Redis instance. The code sample will use your user principal through (DefaultAzureCredential)[https://learn.microsoft.com/en-us/dotnet/azure/sdk/authentication/?tabs=command-line#use-defaultazurecredential-in-an-application] to connect to Redis instance.
+To run this sample locally, configure your user principal as a Redis User on your Redis instance. The code sample uses your user principal through [DefaultAzureCredential](/dotnet/azure/sdk/authentication/?tabs=command-line#use-defaultazurecredential-in-an-application) to connect to the Redis instance.
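+
+As a minimal sketch of that connection path (assuming the StackExchange.Redis client with the Microsoft.Azure.StackExchangeRedis extension package; the cache host name is a placeholder):
+
+```cs
+using Azure.Identity;
+using StackExchange.Redis;
+
+// Attach Microsoft Entra ID token authentication backed by DefaultAzureCredential,
+// which picks up your signed-in user principal locally and a workload identity in AKS.
+var options = await ConfigurationOptions.Parse("<your-cache-name>.redis.cache.windows.net:6380")
+    .ConfigureForAzureWithTokenCredentialAsync(new DefaultAzureCredential());
+
+var connection = await ConnectionMultiplexer.ConnectAsync(options);
+var db = connection.GetDatabase();
+await db.StringSetAsync("key1", "value1"); // simple round trip to verify access
+Console.WriteLine(await db.StringGetAsync("key1"));
+```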
## Configure your AKS cluster
-Follow these [steps](/azure/aks/workload-identity-deploy-cluster) to configure a workload identity for your AKS cluster. Complete the following steps:
+Follow these [steps](/azure/aks/workload-identity-deploy-cluster) to configure a workload identity for your AKS cluster.
- - Enable OIDC issuer and workload identity
- - Skip the step to create user assigned managed identity if you already created your managed identity. If you create a new managed identity, ensure that you create a new Redis User for your managed identity and assign appropriate data access permissions.
- - Create a Kubernetes Service account annotated with the client ID of your user assigned managed identity
- - Create a federated identity credential for your AKS cluster.
+Then, complete the following steps:
+
+- Enable OIDC issuer and workload identity
+- Skip the step to create user assigned managed identity if you already created your managed identity. If you create a new managed identity, ensure that you create a new Redis User for your managed identity and assign appropriate data access permissions.
+- Create a Kubernetes Service account annotated with the client ID of your user assigned managed identity
+- Create a federated identity credential for your AKS cluster.
## Configure your workload that connects to Azure Cache for Redis
kubectl delete pod entrademo-pod
- [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal) - [Quickstart: Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster](/azure/aks/workload-identity-deploy-cluster)-- [Azure Cache for Redis Entra ID Authentication](/azure/azure-cache-for-redis/cache-azure-active-directory-for-authentication)
+- [Azure Cache for Redis Microsoft Entra ID Authentication](/azure/azure-cache-for-redis/cache-azure-active-directory-for-authentication)
azure-functions Azfd0013 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/errors-diagnostics/diagnostic-events/azfd0013.md
The `FUNCTIONS_WORKER_RUNTIME` application setting indicates the language or lan
This event may appear for apps that were previously using inconsistent and undefined behavior to continue running while in a mismatch state. Follow the instructions in this article to resolve the event for these applications. Doing so allows these apps to take advantage of performance enhancements and ensure that they can continue to operate as expected.
-.NET apps undergoing a [migration from the in-process model to the isolated worker][isolated-migration] may encounter this event temporarily during that process. When `FUNCTIONS_WORKER_RUNTIME` is updated to "dotnet-isolated", but the application is still using an in-process model payload, this event may appear until the migration is completed. See the migration guidance for instructions on using deployment slots to prevent this event from appearing in your production environment.
+.NET apps undergoing a [migration from the in-process model to the isolated worker][isolated-migration] may encounter this event temporarily during that process. When `FUNCTIONS_WORKER_RUNTIME` is updated to `dotnet-isolated`, but the application is still using an in-process model payload, this event may appear until the migration is completed. See the migration guidance for instructions on using deployment slots to prevent this event from appearing in your production environment.
## How to resolve the event
-The event message indicates the current value of `FUNCTIONS_WORKER_RUNTIME` and the detected runtime metadata from the app payload. The values must be aligned, either by deploying an application of the appropriate type or by updating the value of `FUNCTIONS_WORKER_RUNTIME` to match.
+The event message indicates the current value of `FUNCTIONS_WORKER_RUNTIME` and the detected runtime metadata from the app payload. These values must be aligned, either by deploying an application payload of the appropriate type or by updating the setting to an expected value.
-For most applications, the correct resolution is to update the value of [`FUNCTIONS_WORKER_RUNTIME`][fwr]. To do so, on your function app in Azure, set the `FUNCTIONS_WORKER_RUNTIME` [application setting][app-settings] to the [expected value][fwr] for your application payload. When running locally in the Azure Functions Core Tools, you should also add `FUNCTIONS_WORKER_RUNTIME` to the [local.settings.json file](../../functions-develop-local.md#local-settings-file).
+For most applications, the correct resolution is to update the value of [`FUNCTIONS_WORKER_RUNTIME`][fwr]. To do so, on your function app in Azure, set the `FUNCTIONS_WORKER_RUNTIME` [application setting][app-settings] to the expected value for your application payload. The expected value is not necessarily the same as the detected runtime metadata, though in many cases it will be. Use the following table to determine the correct value to use:
-For apps following a migration guide, see that guide for relevant instructions. [Migrating .NET applications to the isolated worker model][isolated-migration] involves first setting `FUNCTIONS_WORKER_RUNTIME` to "dotnet-isolated" before deploying the updated application payload, and this event may appear temporarily between those steps.
+| Detected payload | Expected `FUNCTIONS_WORKER_RUNTIME` value |
+|-|-|
+| `CSharp` | `dotnet` |
+| `custom` | `custom` |
+| `dotnet` | `dotnet` |
+| `dotnet-isolated` | `dotnet-isolated` |
+| `java` | `java` |
+| `node` | `node` |
+| `powershell` | `powershell` |
+| `python` | `python` |
+| Any multi-stack payload<sup>1</sup> | `dotnet` |
+
+<sup>1</sup> A multi-stack payload is a comma-separated list of stack values. Multi-stack payloads are only supported for [Logic Apps Standard](../../../logic-apps/single-tenant-overview-compare.md).
+
+When running locally in the Azure Functions Core Tools, you should also add `FUNCTIONS_WORKER_RUNTIME` to the [local.settings.json file](../../functions-develop-local.md#local-settings-file).
+
+For apps following a migration guide, see that guide for relevant instructions. [Migrating .NET applications to the isolated worker model][isolated-migration] involves first setting `FUNCTIONS_WORKER_RUNTIME` to `dotnet-isolated` before deploying the updated application payload, and this event may appear temporarily between those steps.
## When to suppress the event
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
zone_pivot_groups: functions-hosting-plan
# Automate resource deployment for your function app in Azure Functions
-You can use a Bicep file or an Azure Resource Manager (ARM) template to automate the process of deploying your function app. During the deployment, you can use existing Azure resources or create new ones. Automation help's you with these scenarios:
+You can use a Bicep file or an Azure Resource Manager (ARM) template to automate the process of deploying your function app. During the deployment, you can use existing Azure resources or create new ones. Automation helps you with these scenarios:
+ Integrating your resource deployments with your source code in Azure Pipelines and GitHub Actions-based deployments. + Restoring a function app and related resources from a backup.
azure-functions Run Functions From Deployment Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/run-functions-from-deployment-package.md
The following table indicates the recommended `WEBSITE_RUN_FROM_PACKAGE` values
+ If your project needs to use remote build, don't use the `WEBSITE_RUN_FROM_PACKAGE` app setting. Instead, add the `SCM_DO_BUILD_DURING_DEPLOYMENT=true` deployment customization app setting. For Linux, also add the `ENABLE_ORYX_BUILD=true` setting. For more information, see [Remote build](functions-deployment-technologies.md#remote-build). > [!NOTE]
-> The `WEBSITE_RUN_FROM_PACKAGE` app setting does not work with MSDeploy as described in [MSDeploy VS. ZipDeploy](https://github.com/projectkudu/kudu/wiki/MSDeploy-VS.-ZipDeploy). You will receive an error during deployment, such as `ARM-MSDeploy Deploy Failed`. To resolve this error, hange `/MSDeploy` to `/ZipDeploy`.
+> The `WEBSITE_RUN_FROM_PACKAGE` app setting does not work with MSDeploy as described in [MSDeploy VS. ZipDeploy](https://github.com/projectkudu/kudu/wiki/MSDeploy-VS.-ZipDeploy). You will receive an error during deployment, such as `ARM-MSDeploy Deploy Failed`. To resolve this error, change `/MSDeploy` to `/ZipDeploy`.
### Add the WEBSITE_RUN_FROM_PACKAGE setting
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
The following tables contain lists of all the authorized Cloud Solution Provider
|[Anautics](https://anautics.com)| |[Anika Systems Inc.](https://www.anikasystems.com)| |[APEX TECHNOLOGY MANAGEMENT INC](https://www.apex.com)|
-|Applied Information Sciences, Inc.|
+|[Applied Information Sciences, Inc.](https://www.ais.com)|
|[Apollo Information Systems Corp.](https://www.apollo-is.com/)| |[Approved Contact, LLC](https://approvedcontact.com)| |[Apps4Rent](https://www.apps4rent.com)|
The following tables contain lists of all the authorized Cloud Solution Provider
|[Edafio Technology Partners](https://edafio.com)| |[eMazzanti Technologies](https://www.emazzanti.net/)| |[Enabling Technologies Corp.](https://www.enablingtechcorp.com/)|
+|[Enavate](https://www.enavate.com)|
|[Enlighten IT Consulting](https://www.eitccorp.com)| |[Ensono](https://www.ensono.com)| |[Enterprise Computing Services](https://thinkecs.com/)|
azure-maps Power Bi Visual Understanding Layers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-understanding-layers.md
The general layer section of the **Format** pane are common settings that apply
> > For more information on the range scaling option, see **Range scaling** in the properties table of the [Add a bubble layer] article.
+## Data-Bound Reference Layer
+
+The Data-Bound Reference Layer enables the association of data with specific shapes in the reference layer based on common attributes.
+
+To use the Data-Bound Reference layer, drag the column containing unique identifiers (which may or may not be location data) to the Location field of the Azure Maps Visual.
+
+Azure Maps matches these identifiers with the corresponding properties in the uploaded spatial file, automatically linking your data to the shapes on the map.
+
+In scenarios with multiple properties, Azure Maps identifies a common property in each shape and compares its value with the selected data column in the Location field. It then uses the property that has the highest number of matches with the selected data column.
+
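+The following is a simplified sketch of that heuristic (illustrative only, not the visual's actual implementation): for each property name, count how many shape values match the Location column, and bind on the property with the most matches.
+
+```cs
+using System.Collections.Generic;
+using System.Linq;
+
+// Pick the shape property whose values match the most entries in the Location column.
+static string PickBindingProperty(
+    IEnumerable<IDictionary<string, string>> shapeProperties,
+    IEnumerable<string> locationColumn)
+{
+    var locations = new HashSet<string>(locationColumn);
+    return shapeProperties
+        .SelectMany(props => props)                  // flatten to (name, value) pairs
+        .Where(p => locations.Contains(p.Value))     // keep pairs that match a data point
+        .GroupBy(p => p.Key)                         // group matches by property name
+        .OrderByDescending(g => g.Count())           // property with the most matches wins
+        .Select(g => g.Key)
+        .FirstOrDefault();
+}
+```
+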
+If one or more shapes in the reference layer can't be automatically mapped to any data point, you can manage these unmapped objects by following these steps:
+
+1. Select the **Format visual** tab in the **Visualizations** pane.
+1. Select **Reference layer**.
+1. Select **Unmapped Objects**.
+1. Select the **Show** toggle to turn the setting on or off. When on, it highlights shapes that aren't mapped to any data points.
+
+Optionally, select the **Use custom colors** toggle to apply custom fill and border colors to unmapped objects, making them visually distinct on the map.
+
+<!--
+### Key matching example
+
+#### Semantic model
+
+| Datapoint | Country | City | Office name |
+|--|--|--|--|
+| Datapoint_1 | US | New York | Office C |
+| Datapoint_2 | US | Seattle | Office A |
+| Datapoint_3 | US | LA | Office B |
+
+#### Reference layer data (take GeoJSON as an example)
+
+```json
+{
+ "type": "FeatureCollection",
+ "features": [
+ {
+ "type": "Feature",
+ "properties": {
+ "name": "Office A",
+ "shape": "Shape_1",
+ "id": "Office A"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [
+ ...
+      ]
+    }
+ },
+ {
+ "type": "Feature",
+ "properties": {
+ "name": "Office B",
+ "shape": "Shape_2",
+ "id": "Office B"
+ },
+ "geometry": {
+ "type": "Point",
+ "coordinates": [
+ ...
+ ]
+ }
+ },
+ {
+ "type": "Feature",
+ "properties": {
+ "name": "Office C",
+ "shape": "Shape_3"
+ },
+ "geometry": {
+ "type": "Point",
+ "coordinates": [
+ ...
+ ]
+ }
+ },
+ {
+ "type": "Feature",
+ "properties": {
+ "name": "Office D",
+ "shape": "Shape_4"
+ },
+ "geometry": {
+ "type": "Polygon",
+ "coordinates": [
+ ...
+ ]
+ }
+ }
+ ]
+}
+```
+
+#### The mapping results
+
+| | Location bucket | Mapping result |
+|--|--|--|
+| Case 1 | Office name | Shape_1 ↔ Datapoint_2 |
+| | | Shape_2 ↔ Datapoint_3 |
+| | | Shape_3 ↔ Datapoint_1 |
+| | | Shape_4 ↔ x (since there's no datapoint with Office name "Office D") |
+| Case 2 | City | Nothing is mapped, since there's no property that contains matched City names. |
+
+Note that the property "id" also has "Office x" values, but it isn't used; the property "name" is used for data mapping instead, because "name" matches three datapoints while "id" matches only two.
+
+-->
+
+## Conditional Formatting
+
+Conditional formatting can be applied to data to dynamically change the appearance of shapes on a map based on the provided data. For instance, gradient colors can visualize various data values such as population density, sales performance, or other metrics. This is a powerful tool for combining spatial and business data to create interactive and visually compelling reports.
+
+There are several ways to set the colors of the shapes. The following table shows the priorities used:
+
+| Priority | Source | Description |
+|-|-|--|
+| 1 | Preset style in spatial files | Color and style as defined in the spatial file |
+| 2 | Unmapped object colors | Custom colors used when the geometry isn't data-bound |
+| 3 | Legend colors | Colors provided by Legend/Series |
+| 4 | Conditional formatting colors | Colors provided by conditional formatting |
+| 5 | Custom formatting colors | User defined custom styles in the Reference Layer options in the formatting pane |
+| 6 | Default colors | Default colors defined in the Azure Maps visual |
+
+> [!TIP]
+>
+> The Azure Maps Power BI Visual can only perform geocoding on valid location data such as geographical coordinates, addresses, or place names. If no valid location data is uploaded, data layers that depend on geocoded locations, such as heat maps or bubble layers, won't display on the map.
+>
+> The Data-Bound Reference Layer will appear on the map as long as the data column contains unique identifiers that match properties in the spatial file, but to ensure correct results, your data column must include valid geographic information.
+ ## Next steps Change how your data is displayed on the map:
azure-netapp-files Manage Cool Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md
Previously updated : 08/20/2024 Last updated : 10/10/2024
The storage with cool access feature provides options for the "coolness period
* A cool-access capacity pool can contain both volumes with cool access enabled and volumes with cool access disabled. * To prevent data retrieval from the cool tier to the hot tier during sequential read operations (for example, antivirus or other file scanning operations), set the cool access retrieval policy to **Default** or **Never**. For more information, see [Enable cool access on a new volume](#enable-cool-access-on-a-new-volume). * After the capacity pool is configured with the option to support cool access volumes, the setting can't be disabled at the _capacity pool_ level. You can turn on or turn off the cool access setting at the _volume_ level anytime. Turning off the cool access setting at the volume level stops further tiering of data.
+* Files moved to the cool tier remain there after you disable cool access on a volume. You must perform an I/O operation on _each_ file to return it to the hot tier.
* You can't use [large volumes](large-volumes-requirements-considerations.md) with cool access. * For the maximum number of volumes supported for cool access per subscription per region, see [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#resource-limits). * Considerations for using cool access with [cross-region replication](cross-region-replication-requirements-considerations.md) and [cross-zone replication](cross-zone-replication-introduction.md):
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/overview.md
Title: Templates overview
description: Describes the benefits using Azure Resource Manager templates (ARM templates) for deployment of resources. Previously updated : 07/05/2024 Last updated : 10/10/2024 # What are ARM templates?
To implement infrastructure as code for your Azure solutions, use Azure Resource
> [!TIP] > We've introduced a new language named [Bicep](../bicep/overview.md) that offers the same capabilities as ARM templates but with a syntax that's easier to use. Each Bicep file is automatically converted to an ARM template during deployment. If you're considering infrastructure as code options, we recommend looking at Bicep. For more information, see [What is Bicep?](../bicep/overview.md).
-To learn about how you can get started with ARM templates, see the following video.
-
-> [!VIDEO https://learn.microsoft.com/Shows/Azure-Enablement/How-and-why-to-learn-about-ARM-templates/player]
- ## Why choose ARM templates? If you're trying to decide between using ARM templates and one of the other infrastructure as code services, consider the following advantages of using templates:
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Last updated 8/20/2024
Microsoft regularly applies important updates to the Azure VMware Solution for new features and software lifecycle management. You should receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](architecture-private-clouds.md#host-maintenance-and-lifecycle-management).
+## October 2024
+
+The VMware Cloud Foundation (VCF) license portability feature on Azure VMware Solution lets you modernize your VMware workloads by bringing your VCF entitlements to Azure VMware Solution and taking advantage of significant cost savings.
+ ## August 2024 All new Azure VMware Solution private clouds are being deployed with VMware vSphere 8.0 version in Azure Commercial. [Learn more](architecture-private-clouds.md#vmware-software-versions)
azure-web-pubsub Socket Io Serverless Function Binding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-serverless-function-binding.md
+
+ Title: Socket.IO Azure Function trigger and binding
+description: This article explains the usage of the Azure Functions trigger and binding for Web PubSub for Socket.IO
+keywords: Socket.IO, Socket.IO on Azure, serverless, Azure Function, multi-node Socket.IO, scaling Socket.IO, socketio, azure socketio
+Last updated: 9/1/2024
+# Socket.IO Azure Function trigger and binding (Preview)
+
+This article explains how to integrate Web PubSub for Socket.IO in serverless mode with Azure Functions.
+
+| Action | Binding Type |
+|--|--|
+| Get the client negotiation result, including the URL and access token | [Input binding](#input-binding) |
+| Triggered by messages from the service | [Trigger binding](#trigger-binding) |
+| Invoke service to send messages or manage clients | [Output binding](#output-binding) |
+
+[Source code](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/webpubsub/Microsoft.Azure.WebJobs.Extensions.WebPubSubForSocketIO) |
+[Package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.WebPubSubForSocketIO) |
+[API reference documentation](/dotnet/api/microsoft.azure.webjobs.extensions.webpubsubforsocketio) |
+[Product documentation](./index.yml) |
+[Samples](https://github.com/Azure/azure-webpubsub/tree/main/sdk/webpubsub-socketio-extension/examples)
+
+> [!IMPORTANT]
+> Azure Function bindings can only integrate with Web PubSub for Socket.IO in serverless mode.
+
+### Authentication and connection strings
+
+To let the extension work with Web PubSub for Socket.IO, you need to provide either an access key-based or an identity-based configuration to authenticate with the service.
+
+#### Access key based configuration
+
+| Configuration Name | Description|
+|--|--|
+|WebPubSubForSocketIOConnectionString| Required. Key based connection string to the service|
+
+You can find the connection string in the **Keys** blade of your Web PubSub for Socket.IO resource in the [Azure portal](https://portal.azure.com/).
+
+For local development, use the `local.settings.json` file to store the connection string. Set `WebPubSubForSocketIOConnectionString` to the connection string copied from the previous step:
+
+```json
+{
+ "Values": {
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+    "WebPubSubForSocketIOConnectionString": "Endpoint=https://<webpubsub-name>.webpubsub.azure.com;AccessKey=<access-key>;Version=1.0;"
+ }
+}
+```
+
+When deployed, use the [application settings](../azure-functions/functions-how-to-use-azure-function-app-settings.md) to set the connection string.
+
+#### Identity based configuration
+
+| Configuration Name | Description|
+|--|--|
+|WebPubSubForSocketIOConnectionString__endpoint| Required. The Endpoint of the service. For example, https://mysocketio.webpubsub.azure.com|
+|WebPubSubForSocketIOConnectionString__credential | Defines how a token should be obtained for the connection. This setting should be set to `managedidentity` if your deployed Azure Function intends to use managed identity authentication. This value is only valid when a managed identity is available in the hosting environment.|
+|WebPubSubForSocketIOConnectionString__clientId | When `credential` is set to `managedidentity`, this property can be set to specify the user-assigned identity to be used when obtaining a token. The property accepts a client ID corresponding to a user-assigned identity assigned to the application. If not specified, the system-assigned identity is used.|
+
+The function binding follows the common properties for identity-based configuration. See [Common properties for identity-based connections](../azure-functions/functions-reference.md?#common-properties-for-identity-based-connections) for properties not mentioned here.
+
+For local development, use the `local.settings.json` file to store the identity-based configuration:
+
+```json
+{
+ "Values": {
+ "AzureWebJobsStorage": "UseDevelopmentStorage=true",
+ "WebPubSubForSocketIOConnectionString__endpoint": "https://<webpubsub-name>.webpubsub.azure.com",
+    "WebPubSubForSocketIOConnectionString__tenant": "<tenant id you're in>"
+ }
+}
+```
+
+If you want to use identity-based configuration and are running in Azure, `AzureWebJobsStorage` should refer to [Connecting to host storage with an identity](../azure-functions/functions-reference.md#connecting-to-host-storage-with-an-identity).
+
+## Input Binding
+
+The Socket.IO input binding generates a `SocketIONegotiationResult` in response to the client negotiation request. When a Socket.IO client tries to connect to the service, it needs to know the `endpoint`, `path`, and `access token` for authentication. It's a common practice to have a server generate this data; this process is called negotiation.
+
+# [C#](#tab/csharp)
+
+```cs
+[FunctionName("SocketIONegotiate")]
+public static IActionResult Negotiate(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get")] HttpRequest req,
+ [SocketIONegotiation(Hub = "hub", UserId = "userId")] SocketIONegotiationResult result)
+{
+ return new OkObjectResult(result);
+}
+```
+
+### Attribute
+
+The attribute for input binding is `[SocketIONegotiation]`.
+
+| Attribute property | Description |
+|--|--|
+| Hub | The hub name that a client needs to connect to. |
+| Connection | The name of the app setting that contains the Socket.IO connection string (defaults to `WebPubSubForSocketIOConnectionString`). |
+| UserId | The userId of the connection. It applies to all sockets in the connection. It becomes the `sub` claim in the generated token. |
+
+# [JavaScript Model v4](#tab/javascript-v4)
+
+```js
+import { app, HttpRequest, HttpResponseInit, InvocationContext, input, } from "@azure/functions";
+
+const socketIONegotiate = input.generic({
+ type: 'socketionegotiation',
+ direction: 'in',
+ name: 'result',
+ hub: 'hub',
+});
+
+export async function negotiate(request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> {
+ let result = context.extraInputs.get(socketIONegotiate);
+ return { jsonBody: result };
+};
+
+// Negotiation
+app.http('negotiate', {
+ methods: ['GET'],
+ authLevel: 'anonymous',
+ extraInputs: [socketIONegotiate],
+ handler: negotiate
+});
+
+```
+
+### Configuration
+
+| Property | Description |
+|--|--|
+| type | Must be `socketionegotiation` |
+| direction | Must be `in` |
+| name | Variable name used in function code for input connection binding object |
+| hub | The hub name that a client needs to connect to. |
+| connection | The name of the app setting that contains the Socket.IO connection string (defaults to `WebPubSubForSocketIOConnectionString`). |
+| userId | The userId of the connection. It applies to all sockets in the connection. It becomes the `sub` claim in the generated token. |
+
+# [Python Model v2](#tab/python-v2)
+
+A function always needs a trigger binding. We use `HttpTrigger` as an example in the code samples.
+
+```python
+import azure.functions as func
+app = func.FunctionApp()
+
+@app.function_name(name="negotiate")
+@app.route(auth_level=func.AuthLevel.ANONYMOUS)
+@app.generic_input_binding(arg_name="negotiate", type="socketionegotiation", hub="hub")
+def negotiate(req: func.HttpRequest, negotiate) -> func.HttpResponse:
+ return func.HttpResponse(negotiate)
+```
+
+### Annotation
+
+| Property | Description |
+|--|--|
+| arg_name | The variable name of the argument in function to represent the input binding. |
+| type | Must be `socketionegotiation` |
+| hub | The hub name that a client needs to connect to. |
+| connection | The name of the app setting that contains the Socket.IO connection string (defaults to `WebPubSubForSocketIOConnectionString`). |
+| userId | The userId of the connection. It applies to all sockets in the connection. It becomes the `sub` claim in the generated token. |
+++
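+
+On the client side, the negotiation result is typically fetched over HTTP and passed to the Socket.IO client. The following is a sketch based on the tutorials later in this article; the `/api/negotiate` route is assumed from the function name used above:
+
+```js
+// Fetch the negotiation result and connect with it (route name assumed).
+async function connectToService() {
+    const negotiateResponse = await fetch(`/api/negotiate`);
+    const negotiateJson = await negotiateResponse.json();
+    return io(negotiateJson.endpoint, {
+        path: negotiateJson.path,
+        query: { access_token: negotiateJson.token }
+    });
+}
+```
+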
+## Trigger Binding
+
+Azure Functions uses the trigger binding to invoke a function that processes events from Web PubSub for Socket.IO.
+
+The trigger binding exposes a specific path under the Azure Functions endpoint. Set this URL as the URL Template of the service (Portal: **Settings** -> **Event handler** -> **URL Template**). In the endpoint pattern, the query part `code=<API_KEY>` is **required** when you're using an Azure Function App, for [security](../azure-functions/function-keys-how-to.md#understand-keys) reasons. You can find the key in the **Azure portal**: after you deploy the function app to Azure, find your function app resource and navigate to **Functions** -> **App keys** -> **System keys** -> **socketio_extension**. This key isn't needed when you're working with local functions.
+
+```
+<Function_App_Endpoint>/runtime/webhooks/socketio?code=<API_KEY>
+```
+
+# [C#](#tab/csharp)
+
+The function triggers on the socket `connect` event.
+
+```cs
+[FunctionName("SocketIOTriggerConnect")]
+public static async Task<SocketIOEventHandlerResponse> Connect(
+ [SocketIOTrigger("hub", "connect")] SocketIOConnectRequest request)
+{
+ return new SocketIOConnectResponse();
+}
+```
+
+The function triggers on the socket `connected` event.
+
+```cs
+[FunctionName("SocketIOTriggerConnected")]
+public static async Task Connected(
+ [SocketIOTrigger("hub", "connected")] SocketIOConnectedRequest request)
+{
+}
+```
+
+The function triggers on the socket `disconnected` event.
+
+```cs
+[FunctionName("SocketIOTriggerDisconnected")]
+public static async Task Disconnected(
+ [SocketIOTrigger("hub", "disconnected")] SocketIODisconnectedRequest request)
+{
+}
+```
+
+The function triggers on normal messages from clients.
+
+```cs
+[FunctionName("SocketIOTriggerMessage")]
+public static async Task NewMessage(
+ [SocketIOTrigger("hub", "new message")] SocketIOMessageRequest request,
+ [SocketIOParameter] string arg)
+{
+}
+```
+
+### Attributes
+
+The attribute for trigger binding is `[SocketIOTrigger]`.
+
+| Attribute property | Description |
+|||
+| Hub | The hub name that a client needs to connect to. |
+| Namespace | The namespace of the socket. Default: "/" |
+| EventName | The event name that the function triggers for. Some event names are predefined: `connect` for the socket connect event, `connected` for the socket connected event, and `disconnected` for the socket disconnected event. Other events are user defined and must match the event name sent by the client side. |
+| ParameterNames | The list of parameter names for the event. The length of the list must match the arguments sent from the client. The names use [binding expressions](../azure-functions/functions-bindings-expressions-patterns.md) and are accessed through function parameters of the same name. |
+
+### Binding Data
+
+`[SocketIOTrigger]` binds some variables to binding data. You can learn more in [Azure Functions binding expression patterns](../azure-functions/functions-bindings-expressions-patterns.md).
+
+#### SocketIOParameterAttribute
+
+`[SocketIOParameter]` is an alternative to `ParameterNames` that simplifies the function definition. For example, the following two definitions have the same effect:
+
+```cs
+[FunctionName("SocketIOTriggerMessage")]
+public static async Task NewMessage(
+ [SocketIOTrigger("hub", "new message")] SocketIOMessageRequest request,
+ [SocketIOParameter] string arg)
+{
+}
+```
+
+```cs
+[FunctionName("SocketIOTriggerMessage")]
+public static async Task NewMessage(
+ [SocketIOTrigger("hub", "new message", ParameterNames = new[] {"arg"})] SocketIOMessageRequest request,
+ string arg)
+{
+}
+```
+
+Note that `ParameterNames` and `[SocketIOParameter]` can't be used together.
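+
+For reference, a Socket.IO client emitting the event bound above could look like the following sketch; the single emitted argument is what the bound `arg` parameter receives:
+
+```js
+// One positional argument maps to the bound parameter named "arg".
+socket.emit("new message", "hello world");
+```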
+
+# [JavaScript Model v4](#tab/javascript-v4)
+
+The function triggers on the socket `connect` event.
+
+```js
+import { app, InvocationContext, trigger } from "@azure/functions";
+
+export async function connect(request: any, context: InvocationContext): Promise<any> {
+ return {};
+}
+
+// Trigger for connect
+app.generic('connect', {
+ trigger: trigger.generic({
+ type: 'socketiotrigger',
+ hub: 'hub',
+ eventName: 'connect'
+ }),
+ handler: connect
+});
+```
+
+The function triggers on the socket `connected` event.
+
+```js
+import { app, InvocationContext, trigger } from "@azure/functions";
+
+export async function connected(request: any, context: InvocationContext): Promise<void> {
+}
+
+// Trigger for connected
+app.generic('connected', {
+ trigger: trigger.generic({
+ type: 'socketiotrigger',
+ hub: 'hub',
+ eventName: 'connected'
+ }),
+ handler: connected
+});
+```
+
+The function triggers on the socket `disconnected` event.
+
+```js
+import { app, InvocationContext, trigger } from "@azure/functions";
+
+export async function disconnected(request: any, context: InvocationContext): Promise<void> {
+}
+
+// Trigger for disconnected
+app.generic('disconnected', {
+ trigger: trigger.generic({
+ type: 'socketiotrigger',
+ hub: 'hub',
+ eventName: 'disconnected'
+ }),
+ handler: disconnected
+});
+```
+
+The function triggers on normal messages from clients.
+
+```js
+import { app, InvocationContext, trigger } from "@azure/functions";
+
+export async function newMessage(request: any, context: InvocationContext): Promise<void> {
+}
+
+// Trigger for new message
+app.generic('newMessage', {
+ trigger: trigger.generic({
+ type: 'socketiotrigger',
+ hub: 'hub',
+ eventName: 'new message'
+ }),
+ handler: newMessage
+});
+```
+
+### Configuration
+
+| Property | Description |
+|||
+| type | Must be `socketiotrigger` |
+| hub | The hub name that a client needs to connect to. |
+| namespace | The namespace of the socket. Default: "/" |
+| eventName | The event name that the function triggers for. Some event names are predefined: `connect` for the socket connect event, `connected` for the socket connected event, and `disconnected` for the socket disconnected event. Other events are user defined and must match the event name sent by the client side. |
+| parameterNames | The list of parameter names for the event. The length of the list must match the arguments sent from the client. The names use [binding expressions](../azure-functions/functions-bindings-expressions-patterns.md) and are accessed through `context.bindings.<name>`. |
+
+# [Python Model v2](#tab/python-v2)
+
+The function triggers on the socket `connect` event.
+
+```python
+import azure.functions as func
+from azure.functions.decorators.core import DataType
+import json
+app = func.FunctionApp()
+
+@app.generic_trigger(arg_name="sio", type="socketiotrigger", data_type=DataType.STRING, hub="hub", eventName="connect")
+def connect(sio: str) -> str:
+ return json.dumps({'statusCode': 200})
+```
+
+The function triggers on the socket `connected` event.
+
+```python
+import azure.functions as func
+from azure.functions.decorators.core import DataType
+import json
+app = func.FunctionApp()
+
+@app.generic_trigger(arg_name="sio", type="socketiotrigger", data_type=DataType.STRING, hub="hub", eventName="connected")
+def connected(sio: str) -> None:
+ print("connected")
+```
+
+The function triggers on the socket `disconnected` event.
+
+```python
+import azure.functions as func
+from azure.functions.decorators.core import DataType
+import json
+app = func.FunctionApp()
+
+@app.generic_trigger(arg_name="sio", type="socketiotrigger", data_type=DataType.STRING, hub="hub", eventName="disconnected")
+def disconnected(sio: str) -> None:
+ print("disconnected")
+```
+
+The function triggers on normal messages from clients.
+
+```python
+import azure.functions as func
+from azure.functions.decorators.core import DataType
+import json
+app = func.FunctionApp()
+
+@app.generic_trigger(arg_name="sio", type="socketiotrigger", data_type=DataType.STRING, hub="hub", eventName="chat")
+def chat(sio: str) -> None:
+    # Handle the message here; Python requires a statement in the function body
+    pass
+```
+
+The function triggers on normal messages from clients and returns an acknowledgment.
+
+```python
+import azure.functions as func
+from azure.functions.decorators.core import DataType
+import json
+app = func.FunctionApp()
+
+@app.generic_trigger(arg_name="sio", type="socketiotrigger", data_type=DataType.STRING, hub="hub", eventName="chat")
+def chat(sio: str) -> str:
+ return json.dumps({'ack': ["param1"]})
+```
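+
+On the client side, the acknowledgment returned by the function is delivered to the callback passed to `emit`. A sketch using the official Socket.IO JavaScript client, assuming the `ack` array maps to the callback arguments:
+
+```js
+// "param1" from the function's ack payload arrives as the callback argument.
+socket.emit('chat', 'hello', (param1) => {
+    console.log('acknowledged with', param1);
+});
+```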
+
+### Annotation
+
+| Property | Description |
+|||
+| arg_name | The variable name of the argument in function to represent the trigger binding. |
+| type | Must be `socketiotrigger` |
+| hub | The hub name that a client needs to connect to. |
+| data_type | Must be `DataType.STRING` |
+| namespace | The namespace of the socket. Default: "/" |
+| eventName | The event name that the function triggers for. Some event names are predefined: `connect` for the socket connect event, `connected` for the socket connected event, and `disconnected` for the socket disconnected event. Other events are user defined and must match the event name sent by the client side. |
+++
+### Request of Trigger Binding
+
+The data structure of the trigger binding arguments varies depending on the event type.
+
+#### Connect
+
+```json
+{
+ "namespace": "",
+ "socketId": "",
+ "claims": {
+ "<claim-type>": [ "<claim-value>" ]
+ },
+ "query": {
+ "<query-key>": [ "<query-value>" ]
+ },
+ "headers":{
+ "<header-name>": [ "<header-value>" ]
+ },
+  "clientCertificates": [
+    {
+      "thumbprint": "",
+      "content": ""
+    }
+  ]
+}
+```
+
+| Property | Description |
+|||
+| namespace | The namespace of the socket. |
+| socketId | The unique identity of the socket. |
+| claims | The claims of the JWT of the client connection. Note that it's not the JWT used when the service requests the function, but the JWT used when the Engine.IO client connects to the service. |
+| query | The query of the client connection. Note that it's not the query used when the service requests the function, but the query used when the Engine.IO client connects to the service. |
+| headers | The headers of the client connection. Note that they're not the headers used when the service requests the function, but the headers used when the Engine.IO client connects to the service. |
+| clientCertificates | The client certificates, if client certificate authentication is enabled. |
+
+#### Connected
+
+```json
+{
+ "namespace": "",
+  "socketId": ""
+}
+```
+
+| Property | Description |
+|||
+| namespace | The namespace of the socket. |
+| socketId | The unique identity of the socket. |
+
+#### Disconnected
+
+```json
+{
+ "namespace": "",
+ "socketId": "",
+ "reason": ""
+}
+```
+
+| Property | Description |
+|||
+| namespace | The namespace of the socket. |
+| socketId | The unique identity of the socket. |
+| reason | The connection close reason description. |
+
+#### Normal events
+
+```json
+{
+ "namespace": "",
+ "socketId": "",
+ "payload": "",
+ "eventName": "",
+ "parameters": []
+}
+```
+
+| Property | Description |
+|||
+| namespace | The namespace of the socket. |
+| socketId | The unique identity of the socket. |
+| payload | The message payload in the Engine.IO protocol. |
+| eventName | The event name of the request. |
+| parameters | List of parameters of the message. |
+
+## Output Binding
+
+The output binding currently supports the following functionality:
+
+- Add a socket to a room
+- Remove a socket from a room
+- Send messages to a socket
+- Send messages to a room
+- Send messages to a namespace
+- Disconnect sockets
+
+# [C#](#tab/csharp)
+
+```cs
+[FunctionName("SocketIOOutput")]
+public static async Task<IActionResult> SocketIOOutput(
+ [SocketIOTrigger("hub", "new message")] SocketIOMessageRequest request,
+ [SocketIO(Hub = "hub")] IAsyncCollector<SocketIOAction> collector)
+{
+ await collector.AddAsync(SocketIOAction.CreateSendToNamespaceAction("new message", new[] { "arguments" }));
+}
+```
+
+### Attribute
+
+The attribute for output binding is `[SocketIO]`.
+
+| Attribute property | Description |
+|||
+| Hub | The hub name that a client needs to connect to. |
+| Connection | The name of the app setting that contains the Socket.IO connection string (defaults to `WebPubSubForSocketIOConnectionString`). |
+
+# [JavaScript Model v4](#tab/javascript-v4)
+
+```js
+import { app, InvocationContext, trigger, output } from "@azure/functions";
+
+const socketio = output.generic({
+ type: 'socketio',
+ hub: 'hub',
+})
+
+export async function newMessage(request: any, context: InvocationContext): Promise<void> {
+ context.extraOutputs.set(socketio, {
+ actionName: 'sendToNamespace',
+ namespace: '/',
+ eventName: 'new message',
+ parameters: [
+ "argument"
+ ]
+ });
+}
+
+// Trigger for new message
+app.generic('newMessage', {
+ trigger: trigger.generic({
+ type: 'socketiotrigger',
+ hub: 'hub',
+ eventName: 'new message'
+ }),
+ extraOutputs: [socketio],
+ handler: newMessage
+});
+```
+
+### Configuration
+
+| Attribute property | Description |
+|||
+| type | Must be `socketio` |
+| hub | The hub name that a client needs to connect to. |
+| connection | The name of the app setting that contains the Socket.IO connection string (defaults to `WebPubSubForSocketIOConnectionString`). |
+
+# [Python Model v2](#tab/python-v2)
+
+A function always needs a trigger binding. We use a timer trigger in the following example.
+
+```python
+import azure.functions as func
+from azure.functions.decorators.core import DataType
+import json
+
+app = func.FunctionApp()
+
+@app.timer_trigger(schedule="* * * * * *", arg_name="myTimer", run_on_startup=False,
+ use_monitor=False)
+@app.generic_output_binding(arg_name="sio", type="socketio", data_type=DataType.STRING, hub="hub")
+def new_message(myTimer: func.TimerRequest,
+ sio: func.Out[str]) -> None:
+ sio.set(json.dumps({
+ 'actionName': 'sendToNamespace',
+ 'namespace': '/',
+ 'eventName': 'update',
+ 'parameters': [
+ "message"
+ ]
+ }))
+```
+
+### Annotation
+
+| Attribute property | Description |
+|||
+| arg_name | The variable name of the argument in function to represent the output binding. |
+| type | Must be `socketio` |
+| data_type | Use `DataType.STRING` |
+| hub | The hub name that a client needs to connect to. |
+| connection | The name of the app setting that contains the Socket.IO connection string (defaults to `WebPubSubForSocketIOConnectionString`). |
+++
+### Actions
+
+Output binding uses actions to perform operations. Currently, we support the following actions:
+
+#### AddSocketToRoomAction
+
+```json
+{
+ "type": "AddSocketToRoom",
+ "socketId": "",
+ "room": ""
+}
+```
+
+#### RemoveSocketFromRoomAction
+
+```json
+{
+ "type": "RemoveSocketFromRoom",
+ "socketId": "",
+ "room": ""
+}
+```
+
+#### SendToNamespaceAction
+
+```json
+{
+ "type": "SendToNamespace",
+ "eventName": "",
+ "parameters": [],
+ "exceptRooms": []
+}
+```
+
+#### SendToRoomsAction
+
+```json
+{
+ "type": "SendToRoom",
+ "eventName": "",
+ "parameters": [],
+ "rooms": [],
+ "exceptRooms": []
+}
+```
+
+#### SendToSocketAction
+
+```json
+{
+ "type": "SendToSocket",
+ "eventName": "",
+ "parameters": [],
+ "socketId": ""
+}
+```
+
+#### DisconnectSocketsAction
+
+```json
+{
+ "type": "DisconnectSockets",
+ "rooms": [],
+ "closeUnderlyingConnection": false
+}
+```
azure-web-pubsub Socket Io Serverless Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-serverless-overview.md
+
+ Title: Overview of Web PubSub for Socket.IO Serverless Mode
+description: Get an overview of Azure's support for the open-source Socket.IO library on serverless mode.
+keywords: Socket.IO, Socket.IO on Azure, serverless, multi-node Socket.IO, scaling Socket.IO, socketio, azure socketio
+ Last updated: 08/05/2024
+# Overview of Socket.IO Serverless Mode (Preview)
+
+Socket.IO is a library that enables real-time, bidirectional, and event-based communication between web clients and servers. Traditionally, Socket.IO operates in a server-client architecture, where the server handles all communication logic and maintains persistent connections.
+
+With the increasing adoption of serverless computing, we're introducing a new mode: Socket.IO Serverless mode. This mode allows Socket.IO to function in a serverless environment, handling communication logic through RESTful APIs or webhooks, offering a scalable, cost-effective, and maintenance-free solution.
+
+## Differences Between Default Mode and Serverless Mode
+| Feature | Default Mode | Serverless Mode |
+||||
+|Architecture|Uses persistent connections for both servers and clients | Clients use persistent connections, but servers use RESTful APIs and webhook event handlers in a stateless manner|
+|SDKs and Languages| The official JavaScript server SDK together with the [extension library for Web PubSub for Socket.IO](https://www.npmjs.com/package/@azure/web-pubsub-socket.io) is required; all compatible clients|No mandatory SDKs or languages. Use the [Socket.IO Function binding](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.WebPubSubForSocketIO) to simplify integration with Azure Functions; all compatible clients|
+|Network accessibility| The server doesn't need to expose network access, as it proactively connects to the service|The server needs to expose network access to the service|
+|Feature support|Most features are supported, with some exceptions: [Unsupported server APIs of Socket.IO](./socketio-supported-server-apis.md)|See the list of supported features: [Supported functionality and RESTful APIs](./socket-io-serverless-protocol.md#supported-functionality-and-restful-apis)|
+
+## Next steps
+
+This article provides you with an overview of the Serverless Mode of Web PubSub for Socket.IO.
+
+> [!div class="nextstepaction"]
+> [Tutorial: Build chat app with Azure Function in Serverless Mode](./socket-io-serverless-tutorial.md)
+>
+> [Serverless Protocols](./socket-io-serverless-protocol.md)
+>
+> [Serverless Function Binding](./socket-io-serverless-function-binding.md)
azure-web-pubsub Socket Io Serverless Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-serverless-protocol.md
+
+ Title: Web PubSub for Socket.IO Serverless Mode Specification
+description: Get the specification of Socket.IO Serverless
+keywords: Socket.IO, Socket.IO on Azure, serverless, multi-node Socket.IO, scaling Socket.IO, socketio, azure socketio
+ Last updated: 08/05/2024
+# Socket.IO Serverless Mode Specification (Preview)
+
+This document describes the details of Serverless Mode support. Because Socket.IO support, including Serverless Mode, depends heavily on the Web PubSub service's existing interface, it introduces a number of transformations and mappings. For most users, we suggest using the Azure Functions bindings together with Serverless Mode. You can walk through the tutorial [Tutorial: Build chat app with Azure Function in Serverless Mode](./socket-io-serverless-tutorial.md).
+
+## Lifetime workflow
+
+In the Socket.IO Serverless mode, the client's connection lifecycle is managed through a combination of persistent connections and webhooks. The workflow ensures that the serverless architecture can efficiently handle real-time communication while maintaining control over the client's connection state.
+
+### Socket Connects
+
+On the client side, you should use a Socket.IO compatible client. In the following client code, we use the [official JavaScript client SDK](https://www.npmjs.com/package/socket.io-client).
+
+Client:
+
+```javascript
+// Initiate a socket
+var socket = io("<service-endpoint>", {
+ path: "/clients/socketio/hubs/<hub-name>",
+ query: { access_token: "<access-token>"}
+});
+
+// handle the connection to the namespace
+socket.on("connect", () => {
+ // ...
+});
+```
+
+Explanations of the previous sample:
+- The `<service-endpoint>` is the `Endpoint` of the service resource.
+- The `<hub-name>` in `path` is a concept in Web PubSub for Socket.IO, which provides isolation between hubs.
+- The `<access-token>` is a JWT used to authenticate with the service. See [Authentication details](#authentication-details) for details.
+
+### Authentication flow
+
+When a client attempts to connect to the service, the process is divided into two distinct steps: establishing an Engine.IO (physical) connection and connecting to a namespace, which is referred to as a socket in Socket.IO terminology. The authentication process differs between these two steps:
+
+1. **Engine.IO connection**: During this step, the service authenticates the client using an access token to determine whether to accept the connection. If the corresponding hub is configured to allow anonymous mode, the Engine.IO connection can proceed without validating the access token. However, for security reasons, we recommend disabling anonymous mode in production environments.
+
+   - The Engine.IO connection URL follows this format. In most cases, it's handled by the Socket.IO client library.
+
+ ```
+ http://<service-endpoint>/clients/socketio/hubs/<hub-name>/?access_token=<access-token>
+ ```
+
+   - The details of the access token can be found in [Authentication details](#authentication-details).
+
+2. **Socket**: After the Engine.IO connection is successfully established, the client SDK sends a payload to connect to a namespace. Once the service receives the socket connect request, the service triggers a connect call to the event handler. The outcome of this step depends on the status code returned by the connect response: a 200 status code indicates that the socket is approved, while a 4xx or 5xx status code results in the socket being rejected.
+
+3. Once a socket is connected, the service triggers a connected call to the event handler. It's an asynchronous call that notifies the event handler that a socket connected successfully.
+
+### Sending messages
+
+Clients can send messages using the following code:
+
+```javascript
+socket.emit("hello", "world");
+```
+
+In this example, the message is sent with the **EventName** "hello", and the subsequent arguments are the parameters. The service triggers a corresponding user event with the same event name. This is a synchronous call, and the response data is returned to the client unchanged. It's common practice to include an acknowledgment in the response body to confirm that the message was received and processed.
+
+For example, the client emits message with ack:
+
+```javascript
+socket.emit("hello", "world", (response) => {
+ console.log(response);
+});
+```
+
+The event handler may respond with a body like `{ type: ACK, namespace: "/", data: ["bar"], id: 13 }` to acknowledge the emission. This response confirms the receipt and handling of the message by the server.
+
+### Socket Disconnects
+
+A client disconnecting from a namespace, or the corresponding Engine.IO connection closing, results in the socket closing. The service triggers a disconnected event for every disconnected socket. It's an asynchronous notification call.
+
+## Authentication Details
+
+The service uses bearer tokens for authentication. There are two main scenarios in which the token is used.
+
+- Connect of Engine.IO connection. The following request is an example.
+
+ ```
+ https://<service-endpoint>/clients/socketio/hubs/<hub-name>/?access_token=<access-token>
+ ```
+
+- RESTful request to send messages or manage connections. The following request is an example.
+
+ ```
+ POST {endpoint}/api/hubs/{hub}/:removeFromGroups?api-version=2024-01-01
+
+ Headers:
+ Authorization: Bearer <token>
+ ```
+
+Token generation falls into two categories: key-based authentication and identity-based authentication.
+
+### **Key based authentication**
+
+ The JWT format:
+
+ **Header**
+
+ ```text
+ {
+ "alg": "HS256",
+ "typ": "JWT"
+ }
+ ```
+
+ **Payload**
+
+ ```text
+ {
+ "nbf": 1726196900,
+ "exp": 1726197200,
+ "iat": 1726196900,
+ "aud": "https://sample.webpubsub.azure.com/api/hubs/hub/groups/0~Lw~/:send?api-version=2024-01-01",
+ "sub": "userId"
+ }
+ ```
+
+ `aud` must be consistent with the URL that you're requesting.
+
+ `sub` is the userId of the connection. It's only available for the Engine.IO connection request.
+
+ **Signature**
+
+ ```text
+ HMACSHA256(base64UrlEncode(header) + "." + base64UrlEncode(payload), <AccessKey>)
+ ```
+
+ The `AccessKey` can be obtained from the service Azure portal or from the Azure CLI:
+
+ ```azcli
+ az webpubsub key show -g <resource-group> -n <resource-name>
+ ```
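+
+ As an illustration, a key-based client token could be signed as follows. This is a minimal sketch assuming the `jsonwebtoken` npm package; the audience must match the URL the token is used for, as described above:
+
+ ```js
+ // Sign a client access token for the Engine.IO connection URL (sketch).
+ const jwt = require("jsonwebtoken");
+
+ const endpoint = "https://<service-endpoint>";
+ const hub = "<hub-name>";
+ const accessKey = "<AccessKey>";
+
+ const token = jwt.sign(
+     { sub: "userId" }, // userId of the connection
+     accessKey,
+     {
+         algorithm: "HS256",
+         audience: `${endpoint}/clients/socketio/hubs/${hub}/`,
+         expiresIn: "1h",
+     }
+ );
+ ```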
+
+### **Identity based authentication**
+
+#### Token for RESTful API
+
+Identity based authentication uses an [`access token`](/entra/identity-platform/access-tokens) signed by Microsoft identity platform.
+
+The application used to request a token must use the resource `https://webpubsub.azure.com` or the scope `https://webpubsub.azure.com/.default`, and it needs to be granted the `Web PubSub Service Owner` role. For more detail, see [Authorize access to Web PubSub resources using Microsoft Entra ID](./concept-azure-ad-authorization.md).
+
+#### Token for Engine.IO connection
+
+Unlike the RESTful API, the Engine.IO connection doesn't use the Microsoft Entra ID token directly. Instead, you must make a RESTful call to the service to get a token and use the returned token as the client's access token.
+
+```Http
+POST {endpoint}/api/hubs/{hub}/:generateToken?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <Microsoft Entra ID Token>
+```
+
+For more optional parameters, see [Generate Client Token](/rest/api/webpubsub/dataplane/web-pub-sub/generate-client-token)
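+
+A minimal sketch of this exchange, assuming the `@azure/identity` package, a runtime with global `fetch` (Node.js 18+), and that the response body contains a `token` field as described in the REST reference:
+
+```js
+// Exchange a Microsoft Entra ID token for a client access token (sketch).
+const { DefaultAzureCredential } = require("@azure/identity");
+
+async function getClientToken(endpoint, hub) {
+    const credential = new DefaultAzureCredential();
+    const entraToken = await credential.getToken("https://webpubsub.azure.com/.default");
+    const res = await fetch(`${endpoint}/api/hubs/${hub}/:generateToken?api-version=2024-01-01`, {
+        method: "POST",
+        headers: { Authorization: `Bearer ${entraToken.token}` }
+    });
+    const body = await res.json();
+    return body.token;
+}
+```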
+
+## Supported functionality and RESTful APIs
+
+A server can use RESTful APIs to manage Socket.IO clients and send messages to them. Because Socket.IO reuses the Web PubSub service RESTful APIs, Socket.IO terminology is mapped to Web PubSub terminology. The following sections elaborate on the mapping.
+
+### Key Concept
+
+#### Namespace, Room, and Group Mapping
+
+Socket.IO has the concepts of namespace and room, while the Web PubSub service doesn't. To bridge them, namespaces and rooms are mapped to groups.
+
+```
+Group name <--> 0~Base64UrlEncoded(namespace)~Base64UrlEncoded(room)
+```
+
+To represent the whole namespace:
+
+```
+Group name <--> 0~Base64UrlEncoded(namespace)~
+```
+
+See the [Base64URL standard](https://base64.guru/standards/base64url), a base64 variant whose encoded output is safe to use in file names and URLs.
+
+For example:
+
+```
+Namespace = /, Room = rm <--> Group = 0~Lw~cm0
+Namespace = /ns, Room = rm <--> Group = 0~L25z~cm0
+Namespace = /ns <--> Group = 0~L25z~
+```
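+
+The mapping can be computed directly. A minimal sketch in Node.js; the `toGroupName` helper name is hypothetical:
+
+```js
+// Map a Socket.IO namespace/room pair to a Web PubSub group name.
+function toGroupName(namespace, room = "") {
+    const enc = (s) => Buffer.from(s, "utf8").toString("base64url");
+    return `0~${enc(namespace)}~${enc(room)}`;
+}
+
+console.log(toGroupName("/", "rm"));   // 0~Lw~cm0
+console.log(toGroupName("/ns", "rm")); // 0~L25z~cm0
+console.log(toGroupName("/ns"));       // 0~L25z~
+```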
+
+#### Connection ID
+
+Connection ID uniquely identifies an Engine.IO connection. Different sockets running on the same Engine.IO connection share the same connection ID.
+
+#### Socket ID
+
+A Socket ID uniquely identifies a socket connection. According to the Socket.IO specification, each socket automatically joins a room with the same name as its Socket ID. For example, a socket with the Socket ID "abc" is automatically placed in the room "abc." This design allows you to send a message specifically to that socket by targeting the corresponding room with the same name as the Socket ID.
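+
+Combined with the group mapping above, sending to a specific socket means sending to the group derived from its Socket ID. Using the hypothetical `toGroupName` helper sketched earlier:
+
+```js
+// Target socket "socketId" in namespace "/ns" via its implicit room.
+console.log(toGroupName("/ns", "socketId")); // 0~L25z~c29ja2V0SWQ
+```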
+
+### Add socket to room
+
+```Http
+POST {endpoint}/api/hubs/{hub}/:addToGroups?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: application/json
+```
+
+#### Request Body
+```json
+{
+ "filter": "An OData filter which target connections satisfy",
+ "groups": [] // Target group
+}
+```
+
+See [Add Connections to Groups](/rest/api/webpubsub/dataplane/web-pub-sub/add-connections-to-groups) for REST details. See [OData filter syntax in the Azure Web PubSub service](./reference-odata-filter.md) for filter details.
+
+#### Example
+
+Add socket `socketId` in namespace `/ns` to room `rm` in hub `myHub`.
+
+```HTTP
+POST {endpoint}/api/hubs/myHub/:addToGroups?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: application/json
+
+Body:
+{
+ "filter": "'0~L25z~c29ja2V0SWQ' in groups",
+  "groups": [ "0~L25z~cm0" ]
+}
+```
+
+### Remove socket from room
+
+```Http
+POST {endpoint}/api/hubs/{hub}/:removeFromGroups?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: application/json
+```
+
+#### Request Body
+```json
+{
+ "filter": "An OData filter which target connections satisfy",
+ "groups": [] // Target group
+}
+```
+
+See [Remove Connections From Groups](/rest/api/webpubsub/dataplane/web-pub-sub/remove-connections-from-groups) for REST details. See [OData filter syntax in the Azure Web PubSub service](./reference-odata-filter.md) for filter details.
+
+#### Example
+
+Remove socket `socketId` in namespace `/ns` from room `rm` in hub `myHub`.
+
+```HTTP
+POST {endpoint}/api/hubs/myHub/:removeFromGroups?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: application/json
+
+Body:
+{
+ "filter": "'0~L25z~c29ja2V0SWQ' in groups",
+  "groups": [ "0~L25z~cm0" ]
+}
+```
+
+### Send to a socket
+
+```Http
+POST {endpoint}/api/hubs/{hub}/groups/{group}/:send?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: text/plain
+```
+
+#### Request Body
+
+```
+Engine.IO serialized payload
+```
+
+See [Send To All](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-all) for REST details. See [Engine.IO Protocol](https://socket.io/docs/v4/engine-io-protocol/) for Engine.IO Protocol details.
+
+#### Example
+
+Send a message with event name `eventName` and arguments `"arg1"`, `"arg2"` to socket `socketId` in namespace `/ns` in hub `myHub`.
+
+The client can handle the message with the following code:
+
+```javascript
+socket.on('eventName', (arg1, arg2) => {
+ // ...
+});
+```
+
+```HTTP
+POST {endpoint}/api/hubs/myHub/groups/0~L25z~c29ja2V0SWQ/:send?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: text/plain
+
+Body:
+42/ns,["eventName","arg1","arg2"]
+```
+
+### Send to a room
+
+```Http
+POST {endpoint}/api/hubs/{hub}/groups/{group}/:send?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: text/plain
+```
+
+#### Request Body
+
+```
+Engine.IO serialized payload
+```
+
+See [Send To All](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-all) for REST details. See [Engine.IO Protocol](https://socket.io/docs/v4/engine-io-protocol/) for Engine.IO Protocol details.
+
+#### Example
+
+Send a message with event name `eventName` and arguments `"arg1"`, `"arg2"` to room `rm` in namespace `/ns` in hub `myHub`.
+
+The client can handle the message with the following code:
+
+```javascript
+socket.on('eventName', (arg1, arg2) => {
+ // ...
+});
+```
+
+```HTTP
+POST {endpoint}/api/hubs/myHub/groups/0~L25z~cm0/:send?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: text/plain
+
+Body:
+42/ns,["eventName","arg1","arg2"]
+```
+
+### Send to namespace
+
+```Http
+POST {endpoint}/api/hubs/{hub}/groups/{group}/:send?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: text/plain
+```
+
+#### Request Body
+
+```
+Engine.IO serialized payload
+```
+
+See [Send To All](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-all) for REST details. See [Engine.IO Protocol](https://socket.io/docs/v4/engine-io-protocol/) for Engine.IO Protocol details.
+
+#### Example
+
+Send a message with event name `eventName` and arguments `"arg1"`, `"arg2"` to namespace `/ns` in hub `myHub`.
+
+The client can handle the message with the following code:
+
+```javascript
+socket.on('eventName', (arg1, arg2) => {
+ // ...
+});
+```
+
+```HTTP
+POST {endpoint}/api/hubs/myHub/groups/0~L25z~/:send?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: text/plain
+
+Body:
+42/ns,["eventName","arg1","arg2"]
+```
+
+### Disconnect socket
+
+```Http
+POST {endpoint}/api/hubs/{hub}/groups/{group}/:send?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: text/plain
+```
+
+#### Request Body
+
+```
+Engine.IO serialized payload for socket disconnection
+```
+
+See [Send To All](/rest/api/webpubsub/dataplane/web-pub-sub/send-to-all) for REST details. See [Engine.IO Protocol](https://socket.io/docs/v4/engine-io-protocol/) for Engine.IO Protocol details. See [Disconnection from a namespace](https://socket.io/docs/v4/socket-io-protocol/#disconnection-from-a-namespace-1) for disconnection payload details.
+
+#### Example
+
+Disconnect socket `socketId` in namespace `/ns` in hub `myHub`.
+
+```HTTP
+POST {endpoint}/api/hubs/myHub/groups/0~L25z~c29ja2V0SWQ/:send?api-version=2024-01-01
+
+Headers:
+Authorization: Bearer <access token>
+Content-Type: text/plain
+
+Body:
+41/ns,
+```
+
+## Event Handler Specification
+
+The event handler may handle `connect`, `connected`, `disconnected`, and other message events from clients. These are REST calls triggered by the service.
+
+### Connect Event
+
+The service triggers the `connect` event when a socket is connecting. The event handler can use the `connect` event to authenticate the socket and accept or reject it.
+
+Request:
+
+```
+POST /upstream HTTP/1.1
+Host: xxxxxx
+WebHook-Request-Origin: xxx.webpubsub.azure.com
+Content-Type: application/json; charset=utf-8
+Content-Length: xxx
+ce-specversion: 1.0
+ce-type: azure.webpubsub.sys.connect
+ce-source: /hubs/{hub}/client/{connectionId}
+ce-id: {eventId}
+ce-time: 2024-01-01T00:00:00Z
+ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
+ce-connectionId: {connectionId}
+ce-hub: {hub}
+ce-eventName: connect
+ce-namespace: {namespace}
+ce-socketId: {socketId}
+
+{
+ "claims": {}, // claims of jwt of client
+ "query": {}, // query string of client connect request
+ "headers": {}, // headers of client connect request
+ "clientCertificates": [
+ {
+ "thumbprint": "ABC"
+ }
+ ]
+}
+```
+
+Successful Response:
+
+```
+HTTP/1.1 200 OK
+```
+
+A non-success status code means the event handler rejects the socket.
+
+### Connected Event
+
+The service triggers the `connected` event when a socket connects successfully.
+
+Request:
+
+```
+POST /upstream HTTP/1.1
+Host: xxxxxx
+WebHook-Request-Origin: xxx.webpubsub.azure.com
+Content-Type: application/json; charset=utf-8
+Content-Length: nnnn
+ce-specversion: 1.0
+ce-type: azure.webpubsub.sys.connected
+ce-source: /hubs/{hub}/client/{connectionId}
+ce-id: {eventId}
+ce-time: 2024-01-01T00:00:00Z
+ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
+ce-connectionId: {connectionId}
+ce-hub: {hub}
+ce-eventName: connected
+ce-namespace: {namespace}
+ce-socketId: {socketId}
+
+{}
+```
+
+Response:
+
+The "Connected" event is asynchronous, so the response doesn't matter.
+
+```
+HTTP/1.1 200 OK
+```
+
+### Disconnected Event
+
+Service triggers `disconnected` event when a socket is disconnected.
+
+Request:
+
+```
+POST /upstream HTTP/1.1
+Host: xxxxxx
+WebHook-Request-Origin: xxx.webpubsub.azure.com
+Content-Type: application/json; charset=utf-8
+Content-Length: xxxx
+ce-specversion: 1.0
+ce-type: azure.webpubsub.sys.disconnected
+ce-source: /hubs/{hub}/client/{connectionId}
+ce-id: {eventId}
+ce-time: 2021-01-01T00:00:00Z
+ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
+ce-connectionId: {connectionId}
+ce-hub: {hub}
+ce-eventName: disconnected
+ce-namespace: {namespace}
+ce-socketId: {socketId}
+
+{
+  "reason": "{Reason}" // Empty if the connection closes normally. A reason implies the close is abnormal.
+}
+```
+
+Response:
+
+`disconnected` is an asynchronous call, so the response doesn't matter.
+
+```
+HTTP/1.1 200 OK
+```
+
+### Message Event
+
+The service triggers a corresponding message event with the same event name.
+
+Request:
+
+```
+POST /upstream HTTP/1.1
+Host: xxxxxx
+WebHook-Request-Origin: xxx.webpubsub.azure.com
+Content-Type: text/plain
+Content-Length: xxxx
+ce-specversion: 1.0
+ce-type: azure.webpubsub.user.message
+ce-source: /hubs/{hub}/client/{connectionId}
+ce-id: {eventId}
+ce-time: 2021-01-01T00:00:00Z
+ce-signature: sha256={connection-id-hash-primary},sha256={connection-id-hash-secondary}
+ce-connectionId: {connectionId}
+ce-hub: {hub}
+ce-eventName: {eventName}
+ce-namespace: {namespace}
+ce-socketId: {socketId}
+
+Engine.IO serialized payload
+```
+
+Response:
+
+The data in the body is sent directly to the corresponding client. It's usually used for the acknowledgment message (when the request contains an AckId).
+
+```
+HTTP/1.1 200 OK
+Content-Type: text/plain
+Content-Length: nnnn
+
+UserResponsePayload (Engine.IO serialized payload)
+```
+
+Or
+
+```
+HTTP/1.1 204 No Content
+```
azure-web-pubsub Socket Io Serverless Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-serverless-quickstart.md
+
+ Title: 'Quickstart: Build chat app with Azure Function in Socket.IO Serverless Mode'
+description: In this article, you get familiar with samples that use Web PubSub for Socket.IO with Azure Functions in Serverless Mode.
+keywords: Socket.IO, serverless, azure function, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, socketio, azure socketio
+ Last updated: 09/01/2024
+# Quickstart: Build chat app with Azure Function in Socket.IO Serverless Mode (Preview)
+
+In this article, you learn how to build a chat app using Web PubSub for Socket.IO in Serverless Mode with Azure Functions. The tutorial guides you through securing your app with identity-based authentication while running online.
+
+The project source uses Bicep to deploy the infrastructure on Azure, and Azure Functions Core Tools to deploy the code to the Function App.
+
+## Prerequisites
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio).
+* [Azure Functions Core Tools](../azure-functions/functions-run-local.md).
+* [.NET 8.0 SDK](https://dotnet.microsoft.com/download)
+* [Node.js 18](https://nodejs.org/)
+## Get the sample code
+
+Find the sample code: [Socket.IO Serverless Sample (TS)](https://github.com/Azure/azure-webpubsub/tree/main/sdk/webpubsub-socketio-extension/examples/chat-serverless-typescript)
+
+```bash
+git clone https://github.com/Azure/azure-webpubsub.git
+cd ./sdk/webpubsub-socketio-extension/examples/chat-serverless-typescript
+```
+
+## Deploy infrastructure
+
+The chat sample needs several services deployed in Azure:
+
+- [Azure Function App](../azure-functions/functions-overview.md)
+- [Web PubSub for Socket.IO](./socketio-overview.md)
+- [Managed Identity](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities): Identity for communicating between services
+
+We use [Bicep](../azure-resource-manager/bicep/overview.md) to deploy the infrastructure. The file is located in the `./infra` folder. Deploy it with the az command:
+
+```azcli
+az deployment sub create -n "<deployment-name>" -l "<deployment-location>" --template-file ./infra/main.bicep --parameters environmentName="<env-name>" location="<location>"
+```
+
+- `<deployment-name>`: The name of the deployment.
+- `<deployment-location>`: The location of the deployment metadata. Note that it's not the location where the resources are deployed.
+- `<env-name>`: A name used as part of the resource group name and resource names.
+- `<location>`: The location of the resources.
+
+### Review of the infrastructure
+
+In the infrastructure release, we deploy an Azure Function App on the Consumption plan, along with the Monitor and Storage Account resources that the Function App requires. We also deploy a Web PubSub for Socket.IO resource in Serverless Mode.
+
+For identity-based authentication, we deploy a user-assigned managed identity, assign it to both the Function App and the Socket.IO resource, and grant it the following permissions:
+
+- **Storage Blob Data Owner role**: Access storage for Function App
+- **Monitoring Metrics Publisher role**: Access monitor for Function App
+- **Web PubSub Service Owner role**: Access Web PubSub for Socket.IO for Function App
+
+As per [Configure your Azure Functions app to use Microsoft Entra sign-in](../app-service/configure-authentication-provider-aad.md), we create a Service Principal. To avoid using a secret for the Service Principal, we use [federated identity credentials](/graph/api/resources/federatedidentitycredentials-overview).
+
+## Deploy sample to the Function App
+
+We prepared a bash script to deploy the sample code to the Function App:
+
+```bash
+# Deploy the project
+./deploy/deploy.sh "<deployment-name>"
+```
+
+### Review the deployment detail
+
+Deploying the sample app takes two steps.
+
+- Publish code to the Function App (Use Azure Functions Core Tools)
+
+ ```bash
+ func extensions sync
+ npm install
+ npm run build
+ func azure functionapp publish <function-app-name>
+ ```
+
+- Configure Web PubSub for Socket.IO to add a hub setting that can send requests to the Function App. Because of a limitation of the Function App's webhook provider, you need to get an extension key populated by the Function App; see [Trigger Binding](./socket-io-serverless-function-binding.md#trigger-binding) for details. Because we use identity-based authentication, in the hub settings you need to set the target resource to the clientId of the Service Principal created earlier.
+
+ ```bash
+ code=$(az functionapp keys list -g <resource-group> -n <function-name> --query systemKeys.socketio_extension -o tsv)
+ az webpubsub hub create -n <socketio-name> -g <resource-group> --hub-name "hub" --event-handler url-template="https://${<function-name>}.azurewebsites.net/runtime/webhooks/socketio?code=${code}" user-event-pattern="*" auth-type="ManagedIdentity" auth-resource="<service-principal-client-id>"
+ ```
+
+### Run Sample App
+
+After the code is deployed, visit the website to try the sample:
+
+```bash
+https://<function-endpoint>/api/index
+```
+
+## Next steps
+Next, you can follow the tutorial to write the app step by step:
+
+> [!div class="nextstepaction"]
+> [Tutorial: Build chat app with Azure Function in Serverless Mode](./socket-io-serverless-tutorial.md)
azure-web-pubsub Socket Io Serverless Tutorial Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-serverless-tutorial-python.md
+
+ Title: 'Tutorial: Publish data to Web PubSub for Socket.IO clients in Serverless Mode in Python with Azure Function'
+description: In this tutorial, you learn how to use Web PubSub for Socket.IO with Azure Function in Serverless Mode to publish data to sockets with a real-time NASDAQ index update application
+keywords: Socket.IO, serverless, azure function, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, socketio, azure socketio
+ Last updated: 09/01/2024
+# Tutorial: Publish data to Socket.IO clients in Serverless Mode in Azure Function with Python (Preview)
+
+This tutorial guides you through publishing data to Socket.IO clients in Serverless Mode in Python by creating a real-time NASDAQ index application integrated with Azure Functions.
+
+Find full code samples that are used in this tutorial:
+
+- [Socket.IO Serverless Python Sample](https://github.com/Azure/azure-webpubsub/tree/main/sdk/webpubsub-socketio-extension/examples/publish-only-python)
+
+> [!IMPORTANT]
+> Default Mode needs a persistent server; you can't integrate Web PubSub for Socket.IO in Default Mode with Azure Functions.
+
+## Prerequisites
+
+> [!div class="checklist"]
+> * An Azure account with an active subscription. If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+> * [Azure Function core tool](../azure-functions/functions-run-local.md)
+> * Some familiarity with the Socket.IO library.
+
+## Create a Web PubSub for Socket.IO resource in Serverless Mode
+
+To create a Web PubSub for Socket.IO, you can use the following [Azure CLI](/cli/azure/install-azure-cli) command:
+
+```azcli
+az webpubsub create -g <resource-group> -n <resource-name> --kind socketio --service-mode serverless --sku Premium_P1
+```
+
+## Create an Azure Function project locally
+
+Follow these steps to initialize a local Azure Functions project.
+
+1. Install the latest [Azure Functions Core Tools](../azure-functions/functions-run-local.md#install-the-azure-functions-core-tools).
+
+1. In the terminal window or from a command prompt, run the following command to create a project in the `SocketIOProject` folder:
+
+ ```bash
+ func init SocketIOProject --worker-runtime python
+ ```
+
+    This command creates a Python-based Functions project. Enter the `SocketIOProject` folder to run the following commands.
+
+1. Currently, the Function Bundle doesn't include Socket.IO Function Binding, so you need to manually add the package.
+
+ 1. To eliminate the function bundle reference, edit the host.json file and remove the following lines.
+
+ ```json
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[4.*, 5.0.0)"
+ }
+ ```
+
+ 1. Run the command:
+
+ ```bash
+ func extensions install -p Microsoft.Azure.WebJobs.Extensions.WebPubSubForSocketIO -v 1.0.0-beta.4
+ ```
+
+1. Replace the content in `function_app.py` with the codes:
+
+ ```python
+ import random
+ import azure.functions as func
+ from azure.functions.decorators.core import DataType
+ from azure.functions import Context
+ import json
+
+ app = func.FunctionApp()
+ current_index= 14000
+
+ @app.timer_trigger(schedule="* * * * * *", arg_name="myTimer", run_on_startup=False,
+ use_monitor=False)
+ @app.generic_output_binding("sio", type="socketio", data_type=DataType.STRING, hub="hub")
+ def publish_data(myTimer: func.TimerRequest,
+ sio: func.Out[str]) -> None:
+ change = round(random.uniform(-10, 10), 2)
+ global current_index
+ current_index = current_index + change
+ sio.set(json.dumps({
+ 'actionName': 'sendToNamespace',
+ 'namespace': '/',
+ 'eventName': 'update',
+ 'parameters': [
+ current_index
+ ]
+ }))
+
+ @app.function_name(name="negotiate")
+ @app.route(auth_level=func.AuthLevel.ANONYMOUS)
+ @app.generic_input_binding("negotiationResult", type="socketionegotiation", hub="hub")
+ def negotiate(req: func.HttpRequest, negotiationResult) -> func.HttpResponse:
+ return func.HttpResponse(negotiationResult)
+
+ @app.function_name(name="index")
+ @app.route(auth_level=func.AuthLevel.ANONYMOUS)
+ def index(req: func.HttpRequest) -> func.HttpResponse:
+        path = './index.html'
+ with open(path, 'rb') as f:
+ return func.HttpResponse(f.read(), mimetype='text/html')
+ ```
+
+ Here's the explanation of these functions:
+
+ - `publish_data`: This function updates the NASDAQ index every second with a random change and broadcasts it to connected clients with Socket.IO Output Binding.
+
+   - `negotiate`: This function returns a negotiation result to the client.
+
+ - `index`: This function returns a static HTML page.
+
+    Then add an `index.html` file with the following content:
+
+ ```html
+ <!DOCTYPE html>
+ <html lang="en">
+ <head>
+ <meta charset="UTF-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>Nasdaq Index</title>
+ <style>
+ /* Reset some default styles */
+ * {
+ margin: 0;
+ padding: 0;
+ box-sizing: border-box;
+ }
+
+ body {
+ font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
+ background: linear-gradient(135deg, #f5f7fa, #c3cfe2);
+ height: 100vh;
+ display: flex;
+ justify-content: center;
+ align-items: center;
+ }
+
+ .container {
+ background-color: white;
+ padding: 40px;
+ border-radius: 12px;
+ box-shadow: 0 4px 6px rgba(0,0,0,0.1);
+ text-align: center;
+ max-width: 300px;
+ width: 100%;
+ }
+
+ .nasdaq-title {
+ font-size: 2em;
+ color: #003087;
+ margin-bottom: 20px;
+ }
+
+ .index-value {
+ font-size: 3em;
+ color: #16a34a;
+ margin-bottom: 30px;
+ transition: color 0.3s ease;
+ }
+
+ .update-button {
+ padding: 10px 20px;
+ font-size: 1em;
+ color: white;
+ background-color: #003087;
+ border: none;
+ border-radius: 6px;
+ cursor: pointer;
+ transition: background-color 0.3s ease;
+ }
+
+ .update-button:hover {
+ background-color: #002070;
+ }
+ </style>
+ </head>
+ <body>
+ <div class="container">
+ <div class="nasdaq-title">STOCK INDEX</div>
+ <div id="nasdaqIndex" class="index-value">14,000.00</div>
+ </div>
+
+ <script src="https://cdn.socket.io/4.7.5/socket.io.min.js"></script>
+ <script>
+ function updateIndexCore(newIndex) {
+ newIndex = parseFloat(newIndex);
+ currentIndex = parseFloat(document.getElementById('nasdaqIndex').innerText.replace(/,/g, ''))
+ change = newIndex - currentIndex;
+ // Update the index value in the DOM
+ document.getElementById('nasdaqIndex').innerText = newIndex.toLocaleString('en-US', {minimumFractionDigits: 2, maximumFractionDigits: 2});
+
+ // Optionally, change the color based on increase or decrease
+ const indexElement = document.getElementById('nasdaqIndex');
+ if (change > 0) {
+ indexElement.style.color = '#16a34a'; // Green for increase
+ } else if (change < 0) {
+ indexElement.style.color = '#dc2626'; // Red for decrease
+ } else {
+ indexElement.style.color = '#16a34a'; // Neutral color
+ }
+ }
+
+ async function init() {
+ const negotiateResponse = await fetch(`/api/negotiate`);
+ if (!negotiateResponse.ok) {
+ console.log("Failed to negotiate, status code =", negotiateResponse.status);
+ return;
+ }
+ const negotiateJson = await negotiateResponse.json();
+ socket = io(negotiateJson.endpoint, {
+ path: negotiateJson.path,
+ query: { access_token: negotiateJson.token}
+ });
+
+ socket.on('update', (index) => {
+ updateIndexCore(index);
+ });
+ }
+
+ init();
+ </script>
+ </body>
+ </html>
+ ```
+
+    The key part in the `index.html`:
+
+ ```javascript
+ async function init() {
+ const negotiateResponse = await fetch(`/api/negotiate`);
+ if (!negotiateResponse.ok) {
+ console.log("Failed to negotiate, status code =", negotiateResponse.status);
+ return;
+ }
+ const negotiateJson = await negotiateResponse.json();
+ socket = io(negotiateJson.endpoint, {
+ path: negotiateJson.path,
+ query: { access_token: negotiateJson.token}
+ });
+
+ socket.on('update', (index) => {
+ updateIndexCore(index);
+ });
+ }
+ ```
+
+    It first negotiates with the Function App to get the URI and the path of the service, then registers a callback to update the index.
+
+## How to run the App locally
+
+After the code is prepared, follow these instructions to run the sample.
+
+### Set up Azure Storage for Azure Function
+
+Azure Functions requires a storage account to work, even when running locally. Choose either of the following two options:
+
+* Run the free [Azurite emulator](../storage/common/storage-use-azurite.md).
+* Use the Azure Storage service. This may incur costs if you continue to use it.
+
+#### [Local emulation](#tab/storage-azurite)
+
+1. Install Azurite:
+
+ ```bash
+ npm install -g azurite
+ ```
+
+1. Start the Azurite storage emulator:
+
+ ```bash
+ azurite -l azurite -d azurite\debug.log
+ ```
+
+1. Make sure `AzureWebJobsStorage` in *local.settings.json* is set to `UseDevelopmentStorage=true`.
+
+#### [Azure Blob Storage](#tab/azure-blob-storage)
+
+Update the project to use the Azure Blob Storage connection string.
+
+```bash
+func settings add AzureWebJobsStorage "<storage-connection-string>"
+```
+++
+### Set up configuration of Web PubSub for Socket.IO
+
+Add the connection string to the Function App:
+
+```bash
+func settings add WebPubSubForSocketIOConnectionString "<connection string>"
+```
+
+### Run Sample App
+
+After the configuration is set, you can run the Function App locally:
+
+```bash
+func start
+```
+
+And visit the webpage at `http://localhost:7071/api/index`.
+
+## Next steps
+Next, you can try to use Bicep to deploy the app online with identity-based authentication:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Build chat app with Azure Function in Socket.IO Serverless Mode](./socket-io-serverless-quickstart.md)
azure-web-pubsub Socket Io Serverless Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socket-io-serverless-tutorial.md
+
+ Title: 'Tutorial: Use Web PubSub for Socket.IO with Azure Function in Serverless Mode'
+description: In this tutorial, you learn how to use Web PubSub for Socket.IO with Azure Function in Serverless Mode.
+keywords: Socket.IO, serverless, azure function, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, socketio, azure socketio
+ Last updated: 09/01/2024
+# Tutorial: Build chat app with Azure Function in Serverless Mode (Preview)
+
+This tutorial walks you through how to create a Web PubSub for Socket.IO service in Serverless Mode and build a chat app integrated with Azure Functions.
+
+Find full code samples that are used in this tutorial:
+
+- [Socket.IO Serverless Sample](https://github.com/Azure/azure-webpubsub/tree/main/sdk/webpubsub-socketio-extension/examples/chat-serverless-javascript)
+
+> [!IMPORTANT]
+> Default Mode needs a persistent server; you can't integrate Web PubSub for Socket.IO in Default Mode with Azure Functions.
+
+## Prerequisites
+
+> [!div class="checklist"]
+> * An Azure account with an active subscription. If you don't have one, you can [create a free account](https://azure.microsoft.com/free/).
+> * [Azure Function core tool](../azure-functions/functions-run-local.md)
+> * Some familiarity with the Socket.IO library.
+
+## Create a Web PubSub for Socket.IO resource in Serverless Mode
+
+To create a Web PubSub for Socket.IO, you can use the following [Azure CLI](/cli/azure/install-azure-cli) command:
+
+```azcli
+az webpubsub create -g <resource-group> -n <resource-name> --kind socketio --service-mode serverless --sku Premium_P1
+```
+
+## Create an Azure Function project locally
+
+Follow these steps to initialize a local Azure Functions project.
+
+1. Install the latest [Azure Functions Core Tools](../azure-functions/functions-run-local.md#install-the-azure-functions-core-tools).
+
+1. In the terminal window or from a command prompt, run the following command to create a project in the `SocketIOProject` folder:
+
+ ```bash
+ func init SocketIOProject --worker-runtime javascript --model V4
+ ```
+
+   This command creates a JavaScript project. Enter the `SocketIOProject` folder to run the following commands.
+
+1. Currently, the Function Bundle doesn't include Socket.IO Function Binding, so you need to manually add the package.
+
+ 1. To eliminate the function bundle reference, edit the host.json file and remove the following lines.
+
+ ```json
+ "extensionBundle": {
+ "id": "Microsoft.Azure.Functions.ExtensionBundle",
+ "version": "[4.*, 5.0.0)"
+ }
+ ```
+
+ 1. Run the command:
+
+ ```bash
+ func extensions install -p Microsoft.Azure.WebJobs.Extensions.WebPubSubForSocketIO -v 1.0.0-beta.4
+ ```
+
+1. Create a function for negotiation. The negotiation function generates endpoints and tokens for clients to access the service.
+
+ ```bash
+ func new --template "Http Trigger" --name negotiate
+ ```
+
+ Open the file in `src/functions/negotiate.js` and replace with the following code:
+
+ ```js
+ const { app, input } = require('@azure/functions');
+
+ const socketIONegotiate = input.generic({
+ type: 'socketionegotiation',
+ direction: 'in',
+ name: 'result',
+ hub: 'hub'
+ });
+
+ async function negotiate(request, context) {
+ let result = context.extraInputs.get(socketIONegotiate);
+ return { jsonBody: result };
+ };
+
+ // Negotiation
+ app.http('negotiate', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ extraInputs: [socketIONegotiate],
+ handler: negotiate
+ });
+ ```
+
+   This step creates a function `negotiate` with an HTTP trigger and a `SocketIONegotiation` input binding, which means you can use an HTTP call to trigger the function and return a negotiation result generated by the `SocketIONegotiation` binding.
+
+1. Create a function for handling messages.
+
+ ```bash
+ func new --template "Http Trigger" --name message
+ ```
+
+ Open the file `src/functions/message.js` and replace with the following code:
+
+ ```js
+ const { app, output, trigger } = require('@azure/functions');
+
+ const socketio = output.generic({
+ type: 'socketio',
+ hub: 'hub',
+ })
+
+ async function chat(request, context) {
+ context.extraOutputs.set(socketio, {
+ actionName: 'sendToNamespace',
+ namespace: '/',
+ eventName: 'new message',
+ parameters: [
+ context.triggerMetadata.socketId,
+ context.triggerMetadata.message
+ ],
+ });
+ }
+
+ // Trigger for new message
+ app.generic('chat', {
+ trigger: trigger.generic({
+ type: 'socketiotrigger',
+ hub: 'hub',
+ eventName: 'chat',
+ parameterNames: ['message'],
+ }),
+ extraOutputs: [socketio],
+ handler: chat
+ });
+ ```
+
+   This function uses `SocketIOTrigger` to be triggered by a Socket.IO client message and uses the `SocketIO` output binding to broadcast messages to the namespace.
+
+1. Create a function that returns an index HTML page.
+
+ 1. Create a folder `public` under `src/`.
+
+   1. Create an HTML file `index.html` with the following content.
+
+ ```html
+ <html>
+
+ <body>
+ <h1>Socket.IO Serverless Sample</h1>
+ <div id="chatPage" class="chat-container">
+ <div class="chat-input">
+ <input type="text" id="chatInput" placeholder="Type your message here...">
+ <button onclick="sendMessage()">Send</button>
+ </div>
+ <div id="chatMessages" class="chat-messages"></div>
+ </div>
+ <script src="https://cdn.socket.io/4.7.5/socket.io.min.js"></script>
+        <script>
+            let socket; // Socket.IO client instance, assigned in initializeSocket()
+
+ function appendMessage(message) {
+ const chatMessages = document.getElementById('chatMessages');
+ const messageElement = document.createElement('div');
+ messageElement.innerText = message;
+ chatMessages.appendChild(messageElement);
+                chatMessages.scrollTop = chatMessages.scrollHeight;
+ }
+
+ function sendMessage() {
+ const message = document.getElementById('chatInput').value;
+ if (message) {
+ document.getElementById('chatInput').value = '';
+ socket.emit('chat', message);
+ }
+ }
+
+ async function initializeSocket() {
+ const negotiateResponse = await fetch(`/api/negotiate`);
+ if (!negotiateResponse.ok) {
+ console.log("Failed to negotiate, status code =", negotiateResponse.status);
+ return;
+ }
+ const negotiateJson = await negotiateResponse.json();
+ socket = io(negotiateJson.endpoint, {
+ path: negotiateJson.path,
+ query: { access_token: negotiateJson.token }
+ });
+
+ socket.on('new message', (socketId, message) => {
+ appendMessage(`${socketId.substring(0,5)}: ${message}`);
+ })
+ }
+
+ initializeSocket();
+ </script>
+ </body>
+
+ </html>
+ ```
+
+   1. To return the HTML page, create a function:
+
+ ```bash
+ func new --template "Http Trigger" --name index
+ ```
+
+   1. Open the file `src/functions/index.js` and replace its contents with the following code:
+
+ ```js
+ const { app } = require('@azure/functions');
+
+ const fs = require('fs').promises;
+ const path = require('path')
+
+ async function index(request, context) {
+ try {
+ context.log(`Http function processed request for url "${request.url}"`);
+
+            const filePath = path.join(__dirname, '../public/index.html');
+ const html = await fs.readFile(filePath);
+ return {
+ body: html,
+ headers: {
+ 'Content-Type': 'text/html'
+ }
+ };
+ } catch (error) {
+ context.log(error);
+ return {
+ status: 500,
+ jsonBody: error
+ }
+ }
+ };
+
+ app.http('index', {
+ methods: ['GET', 'POST'],
+ authLevel: 'anonymous',
+ handler: index
+ });
+
+ ```
+
+## Run the app locally
+
+After the code is prepared, follow these instructions to run the sample.
+
+### Set up Azure Storage for Azure Function
+
+Azure Functions requires a storage account to work, even when running locally. Choose one of the following options:
+
+* Run the free [Azurite emulator](../storage/common/storage-use-azurite.md).
+* Use the Azure Storage service. This may incur costs if you continue to use it.
+
+#### [Local emulation](#tab/storage-azurite)
+
+1. Install Azurite:
+
+   ```bash
+   npm install -g azurite
+   ```
+
+1. Start the Azurite storage emulator:
+
+   ```bash
+   azurite -l azurite -d azurite\debug.log
+   ```
+
+1. Make sure `AzureWebJobsStorage` in *local.settings.json* is set to `UseDevelopmentStorage=true`.
+
+#### [Azure Blob Storage](#tab/azure-blob-storage)
+
+Update the project to use the Azure Blob Storage connection string.
+
+```bash
+func settings add AzureWebJobsStorage "<storage-connection-string>"
+```
+++
+### Set up configuration of Web PubSub for Socket.IO
+
+1. Add the connection string to the Function App:
+
+   ```bash
+   func settings add WebPubSubForSocketIOConnectionString "<connection string>"
+   ```
+
+1. Add hub settings to the Web PubSub for Socket.IO resource:
+
+   ```bash
+   az webpubsub hub create -n <resource name> -g <resource group> --hub-name hub --event-handler url-template="tunnel:///runtime/webhooks/socketio" user-event-pattern="*"
+   ```
+
+You can get the connection string with the following Azure CLI command:
+
+```azcli
+az webpubsub key show -g <resource group> -n <resource name>
+```
+
+The output contains `primaryConnectionString` and `secondaryConnectionString`; either one works.
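+
+If you'd rather capture the value into a shell variable than copy it manually, a small sketch like the following works; it reuses the same command with a JMESPath query, and the variable name is arbitrary:
+
+```bash
+CONNECTION_STRING=$(az webpubsub key show -g <resource group> -n <resource name> --query primaryConnectionString -o tsv)
+```
+
+You can then pass `$CONNECTION_STRING` to the `func settings add` and `awps-tunnel` commands in this article.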
+
+### Set up tunnel
+
+In serverless mode, the service uses webhooks to trigger the function. When you run the app locally, the key challenge is making your local function endpoint reachable by the service.
+
+The easiest way to achieve this is to use the [Tunnel Tool](../azure-web-pubsub/howto-web-pubsub-tunnel-tool.md):
+
+1. Install Tunnel Tool:
+
+ ```bash
+ npm install -g @azure/web-pubsub-tunnel-tool
+ ```
+
+1. Run the tunnel:
+
+ ```bash
+ awps-tunnel run --hub hub --connection "<connection string>" --upstream http://127.0.0.1:7071
+ ```
+
+   The `--upstream` option is the URL that the local Azure Function exposes. The port might differ; check the output when you start the function in the next step.
+
+### Run the sample app
+
+After the tunnel tool is running, you can run the Function App locally:
+
+```bash
+func start
+```
+
+Then visit the webpage at `http://localhost:7071/api/index`.
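+
+To quickly verify that negotiation works before opening the page, you can also call the negotiate endpoint directly; this assumes the default Functions port `7071`:
+
+```bash
+curl http://localhost:7071/api/negotiate
+```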
++
+## Next steps
+Next, you can try to use Bicep to deploy the app online with identity-based authentication:
+
+> [!div class="nextstepaction"]
+> [Quickstart: Build chat app with Azure Function in Socket.IO Serverless Mode](./socket-io-serverless-quickstart.md)
azure-web-pubsub Socketio Supported Server Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/socketio-supported-server-apis.md
Title: Supported server APIs of Socket.IO
+ Title: Unsupported server APIs of Socket.IO
description: This article lists Socket.IO server APIs that are partially supported or unsupported in Web PubSub for Socket.IO. keywords: Socket.IO, Socket.IO on Azure, multi-node Socket.IO, scaling Socket.IO, Socket.IO APIs, socketio, azure socketio
Last updated 07/27/2023
-# Supported server APIs of Socket.IO
+# Unsupported server APIs of Socket.IO
The Socket.IO library provides a set of [server APIs](https://socket.io/docs/v4/server-api/). The following server APIs are partially supported or unsupported by Web PubSub for Socket.IO.
bastion Bastion Connect Vm Ssh Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md
In order to connect to the Linux VM via SSH, you must have the following ports o
1. Use the following sections in this article to configure authentication settings and connect to your VM.
- * [Microsoft Entra ID Authentication](#microsoft-entra-id-authentication-preview)
+ * [Microsoft Entra ID Authentication](#microsoft-entra-id-authentication)
* [Username and password](#password-authentication) * [Password - Azure Key Vault](#password-authenticationazure-key-vault) * [SSH private key from local file](#ssh-private-key-authenticationlocal-file) * [SSH private key - Azure Key Vault](#ssh-private-key-authenticationazure-key-vault)
-## Microsoft Entra ID authentication (Preview)
-
+## Microsoft Entra ID authentication
> [!NOTE]
-> Microsoft Entra ID Authentication support for SSH connections within the portal is in Preview and is currently being rolled out.
+> Microsoft Entra ID authentication for SSH connections within the portal is supported only for Linux VMs.
If the following prerequisites are met, Microsoft Entra ID becomes the default option to connect to your VM. If not, Microsoft Entra ID won't appear as an option.
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
The **Request** trigger creates a manually callable endpoint that handles *only*
> change your storage account and copy your workflow to the new storage account, the URL for > the **Request** trigger also changes to reflect the new storage account. The same workflow has a different URL. +
+### Schema validation for stateless workflows
+
+To enable schema validation for stateless workflows, make sure that the **host.json** file in the logic app resource or project has the following [host setting](../logic-apps/edit-app-settings-host-settings.md#manage-host-settingshostjson):
+
+```json
+"extensions": {
+ "workflow": {
+ "Settings": {
+ "Runtime.StatelessFlowEvaluateTriggerCondition": "true"
+ }
+ }
+}
+```
Now, continue building your workflow by adding another action as the next step. For example, you can respond to the request by [adding a Response action](#add-response), which you can use to return a customized response and is described later in this article.
container-apps Tutorial Ci Cd Runners Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-ci-cd-runners-jobs.md
Previously updated : 06/01/2023 Last updated : 10/10/2024 zone_pivot_groups: container-apps-jobs-self-hosted-ci-cd
The workflow runs on the `ubuntu-latest` GitHub-hosted runner and prints a messa
To run a self-hosted runner, you need to create a personal access token (PAT) in GitHub. Each time a runner starts, the PAT is used to generate a token to register the runner with GitHub. The PAT is also used by the GitHub Actions runner scale rule to monitor the repository's workflow queue and start runners as needed.
+> [!NOTE]
+> Personal Access Tokens (PATs) have an expiration date. Rotate your tokens regularly to keep them valid and maintain uninterrupted service.
+ 1. In GitHub, select your profile picture in the upper-right corner and select **Settings**. 1. Select **Developer settings**.
Create a new agent pool to run the self-hosted runner.
To run a self-hosted runner, you need to create a personal access token (PAT) in Azure DevOps. The PAT is used to authenticate the runner with Azure DevOps. It's also used by the scale rule to determine the number of pending pipeline runs and trigger new job executions.
+> [!NOTE]
+> Personal Access Tokens (PATs) have an expiration date. Rotate your tokens regularly to keep them valid and maintain uninterrupted service.
+ 1. In Azure DevOps, select *User settings* next to your profile picture in the upper-right corner. 1. Select **Personal access tokens**.
cost-management-billing Allocate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/allocate-costs.md
With cost allocation, you can reassign or distribute the costs of shared service
Cost allocation doesn't support purchases, including reservations and savings plans.
-Cost allocation doesn't affect your billing invoice. Billing responsibilities don't change. The primary purpose of cost allocation is to help you charge back costs to others. All chargeback processes happen in your organization outside of Azure. Cost allocation helps you charge back costs by showing them as the get reassigned or distributed.
+Cost allocation doesn't affect your billing invoice. Billing responsibilities don't change. The primary purpose of cost allocation is to help you charge back costs to others. All chargeback processes happen in your organization outside of Azure. Cost allocation helps you charge back costs by showing them as they get reassigned or distributed.
Allocated costs appear in cost analysis. They appear as other items associated with the targeted subscriptions, resource groups, or tags that you specify when you create a cost allocation rule.
cost-management-billing Understand Rhel Reservation Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-rhel-reservation-charges.md
Previously updated : 08/28/2024 Last updated : 10/10/2024 # Understand how the Red Hat Linux Enterprise software reservation plan discount is applied for Azure > [!NOTE]
-> The Red Hat Linux Enterprise software reservation plan and renewal are temporarily unavailable. Disregard any renewal emails until the plan is available.
+> The Red Hat Linux Enterprise software reservation plans and renewals are temporarily unavailable due to pending updates to reservation SKUs and pricing. You can disregard any renewal emails until the new plan is available. In the meantime, contact your Microsoft or Red Hat sales representative to ask about other options.
When you buy a Red Hat Linux Enterprise software plan, you get a discount on the cost of running Red Hat software on Azure virtual machines. This article explains how the discount is applied to your Red Hat software costs.
defender-for-iot Configure Sensor Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md
Define a new setting whenever you want to define a specific configuration for on
|Tab name |Description | |||
- |**Basics** | Select the subscription where you want to apply your setting, and your [setting type](#sensor-setting-reference). <br><br>Enter a meaningful name and an optional description for your setting. |
- |**Setting** | Define the values for your selected setting type.<br>For details about the options available for each setting type, find your selected setting type in the [Sensor setting reference](#sensor-setting-reference) below. |
+ |**Basics** | Select the subscription where you want to apply your setting, and your [setting type](#add-sensor-settings). <br><br>Enter a meaningful name and an optional description for your setting. |
+ |**Setting** | Define the values for your selected setting type.<br>For details about the options available for each setting type, find your selected setting type in the [Sensor setting reference](#add-sensor-settings) below. |
|**Apply** | Use the **Select sites**, **Select zones**, and **Select sensors** dropdown menus to define where you want to apply your setting. <br><br>**Important**: Selecting a site or zone applies the setting to all connected OT sensors, including any OT sensors added to the site or zone later on. <br>If you select to apply your settings to an entire site, you don't also need to select its zones or sensors. | |**Review and create** | Check the selections made for your setting. <br><br>If your new setting replaces an existing setting, a :::image type="icon" source="media/how-to-manage-individual-sensors/warning-icon.png" border="false"::: warning is shown to indicate the existing setting.<br><br>When you're satisfied with the setting's configuration, select **Create**. |
If you're in a situation where the OT sensor is disconnected from Azure, and you
Continue by updating the relevant setting directly on the OT network sensor. For more information, see [Manage individual sensors](how-to-manage-individual-sensors.md).
-## Sensor setting reference
+## Add sensor settings
-Use the following sections to learn more about the individual OT sensor settings available from the Azure portal:
+Use the following sections to learn more about the individual OT sensor settings available from the Azure portal.
+
+The **Type** settings are:
+
+- [Active Directory](#active-directory)
+- [Bandwidth cap](#bandwidth-cap)
+- [NTP](#ntp)
+- [Local subnets](#local-subnets)
+- [VLAN naming](#vlan-naming)
+- [Public addresses](#public-addresses)
+
+To add a new setting **Type**, select **Sites and sensors** > **Sensor settings**, and then select the setting from the **Type** drop-down list. For example:
+ ### Active Directory
To configure an NTP server for your sensor from the Azure portal, define an IP/D
### Local subnets
-To focus the Azure device inventory on devices that are in your OT scope, you need to manually edit the subnet list to include only the locally monitored subnets that are in your OT scope.
+To focus the Azure device inventory on devices that are in your OT scope, you need to manually edit the subnet list to include only the locally monitored subnets that are in your OT scope.
Subnets in the subnet list are automatically configured as ICS subnets, which means that Defender for IoT recognizes these subnets as OT networks. You can edit this setting when you [configure the subnets](#configure-subnets-in-the-azure-portal).
Once the subnets are configured, the network location of the devices is shown in
#### Configure subnets in the Azure portal
-1. In the Azure portal, go to **Sites and sensors** > **Sensor settings**.
- 1. Under **Local subnets**, review the configured subnets. To focus the device inventory and view local devices in the inventory, delete any subnets that are not in your IoT/OT scope by selecting the options menu (...) on any subnet you want to delete. 1. To modify additional settings, select any subnet and then select **Edit** for the following options:
Once the subnets are configured, the network location of the devices is shown in
- Select **Import subnets** to import a comma-separated list of subnet IP addresses and masks. Select **Export subnets** to export a list of currently configured data, or **Clear all** to start from scratch. - Enter values in the **IP Address**, **Mask**, and **Name** fields to add subnet details manually. Select **Add subnet** to add additional subnets as needed.
-
+ - **ICS Subnet** is on by default, which means that Defender for IoT recognizes the subnet as an OT network. To mark a subnet as non-ICS, toggle off **ICS Subnet**. ### VLAN naming
To define a VLAN for your OT sensor, enter the VLAN ID and a meaningful name.
Select **Add VLAN** to add more VLANs as needed.
+### Public addresses
+
+Add public IP addresses that are used internally and shouldn't be flagged as suspicious or tracked.
+
+1. In the **Settings** tab, enter the **IP address** and **Mask**.
+
+ :::image type="content" source="media/configure-sensor-settings-portal/sensor-settings-ip-addresses.png" alt-text="The screenshot shows the Settings tab for adding public addresses to the sensor settings.":::
+
+1. Select **Next**.
+1. In the **Apply** tab, select sites, and toggle the **Add selection by specific zone/sensor** to optionally apply the IP addresses to specific zones and sensors.
+1. Select **Next**.
+1. Review the details and select **Create** to add the address to the public addresses list.
+ ## Next steps > [!div class="nextstepaction"]
defender-for-iot Configure Mirror Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/traffic-mirroring/configure-mirror-hyper-v.md
Before you start:
- Ensure that the data port SPAN configuration isn't configured with an IP address.
-## Configure a traffic mirroring port with Hyper-V
+## Create new Hyper-V virtual switch to relay the mirrored traffic into the VM
+
+### Create a new virtual switch with PowerShell
+
+```PowerShell
+New-VMSwitch -Name vSwitch_Span -NetAdapterName Ethernet -AllowManagementOS:$true
+```
+Where:
+
+| Parameter | Description |
+|--|--|
+|**vSwitch_Span** |Newly added SPAN virtual switch name |
+|**Ethernet** |Physical adapter name |
+
+Learn how to [Create and configure a virtual switch with Hyper-V](/windows-server/virtualization/hyper-v/get-started/create-a-virtual-switch-for-hyper-v-virtual-machines?tabs=powershell#create-a-virtual-switch)
+
+### Create a new virtual switch with Hyper-V Manager
1. Open the Virtual Switch Manager.
Before you start:
## Attach a SPAN Virtual Interface to the virtual switch
-Use Windows PowerShell or Hyper-V Manager to attach a SPAN virtual interface to the virtual switch you'd [created earlier](#configure-a-traffic-mirroring-port-with-hyper-v).
+Use Windows PowerShell or Hyper-V Manager to attach a SPAN virtual interface to the virtual switch you [created earlier](#create-new-hyper-v-virtual-switch-to-relay-the-mirrored-traffic-into-the-vm).
If you use PowerShell, define the name of the newly added adapter hardware as `Monitor`. If you use Hyper-V Manager, the name of the newly added adapter hardware is set to `Network Adapter`. ### Attach a SPAN virtual interface to the virtual switch with PowerShell
-1. Select the newly added SPAN virtual switch you'd configured [earlier](#configure-a-traffic-mirroring-port-with-hyper-v), and run the following command to add a new network adapter:
+1. Select the newly added SPAN virtual switch you [created earlier](#create-new-hyper-v-virtual-switch-to-relay-the-mirrored-traffic-into-the-vm), and run the following command to add a new network adapter:
```powershell ADD-VMNetworkAdapter -VMName VK-C1000V-LongRunning-650 -Name Monitor -SwitchName vSwitch_Span
If you use PowerShell, define the name of the newly added adapter hardware as `M
:::image type="content" source="../media/tutorial-install-components/vswitch-span.png" alt-text="Screenshot of selecting the following options on the virtual switch screen.":::
-1. In the **Hardware** list, under the **Network Adapter** drop-down list, select **Hardware Acceleration** and clear the **Virtual Machine Queue** option for the monitoring network interface.
- 1. In the **Hardware** list, under the **Network Adapter** drop-down list, select **Advanced Features**. Under the **Port Mirroring** section, select **Destination** as the mirroring mode for the new virtual interface. :::image type="content" source="../media/tutorial-install-components/destination.png" alt-text="Screenshot of the selections needed to configure mirroring mode."::: 1. Select **OK**.
-## Turn on Microsoft NDIS capture extensions
+## Turn on Microsoft NDIS capture extensions with PowerShell
+
+Turn on support for [Microsoft NDIS Capture Extensions](/windows-hardware/drivers/network/capturing-extensions) for the virtual switch you [created earlier](#create-new-hyper-v-virtual-switch-to-relay-the-mirrored-traffic-into-the-vm).
+
+**To enable Microsoft NDIS capture extensions for your new virtual switch**:
+
+```PowerShell
+Enable-VMSwitchExtension -VMSwitchName vSwitch_Span -Name "Microsoft NDIS Capture"
+```
+
+## Turn on Microsoft NDIS capture extensions with Hyper-V Manager
-Turn on support for [Microsoft NDIS Capture Extensions](/windows-hardware/drivers/network/capturing-extensions) for the virtual switch you'd [created earlier](#configure-a-traffic-mirroring-port-with-hyper-v).
+Turn on support for [Microsoft NDIS Capture Extensions](/windows-hardware/drivers/network/capturing-extensions) for the virtual switch you [created earlier](#create-new-hyper-v-virtual-switch-to-relay-the-mirrored-traffic-into-the-vm).
**To enable Microsoft NDIS capture extensions for your new virtual switch**:
Turn on support for [Microsoft NDIS Capture Extensions](/windows-hardware/driver
## Configure the switch's mirroring mode
-Configure the mirroring mode on the virtual switch you'd [created earlier](#configure-a-traffic-mirroring-port-with-hyper-v) so that the external port is defined as the mirroring source. This includes configuring the Hyper-V virtual switch (vSwitch_Span) to forward any traffic that comes to the external source port to a virtual network adapter configured as the destination.
+Configure the mirroring mode on the virtual switch you [created earlier](#create-new-hyper-v-virtual-switch-to-relay-the-mirrored-traffic-into-the-vm) so that the external port is defined as the mirroring source. This includes configuring the Hyper-V virtual switch (vSwitch_Span) to forward any traffic that comes to the external source port to a virtual network adapter configured as the destination.
To set the virtual switch's external port as the source mirror mode, run:
Where:
| Parameter | Description | |--|--|
-|**vSwitch_Span** | Name of the virtual switch you'd [created earlier](#configure-a-traffic-mirroring-port-with-hyper-v) |
+|**vSwitch_Span** | Name of the virtual switch you [created earlier](#create-new-hyper-v-virtual-switch-to-relay-the-mirrored-traffic-into-the-vm) |
|**MonitorMode=2** | Source | |**MonitorMode=1** | Destination | |**MonitorMode=0** | None |
Get-VMSwitchExtensionPortFeature -FeatureName "Ethernet Switch Port Security Set
|--|--| |**vSwitch_Span** | Newly added SPAN virtual switch name |
+## Configure VLAN settings for the Monitor adapter (if needed)
+
+If the Hyper-V server is located in a different VLAN than the VLAN from which the mirrored traffic originates, set the Monitor adapter to accept traffic from the mirrored VLANs.
+
+Use this PowerShell command to enable the Monitor adapter to accept the monitored traffic from different VLANs:
+```PowerShell
+Set-VMNetworkAdapterVlan -VMName VK-C1000V-LongRunning-650 -VMNetworkAdapterName Monitor -Trunk -AllowedVlanIdList 1010-1020 -NativeVlanId 10
+```
+Where:
+
+| Parameter | Description |
+|--|--|
+|**VK-C1000V-LongRunning-650** | Name of the VM that receives the mirrored traffic |
+|**1010-1020** |VLAN range from which IoT traffic is mirrored |
+|**10** |Native VLAN ID of the environment |
+
+Learn more about the [Set-VMNetworkAdapterVlan](/powershell/module/hyper-v/set-vmnetworkadaptervlan) PowerShell cmdlet.
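+
+To confirm the trunk configuration took effect, you can query the adapter afterward; this sketch reuses the VM and adapter names from the command above:
+
+```PowerShell
+Get-VMNetworkAdapterVlan -VMName VK-C1000V-LongRunning-650 -VMNetworkAdapterName Monitor
+```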
+ [!INCLUDE [validate-traffic-mirroring](../includes/validate-traffic-mirroring.md)] ## Next steps
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
|Service area |Updates | |||
-| **OT networks** | - [Add wildcards to allowlist domain names](#add-wildcards-allowlist-domain-names)<br> - [Added protocol](#added-protocol) <br> - [Improved OT sensor onboarding](#improved-ot-sensor-onboarding) |
+| **OT networks** | - [Add wildcards to allowlist domain names](#add-wildcards-allowlist-domain-names)<br> - [Added protocol](#added-protocol) <br> - [New sensor setting type Public addresses](#new-sensor-setting-type-public-addresses) <br> - [Improved OT sensor onboarding](#improved-ot-sensor-onboarding) |
### Add wildcards allowlist domain names
When adding domain names to the FQDN allowlist use the `*` wildcard to include a
We now support the OCPI protocol. See [the updated protocol list](concept-supported-protocols.md#supported-protocols-for-ot-device-discovery).
+### New sensor setting type Public addresses
+
+We're adding the **Public addresses** type to the sensor settings, which allows you to exclude public IP addresses that are used internally and shouldn't be tracked. For more information, see [add sensor settings](configure-sensor-settings-portal.md#add-sensor-settings).
+ ### Improved OT sensor onboarding If there are connection problems, during sensor onboarding, between the OT sensor and the Azure portal at the configuration stage, the process can't be completed until the connection problem is solved.
For more information, see:
Now you can configure Active Directory and NTP settings for your OT sensors remotely from the **Sites and sensors** page in the Azure portal. These settings are available for OT sensor versions 22.3.x and higher.
-For more information, see [Sensor setting reference](configure-sensor-settings-portal.md#sensor-setting-reference).
+For more information, see [Sensor setting reference](configure-sensor-settings-portal.md#add-sensor-settings).
## April 2023
event-grid Handler Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/handler-functions.md
We recommend that you use the first approach (Event Grid trigger) as it has the
- Event Grid automatically adjusts the rate at which events are delivered to a function triggered by an Event Grid event based on the perceived rate at which the function can process events. This rate match feature averts delivery errors that stem from the inability of a function to process events as the functionΓÇÖs event processing rate can vary over time. To improve efficiency at high throughput, enable batching on the event subscription. For more information, see [Enable batching](#enable-batching). > [!NOTE]
-> - When you an Event Grid trigger to add an event subscription using an Azure function, Event Grid fetches the access key for the target function using Event Grid service principal's credentials. The permissions are granted to Event Grid when you register the Event Grid resource provider in their Azure subscription.
+> - When you use an Event Grid trigger to add an event subscription using an Azure function, Event Grid fetches the access key for the target function using the Event Grid service principal's credentials. The permissions are granted to Event Grid when you register the Event Grid resource provider in your Azure subscription.
> - If you protect your Azure function with an **Microsoft Entra ID** application, you'll have to take the generic webhook approach using the HTTP trigger. Use the Azure function endpoint as a webhook URL when adding the subscription. ## Tutorials
expressroute Monitor Expressroute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/monitor-expressroute-reference.md
Dimension for Express Direct:
[!INCLUDE [horz-monitor-ref-logs-tables](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-logs-tables.md)]
+> [!NOTE]
+> Logs in Azure Log Analytics may take up to 24 hours to appear.
+ ### ExpressRoute Microsoft.Network/expressRouteCircuits - [AzureActivity](/azure/azure-monitor/reference/tables/AzureActivity#columns)
iot-operations Howto Configure Fabric Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-fabric-endpoint.md
To send data to Microsoft Fabric OneLake in Azure IoT Operations Preview, you ca
- **Microsoft Fabric OneLake**. See the following steps to create a workspace and lakehouse. - [Create a workspace](/fabric/get-started/create-workspaces). The default *my workspace* isn't supported. - [Create a lakehouse](/fabric/onelake/create-lakehouse-onelake).
+ - If shown, ensure *Lakehouse schemas (Public Preview)* is **unchecked**.
+ - Make note of the workspace and lakehouse names.
## Create a Microsoft Fabric OneLake dataflow endpoint
For more information about dataflow destination settings, see [Create a dataflow
The following authentication methods are available for Microsoft Fabric OneLake dataflow endpoints. For more information about enabling secure settings by configuring an Azure Key Vault and enabling workload identities, see [Enable secure settings in Azure IoT Operations Preview deployment](../deploy-iot-ops/howto-enable-secure-settings.md).
+Before you create the dataflow endpoint, assign the workspace *Contributor* role to the IoT Operations extension. This role grants permission to write to the Fabric lakehouse.
+
+![Screenshot of IoT Operations extension name to grant workspace access to.](media/howto-configure-fabric-endpoint/extension-name.png)
+
+To learn more, see [Give access to a workspace](/fabric/get-started/give-access-workspaces).
+ #### System-assigned managed identity Using the system-assigned managed identity is the recommended authentication method for Azure IoT Operations. Azure IoT Operations creates the managed identity automatically and assigns it to the Azure Arc-enabled Kubernetes cluster. It eliminates the need for secret management and allows for seamless authentication with Azure Data Explorer.
-Before you create the dataflow endpoint, assign a role to the managed identity that grants permission to write to the Fabric lakehouse. To learn more, see [Give access to a workspace](/fabric/get-started/give-access-workspaces).
# [Kubernetes](#tab/kubernetes)
iot-operations Howto Configure Kafka Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/connect-to-cloud/howto-configure-kafka-endpoint.md
To configure a dataflow endpoint for a Kafka endpoint, we suggest using the mana
| Setting | Description | | -- | - | | Name | The name of the dataflow endpoint. |
- | Host | The hostname of the Kafka broker in the format `<HOST>.servicebus.windows.net`. |
+ | Host | The hostname of the Kafka broker in the format `<HOST>.servicebus.windows.net:9093`. Include port number `9093` in the host setting for Event Hubs. |
| Authentication method| The method used for authentication. Choose *System assigned managed identity*, *User assigned managed identity*, or *SASL*. | | SASL type | The type of SASL authentication. Choose *Plain*, *ScramSha256*, or *ScramSha512*. Required if using *SASL*. | | Synced secret name | The name of the secret. Required if using *SASL* or *X509*. |
To configure a dataflow endpoint for non-Event-Hub Kafka brokers, set the host,
| Setting | Description | | -- | - | | Name | The name of the dataflow endpoint. |
- | Host | The hostname of the Kafka broker in the format `<HOST>.servicebus.windows.net`. |
+   | Host | The hostname of the Kafka broker in the format `<Kafka-broker-host>:<port>`. Include the port number in the host setting. |
| Authentication method| The method used for authentication. Choose *System assigned managed identity*, *User assigned managed identity*, *SASL*, or *X509 certificate*. | | SASL type | The type of SASL authentication. Choose *Plain*, *ScramSha256*, or *ScramSha512*. Required if using *SASL*. | | Synced secret name | The name of the secret. Required if using *SASL* or *X509*. |
iot-operations Howto Prepare Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md
Last updated 10/02/2024
An Azure Arc-enabled Kubernetes cluster is a prerequisite for deploying Azure IoT Operations Preview. This article describes how to prepare a cluster before you [Deploy Azure IoT Operations Preview to an Arc-enabled Kubernetes cluster](howto-deploy-iot-operations.md). This article includes guidance for both Ubuntu and Windows. > [!TIP]
-> The steps in this article prepare your cluster for a secure settings deployment, which is a longer but production-ready process. If you want to deploy Azure IoT Operations quickly and run a sample workload with only test settings, see the [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md) instead.
+> The steps in this article prepare your cluster for a secure settings deployment, which is a longer but production-ready process. If you want to deploy Azure IoT Operations quickly and run a sample workload with only test settings, see the [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md) instead.
> > For more information about test settings and secure settings, see [Deployment details > Choose your features](./overview-deploy.md#choose-your-features).
The [AksEdgeQuickStartForAio.ps1](https://github.com/Azure/AKS-Edge/blob/main/to
1. Open an elevated PowerShell window and change the directory to a working folder.
-1. Get the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses in your tenant.
+1. Get the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses in your tenant. Run the following command exactly as written, without changing the GUID value.
```azurecli az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv
To connect your cluster to Azure Arc:
export CLUSTER_NAME=<NEW_CLUSTER_NAME> ```
+1. After signing in, Azure CLI displays all of your subscriptions and indicates your default subscription with an asterisk `*`. To continue with your default subscription, select `Enter`. Otherwise, type the number of the Azure subscription that you want to use.
+
+1. Register the required resource providers in your subscription:
+
+ >[!NOTE]
+ >This step only needs to be run once per subscription. To register resource providers, you need permission to do the `/register/action` operation, which is included in subscription Contributor and Owner roles. For more information, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
+
+ ```azurecli
+ az provider register -n "Microsoft.ExtendedLocation"
+ az provider register -n "Microsoft.Kubernetes"
+ az provider register -n "Microsoft.KubernetesConfiguration"
+ az provider register -n "Microsoft.IoTOperations"
+ az provider register -n "Microsoft.DeviceRegistry"
+ az provider register -n "Microsoft.SecretSyncController"
+ ```
+
+1. Use the [az group create](/cli/azure/group#az-group-create) command to create a resource group in your Azure subscription to store all the resources:
+
+ ```azurecli
+ az group create --location $LOCATION --resource-group $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID
+ ```
+1. Remove the existing `connectedk8s` CLI extension, if any:
+
+ ```azurecli
+ az extension remove --name connectedk8s
+ ```
+
+1. Download and install a preview version of the `connectedk8s` extension for Azure CLI.
+
+ ```azurecli
+ curl -L -o connectedk8s-1.10.0-py2.py3-none-any.whl https://github.com/AzureArcForKubernetes/azure-cli-extensions/raw/refs/heads/connectedk8s/public/cli-extensions/connectedk8s-1.10.0-py2.py3-none-any.whl
+ az extension add --upgrade --source connectedk8s-1.10.0-py2.py3-none-any.whl
+ ```
+
+1. Use the [az connectedk8s connect](/cli/azure/connectedk8s#az-connectedk8s-connect) command to Arc-enable your Kubernetes cluster and manage it as part of your Azure resource group:
+
+ ```azurecli
+ az connectedk8s connect --name $CLUSTER_NAME -l $LOCATION --resource-group $RESOURCE_GROUP --subscription $SUBSCRIPTION_ID --enable-oidc-issuer --enable-workload-identity
+ ```
+
+1. Get the cluster's issuer URL.
+
+ ```azurecli
+ az connectedk8s show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query oidcIssuerProfile.issuerUrl --output tsv
+ ```
+
+ Save the output of this command to use in the next steps.
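+
+   Optionally, capture the issuer URL directly into an environment variable so you can echo it when editing the configuration file in the next steps; this sketch reuses the same command:
+
+   ```azurecli
+   export SERVICE_ACCOUNT_ISSUER=$(az connectedk8s show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query oidcIssuerProfile.issuerUrl --output tsv)
+   echo $SERVICE_ACCOUNT_ISSUER
+   ```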
+
+1. Create a k3s config file.
+
+ ```bash
+ sudo nano /etc/rancher/k3s/config.yaml
+ ```
+
+1. Add the following content to the `config.yaml` file, replacing the `<SERVICE_ACCOUNT_ISSUER>` placeholder with your cluster's issuer URL.
+
+ ```yml
+ kube-apiserver-arg:
+ - service-account-issuer=<SERVICE_ACCOUNT_ISSUER>
+ - service-account-max-token-expiration=24h
+ ```
+
+1. Save the file and exit the nano editor.
+
+1. Get the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses in your tenant and save it as an environment variable. Run the following command exactly as written, without changing the GUID value.
+
+ ```azurecli
+ export OBJECT_ID=$(az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv)
+ ```
+
+1. Use the [az connectedk8s enable-features](/cli/azure/connectedk8s#az-connectedk8s-enable-features) command to enable custom location support on your cluster. This command uses the `objectId` of the Microsoft Entra ID application that the Azure Arc service uses. Run this command on the machine where you deployed the Kubernetes cluster:
+
+ ```azurecli
+ az connectedk8s enable-features -n $CLUSTER_NAME -g $RESOURCE_GROUP --custom-locations-oid $OBJECT_ID --features cluster-connect custom-locations
+ ```
+
+1. Restart K3s.
+
+ ```bash
+   sudo systemctl restart k3s
+ ```
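+
+1. Optionally, confirm that the API server serves the new issuer configuration after the restart; this sketch assumes the kubectl bundled with K3s and should return the OIDC discovery document:
+
+   ```bash
+   sudo k3s kubectl get --raw /.well-known/openid-configuration
+   ```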
iot-operations Quickstart Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started-end-to-end-sample/quickstart-deploy.md
To connect your cluster to Azure Arc:
az provider register -n "Microsoft.KubernetesConfiguration" az provider register -n "Microsoft.IoTOperations" az provider register -n "Microsoft.DeviceRegistry"
+ az provider register -n "Microsoft.SecretSyncController"
``` 1. Use the [az group create](/cli/azure/group#az-group-create) command to create a resource group in your Azure subscription to store all the resources:
To connect your cluster to Azure Arc:
>[!TIP] >The value of `$CLUSTER_NAME` is automatically set to the name of your codespace. Replace the environment variable if you want to use a different name.
-1. Get the `objectId` of the Microsoft Entra ID application that the Azure Arc service in your tenant uses and save it as an environment variable.
+1. Get the `objectId` of the Microsoft Entra ID application that the Azure Arc service in your tenant uses and save it as an environment variable. Run the following command exactly as written, without changing the GUID value.
```azurecli export OBJECT_ID=$(az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query id -o tsv)
iot-operations Howto Configure Aks Edge Essentials Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-aks-edge-essentials-layered-network.md
This walkthrough is an example of deploying Azure IoT Operations Preview to a sp
>[!IMPORTANT] > This is an advanced scenario for Azure IoT Operations. You should complete the following quickstarts to get familiar with the basic concepts before you start this advanced scenario. > - [Deploy Azure IoT Layered Network Management to an AKS cluster](howto-deploy-aks-layered-network.md)
-> - [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md)
+> - [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md)
> > You can't migrate a previously deployed Azure IoT Operations from its original network to an isolated network. For this scenario, follow the steps to begin with creating new clusters.
Once your level 3 cluster is Arc-enabled, you can deploy IoT Operations to the c
![Network diagram that shows IoT Operations running on a level 3 cluster.](./media/howto-configure-layered-network/logical-network-segmentation-2.png)
-Follow the steps in [Quickstart: Run Azure IoT Operations Preview in Github Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md) to deploy IoT Operations to the level 3 cluster.
+Follow the steps in [Quickstart: Run Azure IoT Operations Preview in GitHub Codespaces with K3s](../get-started-end-to-end-sample/quickstart-deploy.md) to deploy IoT Operations to the level 3 cluster.
- In earlier steps, you completed the [prerequisites](../get-started-end-to-end-sample/quickstart-deploy.md#prerequisites) and [connected your cluster to Azure Arc](../get-started-end-to-end-sample/quickstart-deploy.md#connect-a-kubernetes-cluster-to-azure-arc) for Azure IoT Operations. You can review these steps to make sure nothing is missing.
iot-operations Howto Configure L3 Cluster Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l3-cluster-layered-network.md
login.microsoftonline.com. 0 IN A 100.104.0.165
az provider register -n "Microsoft.KubernetesConfiguration" az provider register -n "Microsoft.IoTOperations" az provider register -n "Microsoft.DeviceRegistry"
+ az provider register -n "Microsoft.SecretSyncController"
``` 1. Use the [az group create](/cli/azure/group#az-group-create) command to create a resource group in your Azure subscription to store all the resources: ```bash
login.microsoftonline.com. 0 IN A 100.104.0.165
``` > [!TIP] > If the `connectedk8s` commands fail, try using the cmdlets in [Connect your AKS Edge Essentials cluster to Arc](/azure/aks/hybrid/aks-edge-howto-connect-to-arc).
-1. Fetch the `objectId` or `id` of the Microsoft Entra ID application that the Azure Arc service uses. The command you use depends on your version of Azure CLI:
+1. Fetch the `objectId` or `id` of the Microsoft Entra ID application that the Azure Arc service uses. Run the following command exactly as written, without changing the GUID value. The command you use depends on your version of Azure CLI:
```powershell # If you're using an Azure CLI version lower than 2.37.0, use the following command: az ad sp show --id bc313c14-388c-4e7d-a58e-70017303ee3b --query objectId -o tsv
iot-operations Howto Configure L4 Cluster Layered Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l4-cluster-layered-network.md
The following steps for setting up [AKS Edge Essentials](/azure/aks/hybrid/aks-e
az provider register -n "Microsoft.KubernetesConfiguration" az provider register -n "Microsoft.IoTOperations" az provider register -n "Microsoft.DeviceRegistry"
+ az provider register -n "Microsoft.SecretSyncController"
``` 1. Use the [az group create](/cli/azure/group#az-group-create) command to create a resource group in your Azure subscription to store all the resources: ```bash
iot Tutorial Iot Industrial Solution Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot/tutorial-iot-industrial-solution-architecture.md
Title: "Tutorial: Implement a condition monitoring solution"
+ Title: "Implement the Azure Industrial IoT reference solution architecture"
description: "Azure Industrial IoT reference architecture for condition monitoring, Overall Equipment Effectiveness (OEE) calculation, forecasting, and anomaly detection." Previously updated : 4/17/2024 Last updated : 10/10/2024 #customer intent: As an industrial IT engineer, I want to collect data from on-prem assets and systems so that I can enable the condition monitoring, OEE calculation, forecasting, and anomaly detection use cases for production managers on a global scale.
Here are the components involved in this solution:
| Industrial Assets | A set of simulated OPC UA enabled production lines hosted in Docker containers | | [Azure IoT Operations](/azure/iot-operations/get-started/overview-iot-operations) | Azure IoT Operations is a unified data plane for the edge. It includes a set of modular, scalable, and highly available data services that run on Azure Arc-enabled edge Kubernetes clusters. | | [Data Gateway](/azure/logic-apps/logic-apps-gateway-install#how-the-gateway-works) | This gateway connects your on-premises data sources (like SAP) to Azure Logic Apps in the cloud. |
-| [Azure Kubernetes Services Edge Essentials](/azure/aks/hybrid/aks-edge-overview) | This Kubernetes implementation runs at the Edge. It provides single- and multi-node Kubernetes clusters for a fault-tolerant Edge configuration. Both K3S and K8S are supported. It runs on embedded or PC-class hardware, like an industrial gateway. |
| [Azure Event Hubs](/azure/event-hubs/event-hubs-about) | The cloud message broker that receives OPC UA PubSub messages from edge gateways and stores them until retrieved by subscribers. | | [Azure Data Explorer](/azure/synapse-analytics/data-explorer/data-explorer-overview) | The time series database and front-end dashboard service for advanced cloud analytics, including built-in anomaly detection and predictions. | | [Azure Logic Apps](/azure/logic-apps/logic-apps-overview) | Azure Logic Apps is a cloud platform you can use to create and run automated workflows with little to no code. | | [Azure Arc](/azure/azure-arc/kubernetes/overview) | This cloud service is used to manage the on-premises Kubernetes cluster at the edge. New workloads can be deployed via Flux. |
-| [Azure Storage](/azure/storage/common/storage-introduction) | This cloud service is used to manage the OPC UA certificate store and settings of the Edge Kubernetes workloads. |
-| [Azure Managed Grafana](/azure/managed-grafana/overview) | Azure Managed Grafana is a data visualization platform built on top of the Grafana software by Grafana Labs. Grafana is built as a fully managed service that is hosted and supported by Microsoft. |
+| [Azure Managed Grafana](/azure/managed-grafana/overview) | Azure Managed Grafana is a data visualization platform built on top of the Grafana software by Grafana Labs. Grafana is a fully managed service that Microsoft hosts and supports. |
| [Microsoft Power BI](/power-bi/fundamentals/power-bi-overview) | Microsoft Power BI is a collection of SaaS software services, apps, and connectors that work together to turn your unrelated sources of data into coherent, visually immersive, and interactive insights. | | [Microsoft Dynamics 365 Field Service](/dynamics365/field-service/overview) | Microsoft Dynamics 365 Field Service is a turnkey SaaS solution for managing field service requests. | | [UA Cloud Commander](https://github.com/opcfoundation/ua-cloudcommander) | This open-source reference application converts messages sent to a Message Queue Telemetry Transport (MQTT) or Kafka broker (possibly in the cloud) into OPC UA Client/Server requests for a connected OPC UA server. The application runs in a Docker container. |
Here are the components involved in this solution:
> In a real-world deployment, something as critical as opening a pressure relief valve would be done on-premises. This is just a simple example of how to achieve the digital feedback loop.
-## A cloud-based OPC UA certificate store and persisted storage
-
-When manufacturers run OPC UA applications, their OPC UA configuration files, keys, and certificates must be persisted. While Kubernetes has the ability to persist these files in volumes, a safer place for them is the cloud, especially on single-node clusters where the volume would be lost when the node fails. This scenario is why the OPC UA applications used in this solution store their configuration files, keys, and certificates in the cloud. This approach also has the advantage of providing a single location for mutually trusted certificates for all OPC UA applications.
-- ## UA Cloud Library
-You can read OPC UA Information Models directly from Azure Data Explorer. You can do this by importing the OPC UA nodes defined in the OPC UA Information Model into a table for lookup of more metadata within queries.
+To read OPC UA Information Models directly from Azure Data Explorer, you can import the OPC UA nodes defined in the OPC UA Information Model into a table. You can use the imported information for lookup of more metadata within queries.
-First, configure an Azure Data Explorer (ADX) callout policy for the UA Cloud Library by running the following query on your ADX cluster (make sure you're an ADX cluster administrator, configurable under Permissions in the ADX tab in the Azure portal):
+First, configure an Azure Data Explorer (ADX) callout policy for the UA Cloud Library by running the following query on your ADX cluster. Before you start, make sure you're an ADX cluster administrator, which you can configure in the Azure portal by navigating to **Permissions** in the **ADX** tab.
``` .alter cluster policy callout @'[{"CalloutType": "webapi","CalloutUriRegex": "uacloudlibrary.opcfoundation.org","CanCall": true}]'
evaluate http_request(uri, headers, options)
You need to provide two things in this query: - The Information Model's unique ID from the UA Cloud Library and enter it into the \<insert information model identifier from cloud library here\> field of the ADX query.-- Your UA Cloud Library credentials (generated during registration) basic authorization header hash and insert it into the \<insert your cloud library credentials hash here\> field of the ADX query. Use tools like https://www.debugbear.com/basic-auth-header-generator to generate this.
+- The basic authorization header hash of your UA Cloud Library credentials (generated during registration). Insert it into the \<insert your cloud library credentials hash here\> field of the ADX query. Use tools like https://www.debugbear.com/basic-auth-header-generator to generate the hash.
For example, to render the production line simulation Station OPC UA Server's Information Model in the Kusto Explorer tool available for download [here](/azure/data-explorer/kusto/tools/kusto-explorer), run the following query:
edges
| make-graph source --> target with nodes on source ```
-For best results, change the `Layout` option to `Grouped` and the `Lables` to `name`.
+For best results, change the `Layout` option to `Grouped` and the `Labels` to `name`.
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/station-graph.png" alt-text="Graph of the Station Info Model." lightbox="media/concepts-iot-industrial-solution-architecture/station-graph.png" border="false" :::
The following OPC UA Node IDs are used in the Station OPC UA Server for telemetr
## Digital feedback loop with UA Cloud Commander and UA Cloud Action
-This reference implementation implements a "digital feedback loop", specifically triggering a command on one of the OPC UA servers in the simulation from the cloud, based on time-series data reaching a certain threshold (the simulated pressure). You can see the pressure of the assembly machine in the Seattle production line being released on regular intervals in the Azure Data Explorer dashboard.
-
+This section shows how to implement a "digital feedback loop". To create the feedback loop, you trigger a command on one of the OPC UA servers in the simulation from the cloud. The trigger fires when time-series data reaches a certain threshold for the simulated pressure. You can see the pressure of the assembly machine in the Azure Data Explorer dashboard. The pressure is released at regular intervals for the Seattle production line.
## Install the production line simulation and cloud services
-Clicking on the button deploys all required resources on Microsoft Azure:
+Select the **Deploy** button to deploy all required resources on Microsoft Azure:
[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fdigitaltwinconsortium%2FManufacturingOntologies%2Fmain%2FDeployment%2Farm.json)
-During deployment, you must provide a password for a VM used to host the production line simulation and for UA Cloud Twin. The password must have three of the following attributes: One lower case character, one upper case character, one number, and one special character. The password must be between 12 and 72 characters long.
+During deployment, you must provide a password for a Virtual Machine (VM) used to host the production line simulation and for UA Cloud Twin. The password must have three of the following attributes: One lower case character, one upper case character, one number, and one special character. The password must be between 12 and 72 characters long.
> [!NOTE]
-> To save cost, the deployment deploys just a single Windows 11 Enterprise VM for both the production line simulation and the base OS for the Azure Kubernetes Services Edge Essentials instance. In production scenarios, the production line simulation isn't required and for the base OS for the Azure Kubernetes Services Edge Essentials instance, we recommend Windows IoT Enterprise Long Term Servicing Channel (LTSC).
+> To save cost, the deployment deploys a single Windows 11 Enterprise VM for both the production line simulation and Edge infrastructure. In production scenarios, the production line simulation isn't required and for the base OS, we recommend Windows IoT Enterprise Long Term Servicing Channel (LTSC).
-Once the deployment completes, connect to the deployed Windows VM with an RDP (remote desktop) connection. You can download the RDP file in the [Azure portal](https://portal.azure.com) page for the VM, under the **Connect** options. Sign in using the credentials you provided during deployment, open an **Administrator Powershell window**, navigate to the `C:\ManufacturingOntologies-main\Deployment` directory, and run:
+Once the deployment completes, connect to the deployed Windows VM with an RDP (remote desktop) connection. You can download the RDP file in the [Azure portal](https://portal.azure.com) page for the VM, under the **Connect** options. Sign in using the credentials you provided during deployment, open a Windows command prompt and install the Windows Subsystem for Linux (WSL) via:
-```azurepowershell
-New-AksEdgeDeployment -JsonConfigFilePath .\aksedge-config.json
+```
+ wsl --install
```
-After the command is finished, your Azure Kubernetes Services Edge Essentials installation is complete and you can run the production line simulation.
+Once the command is finished, reboot your VM and log back in. A command prompt finishes the installation of WSL and prompts you to enter a new username and password for WSL. Then, install K3s, a lightweight Kubernetes runtime, via:
-> [!TIP]
-> To get logs from all your Kubernetes workloads and services at any time, run `Get-AksEdgeLogs` from an **Administrator Powershell window**.
->
-> To check the memory utilization of your Kubernetes cluster, run `Invoke-AksEdgeNodeCommand -Command "sudo cat /proc/meminfo"` from an **Administrator Powershell window**.
+```
+ curl -sfL https://get.k3s.io | sh
+```
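+
+Optionally, confirm that K3s is up before you continue (a sketch using the kubectl bundled with K3s):
+
+```
+ sudo k3s kubectl get nodes
+```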
+You can now run the production line simulation.
## Run the production line simulation
-From the deployed VM, open a **Windows command prompt**. Navigate to the `C:\ManufacturingOntologies-main\Tools\FactorySimulation` directory and run the **StartSimulation** command by supplying the following parameters:
+From the deployed VM, open a Windows command prompt, enter *wsl*, and press Enter. Navigate to the `/mnt/c/ManufacturingOntologies-main/Tools/FactorySimulation` directory and run the **StartSimulation** shell script:
-```console
- StartSimulation <EventHubsCS> <StorageAccountCS> <AzureSubscriptionID> <AzureTenantID>
-```
-Parameters:
-
-| Parameter | Description |
-| | - |
-| EventHubCS | Copy the Event Hubs namespace connection string as described [here](/azure/event-hubs/event-hubs-get-connection-string). |
-| StorageAccountCS | In the Azure portal, navigate to the Storage Account created by this solution. Select "Access keys" from the left-hand navigation menu. Then, copy the connection string for key1. |
-| AzureSubscriptionID | In Azure portal, browse your Subscriptions and copy the ID of the subscription used in this solution. |
-| AzureTenantID | In Azure portal, open the Microsoft Entry ID page and copy your Tenant ID. |
-
-The following example shows the command with all parameters:
-
-```console
- StartSimulation Endpoint=sb://ontologies.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=abcdefgh= DefaultEndpointsProtocol=https;AccountName=ontologiesstorage;AccountKey=abcdefgh==;EndpointSuffix=core.windows.net <your-subscription-id> <your-tenant-id>
+```
+ sudo ./StartSimulation.sh "<EventHubsCS>"
+```
> [!NOTE]
-> If you have access to several Azure subscriptions, it's worth first logging into the Azure portal from the VM through the web browser. You can also switch Active Directory tenants through the Azure portal UI (in the top-right-hand corner), to make sure you're logged in to the tenant used during deployment. Once logged in, leave the browser window open. This ensures that the StartSimulation script can more easily connect to the right subscription.
->
-> In this solution, the OPC UA application certificate store for UA Cloud Publisher, and the simulated production line's MES and individual machines' store, is located in the cloud in the deployed Azure Storage account.
-
+> `<EventHubsCS>` is the Event Hubs namespace connection string as described [here](/azure/event-hubs/event-hubs-get-connection-string).
-## Enable the Kubernetes cluster for management via Azure Arc
+Example: `sudo ./StartSimulation.sh "Endpoint=sb://ontologies.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=abcdefgh="`
-1. On your virtual machine, open an **Administrator PowerShell window**. Navigate to the `C:\ManufacturingOntologies-main\Deployment` directory and run `CreateServicePrincipal`. The two parameters `subscriptionID` and `tenantID` can be retrieved from the Azure portal.
-1. Run `notepad aksedge-config.json` and provide the following information:
-
- | Attribute | Description |
- | | |
- | Location | The Azure location of your resource group. You can find this location in the Azure portal under the resource group that was deployed for this solution, but remove the spaces in the name! Currently supported regions are eastus, eastus2, westus, westus2, westus3, westeurope, and northeurope. |
- | SubscriptionId | Your subscription ID. In the Azure portal, select on the subscription you're using and copy/paste the subscription ID. |
- | TenantId | Your tenant ID. In the Azure portal, select on Azure Active Directory and copy/paste the tenant ID. |
- | ResourceGroupName | The name of the Azure resource group that was deployed for this solution. |
- | ClientId | The name of the Azure Service Principal previously created. Azure Kubernetes Services uses this service principal to connect your cluster to Arc. |
- | ClientSecret | The password for the Azure Service Principal. |
-
-1. Save the file, close the PowerShell window, and open a new **Administrator Powershell window**. Navigate back to the `C:\ManufacturingOntologies-main\Deployment` directory and run `SetupArc`.
-
-You can now manage your Kubernetes cluster from the cloud via the newly deployed Azure Arc instance. In the Azure portal, browse to the Azure Arc instance and select Workloads. The required service token can be retrieved via `Get-AksEdgeManagedServiceToken` from an **Administrator Powershell window** on your virtual machine.
+> [!NOTE]
+> If a Kubernetes service's external IP address shows up as `<pending>`, you can assign the external IP address of the `traefik` service to it via `sudo kubectl patch service <theService> -n <the service's namespace> -p '{"spec": {"type": "LoadBalancer", "externalIPs":["<the traefik external IP address>"]}}'`.
+> [!NOTE]
+> To prevent WSL (and K3s) from automatically shutting down, keep your WSL command prompt open.
## Deploying Azure IoT Operations on the edge
-Make sure you have already started the production line simulation and enabled the Kubernetes Cluster for management via Azure Arc as described in the previous paragraphs. Then, follow these steps:
-
-1. From the Azure portal, navigate to the Key Vault deployed in this reference solution and add your own identity to the access policies by clicking `Access policies`, `Create`, select the `Keys, Secrets & Certificate Management` template, select `Next`, search for and select your own user identity, select `Next`, leave the Application section blank, select `Next` and finally `Create`.
-1. Enable custom locations for your Arc-connected Kubernetes cluster (called ontologies_cluster) by first logging in to your Azure subscription via `az login` from an **Administrator PowerShell Window** and then running `az connectedk8s enable-features -n "ontologies_cluster" -g "<resourceGroupName>" --features cluster-connect custom-locations`, providing the `resourceGroupName` from the reference solution deployed.
-1. From the Azure portal, deploy Azure IoT Operations by navigating to your Arc-connected kubernetes cluster, select on `Extensions`, `Add`, select `Azure IoT Operations`, and select `Create`. On the Basic page, leave everything as-is. On the Configuration page, set the `MQ Mode` to `Auto`. You don't need to deploy a simulated Programmable Logic Controller (PLC), as this reference solution already contains a much more substantial production line simulation. On the Automation page, select the Key Vault deployed for this reference solution and then copy the `az iot ops init` command automatically generated. From your deployed VM, open a new **Administrator PowerShell Window**, sign in to the correct Azure subscription by running `az login` and then run the `az iot ops init` command with the arguments from the Azure portal. Once the command completes, select `Next` and then close the wizard.
--
-## Configuring OPC UA security and connectivity for Azure IoT Operations
-
-Make sure you successfully deployed Azure IoT Operations and all Kubernetes workloads are up and running by navigating to the Arc-enabled Kubernetes resource in the Azure portal.
-
-1. From the Azure portal, navigate to the Azure Storage deployed in this reference solution, open the `Storage browser` and then `Blob containers`. Here you can access the cloud-based OPC UA certificate store used in this solution. Azure IoT Operations uses Azure Key Vault as the cloud-based OPC UA certificate store so the certificates need to be copied:
- 1. From within the Azure Storage browser's Blob containers, for each simulated production line, navigate to the app/pki/trusted/certs folder, select the assembly, packaging, and test cert file and download it.
- 1. Sign in to your Azure subscription via `az login` from an **Administrator PowerShell Window** and then run `az keyvault secret set --name "<stationName>-der" --vault-name <keyVaultName> --file .<stationName>.der --encoding hex --content-type application/pkix-cert`, providing the `keyVaultName` and `stationName` of each of the 6 stations you downloaded a .der cert file for in the previous step.
-1. From the deployed VM, open a **Windows command prompt** and run `kubectl apply -f secretsprovider.yaml` with the updated secrets provider resource file provided in the `C:\ManufacturingOntologies-main\Tools\FactorySimulation\Station` directory, providing the Key Vault name, the Azure tenant ID, and the station cert file names and aliases you uploaded to Azure Key Vault previously.
-1. From a web browser, sign in to https://iotoperations.azure.com, pick the right Azure directory (top right hand corner) and start creating assets from the production line simulation. The solution comes with two production lines (Munich and Seattle) consisting of three stations each (assembly, test, and packaging):
- 1. For the asset endpoints, enter opc.tcp://assembly.munich in the OPC UA Broker URL field for the assembly station of the Munich production line, etc. Select `Do not use transport authentication certificate` (OPC UA certificate-based mutual authentication between Azure IoT Operations and any connected OPC UA server is still being used).
- 1. For the asset tags, select `Import CSV file` and open the `StationTags.csv` file located in the `C:\ManufacturingOntologies-main\Tools\FactorySimulation\Station` directory.
-1. From the Azure portal, navigate to the Azure Storage deployed in this reference solution, open the `Storage browser` and then `Blob containers`. For each production line simulated, navigate to the `app/pki/rejected/certs` folder and download the Azure IoT Operations certificate file. Then delete the file. Navigate to the `app/pki/trusted/certs` folder and upload the Azure IoT Operations certificate file to this directory.
-1. From the deployed VM, open a **Windows command prompt** and restart the production line simulation by navigating to the `C:\ManufacturingOntologies-main\Tools\FactorySimulation` directory and run the **StopSimulation** command, followed by the **StartSimulation** command.
-1. Follow the instructions as described [here](/azure/iot-operations/get-started/quickstart-add-assets#verify-data-is-flowing) to verify that data is flowing from the production line simulation.
-1. As the last step, connect Azure IoT Operations to the Event Hubs deployed in this reference solution as described [here](/azure/iot-operations/connect-to-cloud/howto-configure-kafka).
+Before you deploy, confirm that you started the production line simulation. Then, follow the deployment steps described [here](/azure/iot-operations/deploy-iot-ops/overview-deploy).
## Use cases: condition monitoring, calculating OEE, detecting anomalies, and making predictions in Azure Data Explorer
You can also visit the [Azure Data Explorer documentation](/azure/synapse-analyt
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/dashboard.png" alt-text="Screenshot of an Azure Data Explorer dashboard." lightbox="media/concepts-iot-industrial-solution-architecture/dashboard.png" border="false" :::

> [!NOTE]
-> If you want to display the OEE for a specific shift, select `Custom Time Range` in the `Time Range` drop-down in the top-left hand corner of the ADX Dashboard and enter the date and time from start to end of the shift you're interested in.
+> If you want to display the OEE for a specific shift, select **Custom Time Range** in the **Time Range** drop-down in the top-left hand corner of the ADX Dashboard and enter the date and time from start to end of the shift you're interested in.
## Render the built-in Unified NameSpace (UNS) and ISA-95 model graph in Kusto Explorer
For best results, change the `Layout` option to `Grouped`.
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/isa-95-graph.png" alt-text="Graph that shows an ISA-95 asset hierarchy." lightbox="media/concepts-iot-industrial-solution-architecture/isa-95-graph.png" border="false" :::
-## Use Azure Managed Grafana Service
+## Use Azure Managed Grafana service
-You can also use Grafana to create a dashboard on Azure for the solution described in this article. Grafana is used within manufacturing to create dashboards that display real-time data. Azure offers a service named Azure Managed Grafana. With this, you can create cloud dashboards. In this configuration manual, you enable Grafana on Azure and you create a dashboard with data that is queried from Azure Data Explorer and Azure Digital Twins service, using the simulated production line data from this reference solution.
+You can also use Grafana to create a dashboard on Azure for the solution described in this article. Grafana is used within manufacturing to create dashboards that display real-time data. Azure offers a service named Azure Managed Grafana. With Grafana, you can create cloud dashboards. In this configuration manual, you enable Grafana on Azure, and you create a dashboard with data queried from Azure Data Explorer and the Azure Digital Twins service. You use the simulated production line data from this reference solution.
The following screenshot shows the dashboard:

:::image type="content" source="media/concepts-iot-industrial-solution-architecture/grafana.png" alt-text="Screenshot that shows a Grafana dashboard." lightbox="media/concepts-iot-industrial-solution-architecture/grafana.png" border="false" :::
-### Enable Azure Managed Grafana Service
+### Enable Azure Managed Grafana service
-1. Go to the Azure portal and search for the service 'Grafana' and select the 'Azure Managed Grafana' service.
+1. Go to the Azure portal, search for 'Grafana', and select the **Azure Managed Grafana** service.
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/enable-grafana-service.png" alt-text="Screenshot of enabling Grafana in the Marketplace." lightbox="media/concepts-iot-industrial-solution-architecture/enable-grafana-service.png" border="false" :::
Now you're ready to import the provided sample dashboard.
1. Download the sample dashboard here: [Sample Grafana Manufacturing Dashboard](https://github.com/digitaltwinconsortium/ManufacturingOntologies/blob/main/Tools/GrafanaDashboard/samplegrafanadashboard.json).
-1. Go to 'Dashboard' and select 'Import'.
+1. Navigate to **Dashboard** and select **Import**.
-1. Select the source that you have downloaded and select on 'Save'. You get an error on the page, because two variables aren't set yet. Go to the settings page of the dashboard.
+1. Select the source that you downloaded and select **Save**. You get an error on the page, because two variables aren't set yet. Go to the settings page of the dashboard.
-1. Select on the left on 'Variables' and update the two URLs with the URL of your Azure Digital Twins Service.
+1. Select **Variables** and update the two URLs with the URL of your Azure Digital Twins Service.
1. Navigate back to the dashboard and hit the refresh button. You should now see data (don't forget to hit the save button on the dashboard).
Now you're ready to import the provided sample dashboard.
Within Grafana, it's also possible to create alerts. In this example, we create a low OEE alert for one of the production lines.
-1. Sign in to your Grafana service, and select Alert rules in the menu.
+1. Sign in to your Grafana service, and select **Alert rules** in the menu.
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/navigate-to-alerts.png" alt-text="Screenshot that shows navigation to alerts." lightbox="media/concepts-iot-industrial-solution-architecture/navigate-to-alerts.png" border="false" :::
-1. Select 'Create alert rule'.
+1. Select **Create alert rule**.
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/create-rule.png" alt-text="Screenshot that shows how to create an alert rule." lightbox="media/concepts-iot-industrial-solution-architecture/create-rule.png" border="false" :::
-1. Give your alert a name and select 'Azure Data Explorer' as data source. Select query in the navigation pane.
+1. Give your alert a name and select **Azure Data Explorer** as the data source. Select **query** in the navigation pane.
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/alert-query.png" alt-text="Screenshot of creating an alert query." lightbox="media/concepts-iot-industrial-solution-architecture/alert-query.png" border="false" :::
-1. In the query field, enter the following query. In this example, we use the 'Seattle' production line.
+1. In the query field, enter the following query. In this example, we use the Seattle production line.
   ```
   let oee = CalculateOEEForStation("assembly", "seattle", 6, 6);
   print round(oee * 100, 2)
   ```
-1. Select 'table' as output.
+1. Select **table** as output.
1. Scroll down to the next section. Here, you configure the alert threshold. In this example, we use 'below 10' as the threshold, but in production environments, this value can be higher.
Within Grafana, it's also possible to create alerts. In this example, we create
1. Select the folder where you want to save your alerts and configure the **Alert Evaluation behavior**. Select the option **every 2 minutes**.
-1. Select the 'Save and exit' button.
+1. Select the **Save and exit** button.
-In the overview of your alerts, you can now see an alert being triggered when your OEE is below '10'.
+In the overview of your alerts, you can now see an alert being triggered when your OEE is less than 10.
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/alert-overview.png" alt-text="Screenshot that shows an alert overview." lightbox="media/concepts-iot-industrial-solution-architecture/alert-overview.png" border="false" :::

You can integrate this setup with, for example, Microsoft Dynamics Field Services.
-## Connecting the reference solution to Microsoft Power BI
+## Connect the reference solution to Microsoft Power BI
To connect the reference solution to Power BI, you need access to a Power BI subscription.
Complete the following steps:
1. Install the Power BI Desktop app from [here](https://go.microsoft.com/fwlink/?LinkId=2240819&clcid=0x409).
1. Sign in to the Power BI Desktop app using the user with access to the Power BI subscription.
1. From the Azure portal, navigate to your Azure Data Explorer database instance (`ontologies`) and add `Database Admin` permissions to an Azure Active Directory user with access to just a **single** Azure subscription, specifically the subscription used for your deployed instance of this reference solution. Create a new user in Azure Active Directory if you have to.
-1. From Power BI, create a new report and select Azure Data Explorer time-series data as a data source via `Get data` -> `Azure` -> `Azure Data Explorer (Kusto)`.
-1. In the popup window, enter the Azure Data Explorer endpoint of your instance (for example `https://erichbtest3adx.eastus2.kusto.windows.net`), the database name (`ontologies`) and the following query:
+1. From Power BI, create a new report and select Azure Data Explorer time-series data as a data source via **Get data > Azure > Azure Data Explorer (Kusto)**.
+1. In the popup window, enter the Azure Data Explorer endpoint of your instance (for example `https://erichbtest3adx.eastus2.kusto.windows.net`), the database name (`ontologies`), and the following query:
   ```
   let _startTime = ago(1h);
Complete the following steps:
   | project Timestamp, NodeValue
   ```
-1. Select `Load`. This imports the actual cycle time of the Assembly station of the Munich production line for the last hour.
+1. Select **Load**. This action imports the actual cycle time of the Assembly station of the Munich production line for the last hour.
1. When prompted, log into Azure Data Explorer using the Azure Active Directory user you gave permission to access the Azure Data Explorer database earlier.
-1. From the `Data view`, select the NodeValue column and select `Don't summarize` in the `Summarization` menu item.
+1. From the `Data view`, select the **NodeValue** column and select **Don't summarize** in the **Summarization** menu item.
1. Switch to the `Report view`.
-1. Under `Visualizations`, select the `Line Chart` visualization.
-1. Under `Visualizations`, move the `Timestamp` from the `Data` source to the `X-axis`, select on it and select `Timestamp`.
-1. Under `Visualizations`, move the `NodeValue` from the `Data` source to the `Y-axis`, select on it and select `Median`.
+1. Under **Visualizations**, select the **Line Chart** visualization.
+1. Under **Visualizations**, move the `Timestamp` from the `Data` source to the `X-axis`, select it, and select **Timestamp**.
+1. Under **Visualizations**, move the `NodeValue` from the `Data` source to the `Y-axis`, select it, and select **Median**.
1. Save your new report.

> [!NOTE]
Complete the following steps:
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/power-bi.png" alt-text="Screenshot of a Power BI view." lightbox="media/concepts-iot-industrial-solution-architecture/power-bi.png" border="false" :::
-## Connecting the reference solution to Microsoft Dynamics 365 Field Service
+## Connect the reference solution to Microsoft Dynamics 365 Field Service
This integration showcases the following scenarios:
This integration showcases the following scenarios:
The integration uses Azure Logic Apps. With Logic Apps, business-critical apps and services can be connected via no-code workflows. We fetch information from Azure Data Explorer and trigger actions in Dynamics 365 Field Service.
-First, if you're not already a Dynamics 365 Field Service customer, activate a 30 day trial [here](https://dynamics.microsoft.com/field-service/field-service-management-software/free-trial). Remember to use the same Microsoft Entra ID (formerly Azure Active Directory) used while deploying the Manufacturing Ontologies reference solution. Otherwise, you would need to configure cross tenant authentication that isn't part of these instructions!
+First, if you're not already a Dynamics 365 Field Service customer, activate a 30-day trial [here](https://dynamics.microsoft.com/field-service/field-service-management-software/free-trial). Remember to use the same Microsoft Entra ID (formerly Azure Active Directory) used while deploying the Manufacturing Ontologies reference solution. Otherwise, you would need to configure cross-tenant authentication, which isn't part of these instructions.
### Create an Azure Logic App workflow to create assets in Dynamics 365 Field Service
Let's start with uploading assets from the Manufacturing Ontologies into Dynamic
1. Go to the Azure portal and create a new Logic App.
-2. Give the Azure Logic App a name, place it in the same resource group as the Manufacturing Ontologies reference solution.
+2. Give the Azure Logic App a name, and place it in the same resource group as the Manufacturing Ontologies reference solution.
-3. Select on 'Workflows'.
+3. Select **Workflows**.
4. Give your workflow a name. For this scenario, we use the stateful workflow type, because assets aren't flows of data.
-5. Create a new trigger. We start with creating a 'Recurrence' trigger. This checks the database every day if new assets are created. You can change this to happen more often.
+5. Create a new trigger. We start with creating a recurrence trigger. This trigger checks the database every day for newly created assets. You can change the trigger to occur more often.
-6. In actions, search for `Azure Data Explorer` and select the `Run KQL query` command. Within this query, we check what kind of assets we have. Use the following query to get assets and paste it in the query field:
+6. In actions, search for `Azure Data Explorer` and select the **Run KQL query** command. Within this query, we check what kind of assets we have. Use the following query to get assets and paste it in the query field:
   ```
   let ADTInstance = "PLACE YOUR ADT URL";
   let ADTQuery = "SELECT T.OPCUAApplicationURI as AssetName, T.$metadata.OPCUAApplicationURI.lastUpdateTime as UpdateTime FROM DIGITALTWINS T WHERE IS_OF_MODEL(T , 'dtmi:digitaltwins:opcua:nodeset;1') AND T.$metadata.OPCUAApplicationURI.lastUpdateTime > 'PLACE DATE'";
   evaluate azure_digital_twins_query_request(ADTInstance, ADTQuery)
   ```
Let's start with uploading assets from the Manufacturing Ontologies into Dynamic
This workflow creates alerts in Dynamics 365 Field Service, specifically when a certain threshold of FaultyTime on an asset of the Manufacturing Ontologies reference solution is reached.
-1. We first need to create an Azure Data Explorer function to get the right data. Go to your Azure Data Explorer query panel in the Azure portal and run the following code to create a FaultyFieldAssets function:
+1. First, create an Azure Data Explorer function to get the right data. Go to your Azure Data Explorer query panel in the Azure portal and run the following code to create a FaultyFieldAssets function:
:::image type="content" source="media/concepts-iot-industrial-solution-architecture/adx-query.png" alt-text="Screenshot of creating a function ADX query." lightbox="media/concepts-iot-industrial-solution-architecture/adx-query.png" border="false" :::
This workflow creates alerts in Dynamics 365 Field Service, specifically when a
   | project AssetName, Name, Value, Timestamp}
   ```
-2. Create a new workflow in Azure Logic App. Create a 'Recurrence' trigger to start - every 3 minutes. Create as action 'Azure Data Explorer' and select the Run KQL Query.
+2. Create a new workflow in your Azure Logic App. Create a **Recurrence** trigger that starts every 3 minutes. Add an **Azure Data Explorer** action and select **Run KQL query**.
3. Enter your Azure Data Explorer Cluster URL, then select your database and use the function name created in step 1 as the query.
-4. Select Microsoft Dataverse as action.
+4. Select **Microsoft Dataverse** as action.
5. Run the workflow to see new alerts being generated in your Dynamics 365 Field Service dashboard:
This workflow creates alerts in Dynamics 365 Field Service, specifically when a
## Related content

- [Connect on-premises SAP systems to Azure](howto-connect-on-premises-sap-to-azure.md)
-- [Connecting Azure IoT Operations to Microsoft Fabric](../iot-operations/process-dat)
+- [Connect Azure IoT Operations to Microsoft Fabric](../iot-operations/process-dat)
load-balancer Load Balancer Nat Pool Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-nat-pool-migration.md
An [inbound NAT rule](inbound-nat-rules.md) is used to forward traffic from a lo
## NAT rule version 1
-[Version 1](inbound-nat-rules.md) is the legacy approach for assigning an Azure Load Balancer's frontend port to each backend instance. Rules are applied to the backend instance's network interface card (NIC). For Azure Virtual Machine Scale Sets (VMSS) instances, inbound NAT rules are automatically created/deleted as new instances are scaled up/down. For VMSS instances, use the `Inbound NAT Pool` property to manage Inbound NAT rules version 1.
+[Version 1](inbound-nat-rules.md) is the legacy approach for assigning an Azure Load Balancer's frontend port to each backend instance. Rules are applied to the backend instance's network interface card (NIC). For Azure Virtual Machine Scale Sets (VMSS) instances, inbound NAT rules are automatically created/deleted as new instances are scaled up/down. For VMSS instances, use the `Inbound NAT Pools` property to manage Inbound NAT rules version 1.
## NAT rule version 2
An [inbound NAT rule](inbound-nat-rules.md) is used to forward traffic from a lo
## How do I know if I'm using version 1 of Inbound NAT rules?
-The easiest way to identify if your deployments are using version 1 of the feature is by inspecting the load balancer's configuration. If either the `InboundNATPool` property or the `backendIPConfiguration` property within the `InboundNATRule` configuration is populated, then the deployment is version 1 of Inbound NAT rules.
+The easiest way to identify if your deployments are using version 1 of the feature is by inspecting the load balancer's configuration. If either the `InboundNATPools` property or the `backendIPConfiguration` property within the `InboundNATRule` configuration is populated, then the deployment is version 1 of Inbound NAT rules.
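For example, you can check this with the Azure CLI (a sketch; the resource group and load balancer names are placeholders):

```
az network lb show --resource-group <resource-group> --name <lb-name> --query "inboundNatPools"
```

A non-empty result indicates that the load balancer uses version 1 of Inbound NAT rules.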
## How to migrate from version 1 to version 2?
migrate Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/appcat/java.md
Last updated 07/12/2024
This guide describes how to use the Azure Migrate application and code assessment tool for Java to assess and replatform any type of Java application. The tool enables you to evaluate application readiness for replatforming and migration to Azure. This tool is offered as a CLI (command-line interface) and assesses Java application binaries and source code to identify replatforming and migration opportunities for Azure. It helps you modernize and replatform large-scale Java applications by identifying common use cases and code patterns and proposing recommended changes.
-The tool discovers application technology usage through static code analysis, provides effort estimation, and accelerates code replatforming, helping you to prioritize and move Java applications to Azure. With a set of engines and rules, it can discover and assess different technologies such as Java 11, Java 17, Jakarta EE, Spring, Hibernate, Java Message Service (JMS), and more. It then helps you replatform the Java application to different Azure targets (Azure App Service, Azure Kubernetes Service, Azure Container Apps, and Azure Spring Apps) with specific Azure replatforming rules.
+The tool discovers application technology usage through static code analysis, provides effort estimation, and accelerates code replatforming, helping you to prioritize and move Java applications to Azure. With a set of engines and rules, it can discover and assess different technologies such as Java 11, Java 17, Jakarta EE, Spring, Hibernate, Java Message Service (JMS), and more. It then helps you replatform the Java application to different Azure targets (Azure App Service, Azure Kubernetes Service, and Azure Container Apps) with specific Azure replatforming rules.
This tool is open source and is based on [WindUp](https://github.com/windup), a project created by Red Hat and published under the [Eclipse Public License](https://github.com/windup/windup/blob/master/LICENSE).
The rules used by Azure Migrate application and code assessment are grouped base
| Target | Description | ID |
|--|--|--|
| Azure App Service | Best practices for deploying an app to Azure App Service. | `azure-appservice` |
-| Azure Spring Apps | Best practices for deploying an app to Azure Spring Apps. | `azure-spring-apps` |
| Azure Kubernetes Service | Best practices for deploying an app to Azure Kubernetes Service. | `azure-aks` |
| Azure Container Apps | Best practices for deploying an app to Azure Container Apps. | `azure-container-apps` |
| Cloud Readiness | General best practices for making an application Cloud (Azure) ready. | `cloud-readiness` |
To run `appcat`, make sure you have a supported JDK installed. The tool supports
* Microsoft Build of OpenJDK 11
* Microsoft Build of OpenJDK 17
-* Eclipse TemurinΓäó JDK 11
-* Eclipse TemurinΓäó JDK 17
+* Eclipse Temurin&trade; JDK 11
+* Eclipse Temurin&trade; JDK 17
After you have a valid JDK installed, make sure its installation directory is properly configured in the `JAVA_HOME` environment variable.
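For example, on Linux with a bash shell (the JDK path below is an assumption; substitute your actual installation directory):

```
export JAVA_HOME=/usr/lib/jvm/msopenjdk-17
export PATH="$JAVA_HOME/bin:$PATH"
```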
Available target technologies:
    azure-aks
    azure-appservice
    azure-container-apps
- azure-spring-apps
    cloud-readiness
    discovery
    linux
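As an illustration, an assessment run names an input and one or more targets. Treat the following as a sketch (the application path is a placeholder) and confirm the flags with `appcat --help` for your installed version:

```
appcat --input ./myapp.war --target azure-appservice
```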
To write a custom rule, you use a rich domain-specific language (DSL) expressed
To detect the use of this dependency, the rule uses the following XML tags:

* `ruleset`: The unique identifier of the ruleset. A ruleset is a collection of rules that are related to a specific technology.
-* `targetTechnology`: The technology that the rule targets. In this case, the rule is targeting Azure App Services, Azure Kubernetes Service (AKS), Azure Spring Apps, and Azure Container Apps.
+* `targetTechnology`: The technology that the rule targets. In this case, the rule is targeting Azure App Services, Azure Kubernetes Service (AKS), and Azure Container Apps.
* `rule`: The root element of a single rule.
* `when`: The condition that must be met for the rule to be triggered.
* `perform`: The action to be performed when the rule is triggered.
The following XML shows the custom rule definition:
        <targetTechnology id="azure-appservice"/>
        <targetTechnology id="azure-aks"/>
        <targetTechnology id="azure-container-apps"/>
- <targetTechnology id="azure-spring-apps"/>
    </metadata>
    <rules>
        <rule id="azure-postgre-flexible-server">
notification-hubs Uwp React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/uwp-react.md
Create a notification hub in the Azure portal as follows:
### Configure backend
-To configure the app backend, locate the **/NotificationHub.Sample.API/appsettings.json** file and configure the SQL Server connection string:
-
-```json
-"ConnectionStrings": {
- "SQLServerConnectionString": "Server=tcp:<SERVER_NAME>,1433;Initial Catalog=<DB_NAME>;Persist Security Info=False;User ID=<DB_USER_NAME>;Password=<PASSWORD>;MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
- },
-```
-
-Replace `<SERVER_NAME>` with the name of your SQL server, `<DB_NAME>` with your deployed database URL, `<DB_USER_NAME>` with configured username, and `<PASSWORD>` with the configured password.
+To configure the app backend, locate the **/NotificationHub.Sample.API/appsettings.json** file and configure the SQL Server connection string.
You can run the API solution locally or on any IIS server, or deploy it as an Azure Web App Service. Keep the URL of the API handy.
oracle Onboard Oracle Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/onboard-oracle-database.md
Onboarding uses both the Azure portal and the OCI Console.
## Steps to onboard with Oracle Database@Azure

-- [Prerequisites](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/getting-started.htm#oaa_prerequisites)
-- [Accept Private Offer](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-2.htm#oaaonboard_task_2) (private offer purchases only)
-- [Purchase Offer](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-3.htm#oaaonboard_task_3)
-- [Link an OCI Account](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-4.htm#oaaonboard_task_4)
-- [Register with My Oracle Support](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-5.htm)
-- [Find the Azure Availability Zone Mapping](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-6.htm#oaaonboard_task_6) (optional)
-- [Set Up Role Based Access Control](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-7.htm#oaaonboard_task_7)
-- [Set Up Identity Federation](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-8.htm#oaaonboard_task_8) (optional)
+1. [Prerequisites](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/getting-started.htm#oaa_prerequisites)
+1. [Accept Private Offer](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-2.htm#oaaonboard_task_2) (private offer purchases only)
+1. [Purchase Offer](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-3.htm#oaaonboard_task_3)
+1. [Link an OCI Account](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-4.htm#oaaonboard_task_4)
+1. [Register with My Oracle Support](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-5.htm)
+1. [Find the Azure Availability Zone Mapping](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-6.htm#oaaonboard_task_6) (optional)
+1. [Set Up Role Based Access Control](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-7.htm#oaaonboard_task_7)
+1. [Set Up Identity Federation](https://docs.oracle.com/en-us/iaas/Content/database-at-azure/oaaonboard-task-8.htm#oaaonboard_task_8) (optional)
sap Rise Integration Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/rise-integration-network.md
Applications within a customer's own virtual network connect to the Internet d
SAP Business Technology Platform (BTP) provides a multitude of applications typically accessed through public IP/hostname via the Internet. Customer's services running in their Azure subscriptions access BTP through the configured [outbound access method](../../virtual-network/ip-services/default-outbound-access.md), such as central firewall or outbound public IPs. Some SAP BTP services, such as SAP Data Intelligence, however, are by design accessed through a [separate virtual network peering](https://help.sap.com/docs/SAP_DATA_INTELLIGENCE/ca509b7635484070a655738be408da63/a7d98ac925e443ea9d4a716a91e0a604.html) instead of a public endpoint.
-SAP offers [Private Link Service](https://blogs.sap.com/2022/06/22/sap-private-link-service-on-azure-is-now-generally-available-ga/) for customers using SAP BTP on Azure. The SAP Private Link Service connects SAP BTP services through a private IP range into the customer's Azure network, making them accessible privately through the private link service instead of through the Internet. Contact SAP for availability of this service for SAP RISE/ECS workloads.
+SAP offers [Private Link Service](https://help.sap.com/docs/private-link/private-link1/what-is-sap-private-link-service) for customers using SAP BTP on Azure for unidirectional requests originating from BTP. The SAP Private Link Service connects SAP BTP services through a private IP range into the customer's Azure network, making them accessible privately through the private link service instead of through the Internet. Contact SAP for availability of this service for SAP RISE/ECS workloads. Learn more about the SAP Private Link support for RISE [here](https://community.sap.com/t5/technology-blogs-by-sap/quot-sap-private-link-and-azure-private-link-quot-in-the-context-of-sap/ba-p/13719685).
See [SAP's documentation](https://help.sap.com/docs/private-link/private-link1/consume-azure-services-in-sap-btp) and a series of blog posts on the architecture of the SAP BTP Private Link Service and private connectivity methods, dealing with DNS and certificates in following SAP blog series [Getting Started with BTP Private Link Service for Azure](https://blogs.sap.com/2021/12/29/getting-started-with-btp-private-link-service-for-azure/).
sentinel Connect Logstash Data Connection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash-data-connection-rules.md
The Logstash engine is composed of three components:
- Output plugins: Customized sending of collected and processed data to various destinations.

> [!NOTE]
-> - Microsoft supports only the Microsoft Sentinel-provided Logstash output plugin discussed here. The current plugin is named **[microsoft-sentinel-log-analytics-logstash-output-plugin](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-sentinel-log-analytics-logstash-output-plugin)**, v1.1.0. You can [open a support ticket](https://portal.azure.com/#create/Microsoft.Support) for any issues regarding the output plugin.
+> - Microsoft supports only the Microsoft Sentinel-provided Logstash output plugin discussed here. The current plugin is named **[microsoft-sentinel-log-analytics-logstash-output-plugin](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-sentinel-log-analytics-logstash-output-plugin)**, v1.1.3. You can [open a support ticket](https://portal.azure.com/#create/Microsoft.Support) for any issues regarding the output plugin.
>
> - Microsoft does not support third-party Logstash output plugins for Microsoft Sentinel, or any other Logstash plugin or component of any type.
>
The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to y
- Install a supported version of Logstash. The plugin supports the following Logstash versions:
  - 7.0 - 7.17.13
  - 8.0 - 8.9
- - 8.11 - 8.13
+ - 8.11 - 8.15
> [!NOTE]
> If you use Logstash 8, we recommend that you [disable ECS in the pipeline](https://www.elastic.co/guide/en/logstash/8.4/ecs-ls.html).
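One way to do that is in `logstash.yml` (a sketch; you can also set the equivalent option per pipeline in `pipelines.yml`):

```
pipeline.ecs_compatibility: disabled
```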
The following table lists the firewall requirements for scenarios where Azure vi
| Microsoft Azure operated by 21Vianet |https://login.chinacloudapi.cn |Authorization server (the Microsoft identity platform)|Port 443 |Outbound|Yes |
| Microsoft Azure operated by 21Vianet |Replace '.com' above with '.cn' | Data collection Endpoint|Port 443 |Outbound|Yes |
+## Plugin versions
+#### 1.1.3
+- Replaces the `rest-client` library used for connecting to Azure with the `excon` library.
+
+#### 1.1.1
+- Adds support for Azure US Government cloud and Microsoft Azure operated by 21Vianet in China.
+
+#### 1.1.0
+- Allows setting different proxy values for API connections.
+- Upgrades version for logs ingestion API to 2023-01-01.
+- Renames the plugin to microsoft-sentinel-log-analytics-logstash-output-plugin.
+
+#### 1.0.0
+- The initial release for the Logstash output plugin for Microsoft Sentinel. This plugin uses Data Collection Rules (DCRs) with Azure Monitor's Logs Ingestion API.
+## Known issues
+
+When using Logstash installed on a lite Ubuntu Docker image, the following warning may appear:
+
+```
+java.lang.RuntimeException: getprotobyname_r failed
+```
+
+To resolve it, use the following commands to install the *netbase* package within your Dockerfile:
+```dockerfile
+USER root
+RUN apt install netbase -y
+```
+For more information, see [JNR regression in Logstash 7.17.0 (Docker)](https://github.com/elastic/logstash/issues/13703).
+
+If your environment's event rate is low considering the number of allocated Logstash workers, we recommend increasing the value of *plugin_flush_interval* to 60 or more. This change will allow each worker to batch more events before uploading to the Data Collection Endpoint (DCE). You can monitor the ingestion payload using [DCR metrics](/azure/azure-monitor/essentials/data-collection-monitor#dcr-metrics).
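For illustration, the relevant part of the output stanza might look like the following sketch; the required connection and authentication settings (DCE, DCR, and app registration details) are omitted here:

```
output {
    microsoft-sentinel-log-analytics-logstash-output-plugin {
        # ...required connection and authentication settings omitted...
        plugin_flush_interval => 60
    }
}
```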
+For more information on *plugin_flush_interval*, see the [Optional Configuration table](#optional-configuration) mentioned earlier.
## Limitations

- Ingestion into standard tables is limited only to [standard tables supported for custom logs ingestion](data-transformation.md#data-transformation-support-for-custom-data-connectors).
The following table lists the firewall requirements for scenarios where Azure vi
In this article, you learned how to use Logstash to connect external data sources to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles:

- Learn how to [get visibility into your data and potential threats](get-visibility.md).
-- Get started [detecting threats with Microsoft Sentinel](detect-threats-built-in.md).
+- Get started [detecting threats with Microsoft Sentinel](threat-detection.md).
sentinel Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/overview.md
Use Microsoft Sentinel to alleviate the stress of increasingly sophisticated att
[!INCLUDE [unified-soc-preview](includes/unified-soc-preview.md)]
-Microsoft Sentinel inherits the Azure Monitor [tamper-proofing and immutability](/azure/azure-monitor/logs/data-security#tamper-proofing-and-immutability) practices. While Azure Monitor is an append-only data platform, it includes provisions to delete data for compliance purposes
+Microsoft Sentinel inherits the Azure Monitor [tamper-proofing and immutability](/azure/azure-monitor/logs/data-security#tamper-proofing-and-immutability) practices. While Azure Monitor is an append-only data platform, it includes provisions to delete data for compliance purposes.
[!INCLUDE [azure-lighthouse-supported-service](../../includes/azure-lighthouse-supported-service-no-note.md)]
service-bus-messaging Configure Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/configure-customer-managed-key.md
There are some caveats to the customer managed key for service side encryption.
You can use Azure Key Vault (including Azure Key Vault Managed HSM) to manage your keys and audit your key usage. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. For more information about Azure Key Vault, see [What is Azure Key Vault?](/azure/key-vault/general/overview)
+If you only need to encrypt certain properties of your messages, consider using a library like [NServiceBus](https://docs.particular.net/nservicebus/security/property-encryption) for that.
+
## Enable customer-managed keys (Azure portal)

To enable customer-managed keys in the Azure portal, follow these steps:
service-bus-messaging Service Bus Integrate With Rabbitmq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-integrate-with-rabbitmq.md
Once the policy has been created, click on it to see the **Primary Connection Str
:::image type="content" source="./media/service-bus-integrate-with-rabbitmq/sas-policy-key.png" alt-text="Get SAS Policy":::
-Before you can use that connection string, you'll need to convert it to RabbitMQ's AMQP connection format. So go to the [connection string converter tool](https://amqpconnconverter.github.io/) and paste your connection string in the form, click convert. You'll get a connection string that's RabbitMQ ready. (That website runs everything local in your browser so your data isn't sent over the wire). You can access its source code on [GitHub](https://github.com/amqpconnconverter/amqpconnconverter.github.io).
-
+There, select the **Show AMQP connection string** checkbox to get the connection string in the AMQP format expected by RabbitMQ Shovel. You'll use it in the next step.
Now open the RabbitMQ management plugin in your browser at `http://localhost:15672/#/dynamic-shovels` and go to `Admin -> Shovel Management`, where you can add your new shovel that will take care of sending messages from a RabbitMQ queue to your Azure Service Bus queue.
Here, call your Shovel `azure` and choose `AMQP 0.9.1` as the source protocol. In
On the queue side of things, you can use `azure` as the name of your queue. If that queue doesn't exist, RabbitMQ will create it for you. You can also choose the name of a queue that exists already. You can leave the other options as default.
-Then on the `destination` side of things, choose `AMQP 1.0` as the protocol. In the `URI` field, enter the connecting string that you got from the previous step, were you converted your Azure connection string to the RabbitMQ format. It should look like this:
+Then on the `destination` side of things, choose `AMQP 1.0` as the protocol. In the `URI` field, enter the connection string that you got from the previous step. It should look like this:
```
amqps://rabbitmq-shovel:StringOfRandomChars@rabbitmq.servicebus.windows.net:5671/?sasl=plain
```
Congrats! You achieved a lot! You managed to get your messages from RabbitMQ to
3. Add a SAS Policy to your queue
4. Get the queue connection string
5. Enable the RabbitMQ shovel plugin & the management interface
-6. Convert the Azure Service Bus connection string to RabbitMQ's AMQP format
+6. Obtain the Azure Service Bus connection string, converted into RabbitMQ's AMQP format, from the Azure portal
7. Add a new Shovel to RabbitMQ & connect it to Azure Service Bus
8. Publish messages
spring-apps How To Config Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-config-server.md
Use the following steps to enable Spring Cloud Config Server:
1. On the **Basics** tab, select **Enterprise tier** in the **Pricing** section and specify the required information. Then, select **Next: Managed components**.
-1. On the **Managed components** tab, select **Enable Spring Cloud Config Server (preview)**.
+1. On the **Managed components** tab, select **Enable Spring Cloud Config Server**.
:::image type="content" source="media/how-to-config-server/create-instance.png" alt-text="Screenshot of the Azure portal that shows the VMware Tanzu settings tab with the Enable Spring Cloud Config Server checkbox highlighted." lightbox="media/how-to-config-server/create-instance.png":::

1. Specify other settings, and then select **Review and Create**.
-1. On the **Review an create** tab, make sure that **Enable Spring Cloud Config Server (preview)** is set to **Yes**. Select **Create** to create the Enterprise plan instance.
+1. On the **Review and create** tab, make sure that **Enable Spring Cloud Config Server** is set to **Yes**. Select **Create** to create the Enterprise plan instance.
### [Azure CLI](#tab/Azure-CLI)
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The following table describes whether a feature is supported in a standard gener
| [Blobfuse](storage-how-to-mount-container-linux.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
-| [Customer-managed planned failover (preview)](../common/storage-disaster-recovery-guidance.md#customer-managed-planned-failover-preview) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x1F7E6; |
-| [Customer-managed (unplanned) failover](../common/storage-disaster-recovery-guidance.md#customer-managed-unplanned-failover) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &nbsp;&#x1F7E6; |
+| [Customer-managed planned failover (preview)](../common/storage-disaster-recovery-guidance.md#customer-managed-planned-failover-preview) | &#x1F7E6; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Customer-managed (unplanned) failover](../common/storage-disaster-recovery-guidance.md#customer-managed-unplanned-failover) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
| [Customer-managed keys with key vault in the same tenant](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
| [Customer-managed keys with key vault in a different tenant (cross-tenant)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Customer-provided keys](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
storage Container Storage Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-faq.md
**Does Azure Container Storage use the capacity from Ephemeral OS disks for ephemeral disk storage pool?**

No, Azure Container Storage only discovers and uses the capacity from ephemeral data disks for ephemeral disk storage pool.
+* <a id="azure-container-storage-installation"></a>
+ **I encountered installation issues due to Azure Policy. What is the recommended approach?**
+
+ If you're experiencing installation issues with Azure Container Storage in your AKS cluster, it might be due to Azure Policy restrictions. To resolve this,
+ you'll need to add the `acstor` namespace to the exclusion list of your Azure Policy. Azure Policy is used to create and enforce rules for managing resources
+ within Azure, including AKS clusters. In some cases, policies might block the creation of Azure Container Storage pods and components. You can find more details
+ on working with Azure Policy for Kubernetes by consulting [Azure Policy for Kubernetes](/azure/governance/policy/concepts/policy-for-kubernetes).
+ To resolve this, follow these steps:
+ - [Create your Azure Kubernetes cluster](install-container-storage-aks.md)
+ - Enable Azure Policy for AKS
+ - Create a policy that you suspect is blocking the installation of Azure Container Storage
+ - Attempt to install Azure Container Storage in the AKS cluster
+ - Check the logs for the gatekeeper-controller pod to confirm any policy violations (see the example command after this list)
+ - Add the `acstor` namespace to the exclusion list of the policy
+ - Attempt to install Azure Container Storage in the AKS cluster again
+
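For the log check in the steps above, a typical command looks like the following sketch; the namespace and label assume the standard Azure Policy add-on deployment of Gatekeeper:

```
kubectl logs -n gatekeeper-system -l control-plane=controller-manager --tail=200 | grep -i acstor
```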
## See also

- [What is Azure Container Storage?](container-storage-introduction.md)
virtual-desktop Autoscale Create Assign Scaling Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/autoscale-create-assign-scaling-plan.md
Title: Create and assign an autoscale scaling plan for Azure Virtual Desktop
description: How to create and assign an autoscale scaling plan to optimize deployment costs.
Previously updated : 04/18/2024
Last updated : 10/07/2024
Autoscale lets you scale your session host virtual machines (VMs) in a host pool
To learn more about autoscale, see [Autoscale scaling plans and example scenarios in Azure Virtual Desktop](autoscale-scenarios.md).
->[!NOTE]
+> [!NOTE]
> - Azure Virtual Desktop (classic) doesn't support autoscale.
> - You can't use autoscale and [scale session hosts using Azure Automation and Azure Logic Apps](scaling-automation-logic-apps.md) on the same host pool. You must use one or the other.
> - Autoscale is available in Azure and Azure Government.
-> - Autoscale support for Azure Stack HCI with Azure Virtual Desktop is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
For best results, we recommend using autoscale with VMs you deployed with Azure Virtual Desktop Azure Resource Manager templates or first-party tools from Microsoft.
To learn how to assign the *Desktop Virtualization Power On Off Contributor* rol
## Create a scaling plan
-### [Portal](#tab/portal)
+### [Azure portal](#tab/portal)
Now that you've assigned the *Desktop Virtualization Power On Off Contributor* role to the service principal on your subscriptions, you can create a scaling plan. To create a scaling plan using the portal:
Now that you've assigned the *Desktop Virtualization Power On Off Contributor* r
- For **Load balancing**, you can select either breadth-first or depth-first load balancing. Breadth-first load balancing distributes new user sessions across all available session hosts in the host pool. Depth-first load balancing distributes new sessions to any available session host with the highest number of connections that hasn't reached its session limit yet. For more information about load-balancing types, see [Configure the Azure Virtual Desktop load-balancing method](configure-host-pool-load-balancing.md).
- > [!NOTE]
- > You can't change the capacity threshold here. Instead, the setting you entered in **Ramp-up** will carry over to this setting.
+ > [!NOTE]
+ > You can't change the capacity threshold here. Instead, the setting you entered in **Ramp-up** will carry over to this setting.
- For **Ramp-down**, you'll enter values into similar fields to **Ramp-up**, but this time it will be for when your host pool usage drops off. This will include the following fields:
Now that you've assigned the *Desktop Virtualization Power On Off Contributor* r
  - Capacity threshold (%)
  - Force logoff users
- > [!IMPORTANT]
- > - If you've enabled autoscale to force users to sign out during ramp-down, the feature will choose the session host with the lowest number of user sessions (active and disconnected) to shut down. Autoscale will put the session host in drain mode, send those user sessions a notification telling them they'll be signed out, and then sign out those users after the specified wait time is over. After autoscale signs out those user sessions, it then deallocates the VM.
- >
- > - If you haven't enabled forced sign out during ramp-down, you then need to choose whether you want to shut down ΓÇÿVMs have no active or disconnected sessionsΓÇÖ or ΓÇÿVMs have no active sessionsΓÇÖ during ramp-down.
- >
- > - Whether youΓÇÖve enabled autoscale to force users to sign out during ramp-down or not, the [capacity threshold](autoscale-glossary.md#capacity-threshold) and the [minimum percentage of hosts](autoscale-glossary.md#minimum-percentage-of-hosts) are still respected, autoscale will only shut down VMs if all existing user sessions (active and disconnected) in the host pool can be consolidated to fewer VMs without exceeding the capacity threshold.
- >
- > - You can also configure a time limit policy that will apply to all phases to sign out all disconnected users to reduce the [used host pool capacity](autoscale-glossary.md#used-host-pool-capacity). For more information, see [Configure a time limit policy](#configure-a-time-limit-policy).
+ > [!IMPORTANT]
+ > - If you've enabled autoscale to force users to sign out during ramp-down, the feature will choose the session host with the lowest number of user sessions (active and disconnected) to shut down. Autoscale will put the session host in drain mode, send those user sessions a notification telling them they'll be signed out, and then sign out those users after the specified wait time is over. After autoscale signs out those user sessions, it then deallocates the VM.
+ >
+ > - If you haven't enabled forced sign out during ramp-down, you then need to choose whether you want to shut down 'VMs have no active or disconnected sessions' or 'VMs have no active sessions' during ramp-down.
+ >
+ > - Whether you've enabled autoscale to force users to sign out during ramp-down or not, the [capacity threshold](autoscale-glossary.md#capacity-threshold) and the [minimum percentage of hosts](autoscale-glossary.md#minimum-percentage-of-hosts) are still respected. Autoscale only shuts down VMs if all existing user sessions (active and disconnected) in the host pool can be consolidated to fewer VMs without exceeding the capacity threshold.
+ >
+ > - You can also configure a time limit policy that will apply to all phases to sign out all disconnected users to reduce the [used host pool capacity](autoscale-glossary.md#used-host-pool-capacity). For more information, see [Configure a time limit policy](#configure-a-time-limit-policy).
- Likewise, **Off-peak hours** works the same way as **Peak hours**:
- For **VMs to start**, select whether you want only personal desktops that have a user assigned to them at the start time to be started, you want all personal desktops in the host pool (regardless of user assignment) to be started, or you want no personal desktops in the pool to be started.
- > [!NOTE]
- > We highly recommend that you enable Start VM on Connect if you choose not to start your VMs during the ramp-up phase.
+ > [!NOTE]
+ > We highly recommend that you enable Start VM on Connect if you choose not to start your VMs during the ramp-up phase.
- For **When disconnected for**, specify the number of minutes a user session has to be disconnected before performing a specific action. This number can be anywhere between 0 and 360.
1. Once you're done, go to the **Review + create** tab and select **Create** to create and assign your scaling plan to the host pools you selected.
-### [PowerShell](#tab/powershell)
+### [Azure PowerShell](#tab/powershell)
Here's how to create a scaling plan using the Az.DesktopVirtualization PowerShell module. The following examples show you how to create a scaling plan and scaling plan schedule. Be sure to replace the `<placeholder>` values with your own.
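A minimal sketch of the first of those steps follows, assuming placeholder resource names and a pooled host pool (the schedule and host pool assignments are configured separately):

```azurepowershell
# Create a scaling plan for pooled host pools.
# Placeholder values (<...>) are illustrative; replace them with your own.
$scalingPlanParams = @{
    ResourceGroupName = '<resourceGroupName>'
    Name              = '<scalingPlanName>'
    Location          = '<azureRegion>'
    HostPoolType      = 'Pooled'
    TimeZone          = '<timeZone>'   # for example, 'Pacific Standard Time'
}
$scalingPlan = New-AzWvdScalingPlan @scalingPlanParams
```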
To configure a time limit policy using Group Policy:
Select the relevant tab for your scenario.
-### [Portal](#tab/portal)
+### [Azure portal](#tab/portal)
To edit an existing scaling plan using the Azure portal:
1. To edit the plan's friendly name, description, time zone, or exclusion tags, go to the **Properties** tab.
-### [PowerShell](#tab/powershell)
+### [Azure PowerShell](#tab/powershell)
Here's how to update a scaling plan using the Az.DesktopVirtualization PowerShell module. The following examples show you how to update a scaling plan and scaling plan schedule.
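For example, a minimal sketch that changes an existing plan's time zone, assuming placeholder names:

```azurepowershell
# Update a property on an existing scaling plan.
# Placeholder values (<...>) are illustrative; replace them with your own.
Update-AzWvdScalingPlan `
    -ResourceGroupName '<resourceGroupName>' `
    -Name '<scalingPlanName>' `
    -TimeZone '<timeZone>'
```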
You can assign a scaling plan to any existing host pools of the same type in you
If you disable a scaling plan, all assigned resources will remain in the state they were in at the time you disabled it.
-### [Portal](#tab/portal)
+### [Azure portal](#tab/portal)
To assign a scaling plan to existing host pools:
> [!div class="mx-imgBorder"]
> ![A screenshot of the scaling plan window. The "enable autoscale" check box is selected and highlighted with a red border.](media/enable-autoscale.png)
-### [PowerShell](#tab/powershell)
+### [Azure PowerShell](#tab/powershell)
1. Assign a scaling plan to existing host pools using [Update-AzWvdScalingPlan](/powershell/module/az.desktopvirtualization/update-azwvdscalingplan). The following example assigns a personal scaling plan to two existing personal host pools.
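A sketch of that assignment, with hypothetical subscription and host pool names standing in for the elided example:

```azurepowershell
# Assign the scaling plan to two personal host pools and enable autoscale on both.
# The ARM paths below are placeholders; replace them with your own.
$scalingPlanParams = @{
    ResourceGroupName = '<resourceGroupName>'
    Name              = '<scalingPlanName>'
    HostPoolReference = @(
        @{
            'HostPoolArmPath'    = '/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DesktopVirtualization/hostPools/<hostPool1>'
            'ScalingPlanEnabled' = $true
        },
        @{
            'HostPoolArmPath'    = '/subscriptions/<subId>/resourceGroups/<rgName>/providers/Microsoft.DesktopVirtualization/hostPools/<hostPool2>'
            'ScalingPlanEnabled' = $true
        }
    )
}
Update-AzWvdScalingPlan @scalingPlanParams
```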
virtual-desktop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/overview.md
Azure Virtual Desktop is a desktop and app virtualization service that runs on A
## Introductory video
-Learn about Azure Virtual Desktop (formerly Windows Virtual Desktop), why it's unique, and what's new in this video:
+Learn about Azure Virtual Desktop (formerly Windows Virtual Desktop), why it's unique, and what's new in this video:<br /><br />
> [!VIDEO https://www.youtube.com/embed/aPEibGMvxZw]
virtual-desktop Start Virtual Machine Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/start-virtual-machine-connect.md
description: Learn how to enable or disable Start VM on Connect for Azure Virtua
Previously updated : 06/04/2024 Last updated : 10/07/2024 # Configure Start VM on Connect
-> [!IMPORTANT]
-> Start VM on Connect for Azure Stack HCI with Azure Virtual Desktop is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
- Start VM on Connect lets you reduce costs by enabling end users to power on the virtual machines (VMs) used as session hosts only when they're needed. You can then power off VMs when they're not needed. For personal host pools, Start VM on Connect only powers on an existing session host VM that is already assigned or can be assigned to a user. For pooled host pools, Start VM on Connect only powers on a session host VM when none are turned on; more VMs are turned on only when the first VM reaches the session limit.
virtual-network Create Custom Ip Address Prefix Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-ipv6-portal.md
Title: Create a custom IPv6 address prefix
+ Title: Create a custom IPv6 address prefix in Azure
-description: Learn how to onboard a custom IPv6 address prefix using the Azure portal, Azure CLI, or PowerShell.
+description: Learn how to onboard a custom IPv6 address prefix using the Azure portal, Azure CLI, or Azure PowerShell.
Previously updated : 08/24/2023 Last updated : 08/06/2024
-# Create a custom IPv6 address prefix
+# Create a custom IPv6 address prefix in Azure
-A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate it to your Azure subscription. The range would continue to be owned by you, though Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+In this article, you learn how to create a custom IPv6 address prefix. You prepare a range to provision, provision the range for IP allocation, and enable the range to be advertised by Microsoft.
-The steps in this article detail the process to:
+A custom IPv6 address prefix enables you to bring your own IPv6 ranges to Microsoft and associate them with your Azure subscription. You continue to own the range, though Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
-* Prepare a range to provision
-
-* Provision the range for IP allocation
-
-* Enable the IPv6 range to be advertised by Microsoft
-
-For this article, choose between the Azure portal, Azure CLI, or PowerShell to create a custom IPv6 address prefix.
+For this article, choose between the Azure portal, Azure CLI, or Azure PowerShell to create a custom IPv6 address prefix.
## Differences between using BYOIPv4 and BYOIPv6
For this article, choose between the Azure portal, Azure CLI, or PowerShell to c
- Ensure your Az.Network module is 5.1.1 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`.
- A customer owned IPv6 range to provision in Azure. A sample customer range (2a05:f500:2::/48) is used for this example, but wouldn't be validated by Azure; you need to replace the example range with yours.
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
The following flow creates a custom IP prefix in the specified region and resour
Sign in to the [Azure portal](https://portal.azure.com).
-### Create and provision a custom IP address prefix
+### Create and provision a custom IPv6 address prefix
1. In the search box at the top of the portal, enter **Custom IP**.
Sign in to the [Azure portal](https://portal.azure.com).
| Global IPv6 Prefix (CIDR) | Enter **2a05:f500:2::/48**. |
| ROA expiration date | Enter your ROA expiration date in the **yyyymmdd** format. |
| Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. |
- | Availability Zones | Select **Zone-redundant**. |
:::image type="content" source="./media/create-custom-ip-address-prefix-ipv6/create-custom-ipv6-prefix.png" alt-text="Screenshot of create custom IP prefix page in Azure portal.":::
The range is pushed to the Azure IP Deployment Pipeline. The deployment process
### Provision a regional custom IPv6 address prefix
-After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes will be advertised locally from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.)
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes advertise from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required (but availability zones can be utilized).
In the same **Create a custom IP prefix** page as before, enter or select the following information:
In the same **Create a custom IP prefix** page as before, enter or select the fo
| - | -- |
| **Project details** |  |
| Subscription | Select your subscription |
-| Resource group | Select **Create new**. </br> Enter **myResourceGroup**. </br> Select **OK**. |
+| Resource group | Select **Create new**.</br> Enter **myResourceGroup**.</br> Select **OK**. |
| **Instance details** |  |
| Name | Enter **myCustomIPv6RegionalPrefix**. |
| Region | Select **West US 2**. |
In the same **Create a custom IP prefix** page as before, enter or select the fo
| IP prefix range | Select Regional. |
| Custom IP prefix parent | Select myCustomIPv6GlobalPrefix (2a05:f500:2::/48) from the drop-down menu. |
| Regional IPv6 Prefix (CIDR) | Enter **2a05:f500:2:1::/64**. |
-| ROA expiration date | Enter your ROA expiration date in the **yyyymmdd** format. |
+| ROA expiration date | Enter your ROA (Route Origin Authorization) expiration date in the **yyyymmdd** format. |
| Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. |
| Availability Zones | Select **Zone-redundant**. |
The following command creates a custom IP prefix in the specified region and res
### Provision a regional custom IPv6 address prefix
-After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The *children* custom IP prefixes are advertised locally from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges are advertised from a specific region, zones can be utilized.)
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes advertise from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required (but availability zones can be utilized).
```azurecli-interactive
az network custom-ip prefix create \
It's possible to commission the global custom IPv6 prefix prior to the regional
> [!IMPORTANT]
> As the global custom IPv6 prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time, for example from a customer's on-premises network, could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact.
-
# [Azure PowerShell](#tab/azurepowershell/)

### Create a resource group and specify the prefix and authorization messages
$myCustomIPv6GlobalPrefix = New-AzCustomIPPrefix @prefix
### Provision a regional custom IPv6 address prefix
-After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes will be advertised locally from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required. (Because these ranges will be advertised from a specific region, zones can be utilized.)
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must always be of size /64 to be considered valid. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes advertise from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required (but availability zones can be utilized).
```azurepowershell-interactive
$prefix =@{
virtual-network Create Custom Ip Address Prefix Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-portal.md
Title: Create a custom IPv4 address prefix - Azure portal
+ Title: Create a custom IPv4 address prefix in Azure
-description: Learn how to onboard a custom IP address prefix using the Azure portal
+description: Learn how to onboard and create a custom IP address prefix using the Azure portal, Azure CLI, or Azure PowerShell.
-mai Last updated 07/25/2024
Last updated : 08/08/2024+
-# Create a custom IPv4 address prefix using the Azure portal
+# Create a custom IPv4 address prefix in Azure
-A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate it to your Azure subscription. You maintain ownership of the range while Microsoft would be permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
+In this article, you learn how to create a custom IPv4 address prefix. You prepare a range to provision, provision the range for IP allocation, and enable the range to be advertised by Microsoft.
-The steps in this article detail the process to:
+A custom IPv4 address prefix enables you to bring your own IPv4 ranges to Microsoft and associate them with your Azure subscription. You maintain ownership of the range while Microsoft is permitted to advertise it to the Internet. A custom IP address prefix functions as a regional resource that represents a contiguous block of customer owned IP addresses.
-* Prepare a range to provision
+For this article, choose between the Azure portal, Azure CLI, or Azure PowerShell to create a custom IPv4 address prefix.
-* Provision the range for IP allocation
-
-* Enable the range to be advertised by Microsoft
## Prerequisites
The steps in this article detail the process to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- A customer owned IPv4 range to provision in Azure.
- - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+   - A sample customer range (1.2.3.0/24) is used for this example. This range isn't validated in Azure, so replace the example range with yours.
> [!NOTE]
> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
The steps in this article detail the process to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- This tutorial requires version 2.28 or later of the Azure CLI (you can run `az version` to determine which you have). If using Azure Cloud Shell, the latest version is already installed.
-- Sign in to Azure CLI and ensure you've selected the subscription with which you want to use this feature using `az account`.
+- Sign in to Azure CLI and select the subscription you want to use with `az account`.
- A customer owned IPv4 range to provision in Azure.
- - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+   - A sample customer range (1.2.3.0/24) is used for this example. This range isn't validated in Azure, so replace the example range with yours.
> [!NOTE]
> For problems encountered during the provisioning process, please see [Troubleshooting for custom IP prefix](manage-custom-ip-address-prefix.md#troubleshooting-and-faqs).
The steps in this article detail the process to:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- Azure PowerShell installed locally or Azure Cloud Shell.
-- Sign in to Azure PowerShell and ensure you've selected the subscription with which you want to use this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
+- Sign in to Azure PowerShell and select the subscription to use with this feature. For more information, see [Sign in with Azure PowerShell](/powershell/azure/authenticate-azureps).
- Ensure your `Az.Network` module is 5.1.1 or later. To verify the installed module, use the command `Get-InstalledModule -Name "Az.Network"`. If the module requires an update, use the command `Update-Module -Name "Az.Network"`.
- A customer owned IPv4 range to provision in Azure.
- - A sample customer range (1.2.3.0/24) is used for this example. This range won't be validated by Azure. Replace the example range with yours.
+   - A sample customer range (1.2.3.0/24) is used for this example. This range isn't validated in Azure, so replace the example range with yours.
If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
If you choose to install and use PowerShell locally, this article requires the A
[!INCLUDE [ip-services-pre-provisioning-steps](../../../includes/ip-services-pre-provisioning-steps.md)]
-## Provisioning steps
+
+## Provision and commission a custom IPv4 prefix
+
+The following steps describe how to provision and commission a custom IPv4 address prefix using one of two models: Unified or Global/Regional. You can perform the steps with the Azure portal, Azure CLI, or Azure PowerShell.
# [Azure portal](#tab/azureportal)
+Use the Azure portal to provision and commission a custom IPv4 address prefix.
+
+# [Azure CLI](#tab/azurecli)
+
+Use the Azure CLI to provision and commission a custom IPv4 address prefix.
+
+# [Azure PowerShell](#tab/azurepowershell/)
+
+Use Azure PowerShell to provision and commission a custom IPv4 address prefix.
+++
+# [Unified model](#tab/unified/azureportal)
+ The following steps display the procedure for provisioning a sample customer range (1.2.3.0/24) to the US West 2 region.

> [!NOTE]
The following steps display the procedure for provisioning a sample customer ran
Sign in to the [Azure portal](https://portal.azure.com).
-## Create and provision a custom IP address prefix
+## Create and provision a unified custom IP address prefix
1. In the search box at the top of the portal, enter **Custom IP**.
The range is pushed to the Azure IP Deployment Pipeline. The deployment process
> [!IMPORTANT]
> After the custom IP prefix is in a "Provisioned" state, a child public IP prefix can be created. These public IP prefixes and any public IP addresses can be attached to networking resources. For example, virtual machine network interfaces or load balancer front ends. The IPs won't be advertised and therefore won't be reachable. For more information on a migration of an active prefix, see [Manage a custom IP prefix](manage-custom-ip-address-prefix.md).
-## Create a public IP prefix from custom IP prefix
+## Create a public IP prefix from unified custom IP prefix
When you create a prefix, you must create static IP addresses from the prefix. In this section, you create a static IP address from the prefix you created earlier.
When you create a prefix, you must create static IP addresses from the prefix. I
6. Select **Review + create**, and then **Create** on the following page.
-10. Repeat steps 1-5 to return to the **Overview** page for **myCustomIPPrefix**. You see **myPublicIPPrefix** listed under the **Associated public IP prefixes** section. You can now allocate standard SKU public IP addresses from this prefix. For more information, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix).
+7. Repeat steps 1-3 to return to the **Overview** page for **myCustomIPPrefix**. You see **myPublicIPPrefix** listed under the **Associated public IP prefixes** section. You can now allocate standard SKU public IP addresses from this prefix. For more information, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix).
-## Commission the custom IP address prefix
+## Commission the unified custom IP address prefix
When the custom IP prefix is in **Provisioned** state, update the prefix to begin the process of advertising the range from Azure.
When the custom IP prefix is in **Provisioned** state, update the prefix to begi
4. In **Overview** of **myCustomIPPrefix**, select the **Commission** dropdown menu and choose **Globally**.
-The operation is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix. Initially, the status will show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't binary and the range will be partially advertised while still in the **Commissioning** status.
+The operation is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix. Initially, the status will show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't completed all at once. The range is partially advertised while still in the **Commissioning** status.
+
+> [!NOTE]
+> The estimated time to fully complete the commissioning process is 3-4 hours.
+
+> [!IMPORTANT]
+> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time, for example from a customer's on-premises network, could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact. To prevent these issues during initial deployment, you can choose the regional-only commissioning option, where your custom IP prefix is advertised only within the Azure region it's deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
+
+# [Global/Regional model](#tab/globalregional/azureportal)
+
+The following steps display the modified steps for provisioning a sample global (parent) IP range (1.2.3.0/24) and regional (child) IP ranges to the US West 2 and US East 2 regions.
+
+> [!NOTE]
+> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-custom-ip-address-prefix.md).
+
+### Provision a global custom IP address prefix
+
+Sign in to the [Azure portal](https://portal.azure.com).
+
+## Create and provision a global custom IP address prefix
+
+1. In the search box at the top of the portal, enter **Custom IP**.
+
+2. In the search results, select **Custom IP Prefixes**.
+
+3. Select **+ Create**.
+
+4. In **Create a custom IP prefix**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription |
+ | Resource group | Select **Create new**.</br> Enter **myResourceGroup**.</br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myCustomIPGlobalPrefix**. |
+ | Region | Select **West US 2**. |
+ | IP Version | Select IPv4. |
+ | IP prefix range | Select Global. |
+ | Global IPv4 Prefix (CIDR) | Enter **1.2.3.0/24**. |
+ | ROA expiration date | Enter your ROA expiration date in the **yyyymmdd** format. |
+ | Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. |
+
+5. Select the **Review + create** tab or the blue **Review + create** button at the bottom of the page.
+
+6. Select **Create**.
+The range is pushed to the Azure IP Deployment Pipeline. The deployment process is asynchronous. You can check the status by reviewing the **Commissioned state** field for the custom IP prefix.
+
+> [!NOTE]
+> The estimated time to complete the provisioning process is 30 minutes.
+
+### Provision regional custom IP address prefixes
+
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must be smaller than the global (parent) range to be considered valid; a regional (child) IPv4 prefix can be between /22 and /26. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes advertise from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required (but availability zones can be utilized).
+
+In the same **Create a custom IP prefix** page as before, enter or select the following information:
+
+| Setting | Value |
+| - | -- |
+| **Project details** | |
+| Subscription | Select your subscription |
+| Resource group | Select **Create new**.</br> Enter **myResourceGroup**.</br> Select **OK**. |
+| **Instance details** | |
+| Name | Enter **myCustomIPRegionalPrefix1**. |
+| Region | Select **West US 2**. |
+| IP Version | Select IPv4. |
+| IP prefix range | Select Regional. |
+| Custom IP prefix parent | Select myCustomIPGlobalPrefix (1.2.3.0/24) from the drop-down menu. |
+| Regional IPv4 Prefix (CIDR) | Enter **1.2.3.0/25**. |
+| ROA expiration date | Enter your ROA expiration date in the **yyyymmdd** format. |
+| Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. |
+| Availability Zones | Select **Zone-redundant**. |
+
+After creation, go through the flow a second time for another regional prefix in a new region.
+
+| Setting | Value |
+| - | -- |
+| **Project details** | |
+| Subscription | Select your subscription |
+| Resource group | Select **Create new**.</br> Enter **myResourceGroup**.</br> Select **OK**. |
+| **Instance details** | |
+| Name | Enter **myCustomIPRegionalPrefix2**. |
+| Region | Select **East US 2**. |
+| IP Version | Select IPv4. |
+| IP prefix range | Select Regional. |
+| Custom IP prefix parent | Select myCustomIPGlobalPrefix (1.2.3.0/24) from the drop-down menu. |
+| Regional IPv4 Prefix (CIDR) | Enter **1.2.3.128/25**. |
+| ROA expiration date | Enter your ROA expiration date in the **yyyymmdd** format. |
+| Signed message | Paste in the output of **$byoipauthsigned** from the pre-provisioning section. |
+| Availability Zones | Select **Zone 3**. |
+
+> [!IMPORTANT]
+> After the regional custom IP prefix is in a "Provisioned" state, a child public IP prefix can be created. These public IP prefixes and any public IP addresses can be attached to networking resources. For example, virtual machine network interfaces or load balancer front ends. The IPs won't be advertised and therefore won't be reachable. For more information on a migration of an active prefix, see [Manage a custom IP prefix](manage-custom-ip-address-prefix.md).
+
+## Create a public IP prefix from regional custom IP prefix
+
+When you create a prefix, you must create static IP addresses from the prefix. In this section, you create a static IP address from the prefix you created earlier.
+
+1. In the search box at the top of the portal, enter **Custom IP**.
+
+2. In the search results, select **Custom IP Prefixes**.
+
+3. In **Custom IP Prefixes**, select **myCustomIPRegionalPrefix1**.
+
+4. In **Overview** of **myCustomIPRegionalPrefix1**, select **+ Add a public IP prefix**.
+
+5. Enter or select the following information in the **Basics** tab of **Create a public IP prefix**.
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **myResourceGroup**. |
+ | **Instance details** | |
+ | Name | Enter **myPublicIPPrefix**. |
+ | Region | Select **West US 2**. The region of the public IP prefix must match the region of the regional custom IP prefix. |
+ | IP version | Select **IPv4**. |
+ | Prefix ownership | Select **Custom prefix**. |
+ | Custom IP prefix | Select **myCustomIPRegionalPrefix1**. |
+ | Prefix size | Select a prefix size. The size can be as large as the regional custom IP prefix. |
+
+6. Select **Review + create**, and then **Create** on the following page.
+
+7. Repeat steps 1-3 to return to the **Overview** page for **myCustomIPRegionalPrefix1**. You see **myPublicIPPrefix** listed under the **Associated public IP prefixes** section. You can now allocate standard SKU public IP addresses from this prefix. For more information, see [Create a static public IP address from a prefix](manage-public-ip-address-prefix.md#create-a-static-public-ip-address-from-a-prefix).
+
+### Commission the custom IP address prefixes
+
+When commissioning custom IP prefixes using this model, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IP prefix isn't connected to commissioning the global custom IP prefix.
++
+The safest strategy for range migrations is as follows:
+1. Provision all required regional custom IP prefixes in their respective regions. Create public IP prefixes and public IP addresses and attach to resources.
+2. Commission each regional custom IP prefix and test connectivity to the IPs within the region. Repeat for each regional custom IP prefix.
+3. Commission the global custom IP prefix, which advertises the larger range to the Internet. Complete this step only after verifying all regional custom IP prefixes (and derived prefixes/IPs) work as expected.
+
+To commission a custom IP prefix (regional or global) using the portal:
+
+1. In the search box at the top of the portal, enter **Custom IP** and select **Custom IP Prefixes**.
+
+2. Verify the custom IP prefix is in a **Provisioned** state.
+
+3. In **Custom IP Prefixes**, select the desired custom IP prefix.
+
+4. In **Overview** page of the custom IP prefix, select the **Commission** button near the top of the screen. If the range is global, it begins advertising from the Microsoft WAN. If the range is regional, it advertises only from the specific region.
+
+> [!NOTE]
+> The estimated time to fully complete the commissioning process for a custom IP global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IP regional prefix is 30 minutes.
+
+It's possible to commission the global custom IP prefix before the regional custom IP prefixes. Doing so advertises the global range to the Internet before the regional prefixes are ready, so it's not recommended for migrations of active ranges. You can decommission a global custom IP prefix while there are still active (commissioned) regional custom IP prefixes. Also, you can decommission a regional custom IP prefix while the global prefix is still active (commissioned).
[!INCLUDE [ip-services-provisioning-note-1](../../../includes/ip-services-provisioning-note-1.md)]
-# [Azure CLI](#tab/azurecli/)
+# [Unified model](#tab/unified/azurecli)
The following steps display the procedure for provisioning a sample customer range (1.2.3.0/24) to the US West 2 region.

> [!NOTE]
-> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-custom-ip-address-prefix.md).
+> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-public-ip-address-prefix.md).
### Create a resource group and specify the prefix and authorization messages
Create a resource group in the desired location for provisioning the BYOIP range
   --name myResourceGroup \
   --location westus2
```
-### Provision a custom IP address prefix
+### Provision a unified custom IP address prefix
The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. For the `--authorization-message` parameter, use the variable **$byoipauth** that contains your subscription ID, prefix to be provisioned, and expiration date matching the Validity Date on the ROA. Ensure the format is in that order. Use the variable **$byoipauthsigned** for the `--signed-message` parameter created in the certificate readiness section.

```azurecli-interactive
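  # The command body was truncated here; this reconstruction mirrors the
  # unified-model example elsewhere in this article. Substitute your own
  # range, authorization message, and signed message.
  az network custom-ip prefix create \
    --name myCustomIPPrefix \
    --resource-group myResourceGroup \
    --location westus2 \
    --cidr '1.2.3.0/24' \
    --authorization-message $byoipauth \
    --signed-message $byoipauthsigned
```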
The **CommissionedState** field should show the range as **Provisioning** initia
> The estimated time to complete the provisioning process is 30 minutes.

> [!IMPORTANT]
-> After the custom IP prefix is in a **Provisioned** state, a child public IP prefix can be created. These public IP prefixes and any public IP addresses can be attached to networking resources. For example, virtual machine network interfaces or load balancer front ends. The IPs won't be advertised and therefore won't be reachable. For more information on a migration of an active prefix, see [Manage a custom IP prefix](manage-custom-ip-address-prefix.md).
+> After the custom IP prefix is in a **Provisioned** state, a child public IP prefix can be created. These public IP prefixes and any public IP addresses can be attached to networking resources. For example, virtual machine network interfaces or load balancer front ends. The IPs won't be advertised and therefore won't be reachable. For more information on a migration of an active prefix, see [Manage a custom IP prefix](manage-public-ip-address-prefix.md).
-### Commission the custom IP address prefix
+### Commission the unified custom IP address prefix
When the custom IP prefix is in **Provisioned** state, the following command updates the prefix to begin the process of advertising the range from Azure.
az network custom-ip prefix update \
  --name myCustomIPPrefix \
  --resource-group myResourceGroup \
  --state commission
```
-As before, the operation is asynchronous. Use [az network custom-ip prefix show](/cli/azure/network/custom-ip/prefix#az-network-custom-ip-prefix-show) to retrieve the status. The **CommissionedState** field will initially show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't binary and the range will be partially advertised while still in **Commissioning**.
+As before, the operation is asynchronous. Use [az network custom-ip prefix show](/cli/azure/network/custom-ip/prefix#az-network-custom-ip-prefix-show) to retrieve the status. The **CommissionedState** field will initially show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't completed all at once. The range is partially advertised while still in the **Commissioning** status.
+
+> [!NOTE]
+> The estimated time to fully complete the commissioning process is 3-4 hours.
+
+> [!IMPORTANT]
+> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time, for example from a customer's on-premises network, could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact. Additionally, you can take advantage of the regional commissioning feature to put a custom IP prefix into a state where it's only advertised within the Azure region it's deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
+
+# [Global/Regional model](#tab/globalregional/azurecli)
+
+The following steps display the modified steps for provisioning a sample global (parent) IP range (1.2.3.0/24) and regional (child) IP ranges to the US West 2 and US East 2 regions.
+
+> [!NOTE]
+> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-custom-ip-address-prefix.md).
+
+### Create a resource group and specify the prefix and authorization messages
+
+Create a resource group in the desired location for provisioning the global range resource. Although the global range will be associated with a region, the prefix will be advertised by the Microsoft WAN to the Internet globally.
+
+```azurecli-interactive
+ az group create \
+ --name myResourceGroup \
+ --location westus2
+```
+
+### Provision a global custom IP address prefix
+
+The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. No zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones). The global custom IP prefix resource will still sit in a region in your subscription; this has no bearing on how the range will be advertised by Microsoft.
+
+```azurecli-interactive
+ byoipauth="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd"
+
+ az network custom-ip prefix create \
+ --name myCustomIPGlobalPrefix \
+ --resource-group myResourceGroup \
+ --location westus2 \
+   --cidr '1.2.3.0/24' \
+ --authorization-message $byoipauth \
+   --signed-message $byoipauthsigned \
+   --isparent
+```
+
+### Provision regional custom IP address prefixes
+
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must be smaller than the global (parent) range to be considered valid; a regional (child) IPv4 prefix can be between /22 and /26. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The "children" custom IP prefixes advertise from the region they're created in. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required (but availability zones can be utilized).
+
+```azurecli-interactive
+ az network custom-ip prefix create \
+ --name myCustomIPRegionalPrefix1 \
+ --resource-group myResourceGroup \
+ --location westus2 \
+   --cidr '1.2.3.0/25' \
+ --zone 1 2 3 \
+ --cip-prefix-parent myCustomIPGlobalPrefix
+
+ az network custom-ip prefix create \
+ --name myCustomIPRegionalPrefix2 \
+ --resource-group myResourceGroup \
+   --location eastus2 \
+   --cidr '1.2.3.128/25' \
+   --zone 3 \
+ --cip-prefix-parent myCustomIPGlobalPrefix
+```
+
+After the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they aren't yet being advertised.
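+As a sketch, assuming the `--custom-ip-prefix-name` parameter and the placeholder names above, deriving a /26 public IP prefix from the first regional range might look like the following:
+
+```azurecli-interactive
+az network public-ip prefix create \
+  --name myPublicIPPrefix \
+  --resource-group myResourceGroup \
+  --location westus2 \
+  --length 26 \
+  --custom-ip-prefix-name myCustomIPRegionalPrefix1
+```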
+
+### Commission the custom IP address prefixes
+
+When commissioning custom IP prefixes using this model, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IP prefix isn't connected to commissioning the global custom IP prefix.
++
+The safest strategy for range migrations is as follows:
+1. Provision all required regional custom IP prefixes in their respective regions. Create public IP prefixes and public IP addresses and attach to resources.
+2. Commission each regional custom IP prefix and test connectivity to the IPs within the region. Repeat for each regional custom IP prefix.
+3. Commission the global custom IP prefix, which advertises the larger range to the Internet. Complete this step only after verifying all regional custom IP prefixes (and derived prefixes/IPs) work as expected.
+
+Using the previous example ranges, the command sequence would be:
+
+```azurecli-interactive
+az network custom-ip prefix update \
+  --name myCustomIPRegionalPrefix1 \
+ --resource-group myResourceGroup \
+ --state commission
+
+az network custom-ip prefix update \
+ --name myCustomIPRegionalPrefix2 \
+ --resource-group myResourceGroup \
+ --state commission
+```
+Followed by:
+
+```azurecli-interactive
+az network custom-ip prefix update \
+ --name myCustomIPGlobalPrefix \
+ --resource-group myResourceGroup \
+ --state commission
+```
+
+> [!NOTE]
+> The estimated time to fully complete the commissioning process for a custom IP global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IP regional prefix is 30 minutes.
+
+It's possible to commission the global custom IP prefix before the regional custom IP prefixes. Doing so advertises the global range to the Internet before the regional prefixes are ready, so it's not recommended for migrations of active ranges. You can decommission a global custom IP prefix while there are still active (commissioned) regional custom IP prefixes. Also, you can decommission a regional custom IP prefix while the global prefix is still active (commissioned).
[!INCLUDE [ip-services-provisioning-note-1](../../../includes/ip-services-provisioning-note-1.md)]
-# [Azure PowerShell](#tab/azurepowershell/)
+
+# [Unified model](#tab/unified/azurepowershell)
The following steps display the procedure for provisioning a sample customer range (1.2.3.0/24) to the US West 2 region.

> [!NOTE]
-> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-custom-ip-address-prefix.md).
+> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-public-ip-address-prefix.md).
### Create a resource group and specify the prefix and authorization messages
$rg =@{
New-AzResourceGroup @rg
```
-### Provision a custom IP address prefix
+### Provision a unified custom IP address prefix
The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. For the `-AuthorizationMessage` parameter, substitute your subscription ID, prefix to be provisioned, and expiration date matching the Validity Date on the ROA. Ensure the format is in that order. Use the variable **$byoipauthsigned** for the `-SignedMessage` parameter created in the certificate readiness section.
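A sketch of that command, reconstructed from the parallel examples in this article with placeholder values:

```azurepowershell-interactive
$prefix =@{
    Name = 'myCustomIpPrefix'
    ResourceGroupName = 'myResourceGroup'
    Location = 'WestUS2'
    CIDR = '1.2.3.0/24'
    AuthorizationMessage = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd'
    SignedMessage = $byoipauthsigned
}
$myCustomIpPrefix = New-AzCustomIPPrefix @prefix -Zone 1,2,3
```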
The range is pushed to the Azure IP Deployment Pipeline. The deployment process
Get-AzCustomIpPrefix -ResourceId $myCustomIpPrefix.Id
```
-Sample output is shown below, with some fields removed for clarity:
+Here's a sample output with some fields removed for clarity:
```
Name : myCustomIpPrefix
CommissionedState : Provisioning
The **CommissionedState** field should show the range as **Provisioning** initially, followed in the future by **Provisioned**.
+> [!NOTE]
+> The estimated time to complete the provisioning process is 30 minutes.
+
+> [!IMPORTANT]
+> After the custom IP prefix is in a **Provisioned** state, a child public IP prefix can be created. These public IP prefixes and any public IP addresses can be attached to networking resources. For example, virtual machine network interfaces or load balancer front ends. The IPs won't be advertised and therefore won't be reachable. For more information on a migration of an active prefix, see [Manage a custom IP prefix](manage-custom-ip-address-prefix.md).
-### Commission the custom IP address prefix
+### Commission the unified custom IP address prefix
When the custom IP prefix is in the **Provisioned** state, the following command updates the prefix to begin the process of advertising the range from Azure.
When the custom IP prefix is in the **Provisioned** state, the following command
Update-AzCustomIpPrefix -ResourceId $myCustomIPPrefix.Id -Commission
```
-As before, the operation is asynchronous. Use [Get-AzCustomIpPrefix](/powershell/module/az.network/get-azcustomipprefix) to retrieve the status. The **CommissionedState** field will initially show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't binary and the range will be partially advertised while still in **Commissioning**.
+As before, the operation is asynchronous. Use [Get-AzCustomIpPrefix](/powershell/module/az.network/get-azcustomipprefix) to retrieve the status. The **CommissionedState** field will initially show the prefix as **Commissioning**, followed in the future by **Commissioned**. The advertisement rollout isn't completed all at once. The range is partially advertised while still in the **Commissioning** status.
+
+> [!NOTE]
+> The estimated time to fully complete the commissioning process is 3-4 hours.
+
+> [!IMPORTANT]
+> As the custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time, for example from a customer's on-premises network, could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact. Additionally, you can take advantage of the regional commissioning feature to put a custom IP prefix into a state where it's only advertised within the Azure region it's deployed in. For more information, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
+
+# [Global/Regional model](#tab/globalregional/azurepowershell)
+
+The following steps display the modified steps for provisioning a sample global (parent) IP range (1.2.3.0/24) and regional (child) IP ranges to the US West 2 and US East 2 regions.
+
+> [!NOTE]
+> Clean up or delete steps aren't shown on this page given the nature of the resource. For information on removing a provisioned custom IP prefix, see [Manage custom IP prefix](manage-custom-ip-address-prefix.md).
+
+### Create a resource group and specify the prefix and authorization messages
+
+Create a resource group in the desired location for provisioning the global range resource. Although the global range is associated with a region, the prefix is advertised by the Microsoft WAN to the Internet globally.
+
+```azurepowershell-interactive
+$rg =@{
+ Name = 'myResourceGroup'
+    Location = 'WestUS2'
+}
+New-AzResourceGroup @rg
+```
+
+### Provision a global custom IP address prefix
+
+The following command creates a custom IP prefix in the specified region and resource group. Specify the exact prefix in CIDR notation as a string to ensure there's no syntax error. No zonal properties are provided because the global range isn't associated with any particular region (and therefore no regional availability zones). The global custom IP prefix resource will still sit in a region in your subscription; this has no bearing on how the range is advertised by Microsoft.
+
+```azurepowershell-interactive
+$prefix =@{
+ Name = 'myCustomGlobalPrefix'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'WestUS2'
+ CIDR = '1.2.3.0/24'
+ AuthorizationMessage = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx|1.2.3.0/24|yyyymmdd'
+ SignedMessage = $byoipauthsigned
+}
+$myCustomIPGlobalPrefix = New-AzCustomIPPrefix @prefix -IsParent
+```
+### Provision regional custom IP address prefixes
+
+After the global custom IP prefix is in a **Provisioned** state, regional custom IP prefixes can be created. These ranges must be smaller than the global (parent) range to be considered valid; a regional (child) IPv4 prefix can be between /22 and /26. The ranges can be created in any region (it doesn't need to be the same as the global custom IP prefix), keeping in mind any geolocation restrictions associated with the original global range. The *child* custom IP prefixes advertise from the region where they're created. Because the validation is only done for global custom IP prefix provision, no Authorization or Signed message is required (but availability zones can be utilized).
+
+```azurepowershell-interactive
+$prefix =@{
+ Name = 'myCustomIPRegionalPrefix1'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'WestUS2'
+ CIDR = '1.2.3.0/25'
+}
+$myCustomIPRegionalPrefix = New-AzCustomIPPrefix @prefix -CustomIpPrefixParent $myCustomIPGlobalPrefix -Zone 1,2,3
+
+$prefix2 =@{
+ Name = 'myCustomIPRegionalPrefix2'
+ ResourceGroupName = 'myResourceGroup'
+ Location = 'EastUS2'
+ CIDR = '1.2.3.128/25'
+}
+$myCustomIPRegionalPrefix2 = New-AzCustomIPPrefix @prefix2 -CustomIpPrefixParent $myCustomIPGlobalPrefix -Zone 3
+```
+
+After the regional custom IP prefix is in a **Provisioned** state, public IP prefixes can be derived from the regional custom IP prefix. These public IP prefixes and any public IP addresses derived from them can be attached to networking resources, though they aren't yet being advertised.
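+As a sketch, assuming the `-CustomIpPrefix` parameter of `New-AzPublicIpPrefix` and the placeholder names above, deriving a /26 public IP prefix from the first regional range might look like the following:
+
+```azurepowershell-interactive
+$publicPrefix =@{
+    Name = 'myPublicIPPrefix'
+    ResourceGroupName = 'myResourceGroup'
+    Location = 'WestUS2'
+    PrefixLength = 26
+}
+$myPublicIPPrefix = New-AzPublicIpPrefix @publicPrefix -CustomIpPrefix $myCustomIPRegionalPrefix
+```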
+
+### Commission the custom IP address prefixes
+
+When commissioning custom IP prefixes using this model, the global and regional prefixes are treated separately. In other words, commissioning a regional custom IP prefix isn't connected to commissioning the global custom IP prefix.
++
+The safest strategy for range migrations is as follows:
+1. Provision all required regional custom IP prefixes in their respective regions. Create public IP prefixes and public IP addresses and attach to resources.
+2. Commission each regional custom IP prefix and test connectivity to the IPs within the region. Repeat for each regional custom IP prefix.
+3. Commission the global custom IP prefix, which advertises the larger range to the Internet. Complete this step only after verifying all regional custom IP prefixes (and derived prefixes/IPs) work as expected.
+
+With the previous example ranges, the command sequence would be:
+
+```azurepowershell-interactive
+Update-AzCustomIpPrefix -ResourceId $myCustomIPRegionalPrefix.Id -Commission
+Update-AzCustomIpPrefix -ResourceId $myCustomIPRegionalPrefix2.Id -Commission
+```
+Followed by:
+
+```azurepowershell-interactive
+Update-AzCustomIpPrefix -ResourceId $myCustomIPGlobalPrefix.Id -Commission
+```
+> [!NOTE]
+> The estimated time to fully complete the commissioning process for a custom IP global prefix is 3-4 hours. The estimated time to fully complete the commissioning process for a custom IP regional prefix is 30 minutes.
+
+It's possible to commission the global custom IP prefix before the regional custom IP prefixes. Since this process advertises the global range to the Internet before the regional prefixes are ready, it's not recommended for migrations of active ranges. You can decommission a global custom IP prefix while there are still active (commissioned) regional custom IP prefixes. Also, you can decommission a regional custom IP prefix while the global prefix is still active (commissioned).
+
+> [!IMPORTANT]
+> As the global custom IP prefix transitions to a **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network under Autonomous System Number (ASN) 8075. Advertising this same range to the Internet from a location other than Microsoft at the same time, for example from a customer's on-premises network, could potentially create BGP routing instability or traffic loss. Plan any migration of an active range during a maintenance period to avoid impact.
[!INCLUDE [ip-services-provisioning-note-1](../../../includes/ip-services-provisioning-note-1.md)]
As before, the operation is asynchronous. Use [Get-AzCustomIpPrefix](/powershell/module/az.network/get-azcustomipprefix) to retrieve the status of the operation.
- To learn about scenarios and benefits of using a custom IP prefix, see [Custom IP address prefix (BYOIP)](custom-ip-address-prefix.md). -- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
+- For more information on managing a custom IP prefix, see [Manage a custom IP address prefix (BYOIP)](manage-custom-ip-address-prefix.md).
virtual-network Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/custom-ip-address-prefix.md
Previously updated : 08/24/2023 Last updated : 08/05/2024 # Custom IP address prefix (BYOIP)
When ready, you can issue the command to have your range advertised from Azure a
## Limitations
-* A custom IPv4 prefix must be associated with a single Azure region.
+* By default, you can bring a maximum of five custom IP prefixes per region to Azure. This limit can be increased upon request.
-* You can bring a maximum of five prefixes per region to Azure.
-
-* A custom IPv4 Prefix must be between /21 and /24; a global (parent) custom IPv6 prefix must be /48.
+* By default:
+ - A unified custom IPv4 prefix must be between /21 and /24.
+ - A global (parent) custom IPv4 prefix must be between /21 and /24; a regional (child) custom IPv4 prefix must be between /22 and /26 and at least one level smaller than its parent range.
+ - A global (parent) custom IPv6 prefix must be /48; a regional (child) custom IPv6 prefix must be /64.
* Custom IP prefixes don't currently support derivation of IPs with Internet Routing Preference or that use Global Tier (for cross-region load-balancing).
When ready, you can issue the command to have your range advertised from Azure a
* The advertisement of IPs from a custom IP prefix over an Azure ExpressRoute Microsoft peering isn't currently supported.
-* Custom IP prefixes don't support Reverse DNS lookup using Azure-owned zones; customers must onboard their own Reverse Zones to Azure DNS
+* Custom IP prefixes don't support Reverse DNS lookup using Azure-owned zones; customers must onboard their own Reverse Zones to Azure DNS.
* Once provisioned, custom IP prefix ranges can't be moved to another subscription. Custom IP address prefix ranges can't be moved between resource groups within a single subscription. It's possible to derive a public IP prefix from a custom IP prefix in another subscription with the proper permissions as described [here](manage-custom-ip-address-prefix.md#permissions). * IPs brought to Azure may have a delay of up to a week before they can be used for Windows Server Activation. > [!IMPORTANT]
-> There are several differences between how custom IPv4 and IPv6 prefixes are onboarded and utilized. For more information, see [Differences between using BYOIPv4 and BYOIPv6](create-custom-ip-address-prefix-ipv6-portal.md#differences-between-using-byoipv4-and-byoipv6).
+> There are several differences between how custom IPv4 and IPv6 prefixes are onboarded and utilized. For more information, see [Differences between using BYOIPv4 and BYOIPv6](create-custom-ip-address-prefix-ipv6-powershell.md#differences-between-using-byoipv4-and-byoipv6).
## Pricing
virtual-network Manage Custom Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/manage-custom-ip-address-prefix.md
Title: Manage a custom IP address prefix description: Learn about custom IP address prefixes and how to manage and delete them.-+ Previously updated : 08/24/2023 Last updated : 08/05/2024
For information on provisioning an IP address, see [Create a custom IP address p
## Create a public IP prefix from a custom IP prefix
-When a custom IP prefix is in **Provisioned**, **Commissioning**, or **Commissioned** state, a linked public IP prefix can be created. Either as a subset of the custom IP prefix range or the entire range.
+When a unified (or regional) model custom IP prefix is in **Provisioned**, **Commissioning**, or **Commissioned** state, a linked public IP prefix can be created, either from a subset of the custom IP prefix range or from the entire range.
Use the following CLI and PowerShell commands to create public IP prefixes with the `--custom-ip-prefix-name` (CLI) and `-CustomIpPrefix` (PowerShell) parameters that point to an existing custom IP prefix. |Tool|Command| |||
-|CLI|[az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-create)|
+|CLI|[az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-create)|
|PowerShell|[New-AzPublicIpPrefix](/powershell/module/az.network/new-azpublicipprefix)| > [!NOTE]
If another network advertises the provisioned range to the Internet, you should
* Alternatively, the ranges can be commissioned first and then changed. This process doesn't work for all resource types with public IPs. In those cases, a new resource with the provisioned public IP must be created.
-### Use the regional commissioning feature
+### Use the regional commissioning feature for unified model custom IP prefixes
-When a custom IP prefix transitions to a fully **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network. If the range is currently being advertised to the Internet from a location other than Microsoft at the same time, there's the potential for BGP routing instability or traffic loss. In order to ease the transition for a range that is currently "live" outside of Azure, you can utilize a *regional commissioning* feature, which places an onboarded range into a **CommissionedNoInternetAdvertise** state where it's only advertised from within a single Azure region. This state allows for testing of all the attached infrastructure from within this region before advertising this range to the Internet, and fits well with Method 1 in the previous section.
+When a unified model custom IP prefix transitions to a fully **Commissioned** state, the range is being advertised with Microsoft from the local Azure region and globally to the Internet by Microsoft's wide area network. If the range is currently being advertised to the Internet from a location other than Microsoft at the same time, there's the potential for BGP routing instability or traffic loss. In order to ease the transition for a range that is currently "live" outside of Azure, you can utilize a *regional commissioning* feature, which places an onboarded range into a **CommissionedNoInternetAdvertise** state where it's only advertised from within a single Azure region. This state allows for testing of all the attached infrastructure from within this region before advertising this range to the Internet, and fits well with Method 1 in the previous section.
Use the following steps in the Azure portal to put a custom IP prefix into this state:
Alternatively, a custom IP prefix can be decommissioned via the Azure portal usi
### Use the regional commissioning feature to assist decommission
-A custom IP prefix must be clear of public IP prefixes before it can be put into **Decommissioning** state. To ease a migration, you can reverse the regional commissioning feature. You can change a globally commissioned range back to a regionally commissioned status. This change allows you to ensure the range is no longer advertised beyond the scope of a single region before removing any public IP addresses from their respective resources.
+A unified (or regional) model custom IP prefix must be clear of public IP prefixes before it can be put into **Decommissioning** state. To ease a migration, you can reverse the regional commissioning feature. You can change a globally commissioned range back to a regionally commissioned status. This change allows you to ensure the range is no longer advertised beyond the scope of a single region before removing any public IP addresses from their respective resources.
The command is similar to the one from earlier on this page:
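+A sketch of that command, assuming the `-NoInternetAdvertise` switch of `Update-AzCustomIpPrefix` (which requests the **CommissionedNoInternetAdvertise** state described earlier) and an illustrative `$myCustomIpPrefix` variable:
+
+```azurepowershell-interactive
+# Re-commission the range with region-only advertisement so it's no longer
+# advertised to the Internet before public IPs are detached from resources.
+Update-AzCustomIpPrefix -ResourceId $myCustomIpPrefix.Id -Commission -NoInternetAdvertise
+```
+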
The operation is asynchronous. You can check the status by reviewing the **CommissionedState** field of the custom IP prefix.
To fully remove a custom IP prefix, it must be deprovisioned and then deleted.
+> [!IMPORTANT]
+> It is strongly recommended to decommission the range **prior** to modifying or deleting the Route Origin Authorization (ROA) you created with your Routing Internet Registry. Otherwise, Microsoft continues to advertise your range without authorization to do so. See the [creation documentation](create-custom-ip-address-prefix-powershell.md) for more information about ROAs.
+ > [!NOTE] > If there is a requirement to migrate a provisioned range from one region to another, the original custom IP prefix must be fully removed from the first region before a new custom IP prefix with the same address range can be created in another region. >
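+As a minimal sketch of the removal step, assuming the range is already decommissioned, all derived public IP prefixes are deleted, and that `Remove-AzCustomIpPrefix` performs the deprovision and delete (the name is illustrative):
+
+```azurepowershell-interactive
+# Delete the custom IP prefix resource. The call fails if the range is still
+# commissioned or if any derived public IP prefixes remain.
+Remove-AzCustomIpPrefix -Name 'myCustomIPPrefix' -ResourceGroupName 'myResourceGroup'
+```
+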
To migrate a custom IP prefix, it must first be deprovisioned from one region. A
### Are there any special considerations when using IPv6?
-Yes - there are multiple differences for provisioning and commissioning when using BYOIPv6. For more information, see [Create a custom IPv6 address prefix - PowerShell](create-custom-ip-address-prefix-ipv6-portal.md).
+Yes - there are multiple differences for provisioning and commissioning when using BYOIPv6. For more information, see [Create a custom IPv6 address prefix - PowerShell](create-custom-ip-address-prefix-ipv6-powershell.md).
### Status messages
When you onboard or remove a custom IP prefix from Azure, the system updates the
- To create a custom IP address prefix using the Azure portal, see [Create custom IP address prefix using the Azure portal](create-custom-ip-address-prefix-portal.md). -- To create a custom IP address prefix using PowerShell, see [Create a custom IP address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md).
+- To create a custom IP address prefix using PowerShell, see [Create a custom IP address prefix using Azure PowerShell](create-custom-ip-address-prefix-powershell.md).
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **MicrosoftAzureFluidRelay** | This tag represents the IP addresses used for Azure Microsoft Fluid Relay Server. </br> **Note**: This tag has a dependency on the **AzureFrontDoor.Frontend** tag. | Outbound | No | Yes | | **MicrosoftCloudAppSecurity** | Microsoft Defender for Cloud Apps. | Outbound | No | Yes | | **[MicrosoftDefenderForEndpoint](/defender-endpoint/configure-device-connectivity)** | Microsoft Defender for Endpoint core services.<br/><br/>**Note**: Devices need to be onboarded with streamlined connectivity and meet requirements in order to use this service tag. Defender for Endpoint/Server require additional service tags, like OneDSCollector, to support all functionality.<br/></br> For more information, see [Onboarding devices using streamlined connectivity for Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/configure-device-connectivity) | Both | No | Yes |
-| **[PowerBI](/power-bi/enterprise/service-premium-service-tags)** | Power BI platform backend services and API endpoints.<br/><br/>**Note:** does not include frontend endpoints at the moment (e.g., app.powerbi.com).<br/><br/>Access to frontend endpoints should be provided through AzureCloud tag (Outbound, HTTPS, can be regional). | Both | No | Yes |
+| **[PowerBI](/power-bi/enterprise/service-premium-service-tags)** | Power BI platform backend services and API endpoints. | Both | No | Yes |
| **[PowerPlatformInfra](/power-platform/admin/online-requirements)** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Both | Yes | Yes | | **[PowerPlatformPlex](/power-platform/admin/online-requirements)** | This tag represents the IP addresses used by the infrastructure to host Power Platform extension execution on behalf of the customer. | Both | Yes | Yes | | **[PowerQueryOnline](/data-integration/gateway/service-gateway-communication)** | Power Query Online. | Both | No | Yes |