Updates from: 02/10/2024 02:15:23
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Partner Strata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-strata.md
- Title: Tutorial to configure Azure Active Directory B2C with Strata-
-description: Learn how to integrate Azure AD B2C authentication with Strata Maverics Identity Orchestrator to protect on-premises applications
----- Previously updated : 01/26/2024---
-# Customer intent: As an IT admin, I want to integrate Azure Active Directory B2C with Strata Maverics Identity Orchestrator. I need to protect on-premises applications and enable customer single sign-on (SSO) to hybrid apps.
---
-# Tutorial to configure Azure Active Directory B2C with Strata
-
-In this tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) with Strata [Maverics Identity Orchestrator](https://www.strata.io/), which helps protect on-premises applications. It connects to identity systems, migrates users and credentials, synchronizes policies and configurations, and abstracts authentication and session management. Use Strata to transition from legacy, to Azure AD B2C, without rewriting applications.
-
-The solution has the following benefits:
--- **Customer single sign-on (SSO) to on-premises hybrid apps** - Azure AD B2C supports customer SSO with Maverics Identity Orchestrator
- - Users sign in with accounts hosted in Azure AD B2C or identity provider (IdP)
- - Maverics provides SSO to apps historically secured by legacy identity systems like Symantec SiteMinder
-- **Extend standards SSO to apps** - Use Azure AD B2C to manage user access and enable SSO with Maverics Identity Orchestrator Security Assertion Markup Language (SAML) or OpenID Connect (OIDC) connectors
-- **Easy configuration** - Connect Maverics Identity Orchestrator SAML or OIDC connectors to Azure AD B2C
-## Prerequisites
-
-To get started, you'll need:
-
-* An Azure subscription
-
- - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
-- An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
-- An instance of [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) to store secrets used by Maverics Identity Orchestrator. Connect to Azure AD B2C or other attribute providers such as a Lightweight Directory Access Protocol (LDAP) directory or database.
-- An instance of [Maverics Identity Orchestrator](https://www.strata.io/) running in an Azure virtual machine (VM), or an on-premises server. To get software and documentation, go to strata.io [Contact Strata Identity](https://www.strata.io/company/contact/).
-- An on-premises application to transition to Azure AD B2C
-## Scenario description
-
-Maverics Identity Orchestrator integration includes the following components:
--- **Azure AD B2C** - The authorization server that verifies user credentials
- - Authenticated users access on-premises apps using a local account in the Azure AD B2C directory
-- **External social or enterprise identity provider (IdP)**: An OIDC provider, Facebook, Google, or GitHub
- - See, [Add an identity provider to your Azure Active Directory B2C tenant](./add-identity-provider.md)
-- **Strata Maverics Identity Orchestrator**: The user sign-on service that passes identity to apps through HTTP headers-
-The following architecture diagram shows the implementation.
-
- ![Diagram of the Azure AD B2C integration architecture, with Maverics Identity Orchestrator, for access to hybrid apps.](./media/partner-strata/strata-architecture-diagram.png)
-
-1. The user requests access to the on-premises-hosted application. Maverics Identity Orchestrator proxies the request to the application.
-2. Orchestrator checks the user authentication state. If there's no session token, or the token is invalid, the user goes to Azure AD B2C for authentication
-3. Azure AD B2C sends the authentication request to the configured social IdP.
-4. The IdP challenges the user for credentials. Multifactor authentication (MFA) might be required.
-5. The IdP sends the authentication response to Azure AD B2C. The user can create a local account in the Azure AD B2C directory.
-6. Azure AD B2C sends the user request to the endpoint specified during the Orchestrator app registration in the Azure AD B2C tenant.
-7. The Orchestrator evaluates access policies and attribute values for HTTP headers forwarded to the app. Orchestrator might call to other attribute providers to retrieve information to set the header values. The Orchestrator sends the request to the app.
-8. The user is authenticated and has access to the app.
-
-## Maverics Identity Orchestrator
-
-To get software and documentation, go to strata.io [Contact Strata Identity](https://www.strata.io/company/contact/). Determine Orchestrator prerequisites. Install and configure.
-
-## Configure your Azure AD B2C tenant
-
-During the following instructions, document:
-
-* Tenant name and identifier
-* Client ID
-* Client secret
-* Configured claims
-* Redirect URI
-
-1. [Register a web application in Azure Active Directory B2C](./tutorial-register-applications.md?tabs=app-reg-ga) in Azure AD B2C tenant.
-2. Grant Microsoft Graph API permissions to your applications. Use the permissions `offline_access` and `openid`.
-3. Add a redirect URI that matches the `oauthRedirectURL` parameter of the Orchestrator Azure AD B2C connector configuration, for example, `https://example.com/oidc-endpoint`.
-4. [Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md).
-5. [Add an identity provider to your Azure Active Directory B2C tenant](./add-identity-provider.md). Sign in your user with a local account, a social, or enterprise.
-6. Define the attributes to be collected during sign-up.
-7. Specify attributes to be returned to the application with your Orchestrator instance.
-
-> [!NOTE]
-> The Orchestrator consumes attributes from claims returned by Azure AD B2C and can retrieve attributes from connected identity systems such as LDAP directories and databases. Those attributes are in HTTP headers and sent to the upstream on-premises application.
-
-## Configure Maverics Identity Orchestrator
-
-Use the instructions in the following sections to configure an Orchestrator instance.
-
-### Maverics Identity Orchestrator server requirements
-
-You can run your Orchestrator instance on any server, whether on-premises or in a public cloud infrastructure by provider such as Azure, AWS, or GCP.
-- **Operating System**: RHEL 7.7 or higher, CentOS 7+
-- **Disk**: 10 GB (small)
-- **Memory**: 16 GB
-- **Ports**: 22 (SSH/SCP), 443, 80
-- **Root access**: For install/administrative tasks
-- **Maverics Identity Orchestrator**: Runs as user `maverics` under `systemd`
-- **Network egress**: The server hosting Maverics Identity Orchestrator must be able to reach your Microsoft Entra tenant
-### Install Maverics Identity Orchestrator
-
-1. Obtain the latest Maverics RPM package.
-2. Place the package on the system you'd like to install Maverics. If you're copying to a remote host, use SSH [scp](https://www.ssh.com/academy/ssh/scp).
-3. Run the following command. Use your filename to replace `maverics.rpm`.
-
- `sudo rpm -Uvf maverics.rpm`
-
- By default, Maverics is in the `/usr/local/bin` directory.
-
-4. Maverics runs as a service under `systemd`.
-5. To verify Maverics service is running, run the following command:
-
- `sudo service maverics status`
-
-6. The following message (or similar) appears.
-
-```
-Redirecting to /bin/systemctl status maverics.service
- maverics.service - Maverics
- Loaded: loaded (/etc/systemd/system/maverics.service; enabled; vendor preset: disabled)
- Active: active (running) since Thu 2020-08-13 16:48:01 UTC; 24h ago
- Main PID: 330772 (maverics)
- Tasks: 5 (limit: 11389)
- Memory: 14.0M
- CGroup: /system.slice/maverics.service
- └─330772 /usr/local/bin/maverics --config /etc/maverics/maverics.yaml
- ```
-
-> [!NOTE]
-> If Maverics fails to start, execute the following command:
-
- `journalctl --unit=maverics.service --reverse`
-
- The most recent log entry appears in the output.
-
-7. The default `maverics.yaml` file is created in the `/etc/maverics` directory.
-8. Configure your Orchestrator to protect the application.
-9. Integrate with Azure AD B2C, and store.
-10. Retrieve secrets from [Azure Key Vault](https://azure.microsoft.com/services/key-vault/?OCID=AID2100131_SEM_bf7bdd52c7b91367064882c1ce4d83a9:G:s&ef_id=bf7bdd52c7b91367064882c1ce4d83a9:G:s&msclkid=bf7bdd52c7b91367064882c1ce4d83a9).
-11. Define the location from where the Orchestrator reads its configuration.
-
-### Supply configuration using environment variables
-
-Configure your Orchestrator instances with environment variables.
-
-`MAVERICS_CONFIG`
-
-This environment variable informs the Orchestrator instance what YAML configuration files to use, and where to find them during startup or restart. Set the environment variable in `/etc/maverics/maverics.env`.
-
-### Create the Orchestrator TLS configuration
-
-The `tls` field in `maverics.yaml` declares the transport layer security configurations your Orchestrator instance uses. Connectors use TLS objects and the Orchestrator server.
-
-The `maverics` key is reserved for the Orchestrator server. Use other keys to inject a TLS object into a connector.
-
-```yaml
-tls:
- maverics:
- certFile: /etc/maverics/maverics.cert
- keyFile: /etc/maverics/maverics.key
-```
-
-### Configure the Azure AD B2C Connector
-
-Orchestrators use Connectors to integrate with authentication and attribute providers. The Orchestrators App Gateway uses the Azure AD B2C connector as an authentication and attribute provider. Azure AD B2C uses the social IdP for authentication and then provides attributes to the Orchestrator, passing them in claims set in HTTP headers.
-
-The Connector configuration corresponds to the app registered in the Azure AD B2C tenant.
-
-1. From your app config, copy the Client ID, Client secret, and redirect URI into your tenant.
-2. Enter a Connector name (example is `azureADB2C`).
-3. Set the connector `type` to be `azure`.
-4. Make a note of the Connector name. You'll use this value in other configuration parameters.
-5. Set the `authType` to `oidc`.
-6. For the `oauthClientID` parameter, set the Client ID you copied.
-7. For the `oauthClientSecret` parameter, set the Client secret you copied.
-8. For the `oauthRedirectURL` parameter, set the redirect URI you copied.
-9. The Azure AD B2C OIDC Connector uses the OIDC endpoint to discover metadata, including URLs and signing keys. For the tenant endpoint, use `oidcWellKnownURL`.
-
-```yaml
-connectors:
- name: azureADB2C
- type: azure
- oidcWellKnownURL: https://<tenant name>.b2clogin.com/<tenant name>.onmicrosoft.com/B2C_1_login/v2.0/.well-known/openid-configuration
- oauthRedirectURL: https://example.com/oidc-endpoint
- oauthClientID: <azureADB2CClientID>
- oauthClientSecret: <azureADB2CClientSecret>
- authType: oidc
-```
-
-### Define Azure AD B2C as your authentication provider
-
-An authentication provider determines authentication for users who don't present a valid session during an app resource request. Azure AD B2C tenant configuration determines how users are challenged for credentials, while it applies other authentication policies. An example is to require a second factor to complete authentication and decide what is returned to the Orchestrator App Gateway, after authentication.
-
-The value for the `authProvider` must match your Connector `name` value.
-
-```yaml
-authProvider: azureADB2C
-```
-
-### Protect on-premises apps with an Orchestrator App Gateway
-
-The Orchestrator App Gateway configuration declares how Azure AD B2C protects your application and how users access the app.
-
-1. Enter an App gateway name.
-2. Set the `location`. The example uses the app root `/`.
-3. Define the protected application in `upstream`. Use the host:port convention: `https://example.com:8080`.
-4. Set the values for error and unauthorized pages.
-5. Define the HTTP header names and attribute values for the application to establish authentication and control. Header names typically correspond to app configuration. Attribute values are namespaced by the Connector. In the example, values returned from Azure AD B2C are prefixed with the Connector name `azureADB2C`. The suffix is the attribute name with the required value, for example `given_name`.
-6. Set the policies. Three actions are defined: `allowUnauthenticated`, `allowAnyAuthenticated`, and `allowIfAny`. Each action is associated with a `resource`. Policy is evaluated for that `resource`.
-
->[!NOTE]
->`headers` and `policies` use JavaScript or GoLang service extensions to implement arbitrary logic.
-
-```yaml
-appgateways:
- - name: Sonar
- location: /
- upstream: https://example.com:8080
- errorPage: https://example.com:8080/sonar/error
- unauthorizedPage: https://example.com:8080/sonar/accessdenied
-
- headers:
- SM_USER: azureADB2C.sub
- firstname: azureADB2C.given_name
- lastname: azureADB2C.family_name
-
- policies:
- - resource: ~ \.(jpg|png|ico|svg)
- allowUnauthenticated: true
- - resource: /
- allowAnyAuthenticated: true
- - resource: /sonar/daily_deals
- allowIfAny:
- azureADB2C.customAttribute: Rewards Member
-```
-
-### Azure Key Vault as secrets provider
-
-Secure the secrets your Orchestrator uses to connect to Azure AD B2C, and other identity systems. Maverics load secrets in plain text out of `maverics.yaml`, however, in this tutorial, use Azure Key Vault as the secrets provider.
-
-Follow the instructions in, [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](../key-vault/secrets/quick-create-portal.md). Add your secrets to the vault and make a note of the `SECRET NAME` for each secret. For example, `AzureADB2CClientSecret`.
-
-To declare a value as a secret in a `maverics.yaml` config file, wrap the secret with angle brackets:
-
-```yaml
-connectors:
- - name: AzureADB2C
- type: azure
- oauthClientID: <AzureADB2CClientID>
- oauthClientSecret: <AzureADB2CClientSecret>
-```
-
-The value in the angle brackets must correspond to the `SECRET NAME` given to a secret in your Azure Key Vault.
-
-To load secrets from Azure Key Vault, set the environment variable `MAVERICS_SECRET_PROVIDER` in the file `/etc/maverics/maverics.env`, with the credentials found in the azure-credentials.json file. Use the following pattern:
-
-`MAVERICS_SECRET_PROVIDER='azurekeyvault://<KEYVAULT NAME>.vault.azure.net?clientID=<APPID>&clientSecret=<PASSWORD>&tenantID=<TENANT>'`
-
-### Complete the configuration
-
-The following information illustrates how Orchestrator configuration appears.
-
-```yaml
-version: 0.4.2
-listenAddress: ":443"
-tls:
- maverics:
- certFile: certs/maverics.crt
- keyFile: certs/maverics.key
-
-authProvider: azureADB2C
-
-connectors:
- - name: azureADB2C
- type: azure
- oidcWellKnownURL: https://<tenant name>.b2clogin.com/<tenant name>.onmicrosoft.com/B2C_1_login/v2.0/.well-known/openid-configuration
- oauthRedirectURL: https://example.com/oidc-endpoint
- oauthClientID: <azureADB2CClientID>
- oauthClientSecret: <azureADB2CClientSecret>
- authType: oidc
-
-appgateways:
- - name: Sonar
- location: /
- upstream: http://example.com:8080
- errorPage: http://example.com:8080/sonar/accessdenied
- unauthorizedPage: http://example.com:8080/sonar/accessdenied
-
- headers:
- SM_USER: azureADB2C.sub
- firstname: azureADB2C.given_name
- lastname: azureADB2C.family_name
-
- policies:
- - resource: ~ \.(jpg|png|ico|svg)
- allowUnauthenticated: true
- - resource: /
- allowAnyAuthenticated: true
- - resource: /sonar/daily_deals
- allowIfAny:
- azureADB2C.customAttribute: Rewards Member
-```
-
-## Test the flow
-
-1. Navigate to the on-premises application URL, `https://example.com/sonar/dashboard`.
-2. The Orchestrator redirects to the user flow page.
-3. From the list, select the IdP.
-4. Enter credentials, including an MFA token, if required by the IdP.
-5. You're redirected to Azure AD B2C, which forwards the app request to the Orchestrator redirect URI.
-6. The Orchestrator evaluates policies, and calculates headers.
-7. The requested application appears.
-
-## Next steps
-- [Azure AD B2C custom policy overview](./custom-policy-overview.md)
-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+ Title: Tutorial to configure Azure Active Directory B2C with Strata
+
+description: Learn how to integrate Azure AD B2C authentication with Strata Maverics Identity Orchestrator to protect on-premises applications
+++++ Last updated : 01/26/2024+++
+# Customer intent: As an IT admin, I want to integrate Azure Active Directory B2C with Strata Maverics Identity Orchestrator. I need to protect on-premises applications and enable customer single sign-on (SSO) to hybrid apps.
+++
+# Tutorial to configure Azure Active Directory B2C with Strata
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+In this tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) with Strata [Maverics Identity Orchestrator](https://www.strata.io/), which helps protect on-premises applications. It connects to identity systems, migrates users and credentials, synchronizes policies and configurations, and abstracts authentication and session management. Use Strata to transition from legacy identity systems to Azure AD B2C without rewriting applications.
+
+The solution has the following benefits:
+
+- **Customer single sign-on (SSO) to on-premises hybrid apps** - Azure AD B2C supports customer SSO with Maverics Identity Orchestrator
+ - Users sign in with accounts hosted in Azure AD B2C or identity provider (IdP)
+ - Maverics provides SSO to apps historically secured by legacy identity systems like Symantec SiteMinder
+- **Extend standards SSO to apps** - Use Azure AD B2C to manage user access and enable SSO with Maverics Identity Orchestrator Security Assertion Markup Language (SAML) or OpenID Connect (OIDC) connectors
+- **Easy configuration** - Connect Maverics Identity Orchestrator SAML or OIDC connectors to Azure AD B2C
+
+## Prerequisites
+
+To get started, you'll need:
+
+* An Azure subscription
+
+ - If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free/)
+- An [Azure AD B2C tenant](./tutorial-create-tenant.md) linked to your Azure subscription
+- An instance of [Azure Key Vault](https://azure.microsoft.com/services/key-vault/) to store secrets used by Maverics Identity Orchestrator. Connect to Azure AD B2C or other attribute providers such as a Lightweight Directory Access Protocol (LDAP) directory or database.
+- An instance of [Maverics Identity Orchestrator](https://www.strata.io/) running in an Azure virtual machine (VM), or an on-premises server. To get software and documentation, go to strata.io [Contact Strata Identity](https://www.strata.io/company/contact/).
+- An on-premises application to transition to Azure AD B2C
+
+## Scenario description
+
+Maverics Identity Orchestrator integration includes the following components:
+
+- **Azure AD B2C** - The authorization server that verifies user credentials
+ - Authenticated users access on-premises apps using a local account in the Azure AD B2C directory
+- **External social or enterprise identity provider (IdP)**: An OIDC provider, Facebook, Google, or GitHub
+ - See, [Add an identity provider to your Azure Active Directory B2C tenant](./add-identity-provider.md)
+- **Strata Maverics Identity Orchestrator**: The user sign-on service that passes identity to apps through HTTP headers
+
+The following architecture diagram shows the implementation.
+
+ ![Diagram of the Azure AD B2C integration architecture, with Maverics Identity Orchestrator, for access to hybrid apps.](./media/partner-strata/strata-architecture-diagram.png)
+
+1. The user requests access to the on-premises-hosted application. Maverics Identity Orchestrator proxies the request to the application.
+2. The Orchestrator checks the user authentication state. If there's no session token, or the token is invalid, the user goes to Azure AD B2C for authentication.
+3. Azure AD B2C sends the authentication request to the configured social IdP.
+4. The IdP challenges the user for credentials. Multifactor authentication (MFA) might be required.
+5. The IdP sends the authentication response to Azure AD B2C. The user can create a local account in the Azure AD B2C directory.
+6. Azure AD B2C sends the user request to the endpoint specified during the Orchestrator app registration in the Azure AD B2C tenant.
+7. The Orchestrator evaluates access policies and attribute values for HTTP headers forwarded to the app. The Orchestrator might call other attribute providers to retrieve information to set the header values. The Orchestrator then sends the request to the app.
+8. The user is authenticated and has access to the app.
+
+## Maverics Identity Orchestrator
+
+To get software and documentation, go to [Contact Strata Identity](https://www.strata.io/company/contact/) on strata.io. Determine the Orchestrator prerequisites, then install and configure the software.
+
+## Configure your Azure AD B2C tenant
+
+During the following instructions, document:
+
+* Tenant name and identifier
+* Client ID
+* Client secret
+* Configured claims
+* Redirect URI
+
+1. [Register a web application in Azure Active Directory B2C](./tutorial-register-applications.md?tabs=app-reg-ga) in your Azure AD B2C tenant.
+2. Grant Microsoft Graph API permissions to your applications. Use the permissions `offline_access` and `openid`.
+3. Add a redirect URI that matches the `oauthRedirectURL` parameter of the Orchestrator Azure AD B2C connector configuration, for example, `https://example.com/oidc-endpoint`.
+4. [Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md).
+5. [Add an identity provider to your Azure Active Directory B2C tenant](./add-identity-provider.md). Users can sign in with a local account, or with a social or enterprise identity provider.
+6. Define the attributes to be collected during sign-up.
+7. Specify attributes to be returned to the application with your Orchestrator instance.
+
+> [!NOTE]
+> The Orchestrator consumes attributes from claims returned by Azure AD B2C and can retrieve attributes from connected identity systems such as LDAP directories and databases. Those attributes are set in HTTP headers and sent to the upstream on-premises application.
+
+## Configure Maverics Identity Orchestrator
+
+Use the instructions in the following sections to configure an Orchestrator instance.
+
+### Maverics Identity Orchestrator server requirements
+
+You can run your Orchestrator instance on any server, whether on-premises or in public cloud infrastructure from a provider such as Azure, AWS, or GCP.
+
+- **Operating System**: RHEL 7.7 or higher, CentOS 7+
+- **Disk**: 10 GB (small)
+- **Memory**: 16 GB
+- **Ports**: 22 (SSH/SCP), 443, 80
+- **Root access**: For install/administrative tasks
+- **Maverics Identity Orchestrator**: Runs as user `maverics` under `systemd`
+- **Network egress**: The server hosting Maverics Identity Orchestrator must be able to reach your Microsoft Entra tenant
+
+### Install Maverics Identity Orchestrator
+
+1. Obtain the latest Maverics RPM package.
+2. Place the package on the system where you'd like to install Maverics. If you're copying to a remote host, use [scp](https://www.ssh.com/academy/ssh/scp) over SSH.
+3. Run the following command. Use your filename to replace `maverics.rpm`.
+
+ `sudo rpm -Uvf maverics.rpm`
+
+ By default, Maverics is in the `/usr/local/bin` directory.
+
+4. Maverics runs as a service under `systemd`.
+5. To verify the Maverics service is running, run the following command:
+
+ `sudo service maverics status`
+
+6. The following message (or similar) appears.
+
+```
+Redirecting to /bin/systemctl status maverics.service
+ maverics.service - Maverics
+ Loaded: loaded (/etc/systemd/system/maverics.service; enabled; vendor preset: disabled)
+ Active: active (running) since Thu 2020-08-13 16:48:01 UTC; 24h ago
+ Main PID: 330772 (maverics)
+ Tasks: 5 (limit: 11389)
+ Memory: 14.0M
+ CGroup: /system.slice/maverics.service
+ └─330772 /usr/local/bin/maverics --config /etc/maverics/maverics.yaml
+ ```
+
+> [!NOTE]
+> If Maverics fails to start, execute the following command:
+
+ `journalctl --unit=maverics.service --reverse`
+
+ The most recent log entry appears in the output.
+
+7. The default `maverics.yaml` file is created in the `/etc/maverics` directory.
+8. Configure your Orchestrator to protect the application.
+9. Integrate with Azure AD B2C.
+10. Store and retrieve secrets from [Azure Key Vault](https://azure.microsoft.com/services/key-vault/).
+11. Define the location from where the Orchestrator reads its configuration.
+
+### Supply configuration using environment variables
+
+Configure your Orchestrator instances with environment variables.
+
+`MAVERICS_CONFIG`
+
+This environment variable informs the Orchestrator instance what YAML configuration files to use, and where to find them during startup or restart. Set the environment variable in `/etc/maverics/maverics.env`.
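
For example, a minimal `/etc/maverics/maverics.env` could look like the following sketch; the configuration file path is an assumption based on the default `maverics.yaml` location mentioned later in this article.

```bash
# /etc/maverics/maverics.env
# Point the Orchestrator at the YAML configuration file(s) to load on startup or restart.
MAVERICS_CONFIG=/etc/maverics/maverics.yaml
```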
+
+### Create the Orchestrator TLS configuration
+
+The `tls` field in `maverics.yaml` declares the transport layer security configurations your Orchestrator instance uses. TLS objects can be used by connectors and by the Orchestrator server.
+
+The `maverics` key is reserved for the Orchestrator server. Use other keys to inject a TLS object into a connector.
+
+```yaml
+tls:
+ maverics:
+ certFile: /etc/maverics/maverics.cert
+ keyFile: /etc/maverics/maverics.key
+```
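
If you need a certificate for a test environment, a self-signed pair can be generated with OpenSSL, as in the following sketch; the subject name is a placeholder, and production deployments should use a CA-issued certificate.

```bash
# Create a self-signed certificate and key for testing only; file paths match the tls block above.
sudo openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/maverics/maverics.key \
  -out /etc/maverics/maverics.cert \
  -subj "/CN=example.com"
```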
+
+### Configure the Azure AD B2C Connector
+
+Orchestrators use Connectors to integrate with authentication and attribute providers. The Orchestrator's App Gateway uses the Azure AD B2C connector as an authentication and attribute provider. Azure AD B2C uses the social IdP for authentication and then provides attributes to the Orchestrator, passing them in claims set in HTTP headers.
+
+The Connector configuration corresponds to the app registered in the Azure AD B2C tenant.
+
+1. From the app you registered in your Azure AD B2C tenant, copy the Client ID, client secret, and redirect URI.
+2. Enter a Connector name (example is `azureADB2C`).
+3. Set the connector `type` to be `azure`.
+4. Make a note of the Connector name. You'll use this value in other configuration parameters.
+5. Set the `authType` to `oidc`.
+6. For the `oauthClientID` parameter, set the Client ID you copied.
+7. For the `oauthClientSecret` parameter, set the Client secret you copied.
+8. For the `oauthRedirectURL` parameter, set the redirect URI you copied.
+9. The Azure AD B2C OIDC Connector uses the OIDC endpoint to discover metadata, including URLs and signing keys. For the `oidcWellKnownURL` parameter, set your tenant's well-known endpoint.
+
+```yaml
+connectors:
+  - name: azureADB2C
+    type: azure
+    oidcWellKnownURL: https://<tenant name>.b2clogin.com/<tenant name>.onmicrosoft.com/B2C_1_login/v2.0/.well-known/openid-configuration
+    oauthRedirectURL: https://example.com/oidc-endpoint
+    oauthClientID: <azureADB2CClientID>
+    oauthClientSecret: <azureADB2CClientSecret>
+    authType: oidc
+```
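
Before starting the Orchestrator, you can sanity-check the `oidcWellKnownURL` value by requesting the discovery document directly, as in this sketch; it assumes the `B2C_1_login` user flow shown above, so substitute your tenant and user flow names.

```bash
# The discovery document should return JSON containing the authorization, token, and jwks_uri endpoints.
curl -s "https://<tenant name>.b2clogin.com/<tenant name>.onmicrosoft.com/B2C_1_login/v2.0/.well-known/openid-configuration"
```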
+
+### Define Azure AD B2C as your authentication provider
+
+An authentication provider determines authentication for users who don't present a valid session during an app resource request. The Azure AD B2C tenant configuration determines how users are challenged for credentials and which other authentication policies apply, for example whether a second factor is required to complete authentication and what is returned to the Orchestrator App Gateway after authentication.
+
+The value for the `authProvider` must match your Connector `name` value.
+
+```yaml
+authProvider: azureADB2C
+```
+
+### Protect on-premises apps with an Orchestrator App Gateway
+
+The Orchestrator App Gateway configuration declares how Azure AD B2C protects your application and how users access the app.
+
+1. Enter an App gateway name.
+2. Set the `location`. The example uses the app root `/`.
+3. Define the protected application in `upstream`. Use the host:port convention: `https://example.com:8080`.
+4. Set the values for error and unauthorized pages.
+5. Define the HTTP header names and attribute values for the application to establish authentication and control. Header names typically correspond to app configuration. Attribute values are namespaced by the Connector. In the example, values returned from Azure AD B2C are prefixed with the Connector name `azureADB2C`. The suffix is the attribute name with the required value, for example `given_name`.
+6. Set the policies. Three actions are defined: `allowUnauthenticated`, `allowAnyAuthenticated`, and `allowIfAny`. Each action is associated with a `resource`. Policy is evaluated for that `resource`.
+
+>[!NOTE]
+>`headers` and `policies` can use JavaScript or GoLang service extensions to implement arbitrary logic.
+
+```yaml
+appgateways:
+ - name: Sonar
+ location: /
+ upstream: https://example.com:8080
+ errorPage: https://example.com:8080/sonar/error
+ unauthorizedPage: https://example.com:8080/sonar/accessdenied
+
+ headers:
+ SM_USER: azureADB2C.sub
+ firstname: azureADB2C.given_name
+ lastname: azureADB2C.family_name
+
+ policies:
+ - resource: ~ \.(jpg|png|ico|svg)
+ allowUnauthenticated: true
+ - resource: /
+ allowAnyAuthenticated: true
+ - resource: /sonar/daily_deals
+ allowIfAny:
+ azureADB2C.customAttribute: Rewards Member
+```
+
+### Azure Key Vault as secrets provider
+
+Secure the secrets your Orchestrator uses to connect to Azure AD B2C and other identity systems. Maverics can load secrets in plain text from `maverics.yaml`; however, in this tutorial, use Azure Key Vault as the secrets provider.
+
+Follow the instructions in [Quickstart: Set and retrieve a secret from Azure Key Vault using the Azure portal](../key-vault/secrets/quick-create-portal.md). Add your secrets to the vault and make a note of the `SECRET NAME` for each secret, for example, `AzureADB2CClientSecret`.
+
+To declare a value as a secret in a `maverics.yaml` config file, wrap the secret with angle brackets:
+
+```yaml
+connectors:
+ - name: AzureADB2C
+ type: azure
+ oauthClientID: <AzureADB2CClientID>
+ oauthClientSecret: <AzureADB2CClientSecret>
+```
+
+The value in the angle brackets must correspond to the `SECRET NAME` given to a secret in your Azure Key Vault.
+
+To load secrets from Azure Key Vault, set the environment variable `MAVERICS_SECRET_PROVIDER` in the file `/etc/maverics/maverics.env`, with the credentials found in the azure-credentials.json file. Use the following pattern:
+
+`MAVERICS_SECRET_PROVIDER='azurekeyvault://<KEYVAULT NAME>.vault.azure.net?clientID=<APPID>&clientSecret=<PASSWORD>&tenantID=<TENANT>'`
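
If the secrets aren't in the vault yet, a hedged Azure CLI sketch follows; the vault name is a placeholder, and the secret names must match the angle-bracket references in `maverics.yaml`.

```bash
# Store the connector secrets; the secret names must match the <...> references in maverics.yaml.
az keyvault secret set --vault-name "<KEYVAULT NAME>" --name "AzureADB2CClientID" --value "<client-id>"
az keyvault secret set --vault-name "<KEYVAULT NAME>" --name "AzureADB2CClientSecret" --value "<client-secret>"
```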
+
+### Complete the configuration
+
+The following example shows a complete Orchestrator configuration.
+
+```yaml
+version: 0.4.2
+listenAddress: ":443"
+tls:
+ maverics:
+ certFile: certs/maverics.crt
+ keyFile: certs/maverics.key
+
+authProvider: azureADB2C
+
+connectors:
+ - name: azureADB2C
+ type: azure
+ oidcWellKnownURL: https://<tenant name>.b2clogin.com/<tenant name>.onmicrosoft.com/B2C_1_login/v2.0/.well-known/openid-configuration
+ oauthRedirectURL: https://example.com/oidc-endpoint
+ oauthClientID: <azureADB2CClientID>
+ oauthClientSecret: <azureADB2CClientSecret>
+ authType: oidc
+
+appgateways:
+ - name: Sonar
+ location: /
+ upstream: http://example.com:8080
+ errorPage: http://example.com:8080/sonar/accessdenied
+ unauthorizedPage: http://example.com:8080/sonar/accessdenied
+
+ headers:
+ SM_USER: azureADB2C.sub
+ firstname: azureADB2C.given_name
+ lastname: azureADB2C.family_name
+
+ policies:
+ - resource: ~ \.(jpg|png|ico|svg)
+ allowUnauthenticated: true
+ - resource: /
+ allowAnyAuthenticated: true
+ - resource: /sonar/daily_deals
+ allowIfAny:
+ azureADB2C.customAttribute: Rewards Member
+```
+
+## Test the flow
+
+1. Navigate to the on-premises application URL, `https://example.com/sonar/dashboard`.
+2. The Orchestrator redirects to the user flow page.
+3. From the list, select the IdP.
+4. Enter credentials, including an MFA token, if required by the IdP.
+5. You're redirected to Azure AD B2C, which forwards the app request to the Orchestrator redirect URI.
+6. The Orchestrator evaluates policies, and calculates headers.
+7. The requested application appears.
+
+## Next steps
+
+- [Azure AD B2C custom policy overview](./custom-policy-overview.md)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
advisor Advisor Cost Optimization Workbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-optimization-workbook.md
Title: Understand and optimize your Azure costs with the new Azure Cost Optimization workbook. description: Understand and optimize your Azure costs with the new Azure Cost Optimization workbook. Previously updated : 07/17/2023 Last updated : 12/28/2023++ # Understand and optimize your Azure costs using the Cost Optimization workbook
-The Azure Cost Optimization workbook is designed to provide an overview and help you optimize costs of your Azure environment. It offers a set of cost-relevant insights and recommendations aligned with the WAF Cost Optimization pillar.
+The Azure Cost Optimization workbook is designed to provide an overview and help you optimize costs of your Azure environment. It offers a set of cost-relevant insights and recommendations aligned with the Well-Architected Framework Cost Optimization pillar.
## Overview
-The Azure Cost Optimization workbook serves as a centralized hub for some of the most commonly used tools that can help you drive utilization and efficiency goals. It offers a range of recommendations, including Azure Advisor cost recommendations, identification of idle resources, and management of improperly deallocated Virtual Machines. Additionally, it provides insights into using Azure Hybrid benefit options for Windows, Linux, and SQL databases. The workbook template is available in Azure Advisor gallery.
+The Azure Cost Optimization workbook serves as a centralized hub for some of the most commonly used tools that can help you drive utilization and efficiency goals. It offers a range of recommendations, including Azure Advisor cost recommendations, identification of idle resources, and management of improperly deallocated Virtual Machines. Additionally, it provides recommendations for applying Azure Reservations and Savings Plan for Compute and insights into using Azure Hybrid Benefit options. The workbook template is available in Azure Advisor gallery.
Here's how to get started:
-1. Navigate to [Workbooks gallery](https://aka.ms/advisorworkbooks) in Azure Advisor
-1. Open **Cost Optimization (Preview)** workbook template.
+1. Navigate to [Workbooks gallery](https://aka.ms/advisorworkbooks) in Azure Advisor.
+1. Open **Cost Optimization (Preview)** workbook template.
-The workbook is organized into different tabs, each focusing on a specific area to help you reduce the cost of your Azure environment.
-* Compute
-* Azure Hybrid Benefit
-* Storage
-* Networking
+The workbook is organized into different tabs and subtabs, each focusing on a specific area to help you reduce the cost of your Azure environment.
+
+* Overview
+* Rate Optimization
+
+ * Azure Hybrid Benefit
+ * Azure Reservations
+ * Azure Savings Plan for Compute
+
+* Usage Optimization
+
+ * Compute
+ * Storage
+ * Networking
+ * Other popular Azure services
Each tab supports the following capabilities:
-* **Filters** - use subscription, resource group and tag filters to focus on a specific workload.
+* **Filters** - use subscription, resource group, and tag filters to focus on a specific workload.
* **Export** - export the recommendations to share the insights and collaborate with your team more effectively.
* **Quick Fix** - apply the recommended optimization directly from the workbook page, streamlining the optimization process.

> [!NOTE]
-> The workbook serves as guidance and does not guarantee cost reduction.
+> The workbook serves as guidance and doesn't guarantee cost reduction.
+
-## Compute
+### Welcome
+The home page of the workbook highlights the goal and prerequisites. It also provides a way to submit feedback and raise issues.
-### Advisor recommendations
+### Resource overview
+This image shows the distribution of resources per region. Review where most of your resources are located, and check whether data is being transferred to other regions and whether that behavior is expected, since data transfer costs might apply. Note that the cost of an Azure service can vary between locations based on on-demand costs, local infrastructure costs, and replication costs.
-This query focuses on reviewing the Advisor recommendations related to compute. Some of the recommendations available in this query could be *Optimize virtual machine spend by resizing or shutting down underutilized instances* or *Buy reserved virtual machine instances to save money over pay-as-you-go costs*.
+### Security Recommendations
-### Virtual machines in Stopped State
+The Security Recommendations query focuses on reviewing the Azure Advisor security recommendations.
+Potentially, you could enhance the security of your workloads by reinvesting some of the cost savings identified from the workbook assessment.
-This query identifies Virtual Machines that are not properly deallocated. If a virtual machine's status is *Stopped* rather than *Stopped (Deallocated)*, you are still billed for the resource as the hardware remains allocated for you.
+### Reliability recommendations
-### Web Apps
-This query helps identify Azure App Services with and without Auto Scale, and App Services where the actual app might be stopped.
+The Reliability Recommendations query focuses on reviewing the Azure Advisor reliability recommendations.
+Potentially, you could enhance the reliability of your workloads by reinvesting some of the cost savings identified from the workbook assessment.
-### Azure Kubernetes Clusters (AKS)
+## Rate Optimization
-This query focuses on cost optimization opportunities specific to Azure Kubernetes Clusters (AKS). It provides recommendations such as:
-* Enabling cluster autoscaler to automatically adjust the number of agent nodes in response to resource constraints.
-* Considering the use of Azure Spot VMs for workloads that can handle interruptions, early terminations, or evictions.
-* Utilizing the Horizontal Pod Autoscaler to adjust the number of pods in a deployment based on CPU utilization or other selected metrics.
-* Using the Start/Stop feature in Azure Kubernetes Services (AKS) to optimize cost during off-peak hours.
-* Using appropriate VM SKUs per node pool and considering reserved instances where long-term capacity is expected.
+The Rate Optimization tab focuses on reviewing potential savings related to the rate optimization of your Azure services.
+ ### Azure Hybrid Benefit
-Windows VMs and VMSS not using Hybrid Benefit
+Azure Hybrid Benefit represents an excellent opportunity to save on Virtual Machines (VMs) operating system costs. Using the workbook, you can identify the opportunities to use the Azure Hybrid Benefit for VM/VMSS (Windows and Linux), SQL (SQL Server VMs, SQL DB and SQL MI), and Azure Stack HCI (VMs and AKS).
+
+> [!NOTE]
+> If you select a Dev/Test subscription in the scope of the workbook, then you should already have discounts on Windows and SQL licenses. So, any recommendations shown on the page don't apply to the subscription.
+
+#### Windows VM/VMSS
-Azure Hybrid Benefit represents an excellent opportunity to save on Virtual Machines OS costs. You can see potential savings using the Azure Hybrid Benefit Calculator. Check this link to learn more about the Azure Hybrid Benefit.
+Azure Hybrid Benefit represents an excellent opportunity to save on Virtual Machines OS costs.
+If you have Software Assurance, you can enable the [Azure Hybrid Benefit](../virtual-machines/windows/hybrid-use-benefit-licensing.md). You can see potential savings using [Azure Hybrid Benefit Calculator](https://azure.microsoft.com/pricing/hybrid-benefit/#calculator).
> [!NOTE]
-> If you have selected Dev/Test subscription(s) within the scope of this Workbook then they should already have discounts on Windows licenses so recommendations here don't apply to this subscription(s)
+> The query has a Quick Fix column that helps you to apply Azure Hybrid Benefit to Windows VMs.
+
+#### Linux VM/VMSS
+
+[Azure Hybrid Benefit for Linux](../virtual-machines/linux/azure-hybrid-benefit-linux.md) is a licensing benefit that helps you to significantly reduce the costs of running your Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) in the cloud.
+
+#### SQL
+
+Azure Hybrid Benefit represents an excellent opportunity to save costs on SQL instances.
+If you have Software Assurance, you can enable [SQL Hybrid Benefit](/azure/azure-sql/azure-hybrid-benefit).
+You can see potential savings using [Azure Hybrid Benefit Calculator](https://azure.microsoft.com/pricing/hybrid-benefit/#calculator).
+
+#### Azure Stack HCI
+
+Azure Hybrid Benefit represents an excellent opportunity to save costs on Azure Stack HCI. If you have Software Assurance, you can enable [Azure Stack HCI Hybrid Benefit](/azure-stack/hci/concepts/azure-hybrid-benefit-hci).
+
+### Azure Reservations
+
+Review Azure Reservations cost saving opportunities. Use filters for subscriptions, a look back period (7, 30 or 60 days), a term (1 year or 3 years), and a resource type. Learn more about [What are Azure Reservations?](../cost-management-billing/reservations/save-compute-costs-reservations.md) and how much you can [save with Reservations](https://azure.microsoft.com/pricing/reservations).
+
+### Azure savings plan for compute
-### Linux VM not using Hybrid Benefit
+Review Azure savings plan for compute cost saving opportunities. Use filters for subscriptions, a look back period (7, 30 or 60 days), and a term (1 year or 3 years). Learn more about [What is Azure savings plans for compute?](https://azure.microsoft.com/pricing/offers/savings-plan-compute) and how much you can [save with Savings Plan for Compute](https://azure.microsoft.com/pricing/offers/savings-plan-compute).
-Similar to Windows VMs, Azure Hybrid Benefit provides an excellent opportunity to save on Virtual Machine OS costs. The Azure Hybrid Benefit for Linux is a licensing benefit that significantly reduces the costs of running Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES) virtual machines (VMs) in the cloud.
+## Usage Optimization
-### SQL HUB Licenses
+The Usage Optimization tab focuses on reviewing potential savings related to usage optimization of your Azure services.
-Azure Hybrid Benefit can also be applied to SQL services, such as SQL server on VMs, SQL Database or SQL Managed Instance.
-## Storage
+### Compute
-### Advisor recommendations
+The following queries show compute resources that you can optimize to save money.
-Review the Advisor recommendations for Storage. This section provides insights into various recommendations such as "Blob storage reserved capacity" or "Use lifecycle management." These recommendations can help optimize your storage costs and improve efficiency.
-### Unattached Managed Disks
-This query focuses on the list of managed unattached disks. It automatically ignores disks used by Azure Site Recovery. Use this information to identify and remove any unattached disks that are no longer needed.
+#### Virtual Machines in a Stopped State
+
+This query identifies Virtual Machines that aren't properly deallocated. If a virtual machine's status is Stopped rather than Stopped (Deallocated), you're still billed for the resource as the hardware remains allocated for you. Learn more about [States and billing status of Azure Virtual Machines](../virtual-machines/states-billing.md).
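
Outside the workbook, a hedged Azure CLI sketch like the following can list VMs that are stopped but still allocated; the `-d` flag populates the `powerState` field, and the output columns are illustrative.

```bash
# VMs reporting 'VM stopped' are powered off but still allocated, so compute charges continue.
az vm list -d --query "[?powerState=='VM stopped'].{name:name, resourceGroup:resourceGroup, powerState:powerState}" -o table
```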
+
+#### Deallocated virtual machines
+
+A virtual machine in a deallocated state is not only powered off, but the underlying host infrastructure is also released, resulting in no charges for the allocated resources while the VM is in this state. However, some Azure resources such as disks and networking continue to incur charges.
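
To release the compute allocation for a VM you don't need running, you can deallocate it; a hedged sketch with placeholder names follows.

```bash
# Deallocate the VM so the host hardware is released; disks and some networking resources are still billed.
az vm deallocate --resource-group <resource-group> --name <vm-name>
```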
+
+#### Virtual Machine Scale Sets
+
+This query focuses on cost optimization opportunities specific to Virtual Machine Scale Sets. It provides recommendations such as:
+
+* Consider using Azure Spot VMs for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates for scheduling on a spot node pool.
+* Spot priority mix: Azure provides the flexibility of running a mix of uninterruptible standard VMs and interruptible Spot VMs for Virtual Machine Scale Set deployments. You can use Spot Priority Mix with Flexible orchestration to balance between high-capacity availability and lower infrastructure costs according to workload requirements.
+
+#### Advisor Recommendations
+Review the Advisor recommendations for Compute. Some of the recommendations available in this tile could be "Optimize virtual machine spend by resizing or shutting down underutilized instances", or "Buy reserved virtual machine instances to save money over pay-as-you-go costs."
+
+### Storage
+
+The following queries show storage resources that you can optimize to save money.
+
+#### Storage accounts which are not v2
+
+The Storage accounts which are not v2 query focuses on identifying the storage accounts which are configured as v1. There are several reasons to justify upgrading to v2, such as:
+
+* Ability to enable Storage Lifecycle Management;
+* Storage Reserved Instances;
+* Access tiers - you can transition data from a hotter access tier to a cooler access tier if there's no access for a period.
+
+Upgrading a v1 storage account to a general-purpose v2 account is free. You can specify the desired account tier during the upgrade process. If an account tier isn't specified on the upgrade, the default account tier of the upgraded account will be Hot. However, changing the storage access tier after the upgrade may result in changes to your bill, so we recommend that you specify the new account tier during an upgrade.
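
The upgrade can also be done with the Azure CLI; the following sketch assumes the Hot access tier, so pick the tier that matches your access patterns.

```bash
# Upgrade a general-purpose v1 account to v2 and set the account access tier explicitly.
az storage account update \
  --resource-group <resource-group> \
  --name <storage-account> \
  --set kind=StorageV2 \
  --access-tier Hot
```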
+
+#### Unattached Managed Disks
+
+The Unattached Managed Disks query helps you to identify unattached managed disks. Unattached disks represent a cost in the subscription. The query automatically ignores disks used by Azure Site Recovery. Use the information to identify and remove any unattached disks that are no longer needed.
> [!NOTE]
-> This query has a Quick Fix column that helps you to remove the disk if not needed.
+> The query has a Quick Fix column that helps you to remove the disk if not needed.
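
A hedged CLI sketch to list unattached disks outside the workbook; unlike the workbook query, it doesn't exclude disks used by Azure Site Recovery, so review the results before deleting anything.

```bash
# Managed disks in the 'Unattached' state aren't connected to any VM but still incur storage charges.
az disk list --query "[?diskState=='Unattached'].{name:name, resourceGroup:resourceGroup, sizeGb:diskSizeGb}" -o table
```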
+
+#### Disk Snapshots with + 30 Days
+
+The Disk Snapshots with + 30 Days query identifies snapshots that are older than 30 days. Identifying and managing outdated snapshots can help you optimize storage costs and ensure efficient use of your Azure environment.
+
+#### Snapshots using premium storage
-### Disk snapshots older than 30 days
-This query identifies snapshots that are older than 30 days. Identifying and managing outdated snapshots can help you optimize storage costs and ensure efficient use of your Azure environment.
+To save 60% of cost, we recommend storing your snapshots in Standard Storage, regardless of the storage type of the parent disk. It's the default option for Managed Disks snapshots. Migrate your snapshot from Premium to Standard Storage.
-## Networking
+#### Snapshots with deleted source disk
-### Advisor recommendations
-Review the Advisor recommendations for Networking. This section provides insights into various recommendations, such as "Reduce costs by deleting or reconfiguring idle virtual network gateways" or "Reduce costs by eliminating unprovisioned ExpressRoute circuits."
+The Snapshots with deleted source disk query identifies snapshots where the source disk has been deleted.
-### Application Gateway with empty backend pool
+#### Idle Backup
-Review the Application Gateways with empty backend pools. App gateways are considered idle if there isnΓÇÖt any backend pool with targets.
+Review protected items backup activity to determine if there are items not backed up in the last 90 days. This could either mean that the underlying resource that's being backed up doesn't exist anymore or there's some issue with the resource that's preventing backups from being taken reliably.
-### Load Balancer with empty backend pool
+#### Backup storage redundancy settings
-Review the Load Balancers with empty backend pools. Load Balancers are considered idle if there isn't any backend pool with targets.
+By default, when you configure backup for resources, geo-redundant storage (GRS) replication is applied to these backups. While this is the recommended storage replication option as it creates more redundancy for your critical data, you can choose to protect items using locally-redundant storage (LRS) if that meets your backup availability needs for dev-test workloads. Using LRS instead of GRS halves the cost of your backup storage.
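
A hedged CLI sketch for switching a Recovery Services vault to LRS; note that the redundancy setting generally can only be changed before items are protected in the vault.

```bash
# Use locally-redundant backup storage for dev-test workloads to roughly halve backup storage cost.
az backup vault backup-properties set \
  --resource-group <resource-group> \
  --name <vault-name> \
  --backup-storage-redundancy LocallyRedundant
```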
-### Unattached Public IPs
+#### Advisor Recommendations
-Review the list of idle Public IP Addresses. This query also shows Public IP addresses attached to idle Network Interface Cards (NICs)
+Review the Advisor recommendations for Storage. Some of the recommendations available in this tile could be "Blob storage reserved capacity", or "Use lifecycle management".
-### Idle Virtual Network Gateways
+### Networking
-Review the Idle Virtual Network Gateways. This query shows VPN Gateways without any active connection.
+The following queries show networking resources that you can optimize to save money.
+
+#### Azure Firewall Premium
+
+The Azure Firewall Premium query identifies Azure Firewalls with Premium SKU and evaluates whether the associated policy incorporates premium-only features or not. If a Premium SKU Firewall lacks a policy with premium features, such as TLS or intrusion detection, it is shown on the page. For more information about Azure Firewall SKUs, see [SKU comparison table](../firewall/choose-firewall-sku.md).
+
+#### Azure Firewall instances per region
+
+Optimize the use of Azure Firewall by having a central instance of Azure Firewall in the hub virtual network or Virtual WAN secure hub. Share the same firewall across many spoke virtual networks that are connected to the same hub from the same region. Ensure there's no unexpected cross-region traffic as part of the hub-spoke topology, nor multiple Azure firewall instances deployed to the same region. To learn more about Azure Firewall design principles, check [Azure Well-Architected Framework review - Azure Firewall](/azure/well-architected/service-guides/azure-firewall#cost-optimization).
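
To check how many firewall instances exist per region, a hedged Azure Resource Graph sketch follows; it assumes the `resource-graph` CLI extension is installed.

```bash
# Count Azure Firewall instances per region; more than one per region in a hub-spoke design is worth reviewing.
az graph query -q "resources | where type == 'microsoft.network/azurefirewalls' | summarize firewallCount = count() by location"
```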
+
+#### Application Gateway with empty backend pool
+
+Review the Application Gateways with empty backend pools.
+App gateways are considered idle if there isn't any backend pool with targets.
+
+#### Load Balancer with empty backend pool
+
+Review the Standard Load Balancers with empty backend pools. Load Balancers are considered idle if there isn't any backend pool with targets.
+
+#### Unattached Public IPs
+
+Review the orphaned Public IP addresses. The query also shows Public IP addresses attached to idle network interface cards (NICs).
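
A hedged CLI sketch for finding public IP addresses that aren't associated with any IP configuration:

```bash
# Public IPs with no ipConfiguration aren't attached to a NIC or load balancer but can still incur charges.
az network public-ip list --query "[?ipConfiguration==null].{name:name, resourceGroup:resourceGroup, ip:ipAddress}" -o table
```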
+
+#### Virtual Network Gateways
+
+Review idle Virtual Network Gateways that have no connections defined, as they may represent additional cost.
+
+#### Advisor Recommendations
+
+Review the Advisor recommendations for Networking. Some of the recommendations available in this tile could be "Reduce costs by deleting or reconfiguring idle virtual network gateways", or "Reduce costs by eliminating unprovisioned ExpressRoute circuits."
+
+### Top 10 services
+
+The following queries show other popular Azure resources that you can optimize to save money.
+
+#### Web Apps
+
+Review the App Service list.
+
+* Review the stopped App Services; they still incur charges because the underlying App Service plan remains provisioned.
+
+* Consider upgrading from the V2 SKU to the V3 SKU. The V3 SKU is cheaper than the comparable V2 SKU and supports [Reserved Instances and Savings plan for compute](https://azure.microsoft.com/pricing/details/app-service/windows/).
+
+* Determine the right reserved instance size before you buy - Before you buy a reservation, you should determine the size of the Premium v3 reserved instance that you need.
+
+* Use Autoscale appropriately - Autoscale can be used to provision resources for when they're needed or on demand, which allows you to minimize costs when your environment is idle.
+
+#### Azure Kubernetes Clusters (AKS)
+
+Review the AKS list. Some of the cost optimization opportunities are:
+
+* Enable cluster autoscaler to automatically adjust the number of agent nodes in response to resource constraints.
+* Consider using Azure Spot VMs for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates for scheduling on a spot node pool.
+* Utilize the Horizontal pod autoscaler to adjust the number of pods in a deployment depending on CPU utilization or other select metrics.
+* Use the Start/Stop feature in Azure Kubernetes Services (AKS).
+* Use appropriate VM SKU per node pool and reserved instances where long-term capacity is expected.
+
+#### Azure Synapse
+
+Review the Azure Synapse workspaces that don't have any SQL pools attached to them.
+
+#### Monitoring
+
+Review [Azure Monitor - Best Practices](../azure-monitor/best-practices-cost.md) for design checklists and configuration recommendations related to Azure Monitor Logs, Azure resources, Alerts, Virtual machines, Containers, and Application Insights.
+
+**Log Analytics**
+
+Review costs related to data ingestion on Log Analytics. The following advice could be of help in cost optimization:
+
+* Adopt commitment tiers where applicable.
+* Adopt an Azure Monitor Logs dedicated cluster if a single workspace doesn't ingest enough data to meet the minimum commitment tier (100 GB/day) but you can aggregate ingestion from more than one workspace in the same region.
+* Convert free-tier workspaces to the pay-as-you-go model and add them to an Azure Monitor Logs dedicated cluster where possible.
+
+🖱️ Select one or more Log Analytics workspaces to review the daily ingestion trend for the past 30 days and understand its usage.
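
To reproduce a similar ingestion trend outside the workbook, a hedged sketch using the CLI follows; the workspace GUID is a placeholder, and the `Usage` table reports ingested volume in MB.

```bash
# Daily billable ingestion in GB per data type over the last 30 days.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query "Usage | where TimeGenerated > ago(30d) | where IsBillable == true | summarize IngestedGB = sum(Quantity) / 1024 by bin(TimeGenerated, 1d), DataType | order by TimeGenerated asc" \
  --output table
```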
+
+**Azure Advisor Cost recommendations**
+
+Review the Advisor recommendations for Log Analytics. Some of the recommendations available in this tile could be *Consider removing unused restored tables* or *Consider configuring the low-cost Basic logs plan on selected tables*.
For more information, see:

* [Well-Architected cost optimization design principles](/azure/well-architected/cost/principles)
* [Cloud Adoption Framework manage cloud costs](/azure/cloud-adoption-framework/get-started/manage-costs)
* [Azure FinOps principles](/azure/cost-management-billing/finops/overview-finops)
* [Azure Advisor cost recommendations](advisor-reference-cost-recommendations.md)
ai-services Encrypt Data At Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/encrypt-data-at-rest.md
Previously updated : 01/19/2024 Last updated : 02/05/2024 #Customer intent: As a user of the Language Understanding (LUIS) service, I want to learn how encryption at rest works.
Data is encrypted and decrypted using [FIPS 140-2](https://en.wikipedia.org/wiki
## About encryption key management
-By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys called customer-managed keys (CMK). CMK offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+By default, your subscription uses Microsoft-managed encryption keys. There is also the option to manage your subscription with your own keys called customer-managed keys (CMKs). CMKs offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
## Customer-managed keys with Azure Key Vault
-There is also an option to manage your subscription with your own keys. Customer-managed keys (CMK), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
+There is also an option to manage your subscription with your own keys. Customer-managed keys (CMKs), also known as Bring your own key (BYOK), offer greater flexibility to create, rotate, disable, and revoke access controls. You can also audit the encryption keys used to protect your data.
You must use Azure Key Vault to store your customer-managed keys. You can either create your own keys and store them in a key vault, or you can use the Azure Key Vault APIs to generate keys. The Azure AI services resource and the key vault must be in the same region and in the same Microsoft Entra tenant, but they can be in different subscriptions. For more information about Azure Key Vault, see [What is Azure Key Vault?](../../key-vault/general/overview.md).
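
As a hedged illustration of the "use the Azure Key Vault APIs to generate keys" option, the following Python sketch creates an RSA key with the `azure-keyvault-keys` SDK; the vault URL and key name are placeholders:

```python
# Sketch: generate an RSA key in Azure Key Vault for use as a customer-managed key.
from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

key_client = KeyClient(
    vault_url="https://<your-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),
)

# Create the key (or a new version of it); the Azure AI services resource is then
# pointed at this key from its encryption settings.
key = key_client.create_rsa_key("<your-cmk-name>", size=2048)
print(key.name, key.id)
```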
You must use Azure Key Vault to store your customer-managed keys. You can either
There are some limitations when using the E0 tier with existing/previously created applications:
-* Migration to an E0 resource will be blocked. Users will only be able to migrate their apps to F0 resources. After you've migrated an existing resource to F0, you can create a new resource in the E0 tier. Learn more about [migration here](./luis-migration-authoring.md).
-* Moving applications to or from an E0 resource will be blocked. A work around for this limitation is to export your existing application, and import it as an E0 resource.
+* Migration to an E0 resource will be blocked. Users will only be able to migrate their apps to F0 resources. After you've migrated an existing resource to F0, you can create a new resource in the E0 tier.
+* Moving applications to or from an E0 resource will be blocked. A workaround for this limitation is to export your existing application and import it as an E0 resource.
* The Bing Spell check feature isn't supported. * Logging end-user traffic is disabled if your application is E0. * The Speech priming capability from the Azure AI Bot Service isn't supported for applications in the E0 tier. This feature is available via the Azure AI Bot Service, which doesn't support CMK.
ai-services Luis Concept Data Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-storage.md
Previously updated : 01/19/2024 Last updated : 02/05/2024 # Data storage and removal in Language Understanding (LUIS) Azure AI services
Last updated 01/19/2024
LUIS stores data encrypted in an Azure data store corresponding to [the region](luis-reference-regions.md) specified by the key.
-* Data used to train the model such as entities, intents, and utterances will be saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data will be deleted with it. If an application hasn't been used in 90 days, it will be deleted.
+* Data used to train the model, such as entities, intents, and utterances, is saved in LUIS for the lifetime of the application. If an owner or contributor deletes the app, this data is deleted with it. If an application hasn't been used in 90 days, it's deleted.
-* Application authors can choose to [enable logging](how-to/improve-application.md#log-user-queries-to-enable-active-learning) on the utterances that are sent to a published application. If enabled, utterances will be saved for 30 days, and can be viewed by the application author. If logging isn't enabled when the application is published, this data is not stored.
+* Application authors can choose to [enable logging](how-to/improve-application.md#log-user-queries-to-enable-active-learning) on the utterances that are sent to a published application. If enabled, utterances are saved for 30 days, and can be viewed by the application author. If logging isn't enabled when the application is published, this data isn't stored.
## Export and delete app Users have full control over [exporting](how-to/sign-in.md) and [deleting](how-to/sign-in.md) the app.
You can delete utterances from the list of user utterances that LUIS suggests in
If you don't want active learning utterances, you can [disable active learning](how-to/improve-application.md). Disabling active learning also disables logging. ### Disable logging utterances
-[Disabling active learning](how-to/improve-application.md) is disables logging.
+[Disabling active learning](how-to/improve-application.md) disables logging.
<a name="accounts"></a>
If you are not migrated, you can delete your account and all your apps will be d
Deleting account is available from the **Settings** page. Select your account name in the top right navigation bar to get to the **Settings** page. ## Delete an authoring resource
-If you have [migrated to an authoring resource](./luis-migration-authoring.md), deleting the resource itself from the Azure portal will delete all your applications associated with that resource, along with their example utterances and logs. The data is retained for 90 days before it is deleted permanently.
+If you have migrated to an authoring resource, deleting the resource itself from the Azure portal deletes all your applications associated with that resource, along with their example utterances and logs. The data is retained for 90 days before it is deleted permanently.
To delete your resource, go to the [Azure portal](https://portal.azure.com/#home) and select your LUIS authoring resource. Go to the **Overview** tab and select the **Delete** button on the top of the page. Then confirm your resource was deleted. ## Data inactivity as an expired subscription
-For the purposes of data retention and deletion, an inactive LUIS app may at _Microsoft's discretion_ be treated as an expired subscription. An app is considered inactive if it meets the following criteria for the last 90 days:
+For the purposes of data retention and deletion, an inactive LUIS app might at _Microsoft's discretion_ be treated as an expired subscription. An app is considered inactive if it meets the following criteria for the last 90 days:
* Has had **no** calls made to it. * Has not been modified.
For the purposes of data retention and deletion, an inactive LUIS app may at _Mi
## Next steps
-[Learn about exporting and deleting an app](how-to/sign-in.md)
+[Learn about exporting and deleting an app](how-to/sign-in.md).
ai-services Luis How To Collaborate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-how-to-collaborate.md
Previously updated : 01/19/2024 Last updated : 02/05/2024 # Add contributors to your app
Last updated 01/19/2024
[!INCLUDE [deprecation notice](./includes/deprecation-notice.md)]
-An app owner can add contributors to apps. These contributors can modify the model, train, and publish the app. Once you have [migrated](luis-migration-authoring.md) your account, _contributors_ are managed in the Azure portal for the authoring resource, using the **Access control (IAM)** page. Add a user, using the collaborator's email address and the _contributor_ role.
+An app owner can add contributors to apps. These contributors can modify the model, train, and publish the app. _Contributors_ are managed in the Azure portal for the authoring resource, using the **Access control (IAM)** page. Add a user using the collaborator's email address and the _contributor_ role.
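
A minimal sketch of that role assignment, assuming the Python `azure-mgmt-authorization` SDK; the scope, resource names, and principal object ID are placeholders (the GUID is the built-in Contributor role definition):

```python
# Sketch: assign the built-in Contributor role on a LUIS authoring resource.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Scope of the assignment: the authoring resource itself (placeholder names).
scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.CognitiveServices/accounts/<authoring-resource-name>"
)
contributor_role_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/b24988ac-6180-42a0-ab88-20f7382dd24c"
)

client.role_assignments.create(
    scope,
    str(uuid.uuid4()),  # each role assignment needs a unique GUID name
    RoleAssignmentCreateParameters(
        role_definition_id=contributor_role_id,
        principal_id="<collaborator-object-id>",
        principal_type="User",
    ),
)
```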
## Add contributor to Azure authoring resource
The tenant admin should work directly with the user who needs access granted to
If the tenant admin only wants certain users to use LUIS, there are a couple of possible solutions: * Grant "admin consent" (consent to all users of the Microsoft Entra ID), then set "User assignment required" to "Yes" under Enterprise Application Properties, and finally assign or add only the wanted users to the application. With this method, the administrator still provides "admin consent" to the app, but it's possible to control which users can access it.
-* A second solution, is by using the [Microsoft Entra identity and access management API in Microsoft Graph](/graph/azuread-identity-access-management-concept-overview) to provide consent to each specific user.
+* A second solution is to use the [Microsoft Entra identity and access management API in Microsoft Graph](/graph/azuread-identity-access-management-concept-overview) to provide consent to each specific user.
Learn more about Microsoft Entra users and consent: * [Restrict your app](../../active-directory/develop/howto-restrict-your-app-to-a-set-of-users.md) to a set of users
Learn more about Microsoft Entra users and consent:
* Learn [how to use versions](luis-how-to-manage-versions.md) to control your app life cycle. * Learn about [authoring resources](luis-how-to-azure-subscription.md) and [adding contributors](luis-how-to-collaborate.md) on that resource.
-* Learn [how to create](luis-how-to-azure-subscription.md) authoring and runtime resources
-* Migrate to the new [authoring resource](luis-migration-authoring.md)
+* Learn [how to create](luis-how-to-azure-subscription.md) authoring and runtime resources
ai-services Luis Migration Api V1 To V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-api-v1-to-v2.md
- Title: v1 to v2 API Migration-
-description: The version 1 endpoint and authoring Language Understanding APIs are deprecated. Use this guide to understand how to migrate to version 2 endpoint and authoring APIs.
-#
------ Previously updated : 01/19/2024--
-# API v1 to v2 Migration guide for LUIS apps
--
-The version 1 [endpoint](https://aka.ms/v1-endpoint-api-docs) and [authoring](https://aka.ms/v1-authoring-api-docs) APIs are deprecated. Use this guide to understand how to migrate to version 2 [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) and [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) APIs.
-
-## New Azure regions
-LUIS has new [regions](./luis-reference-regions.md) provided for the LUIS APIs. LUIS provides a different portal for region groups. The application must be authored in the same region you expect to query. Applications do not automatically migrate regions. You export the app from one region then import into another for it to be available in a new region.
-
-## Authoring route changes
-The authoring API route changed from using the **prog** route to using the **api** route.
--
-| version | route |
-|--|--|
-|1|/luis/v1.0/**prog**/apps|
-|2|/luis/**api**/v2.0/apps|
--
-## Endpoint route changes
-The endpoint API has new query string parameters as well as a different response. If the verbose flag is true, all intents, regardless of score, are returned in an array named intents, in addition to the topScoringIntent.
-
-| version | GET route |
-|--|--|
-|1|/luis/v1/application?ID={appId}&q={q}|
-|2|/luis/v2.0/apps/{appId}?q={q}[&timezoneOffset][&verbose][&spellCheck][&staging][&bing-spell-check-subscription-key][&log]|
--
-v1 endpoint success response:
-```json
-{
- "odata.metadata":"https://dialogice.cloudapp.net/odata/$metadata#domain","value":[
- {
- "id":"bccb84ee-4bd6-4460-a340-0595b12db294","q":"turn on the camera","response":"[{\"intent\":\"OpenCamera\",\"score\":0.976928055},{\"intent\":\"None\",\"score\":0.0230718572}]"
- }
- ]
-}
-```
-
-v2 endpoint success response:
-```json
-{
- "query": "forward to frank 30 dollars through HSBC",
- "topScoringIntent": {
- "intent": "give",
- "score": 0.3964121
- },
- "entities": [
- {
- "entity": "30",
- "type": "builtin.number",
- "startIndex": 17,
- "endIndex": 18,
- "resolution": {
- "value": "30"
- }
- },
- {
- "entity": "frank",
- "type": "frank",
- "startIndex": 11,
- "endIndex": 15,
- "score": 0.935219169
- },
- {
- "entity": "30 dollars",
- "type": "builtin.currency",
- "startIndex": 17,
- "endIndex": 26,
- "resolution": {
- "unit": "Dollar",
- "value": "30"
- }
- },
- {
- "entity": "hsbc",
- "type": "Bank",
- "startIndex": 36,
- "endIndex": 39,
- "resolution": {
- "values": [
- "BankeName"
- ]
- }
- }
- ]
-}
-```
-
-## Key management no longer in API
-The subscription endpoint key APIs are deprecated, returning 410 GONE.
-
-| version | route |
-|--|--|
-|1|/luis/v1.0/prog/subscriptions|
-|1|/luis/v1.0/prog/subscriptions/{subscriptionKey}|
-
-Azure [endpoint keys](luis-how-to-azure-subscription.md) are generated in the Azure portal. You assign the key to a LUIS app on the **[Publish](luis-how-to-azure-subscription.md)** page. You do not need to know the actual key value. LUIS uses the subscription name to make the assignment.
-
-## New versioning route
-The v2 model is now contained in a [version](luis-how-to-manage-versions.md). A version name is 10 characters in the route. The default version is "0.1".
-
-| version | route |
-|--|--|
-|1|/luis/v1.0/**prog**/apps/{appId}/entities|
-|2|/luis/**api**/v2.0/apps/{appId}/**versions**/{versionId}/entities|
-
-## Metadata renamed
-Several APIs that return LUIS metadata have new names.
-
-| v1 route name | v2 route name |
-|--|--|
-|PersonalAssistantApps |assistants|
-|applicationcultures|cultures|
-|applicationdomains|domains|
-|applicationusagescenarios|usagescenarios|
--
-## "Sample" renamed to "suggest"
-LUIS suggests utterances from existing [endpoint utterances](how-to/improve-application.md) that may enhance the model. In the previous version, this was named **sample**. In the new version, the name is changed from sample to **suggest**. This is called **[Review endpoint utterances](how-to/improve-application.md)** in the LUIS website.
-
-| version | route |
-|--|--|
-|1|/luis/v1.0/**prog**/apps/{appId}/entities/{entityId}/**sample**|
-|1|/luis/v1.0/**prog**/apps/{appId}/intents/{intentId}/**sample**|
-|2|/luis/**api**/v2.0/apps/{appId}/**versions**/{versionId}/entities/{entityId}/**suggest**|
-|2|/luis/**api**/v2.0/apps/{appId}/**versions**/{versionId}/intents/{intentId}/**suggest**|
--
-## Create app from prebuilt domains
-[Prebuilt domains](./howto-add-prebuilt-models.md) provide a predefined domain model. Prebuilt domains allow you to quickly develop your LUIS application for common domains. This API allows you to create a new app based on a prebuilt domain. The response is the new appID.
-
-|v2 route|verb|
-|--|--|
-|/luis/api/v2.0/apps/customprebuiltdomains |get, post|
-|/luis/api/v2.0/apps/customprebuiltdomains/{culture} |get|
-
-## Importing 1.x app into 2.x
-The exported 1.x app's JSON has some areas that you need to change before importing into [LUIS][LUIS] 2.0.
-
-### Prebuilt entities
-The [prebuilt entities](./howto-add-prebuilt-models.md) have changed. Make sure you are using the V2 prebuilt entities. This includes using [datetimeV2](luis-reference-prebuilt-datetimev2.md), instead of datetime.
-
-### Actions
-The actions property is no longer valid. It should be an empty
-
-### Labeled utterances
-V1 allowed labeled utterances to include spaces at the beginning or end of the word or phrase. Removed the spaces.
-
-## Common reasons for HTTP response status codes
-See [LUIS API response codes](luis-reference-response-codes.md).
-
-## Next steps
-
-Use the v2 API documentation to update existing REST calls to LUIS [endpoint](https://go.microsoft.com/fwlink/?linkid=2092356) and [authoring](https://go.microsoft.com/fwlink/?linkid=2092087) APIs.
-
-[LUIS]: ./luis-reference-regions.md
ai-services Luis Migration Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-authoring.md
- Title: Migrate to an Azure resource authoring key-
-description: This article describes how to migrate Language Understanding (LUIS) authoring authentication from an email account to an Azure resource.
-#
------ Previously updated : 01/19/2024-
-# Migrate to an Azure resource authoring key
---
-> [!IMPORTANT]
-> As of December 3rd 2020, existing LUIS users must have completed the migration process to continue authoring LUIS applications.
-
-Language Understanding (LUIS) authoring authentication has changed from an email account to an Azure resource. Use this article to learn how to migrate your account, if you haven't migrated yet.
--
-## What is migration?
-
-Migration is the process of changing authoring authentication from an email account to an Azure resource. Your account will be linked to an Azure subscription and an Azure authoring resource after you migrate.
-
-Migration has to be done from the [LUIS portal](https://www.luis.ai). If you create the authoring keys by using the LUIS CLI, for example, you'll need to complete the migration process in the LUIS portal. You can still have co-authors on your applications after migration, but these will be added on the Azure resource level instead of the application level. Migrating your account can't be reversed.
-
-> [!Note]
-> * If you need to create a prediction runtime resource, there's [a separate process](luis-how-to-azure-subscription.md#create-luis-resources) to create it.
-> * See the [migration notes](#migration-notes) section below for information on how your applications and contributors will be affected.
-> * Authoring your LUIS app is free, as indicated by the F0 tier. Learn [more about pricing tiers](luis-limits.md#resource-usage-and-limits).
-
-## Migration prerequisites
-
-* A valid Azure subscription. Ask your tenant admin to add you on the subscription, or [sign up for a free one](https://azure.microsoft.com/free/cognitive-services).
-* A LUIS Azure authoring resource from the LUIS portal or from the [Azure portal](https://portal.azure.com/#create/Microsoft.CognitiveServicesLUISAllInOne).
- * Creating an authoring resource from the LUIS portal is part of the migration process described in the next section.
-* If you're a collaborator on applications, applications won't automatically migrate. You will be prompted to export these apps while going through the migration flow. You can also use the [export API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c40). You can import the app back into LUIS after migration. The import process creates a new app with a new app ID, for which you're the owner.
-* If you're the owner of the application, you won't need to export your apps because they'll migrate automatically. An email template with a list of all collaborators for each application is provided, so they can be notified of the migration process.
-
-## Migration steps
-
-1. When you sign-in to the [LUIS portal](https://www.luis.ai), an Azure migration window will open with the steps for migration. If you dismiss it, you won't be able to proceed with authoring your LUIS applications, and the only action displayed will be to continue with the migration.
-
- > [!div class="mx-imgBorder"]
- > ![Migration Window Intro](./media/migrate-authoring-key/notify-azure-migration.png)
-
-2. If you have collaborators on any of your apps, you will see a list of application names owned by you, along with the authoring region and collaborator emails on each application. We recommend sending your collaborators an email notifying them about the migration by clicking on the **send** symbol button on the left of the application name.
-A `*` symbol will appear next to the application name if a collaborator has a prediction resource assigned to your application. After migration, these apps will still have these prediction resources assigned to them even though the collaborators will not have access to author your applications. However, this assignment will be broken if the owner of the prediction resource [regenerated the keys](./luis-how-to-azure-subscription.md#regenerate-an-azure-key) from the Azure portal.
-
- > [!div class="mx-imgBorder"]
- > ![Notify collaborators](./media/migrate-authoring-key/notify-azure-migration-collabs.png)
--
- For each collaborator and app, the default email application opens with a lightly formatted email. You can edit the email before sending it. The email template includes the exact app ID and app name.
-
- ```html
- Dear Sir/Madam,
-
- I will be migrating my LUIS account to Azure. Consequently, you will no longer have access to the following app:
-
- App Id: <app-ID-omitted>
- App name: Human Resources
-
- Thank you
- ```
- > [!Note]
- > After you migrate your account to Azure, your apps will no longer be available to collaborators.
-
-3. If you're a collaborator on any apps, a list of application names shared with you is shown along with the authoring region and owner emails on each application. It is recommend to export a copy of the apps by clicking on the export button on the left of the application name. You can import these apps back after you migrate, because they won't be automatically migrated with you.
-A `*` symbol will appear next to the application name if you have a prediction resource assigned to an application. After migration, your prediction resource will still be assigned to these applications even though you will no longer have access to author these apps. If you want to break the assignment between your prediction resource and the application, you will need to go to Azure portal and [regenerate the keys](./luis-how-to-azure-subscription.md#regenerate-an-azure-key).
-
- > [!div class="mx-imgBorder"]
- > ![Export your applications.](./media/migrate-authoring-key/migration-export-apps.png)
--
-4. In the window for migrating regions, you will be asked to migrate your applications to an Azure resource in the same region they were authored in. LUIS has three authoring regions [and portals](./luis-reference-regions.md#luis-authoring-regions). The window will show the regions where your owned applications were authored. The displayed migration regions may be different depending on the regional portal you use, and apps you've authored.
-
- > [!div class="mx-imgBorder"]
- > ![Multi region migration.](./media/migrate-authoring-key/migration-regional-flow.png)
-
-5. For each region, choose to create a new LUIS authoring resource, or to migrate to an existing one using the buttons.
-
- > [!div class="mx-imgBorder"]
- > ![choose to create or existing authoring resource](./media/migrate-authoring-key/migration-multiregional-resource.png)
-
- Provide the following information:
-
- * **Tenant Name**: The tenant that your Azure subscription is associated with. By default this is set to the tenant you're currently using. You can switch tenants by closing this window and selecting the avatar in the top right of the screen, containing your initials. Select **Migrate to Azure** to re-open the window.
- * **Azure Subscription Name**: The subscription that will be associated with the resource. If you have more than one subscription that belongs to your tenant, select the one you want from the drop-down list.
- * **Authoring Resource Name**: A custom name that you choose. It's used as part of the URL for your authoring and prediction endpoint queries. If you are creating a new authoring resource, note that the resource name can only include alphanumeric characters, `-`, and can't start or end with `-`. If any other symbols are included in the name,
- resource creation and migration will fail.
- * **Azure Resource Group Name**: A custom resource group name that you choose from the drop-down list. Resource groups allow you to group Azure resources for access and management. If you currently do not have a resource group in your subscription, you will not be allowed to create one in the LUIS portal. Go to [Azure portal](https://portal.azure.com/#create/Microsoft.ResourceGroup) to create one then go to LUIS to continue the sign-in process.
-
-6. After you have successfully migrated in all regions, select finish. You will now have access to your applications. You can continue authoring and maintaining all your applications in all regions within the portal.
-
-## Migration notes
-
-* Before migration, coauthors are known as _collaborators_ on the LUIS app level. After migration, the Azure role of _contributor_ is used for the same functionality on the Azure resource level.
-* If you have signed-in to more than one [LUIS regional portal](./luis-reference-regions.md#luis-authoring-regions), you will be asked to migrate in multiple regions at once.
-* Applications will automatically migrate with you if you're the owner of the application. Applications will not migrate with you if you're a collaborator on the application. However, collaborators will be prompted to export the apps they need.
-* Application owners can't choose a subset of apps to migrate and there is no way for an owner to know if collaborators have migrated.
-* Migration does not automatically move or add collaborators to the Azure authoring resource. The app owner is the one who needs to complete this step after migration. This step requires [permissions to the Azure authoring resource](./luis-how-to-collaborate.md).
-* After contributors are assigned to the Azure resource, they will need to migrate before they can access applications. Otherwise, they won't have access to author the applications.
--
-## Using apps after migration
-
-After the migration process, all your LUIS apps for which you're the owner will now be assigned to a single LUIS authoring resource.
-The **My Apps** list shows the apps migrated to the new authoring resource. Before you access your apps, select **Choose a different authoring resource** to select the subscription and authoring resource to view the apps that can be authored.
-
-> [!div class="mx-imgBorder"]
-> ![select subscription and authoring resource](./media/migrate-authoring-key/select-sub-and-resource.png)
--
-If you plan to edit your apps programmatically, you'll need the authoring key values. These values are displayed by clicking **Manage** at the top of the screen in the LUIS portal, and then selecting **Azure Resources**. They're also available in the Azure portal on the resource's **Key and endpoints** page. You can also create more authoring resources and assign them from the same page.
-
-## Adding contributors to authoring resources
--
-Learn [how to add contributors](luis-how-to-collaborate.md) on your authoring resource. Contributors will have access to all applications under that resource.
-
-You can add contributors to the authoring resource from the Azure portal, on the **Access Control (IAM)** page for that resource. For more information, see [Add contributors to your app](luis-how-to-collaborate.md).
-
-> [!Note]
-> If the owner of the LUIS app migrated and added the collaborator as a contributor on the Azure resource, the collaborator will still have no access to the app unless they also migrate.
-
-## Troubleshooting the migration process
-
-If you cannot find your Azure subscription in the drop-down list:
-* Ensure that you have a valid Azure subscription that's authorized to create Azure AI services resources. Go to the [Azure portal](https://portal.azure.com) and check the status of the subscription. If you don't have one, [create a free Azure account](https://azure.microsoft.com/free/cognitive-services/).
-* Ensure that you're in the proper tenant associated with your valid subscription. You can switch tenants selecting the avatar in the top right of the screen, containing your initials.
-
- > [!div class="mx-imgBorder"]
- > ![Page for switching directories](./media/migrate-authoring-key/switch-directories.png)
-
-If you have an existing authoring resource but can't find it when you select the **Use Existing Authoring Resource** option:
-* Your resource was probably created in a different region than the one your are trying to migrate in.
-* Create a new resource from the LUIS portal instead.
-
-If you select the **Create New Authoring Resource** option and migration fails with the error message "Failed retrieving user's Azure information, retry again later":
-* Your subscription might have 10 or more authoring resources per region, per subscription. If that's the case, you won't be able to create a new authoring resource.
-* Migrate by selecting the **Use Existing Authoring Resource** option and selecting one of the existing resources under your subscription.
-
-## Create new support request
-
-If you are having any issues with the migration that are not addressed in the troubleshooting section, please [create a support topic](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) and provide the information below with the following fields:
-
- * **Issue Type**: Technical
- * **Subscription**: Choose a subscription from the dropdown list
- * **Service**: Search and select "Azure AI services"
- * **Resource**: Choose a LUIS resource if there is an existing one. If not, select General question.
-
-## Next steps
-
-* Review [concepts about authoring and runtime keys](luis-how-to-azure-subscription.md)
-* Review how to [assign keys](luis-how-to-azure-subscription.md) and [add contributors](luis-how-to-collaborate.md)
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/role-based-access-control.md
A user that should only be validating and reviewing LUIS applications, typically
* [LUIS Programmatic v2.0 APIs](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c2f) All the APIs under:
- * [LUIS Endpoint APIs v2.0](./luis-migration-api-v1-to-v2.md)
+ * LUIS Endpoint APIs v2.0
* [LUIS Endpoint APIs v3.0](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) * [LUIS Endpoint APIs v3.0-preview](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0-preview/operations/5cb0a9459a1fe8fa44c28dd8)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/whats-new.md
Learn what's new in the service. These items include release notes, videos, blog
### December 2020
-* All LUIS users are required to [migrate to a LUIS authoring resource](luis-migration-authoring.md)
+* All LUIS users are required to migrate to a LUIS authoring resource.
* New [evaluation endpoints](luis-how-to-batch-test.md#batch-testing-using-the-rest-api) that allow you to submit batch tests using the REST API, and get accuracy results for your intents and entities. Available starting with the v3.0-preview LUIS Endpoint. ### June 2020
Learn what's new in the service. These items include release notes, videos, blog
### September 3, 2019
-* Azure authoring resource - [migrate now](luis-migration-authoring.md).
+* Azure authoring resource - migrate now.
* 500 apps per Azure resource * 100 versions per app * Turkish support for prebuilt entities
ai-services Use Blocklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/how-to/use-blocklist.md
Create a new C# console app and open it in your preferred editor or IDE. Paste i
string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT"); string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+BlocklistClient blocklistClient = new BlocklistClient(new Uri(endpoint), new AzureKeyCredential(key));
var blocklistName = "<your_list_name>"; var blocklistDescription = "<description>";
var data = new
description = blocklistDescription, };
-var createResponse = client.CreateOrUpdateTextBlocklist(blocklistName, RequestContent.Create(data));
+var createResponse = blocklistClient.CreateOrUpdateTextBlocklist(blocklistName, RequestContent.Create(data));
+ if (createResponse.Status == 201) { Console.WriteLine("\nBlocklist {0} created.", blocklistName);
else if (createResponse.Status == 200)
1. Optionally replace `<description>` with a custom description. 1. Run the code.
+#### [Java](#tab/java)
+
+Create a Java application and open it in your preferred editor or IDE. Paste in the following code.
+
+```java
+String endpoint = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_ENDPOINT");
+String key = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_KEY");
+BlocklistClient blocklistClient = new BlocklistClientBuilder()
+ .credential(new KeyCredential(key))
+ .endpoint(endpoint).buildClient();
+
+String blocklistName = "<your_list_name>";
++
+Map<String, String> description = new HashMap<>();
+description.put("description", "<description>");
+BinaryData resource = BinaryData.fromObject(description);
+RequestOptions requestOptions = new RequestOptions();
+Response<BinaryData> response =
+ blocklistClient.createOrUpdateTextBlocklistWithResponse(blocklistName, resource, requestOptions);
+if (response.getStatusCode() == 201) {
+ System.out.println("\nBlocklist " + blocklistName + " created.");
+} else if (response.getStatusCode() == 200) {
+ System.out.println("\nBlocklist " + blocklistName + " updated.");
+}
+```
+
+1. Replace `<your_list_name>` with a custom name for your list. Allowed characters: `0-9, A-Z, a-z, - . _ ~`.
+1. Optionally replace `<description>` with a custom description.
+1. Run the code.
#### [Python](#tab/python) Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. ```python import os
-from azure.ai.contentsafety import ContentSafetyClient
+from azure.ai.contentsafety import BlocklistClient
from azure.core.credentials import AzureKeyCredential from azure.ai.contentsafety.models import TextBlocklist from azure.core.exceptions import HttpResponseError
from azure.core.exceptions import HttpResponseError
key = os.environ["CONTENT_SAFETY_KEY"] endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
-# Create a Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+# Create a Blocklist client
+client = BlocklistClient(endpoint, AzureKeyCredential(key))
blocklist_name = "<your_list_name>" blocklist_description = "<description>" try:
- blocklist = client.create_or_update_text_blocklist(blocklist_name=blocklist_name, resource={"description": blocklist_description})
+ blocklist = client.create_or_update_text_blocklist(
+ blocklist_name=blocklist_name,
+ options=TextBlocklist(blocklist_name=blocklist_name, description=blocklist_description),
+ )
if blocklist: print("\nBlocklist created or updated: ") print(f"Name: {blocklist.blocklist_name}, Description: {blocklist.description}")
except HttpResponseError as e:
1. Replace `<description>` with a custom description. 1. Run the script.
+#### [JavaScript](#tab/javascript)
+
+Create a new JavaScript script and open it in your preferred editor or IDE. Paste in the following code.
+
+```javascript
+const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
+ { isUnexpected } = require("@azure-rest/ai-content-safety");
+const { AzureKeyCredential } = require("@azure/core-auth");
+
+// Load the .env file if it exists
+require("dotenv").config();
+
+const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"] || "<endpoint>";
+const key = process.env["CONTENT_SAFETY_API_KEY"] || "<key>";
+
+const credential = new AzureKeyCredential(key);
+const client = ContentSafetyClient(endpoint, credential);
+
+async function createOrUpdateTextBlocklist() {
+ const blocklistName = "<your_list_name>";
+ const blocklistDescription = "<description>";
+
+ const createOrUpdateTextBlocklistParameters = {
+ contentType: "application/merge-patch+json",
+ body: {
+ description: blocklistDescription,
+ },
+ };
+
+ const result = await client
+ .path("/text/blocklists/{blocklistName}", blocklistName)
+ .patch(createOrUpdateTextBlocklistParameters);
+
+ if (isUnexpected(result)) {
+ throw result;
+ }
+
+ console.log(
+ "Blocklist created or updated. Name: ",
+ result.body.blocklistName,
+ ", Description: ",
+ result.body.description
+ );
+}
+
+(async () => {
+ await createOrUpdateTextBlocklist();
+})().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
+
+1. Replace `<your_list_name>` with a custom name for your list. Allowed characters: `0-9, A-Z, a-z, - . _ ~`.
+1. Optionally replace `<description>` with a custom description.
+1. Run the script.
+ ### Add blocklistItems to the list
Create a new C# console app and open it in your preferred editor or IDE. Paste i
```csharp string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT"); string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+BlocklistClient blocklistClient = new BlocklistClient(new Uri(endpoint), new AzureKeyCredential(key));
var blocklistName = "<your_list_name>";
-string blockItemText1 = "<block_item_text_1>";
-string blockItemText2 = "<block_item_text_2>";
-var blockItems = new TextBlockItemInfo[] { new TextBlockItemInfo(blockItemText1), new TextBlockItemInfo(blockItemText2) };
-var addedBlockItems = client.AddBlockItems(blocklistName, new AddBlockItemsOptions(blockItems));
+string blocklistItemText1 = "<block_item_text_1>";
+string blocklistItemText2 = "<block_item_text_2>";
+
+var blocklistItems = new TextBlocklistItem[] { new TextBlocklistItem(blocklistItemText1), new TextBlocklistItem(blocklistItemText2) };
+var addedBlocklistItems = blocklistClient.AddOrUpdateBlocklistItems(blocklistName, new AddOrUpdateTextBlocklistItemsOptions(blocklistItems));
-if (addedBlockItems != null && addedBlockItems.Value != null)
+if (addedBlocklistItems != null && addedBlocklistItems.Value != null)
{
- Console.WriteLine("\nBlockItems added:");
- foreach (var addedBlockItem in addedBlockItems.Value.Value)
+ Console.WriteLine("\nBlocklistItems added:");
+ foreach (var addedBlocklistItem in addedBlocklistItems.Value.BlocklistItems)
{
- Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", addedBlockItem.BlockItemId, addedBlockItem.Text, addedBlockItem.Description);
+ Console.WriteLine("BlocklistItemId: {0}, Text: {1}, Description: {2}", addedBlocklistItem.BlocklistItemId, addedBlocklistItem.Text, addedBlocklistItem.Description);
+ }
+}
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Replace the values of the `blocklistItemText1` and `blocklistItemText2` fields with the items you'd like to add to your blocklist. The maximum length of a blocklist item is 128 characters.
+1. Optionally add more blocklist item strings to the `blocklistItems` parameter.
+1. Run the code.
+
+#### [Java](#tab/java)
+
+Create a Java application and open it in your preferred editor or IDE. Paste in the following code.
+
+```java
+String endpoint = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_ENDPOINT");
+String key = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_KEY");
+BlocklistClient blocklistClient = new BlocklistClientBuilder()
+ .credential(new KeyCredential(key))
+ .endpoint(endpoint).buildClient();
+
+String blocklistName = "<your_list_name>";
+
+String blockItemText1 = "<block_item_text_1>";
+String blockItemText2 = "<block_item_text_2>";
+List<TextBlocklistItem> blockItems = Arrays.asList(new TextBlocklistItem(blockItemText1).setDescription("Kill word"),
+ new TextBlocklistItem(blockItemText2).setDescription("Hate word"));
+AddOrUpdateTextBlocklistItemsResult addedBlockItems = blocklistClient.addOrUpdateBlocklistItems(blocklistName,
+ new AddOrUpdateTextBlocklistItemsOptions(blockItems));
+if (addedBlockItems != null && addedBlockItems.getBlocklistItems() != null) {
+ System.out.println("\nBlockItems added:");
+ for (TextBlocklistItem addedBlockItem : addedBlockItems.getBlocklistItems()) {
+ System.out.println("BlockItemId: " + addedBlockItem.getBlocklistItemId() + ", Text: " + addedBlockItem.getText() + ", Description: " + addedBlockItem.getDescription());
} } ```
Create a new Python script and open it in your preferred editor or IDE. Paste in
```python import os
-from azure.ai.contentsafety import ContentSafetyClient
+from azure.ai.contentsafety import BlocklistClient
from azure.core.credentials import AzureKeyCredential from azure.ai.contentsafety.models import (
- TextBlockItemInfo,
- AddBlockItemsOptions
+ AddOrUpdateTextBlocklistItemsOptions, TextBlocklistItem
) from azure.core.exceptions import HttpResponseError key = os.environ["CONTENT_SAFETY_KEY"] endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
-# Create an Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+# Create a Blocklist client
+client = BlocklistClient(endpoint, AzureKeyCredential(key))
blocklist_name = "<your_list_name>"
-block_item_text_1 = "<block_item_text_1>"
-block_item_text_2 = "<block_item_text_2>"
+blocklist_item_text_1 = "<block_item_text_1>"
+blocklist_item_text_2 = "<block_item_text_2>"
-block_items = [TextBlockItemInfo(text=block_item_text_1), TextBlockItemInfo(text=block_item_text_2)]
+blocklist_items = [TextBlocklistItem(text=blocklist_item_text_1), TextBlocklistItem(text=blocklist_item_text_2)]
try:
- result = client.add_block_items(
- blocklist_name=blocklist_name,
- body=AddBlockItemsOptions(block_items=block_items),
- )
- if result and result.value:
- print("\nBlock items added: ")
- for block_item in result.value:
- print(f"BlockItemId: {block_item.block_item_id}, Text: {block_item.text}, Description: {block_item.description}")
+    result = client.add_or_update_blocklist_items(
+        blocklist_name=blocklist_name, options=AddOrUpdateTextBlocklistItemsOptions(blocklist_items=blocklist_items)
+    )
+    for blocklist_item in result.blocklist_items:
+ print(
+ f"BlocklistItemId: {blocklist_item.blocklist_item_id}, Text: {blocklist_item.text}, Description: {blocklist_item.description}"
+ )
except HttpResponseError as e:
- print("\nAdd block items failed: ")
+ print("\nAdd blocklistItems failed: ")
if e.error: print(f"Error code: {e.error.code}") print(f"Error message: {e.error.message}")
except HttpResponseError as e:
``` 1. Replace `<your_list_name>` with the name you used in the list creation step.
-1. Replace the values of the `block_item_text_1` and `block_item_text_2` fields with the items you'd like to add to your blocklist. The maximum length of a blockItem is 128 characters.
+1. Replace the values of the `blocklist_item_text_1` and `blocklist_item_text_2` fields with the items you'd like to add to your blocklist. The maximum length of a blocklist item is 128 characters.
1. Optionally add more blocklist item strings to the `blocklist_items` parameter. 1. Run the script.
+#### [JavaScript](#tab/javascript)
++
+```javascript
+const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
+ { isUnexpected } = require("@azure-rest/ai-content-safety");
+const { AzureKeyCredential } = require("@azure/core-auth");
+
+// Load the .env file if it exists
+require("dotenv").config();
+
+const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"] || "<endpoint>";
+const key = process.env["CONTENT_SAFETY_API_KEY"] || "<key>";
+
+const credential = new AzureKeyCredential(key);
+const client = ContentSafetyClient(endpoint, credential);
+
+async function addBlocklistItems() {
+ const blocklistName = "<your_list_name>";
+ const blocklistItemText1 = "<block_item_text_1>";
+ const blocklistItemText2 = "<block_item_text_2>";
+ const addOrUpdateBlocklistItemsParameters = {
+ body: {
+ blocklistItems: [
+ {
+ description: "Test blocklist item 1",
+ text: blocklistItemText1,
+ },
+ {
+ description: "Test blocklist item 2",
+ text: blocklistItemText2,
+ },
+ ],
+ },
+ };
+
+ const result = await client
+ .path("/text/blocklists/{blocklistName}:addOrUpdateBlocklistItems", blocklistName)
+ .post(addOrUpdateBlocklistItemsParameters);
+
+ if (isUnexpected(result)) {
+ throw result;
+ }
+
+ console.log("Blocklist items added: ");
+ if (result.body.blocklistItems) {
+ for (const blocklistItem of result.body.blocklistItems) {
+ console.log(
+ "BlocklistItemId: ",
+ blocklistItem.blocklistItemId,
+ ", Text: ",
+ blocklistItem.text,
+ ", Description: ",
+ blocklistItem.description
+ );
+ }
+ }
+}
+(async () => {
+ await addBlocklistItems();
+})().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Replace the values of the `blocklistItemText1` and `blocklistItemText2` fields with the items you'd like to add to your blocklist. The maximum length of a blocklist item is 128 characters.
+1. Optionally add more blocklist item strings to the `blocklistItems` parameter.
+1. Run the script.
> [!NOTE]
var blocklistName = "<your_list_name>";
// After you edit your blocklist, it usually takes effect in 5 minutes, please wait some time before analyzing with blocklist after editing. var request = new AnalyzeTextOptions("<your_input_text>"); request.BlocklistNames.Add(blocklistName);
-request.BreakByBlocklists = true;
+request.HaltOnBlocklistHit = true;
Response<AnalyzeTextResult> response; try
catch (RequestFailedException ex)
throw; }
-if (response.Value.BlocklistsMatchResults != null)
+if (response.Value.BlocklistsMatch != null)
{ Console.WriteLine("\nBlocklist match result:");
- foreach (var matchResult in response.Value.BlocklistsMatchResults)
+ foreach (var matchResult in response.Value.BlocklistsMatch)
{
- Console.WriteLine("Blockitem was hit in text: Offset: {0}, Length: {1}", matchResult.Offset, matchResult.Length);
- Console.WriteLine("BlocklistName: {0}, BlockItemId: {1}, BlockItemText: {2}, ", matchResult.BlocklistName, matchResult.BlockItemId, matchResult.BlockItemText);
+ Console.WriteLine("BlocklistName: {0}, BlocklistItemId: {1}, BlocklistText: {2}, ", matchResult.BlocklistName, matchResult.BlocklistItemId, matchResult.BlocklistItemText);
+ }
+}
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Replace the `request` input text with whatever text you want to analyze.
+1. Run the script.
+
+#### [Java](#tab/java)
+
+Create a Java application and open it in your preferred editor or IDE. Paste in the following code.
+
+```java
+String endpoint = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_ENDPOINT");
+String key = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_KEY");
+
+ContentSafetyClient contentSafetyClient = new ContentSafetyClientBuilder()
+ .credential(new KeyCredential(key))
+ .endpoint(endpoint).buildClient();
+
+String blocklistName = "<your_list_name>";
+
+AnalyzeTextOptions request = new AnalyzeTextOptions("<sample_text>");
+request.setBlocklistNames(Arrays.asList(blocklistName));
+request.setHaltOnBlocklistHit(true);
+
+AnalyzeTextResult analyzeTextResult;
+try {
+ analyzeTextResult = contentSafetyClient.analyzeText(request);
+} catch (HttpResponseException ex) {
+ System.out.println("Analyze text failed.\nStatus code: " + ex.getResponse().getStatusCode() + ", Error message: " + ex.getMessage());
+ throw ex;
+}
+
+if (analyzeTextResult.getBlocklistsMatch() != null) {
+ System.out.println("\nBlocklist match result:");
+ for (TextBlocklistMatch matchResult : analyzeTextResult.getBlocklistsMatch()) {
+ System.out.println("BlocklistName: " + matchResult.getBlocklistName() + ", BlockItemId: " + matchResult.getBlocklistItemId() + ", BlockItemText: " + matchResult.getBlocklistItemText());
} } ```
from azure.core.exceptions import HttpResponseError
key = os.environ["CONTENT_SAFETY_KEY"] endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
-# Create an Content Safety client
+# Create a Content Safety client
client = ContentSafetyClient(endpoint, AzureKeyCredential(key)) blocklist_name = "<your_list_name>" input_text = "<your_input_text>" try:
- # After you edit your blocklist, it usually takes effect in 5 minutes, please wait some time before analyzing with blocklist after editing.
- analysis_result = client.analyze_text(AnalyzeTextOptions(text=input_text, blocklist_names=[blocklist_name], break_by_blocklists=False))
- if analysis_result and analysis_result.blocklists_match_results:
+ # After you edit your blocklist, it usually takes effect in 5 minutes, please wait some time before analyzing
+ # with blocklist after editing.
+ analysis_result = client.analyze_text(
+ AnalyzeTextOptions(text=input_text, blocklist_names=[blocklist_name], halt_on_blocklist_hit=False)
+ )
+ if analysis_result and analysis_result.blocklists_match:
print("\nBlocklist match results: ")
- for match_result in analysis_result.blocklists_match_results:
- print(f"Block item was hit in text, Offset={match_result.offset}, Length={match_result.length}.")
- print(f"BlocklistName: {match_result.blocklist_name}, BlockItemId: {match_result.block_item_id}, BlockItemText: {match_result.block_item_text}")
+ for match_result in analysis_result.blocklists_match:
+ print(
+ f"BlocklistName: {match_result.blocklist_name}, BlocklistItemId: {match_result.blocklist_item_id}, "
+ f"BlocklistItemText: {match_result.blocklist_item_text}"
+ )
except HttpResponseError as e: print("\nAnalyze text failed: ") if e.error:
except HttpResponseError as e:
1. Replace the `input_text` variable with whatever text you want to analyze. 1. Run the script.
+#### [JavaScript](#tab/javascript)
+
+```javascript
+const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
+ { isUnexpected } = require("@azure-rest/ai-content-safety");
+const { AzureKeyCredential } = require("@azure/core-auth");
+
+// Load the .env file if it exists
+require("dotenv").config();
+
+const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"] || "<endpoint>";
+const key = process.env["CONTENT_SAFETY_API_KEY"] || "<key>";
+
+const credential = new AzureKeyCredential(key);
+const client = ContentSafetyClient(endpoint, credential);
+
+async function analyzeTextWithBlocklists() {
+ const blocklistName = "<your_list_name>";
+ const inputText = "<your_input_text>";
+ const analyzeTextParameters = {
+ body: {
+ text: inputText,
+ blocklistNames: [blocklistName],
+ haltOnBlocklistHit: false,
+ },
+ };
+
+ const result = await client.path("/text:analyze").post(analyzeTextParameters);
+
+ if (isUnexpected(result)) {
+ throw result;
+ }
+
+ console.log("Blocklist match results: ");
+ if (result.body.blocklistsMatch) {
+ for (const blocklistMatchResult of result.body.blocklistsMatch) {
+ console.log(
+ "BlocklistName: ",
+ blocklistMatchResult.blocklistName,
+ ", BlocklistItemId: ",
+ blocklistMatchResult.blocklistItemId,
+ ", BlocklistItemText: ",
+ blocklistMatchResult.blocklistItemText
+ );
+ }
+ }
+}
+
+(async () => {
+ await analyzeTextWithBlocklists();
+})().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Replace the `inputText` variable with whatever text you want to analyze.
+1. Run the script.
+ + ## Other blocklist operations This section contains more operations to help you manage and use the blocklist feature.
Create a new C# console app and open it in your preferred editor or IDE. Paste i
```csharp string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT"); string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+BlocklistClient blocklistClient = new BlocklistClient(new Uri(endpoint), new AzureKeyCredential(key));
var blocklistName = "<your_list_name>";
-var allBlockitems = client.GetTextBlocklistItems(blocklistName);
-Console.WriteLine("\nList BlockItems:");
-foreach (var blocklistItem in allBlockitems)
+var allBlocklistitems = blocklistClient.GetTextBlocklistItems(blocklistName);
+Console.WriteLine("\nList BlocklistItems:");
+foreach (var blocklistItem in allBlocklistitems)
{
- Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", blocklistItem.BlockItemId, blocklistItem.Text, blocklistItem.Description);
+ Console.WriteLine("BlocklistItemId: {0}, Text: {1}, Description: {2}", blocklistItem.BlocklistItemId, blocklistItem.Text, blocklistItem.Description);
+}
+
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Run the script.
+
+#### [Java](#tab/java)
+
+Create a Java application and open it in your preferred editor or IDE. Paste in the following code.
+
+```java
+String endpoint = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_ENDPOINT");
+String key = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_KEY");
+BlocklistClient blocklistClient = new BlocklistClientBuilder()
+ .credential(new KeyCredential(key))
+ .endpoint(endpoint).buildClient();
+
+String blocklistName = "<your_list_name>";
+
+PagedIterable<TextBlocklistItem> allBlockitems = blocklistClient.listTextBlocklistItems(blocklistName);
+System.out.println("\nList BlockItems:");
+for (TextBlocklistItem blocklistItem : allBlockitems) {
+ System.out.println("BlockItemId: " + blocklistItem.getBlocklistItemId() + ", Text: " + blocklistItem.getText() + ", Description: " + blocklistItem.getDescription());
} ``` 1. Replace `<your_list_name>` with the name you used in the list creation step. 1. Run the script. + #### [Python](#tab/python) Create a new Python script and open it in your preferred editor or IDE. Paste in the following code.
Create a new Python script and open it in your preferred editor or IDE. Paste in
```python import os
-from azure.ai.contentsafety import ContentSafetyClient
+from azure.ai.contentsafety import BlocklistClient
from azure.core.credentials import AzureKeyCredential from azure.core.exceptions import HttpResponseError key = os.environ["CONTENT_SAFETY_KEY"] endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
-# Create an Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+# Create a Blocklist client
+client = BlocklistClient(endpoint, AzureKeyCredential(key))
blocklist_name = "<your_list_name>" try:
- block_items = client.list_text_blocklist_items(blocklist_name=blocklist_name)
- if block_items:
- print("\nList block items: ")
- for block_item in block_items:
- print(f"BlockItemId: {block_item.block_item_id}, Text: {block_item.text}, Description: {block_item.description}")
+ blocklist_items = client.list_text_blocklist_items(blocklist_name=blocklist_name)
+ if blocklist_items:
+ print("\nList blocklist items: ")
+ for blocklist_item in blocklist_items:
+ print(
+ f"BlocklistItemId: {blocklist_item.blocklist_item_id}, Text: {blocklist_item.text}, "
+ f"Description: {blocklist_item.description}"
+ )
except HttpResponseError as e:
- print("\nList block items failed: ")
+ print("\nList blocklist items failed: ")
if e.error: print(f"Error code: {e.error.code}") print(f"Error message: {e.error.message}")
except HttpResponseError as e:
1. Replace `<your_list_name>` with the name you used in the list creation step. 1. Run the script.
+#### [JavaScript](#tab/javascript)
+
+```javascript
+const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
+ { isUnexpected } = require("@azure-rest/ai-content-safety");
+const { AzureKeyCredential } = require("@azure/core-auth");
+
+// Load the .env file if it exists
+require("dotenv").config();
+
+const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"] || "<endpoint>";
+const key = process.env["CONTENT_SAFETY_API_KEY"] || "<key>";
+
+const credential = new AzureKeyCredential(key);
+const client = ContentSafetyClient(endpoint, credential);
+
+async function listBlocklistItems() {
+ const blocklistName = "<your_list_name>";
+
+ const result = await client
+ .path("/text/blocklists/{blocklistName}/blocklistItems", blocklistName)
+ .get();
+
+ if (isUnexpected(result)) {
+ throw result;
+ }
+
+ console.log("List blocklist items: ");
+ if (result.body.value) {
+ for (const blocklistItem of result.body.value) {
+ console.log(
+ "BlocklistItemId: ",
+ blocklistItem.blocklistItemId,
+ ", Text: ",
+ blocklistItem.text,
+ ", Description: ",
+ blocklistItem.description
+ );
+ }
+ }
+}
+
+(async () => {
+ await listBlocklistItems();
+})().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Run the script.
+
Create a new C# console app and open it in your preferred editor or IDE. Paste i
```csharp string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT"); string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+BlocklistClient blocklistClient = new BlocklistClient(new Uri(endpoint), new AzureKeyCredential(key));
-var blocklists = client.GetTextBlocklists();
+var blocklists = blocklistClient.GetTextBlocklists();
Console.WriteLine("\nList blocklists:"); foreach (var blocklist in blocklists) {
- Console.WriteLine("BlocklistName: {0}, Description: {1}", blocklist.BlocklistName, blocklist.Description);
+ Console.WriteLine("BlocklistName: {0}, Description: {1}", blocklist.Name, blocklist.Description);
+}
+```
+
+Run the script.
+
+#### [Java](#tab/java)
+
+Create a Java application and open it in your preferred editor or IDE. Paste in the following code.
+
+```java
+String endpoint = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_ENDPOINT");
+String key = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_KEY");
+BlocklistClient blocklistClient = new BlocklistClientBuilder()
+ .credential(new KeyCredential(key))
+ .endpoint(endpoint).buildClient();
+
+PagedIterable<TextBlocklist> allTextBlocklists = blocklistClient.listTextBlocklists();
+System.out.println("\nList Blocklist:");
+for (TextBlocklist blocklist : allTextBlocklists) {
+ System.out.println("Blocklist: " + blocklist.getName() + ", Description: " + blocklist.getDescription());
} ```
Create a new Python script and open it in your preferred editor or IDE. Paste in
```python import os
-from azure.ai.contentsafety import ContentSafetyClient
+from azure.ai.contentsafety import BlocklistClient
from azure.core.credentials import AzureKeyCredential from azure.core.exceptions import HttpResponseError key = os.environ["CONTENT_SAFETY_KEY"] endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
-# Create an Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+# Create a Blocklist client
+client = BlocklistClient(endpoint, AzureKeyCredential(key))
+ try: blocklists = client.list_text_blocklists()
except HttpResponseError as e:
Run the script. -
+#### [JavaScript](#tab/javascript)
+```javascript
+const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
+ { isUnexpected } = require("@azure-rest/ai-content-safety");
+const { AzureKeyCredential } = require("@azure/core-auth");
+
+// Load the .env file if it exists
+require("dotenv").config();
+
+const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"] || "<endpoint>";
+const key = process.env["CONTENT_SAFETY_API_KEY"] || "<key>";
+
+const credential = new AzureKeyCredential(key);
+const client = ContentSafetyClient(endpoint, credential);
+
+async function listTextBlocklists() {
+ const result = await client.path("/text/blocklists").get();
+
+ if (isUnexpected(result)) {
+ throw result;
+ }
+
+ console.log("List blocklists: ");
+ if (result.body.value) {
+ for (const blocklist of result.body.value) {
+ console.log(
+ "BlocklistName: ",
+ blocklist.blocklistName,
+ ", Description: ",
+ blocklist.description
+ );
+ }
+ }
+}
+
+(async () => {
+ await listTextBlocklists();
+})().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
+
+Run the script.
++ ### Get a blocklist by blocklistName
Create a new C# console app and open it in your preferred editor or IDE. Paste i
```csharp string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT"); string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");-
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+BlocklistClient blocklistClient = new BlocklistClient(new Uri(endpoint), new AzureKeyCredential(key));
var blocklistName = "<your_list_name>";
-var getBlocklist = client.GetTextBlocklist(blocklistName);
+var getBlocklist = blocklistClient.GetTextBlocklist(blocklistName);
if (getBlocklist != null && getBlocklist.Value != null) { Console.WriteLine("\nGet blocklist:");
- Console.WriteLine("BlocklistName: {0}, Description: {1}", getBlocklist.Value.BlocklistName, getBlocklist.Value.Description);
+ Console.WriteLine("BlocklistName: {0}, Description: {1}", getBlocklist.Value.Name, getBlocklist.Value.Description);
+}
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Run the script.
+
+#### [Java](#tab/java)
+
+Create a Java application and open it in your preferred editor or IDE. Paste in the following code.
+
+```java
+String endpoint = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_ENDPOINT");
+String key = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_KEY");
+BlocklistClient blocklistClient = new BlocklistClientBuilder()
+ .credential(new KeyCredential(key))
+ .endpoint(endpoint).buildClient();
+
+String blocklistName = "<your_list_name>";
+
+TextBlocklist getBlocklist = blocklistClient.getTextBlocklist(blocklistName);
+if (getBlocklist != null) {
+ System.out.println("\nGet blocklist:");
+ System.out.println("BlocklistName: " + getBlocklist.getName() + ", Description: " + getBlocklist.getDescription());
} ```
Create a new Python script and open it in your preferred editor or IDE. Paste in
```python import os
-from azure.ai.contentsafety import ContentSafetyClient
+from azure.ai.contentsafety import BlocklistClient
from azure.core.credentials import AzureKeyCredential from azure.core.exceptions import HttpResponseError key = os.environ["CONTENT_SAFETY_KEY"] endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
-# Create an Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+# Create a Blocklist client
+client = BlocklistClient(endpoint, AzureKeyCredential(key))
blocklist_name = "<your_list_name>"
except HttpResponseError as e:
1. Replace `<your_list_name>` with the name you used in the list creation step. 1. Run the script.
+#### [JavaScript](#tab/javascript)
+
+```javascript
+const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
+ { isUnexpected } = require("@azure-rest/ai-content-safety");
+const { AzureKeyCredential } = require("@azure/core-auth");
+
+// Load the .env file if it exists
+require("dotenv").config();
+
+const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"] || "<endpoint>";
+const key = process.env["CONTENT_SAFETY_API_KEY"] || "<key>";
+
+const credential = new AzureKeyCredential(key);
+const client = ContentSafetyClient(endpoint, credential);
+
+async function getTextBlocklist() {
+ const blocklistName = "<your_list_name>";
+
+ const result = await client.path("/text/blocklists/{blocklistName}", blocklistName).get();
+
+ if (isUnexpected(result)) {
+ throw result;
+ }
+
+ console.log("Get blocklist: ");
+ console.log("Name: ", result.body.blocklistName, ", Description: ", result.body.description);
+}
++
+(async () => {
+ await getTextBlocklist();
+})().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Run the script.
+
Create a new C# console app and open it in your preferred editor or IDE. Paste i
```csharp string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT"); string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");-
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+BlocklistClient blocklistClient = new BlocklistClient(new Uri(endpoint), new AzureKeyCredential(key));
var blocklistName = "<your_list_name>";
-var getBlockItemId = "<your_block_item_id>";
+var getBlocklistItemId = "<your_block_item_id>";
+
+var getBlocklistItem = blocklistClient.GetTextBlocklistItem(blocklistName, getBlocklistItemId);
+
+Console.WriteLine("\nGet BlocklistItem:");
+Console.WriteLine("BlocklistItemId: {0}, Text: {1}, Description: {2}", getBlocklistItem.Value.BlocklistItemId, getBlocklistItem.Value.Text, getBlocklistItem.Value.Description);
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Replace `<your_block_item_id>` with the ID of a previously added item.
+1. Run the script.
+
+#### [Java](#tab/java)
+
+Create a Java application and open it in your preferred editor or IDE. Paste in the following code.
+
+```java
+String endpoint = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_ENDPOINT");
+String key = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_KEY");
+BlocklistClient blocklistClient = new BlocklistClientBuilder()
+ .credential(new KeyCredential(key))
+ .endpoint(endpoint).buildClient();
-var getBlockItem = client.GetTextBlocklistItem(blocklistName, getBlockItemId);
+String blocklistName = "<your_list_name>";
-Console.WriteLine("\nGet BlockItem:");
-Console.WriteLine("BlockItemId: {0}, Text: {1}, Description: {2}", getBlockItem.Value.BlockItemId, getBlockItem.Value.Text, getBlockItem.Value.Description);
+String getBlockItemId = "<your_block_item_id>";
+
+TextBlocklistItem getBlockItem = blocklistClient.getTextBlocklistItem(blocklistName, getBlockItemId);
+System.out.println("\nGet BlockItem:");
+System.out.println("BlockItemId: " + getBlockItem.getBlocklistItemId() + ", Text: " + getBlockItem.getText() + ", Description: " + getBlockItem.getDescription());
``` 1. Replace `<your_list_name>` with the name you used in the list creation step.
Create a new Python script and open it in your preferred editor or IDE. Paste in
```python import os
-from azure.ai.contentsafety import ContentSafetyClient
+from azure.ai.contentsafety import BlocklistClient
from azure.core.credentials import AzureKeyCredential
-from azure.ai.contentsafety.models import TextBlockItemInfo, AddBlockItemsOptions
+from azure.ai.contentsafety.models import TextBlocklistItem, AddOrUpdateTextBlocklistItemsOptions
from azure.core.exceptions import HttpResponseError key = os.environ["CONTENT_SAFETY_KEY"] endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
-# Create an Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+# Create a Blocklist client
+client = BlocklistClient(endpoint, AzureKeyCredential(key))
blocklist_name = "<your_list_name>"
-block_item_text_1 = "<block_item_text>"
+blocklist_item_text_1 = "<block_item_text>"
try:
- # Add a blockItem
- add_result = client.add_block_items(
+ # Add a blocklistItem
+ add_result = client.add_or_update_blocklist_items(
blocklist_name=blocklist_name,
- body=AddBlockItemsOptions(block_items=[TextBlockItemInfo(text=block_item_text_1)]),
+ options=AddOrUpdateTextBlocklistItemsOptions(blocklist_items=[TextBlocklistItem(text=blocklist_item_text_1)]),
)
- if not add_result or not add_result.value or len(add_result.value) <= 0:
- raise RuntimeError("BlockItem not created.")
- block_item_id = add_result.value[0].block_item_id
-
- # Get this blockItem by blockItemId
- block_item = client.get_text_blocklist_item(
- blocklist_name=blocklist_name,
- block_item_id= block_item_id
+ if not add_result or not add_result.blocklist_items or len(add_result.blocklist_items) <= 0:
+ raise RuntimeError("BlocklistItem not created.")
+ blocklist_item_id = add_result.blocklist_items[0].blocklist_item_id
+
+ # Get this blocklistItem by blocklistItemId
+ blocklist_item = client.get_text_blocklist_item(blocklist_name=blocklist_name, blocklist_item_id=blocklist_item_id)
+ print("\nGet blocklistItem: ")
+ print(
+ f"BlocklistItemId: {blocklist_item.blocklist_item_id}, Text: {blocklist_item.text}, Description: {blocklist_item.description}"
)
- print("\nGet blockitem: ")
- print(f"BlockItemId: {block_item.block_item_id}, Text: {block_item.text}, Description: {block_item.description}")
except HttpResponseError as e:
- print("\nGet block item failed: ")
+ print("\nGet blocklist item failed: ")
if e.error: print(f"Error code: {e.error.code}") print(f"Error message: {e.error.message}")
except HttpResponseError as e:
1. Replace `<block_item_text>` with your block item text. 1. Run the script.
+#### [JavaScript](#tab/javascript)
+
+```javascript
+const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
+ { isUnexpected } = require("@azure-rest/ai-content-safety");
+const { AzureKeyCredential } = require("@azure/core-auth");
+
+// Load the .env file if it exists
+require("dotenv").config();
+
+const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"] || "<endpoint>";
+const key = process.env["CONTENT_SAFETY_API_KEY"] || "<key>";
+
+const credential = new AzureKeyCredential(key);
+const client = ContentSafetyClient(endpoint, credential);
+
+async function getBlocklistItem() {
+ const blocklistName = "<your_list_name>";
+
+ const blocklistItemId = "<your_block_item_id>";
+
+ // Get this blocklistItem by blocklistItemId
+ const blocklistItem = await client
+ .path(
+ "/text/blocklists/{blocklistName}/blocklistItems/{blocklistItemId}",
+ blocklistName,
+ blocklistItemId
+ )
+ .get();
+
+ if (isUnexpected(blocklistItem)) {
+ throw blocklistItem;
+ }
+
+ console.log("Get blocklistitem: ");
+ console.log(
+ "BlocklistItemId: ",
+ blocklistItem.body.blocklistItemId,
+ ", Text: ",
+ blocklistItem.body.text,
+ ", Description: ",
+ blocklistItem.body.description
+ );
+}
++
+(async () => {
+ await getBlocklistItem();
+})().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Replace `<your_block_item_id>` with the ID of the item you want to get.
+1. Run the script.
### Remove blocklistItems from a blocklist.
Create a new C# console app and open it in your preferred editor or IDE. Paste i
```csharp string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT"); string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+BlocklistClient blocklistClient = new BlocklistClient(new Uri(endpoint), new AzureKeyCredential(key));
var blocklistName = "<your_list_name>";
-var removeBlockItemId = "<your_block_item_id>";
-var removeBlockItemIds = new List<string> { removeBlockItemId };
-var removeResult = client.RemoveBlockItems(blocklistName, new RemoveBlockItemsOptions(removeBlockItemIds));
+var removeBlocklistItemId = "<your_block_item_id>";
+var removeBlocklistItemIds = new List<string> { removeBlocklistItemId };
+var removeResult = blocklistClient.RemoveBlocklistItems(blocklistName, new RemoveTextBlocklistItemsOptions(removeBlocklistItemIds));
if (removeResult != null && removeResult.Status == 204) {
- Console.WriteLine("\nBlockItem removed: {0}.", removeBlockItemId);
+ Console.WriteLine("\nBlocklistItem removed: {0}.", removeBlocklistItemId);
} ```
if (removeResult != null && removeResult.Status == 204)
1. Replace `<your_block_item_id>` with the ID of a previously added item. 1. Run the script.
+#### [Java](#tab/java)
+
+Create a Java application and open it in your preferred editor or IDE. Paste in the following code.
+
+```java
+String endpoint = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_ENDPOINT");
+String key = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_KEY");
+BlocklistClient blocklistClient = new BlocklistClientBuilder()
+ .credential(new KeyCredential(key))
+ .endpoint(endpoint).buildClient();
+
+String blocklistName = "<your_list_name>";
+
+String removeBlockItemId = "<your_block_item_id>";
+
+List<String> removeBlockItemIds = new ArrayList<>();
+removeBlockItemIds.add(removeBlockItemId);
+blocklistClient.removeBlocklistItems(blocklistName, new RemoveTextBlocklistItemsOptions(removeBlockItemIds));
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Replace `<your_block_item_id>` with the ID of a previously added item.
+1. Run the script.
+ #### [Python](#tab/python) Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. ```python import os
-from azure.ai.contentsafety import ContentSafetyClient
+from azure.ai.contentsafety import BlocklistClient
from azure.core.credentials import AzureKeyCredential from azure.ai.contentsafety.models import (
- TextBlockItemInfo,
- AddBlockItemsOptions,
- RemoveBlockItemsOptions
+ TextBlocklistItem,
+ AddOrUpdateTextBlocklistItemsOptions,
+ RemoveTextBlocklistItemsOptions,
) from azure.core.exceptions import HttpResponseError key = os.environ["CONTENT_SAFETY_KEY"] endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
-# Create an Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+# Create a Blocklist client
+client = BlocklistClient(endpoint, AzureKeyCredential(key))
blocklist_name = "<your_list_name>"
-block_item_text_1 = "<block_item_text>"
+blocklist_item_text_1 = "<block_item_text>"
try:
- # Add a blockItem
- add_result = client.add_block_items(
+ # Add a blocklistItem
+ add_result = client.add_or_update_blocklist_items(
blocklist_name=blocklist_name,
- body=AddBlockItemsOptions(block_items=[TextBlockItemInfo(text=block_item_text_1)]),
+ options=AddOrUpdateTextBlocklistItemsOptions(blocklist_items=[TextBlocklistItem(text=blocklist_item_text_1)]),
)
- if not add_result or not add_result.value or len(add_result.value) <= 0:
- raise RuntimeError("BlockItem not created.")
- block_item_id = add_result.value[0].block_item_id
+ if not add_result or not add_result.blocklist_items or len(add_result.blocklist_items) <= 0:
+ raise RuntimeError("BlocklistItem not created.")
+ blocklist_item_id = add_result.blocklist_items[0].blocklist_item_id
- # Remove this blockItem by blockItemId
- client.remove_block_items(
- blocklist_name=blocklist_name,
- body=RemoveBlockItemsOptions(block_item_ids=[block_item_id])
+ # Remove this blocklistItem by blocklistItemId
+ client.remove_blocklist_items(
+ blocklist_name=blocklist_name, options=RemoveTextBlocklistItemsOptions(blocklist_item_ids=[blocklist_item_id])
)
- print(f"\nRemoved blockItem: {add_result.value[0].block_item_id}")
+ print(f"\nRemoved blocklistItem: {add_result.blocklist_items[0].blocklist_item_id}")
except HttpResponseError as e:
- print("\nRemove block item failed: ")
+ print("\nRemove blocklist item failed: ")
if e.error: print(f"Error code: {e.error.code}") print(f"Error message: {e.error.message}")
except HttpResponseError as e:
1. Replace `<block_item_text>` with your block item text. 1. Run the script.
+#### [JavaScript](#tab/javascript)
+
+```javascript
+const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
+ { isUnexpected } = require("@azure-rest/ai-content-safety");
+const { AzureKeyCredential } = require("@azure/core-auth");
+
+// Load the .env file if it exists
+require("dotenv").config();
+
+const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"] || "<endpoint>";
+const key = process.env["CONTENT_SAFETY_API_KEY"] || "<key>";
+
+const credential = new AzureKeyCredential(key);
+const client = ContentSafetyClient(endpoint, credential);
+
+// Sample: Remove blocklistItems from a blocklist
+async function removeBlocklistItems() {
+ const blocklistName = "<your_list_name>";
+
+ const blocklistItemId = "<your_block_item_id>";
+
+ // Remove this blocklistItem by blocklistItemId
+ const removeBlocklistItemsParameters = {
+ body: {
+ blocklistItemIds: [blocklistItemId],
+ },
+ };
+ const removeBlocklistItem = await client
+ .path("/text/blocklists/{blocklistName}:removeBlocklistItems", blocklistName)
+ .post(removeBlocklistItemsParameters);
+
+ if (isUnexpected(removeBlocklistItem)) {
+ throw removeBlocklistItem;
+ }
+
+ console.log("Removed blocklistItem: ", blocklistItemText);
+}
++
+(async () => {
+ await removeBlocklistItems();
+})().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Replace `<your_block_item_id>` with the ID of the item you want to remove.
+1. Run the script.
+
Create a new C# console app and open it in your preferred editor or IDE. Paste i
```csharp string endpoint = Environment.GetEnvironmentVariable("CONTENT_SAFETY_ENDPOINT"); string key = Environment.GetEnvironmentVariable("CONTENT_SAFETY_KEY");
-ContentSafetyClient client = new ContentSafetyClient(new Uri(endpoint), new AzureKeyCredential(key));
+BlocklistClient blocklistClient = new BlocklistClient(new Uri(endpoint), new AzureKeyCredential(key));
var blocklistName = "<your_list_name>";
-var deleteResult = client.DeleteTextBlocklist(blocklistName);
+var deleteResult = blocklistClient.DeleteTextBlocklist(blocklistName);
if (deleteResult != null && deleteResult.Status == 204) { Console.WriteLine("\nDeleted blocklist.");
if (deleteResult != null && deleteResult.Status == 204)
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step. 1. Run the script.
+#### [Java](#tab/java)
+
+Create a Java application and open it in your preferred editor or IDE. Paste in the following code.
+
+```java
+String endpoint = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_ENDPOINT");
+String key = Configuration.getGlobalConfiguration().get("CONTENT_SAFETY_KEY");
+BlocklistClient blocklistClient = new BlocklistClientBuilder()
+ .credential(new KeyCredential(key))
+ .endpoint(endpoint).buildClient();
+
+String blocklistName = "<your_list_name>";
+
+blocklistClient.deleteTextBlocklist(blocklistName);
+```
+
+1. Replace `<your_list_name>` with the name you used in the list creation step.
+1. Run the script.
+ #### [Python](#tab/python) Create a new Python script and open it in your preferred editor or IDE. Paste in the following code. ```python import os
-from azure.ai.contentsafety import ContentSafetyClient
+from azure.ai.contentsafety import BlocklistClient
from azure.core.credentials import AzureKeyCredential from azure.core.exceptions import HttpResponseError key = os.environ["CONTENT_SAFETY_KEY"] endpoint = os.environ["CONTENT_SAFETY_ENDPOINT"]
-# Create an Content Safety client
-client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
+# Create a Blocklist client
+client = BlocklistClient(endpoint, AzureKeyCredential(key))
blocklist_name = "<your_list_name>"
except HttpResponseError as e:
1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step. 1. Run the script.
+#### [JavaScript](#tab/javascript)
+```javascript
+const ContentSafetyClient = require("@azure-rest/ai-content-safety").default,
+ { isUnexpected } = require("@azure-rest/ai-content-safety");
+const { AzureKeyCredential } = require("@azure/core-auth");
+
+// Load the .env file if it exists
+require("dotenv").config();
+
+const endpoint = process.env["CONTENT_SAFETY_ENDPOINT"] || "<endpoint>";
+const key = process.env["CONTENT_SAFETY_API_KEY"] || "<key>";
+
+const credential = new AzureKeyCredential(key);
+const client = ContentSafetyClient(endpoint, credential);
+
+async function deleteBlocklist() {
+ const blocklistName = "<your_list_name>";
+
+ const result = await client.path("/text/blocklists/{blocklistName}", blocklistName).delete();
+
+ if (isUnexpected(result)) {
+ throw result;
+ }
+
+ console.log("Deleted blocklist: ", blocklistName);
+}
++
+(async () => {
+ await deleteBlocklist();
+})().catch((err) => {
+ console.error("The sample encountered an error:", err);
+});
+```
+
+1. Replace `<your_list_name>` (in the request URL) with the name you used in the list creation step.
+1. Run the script.
+
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/whats-new.md
Learn what's new in the service. These items might be release notes, videos, blo
The Azure AI Content Safety service is now generally available through the following client library SDKs: -- **C#**: [Package](https://www.nuget.org/packages/Azure.AI.ContentSafety) | [API reference](/dotnet/api/overview/azure/ai.contentsafety-readme?view=azure-dotnet) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/dotnet/1.0.0)-- **Python**: [Package](https://pypi.org/project/azure-ai-contentsafety/) | [API reference](/python/api/overview/azure/ai-contentsafety-readme?view=azure-python) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/python/1.0.0)-- **Java**: [Package](https://oss.sonatype.org/#nexus-search;quick~contentsafety) | [API reference](/java/api/overview/azure/ai-contentsafety-readme?view=azure-java-stable) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/java/1.0.0)-- **JavaScript**: [Package](https://www.npmjs.com/package/@azure-rest/ai-content-safety?activeTab=readme) | [API reference](https://www.npmjs.com/package/@azure-rest/ai-content-safety/v/1.0.0) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/js/1.0.0)
+- **C#**: [Package](https://www.nuget.org/packages/Azure.AI.ContentSafety) | [API reference](/dotnet/api/overview/azure/ai.contentsafety-readme?view=azure-dotnet) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/dotnet/1.0.0) | Quickstarts: [Text](./quickstart-text.md), [Image](./quickstart-image.md)
+- **Python**: [Package](https://pypi.org/project/azure-ai-contentsafety/) | [API reference](/python/api/overview/azure/ai-contentsafety-readme?view=azure-python) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/python/1.0.0) | Quickstarts: [Text](./quickstart-text.md), [Image](./quickstart-image.md)
+- **Java**: [Package](https://oss.sonatype.org/#nexus-search;quick~contentsafety) | [API reference](/jav)
+- **JavaScript**: [Package](https://www.npmjs.com/package/@azure-rest/ai-content-safety?activeTab=readme) | [API reference](https://www.npmjs.com/package/@azure-rest/ai-content-safety/v/1.0.0) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/js/1.0.0) | Quickstarts: [Text](./quickstart-text.md), [Image](./quickstart-image.md)
+
+> [!IMPORTANT]
+> The public preview versions of the Azure AI Content Safety SDKs will be deprecated by March 31, 2024. Please update your applications to use the GA versions.
## November 2023
ai-services Assistants Reference Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference-messages.md
+
+ Title: Azure OpenAI Service Assistants Python & REST API messages reference
+
+description: Learn how to use Azure OpenAI's Python & REST API messages with Assistants.
+++ Last updated : 02/01/2024++
+recommendations: false
+++
+# Assistants API (Preview) messages reference
+
+This article provides reference documentation for Python and REST for the new Assistants API (Preview). More in-depth step-by-step guidance is provided in the [getting started guide](./how-to/assistant.md).
+
+## Create message
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages?api-version=2024-02-15-preview
+```
+
+Create a message.
+
+**Path parameter**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread to create a message for. |
+
+**Request body**
+
+|Name | Type | Required | Description |
+| | | | |
+| `role` | string | Required | The role of the entity that is creating the message. Currently only user is supported.|
+| `content` | string | Required | The content of the message. |
+| `file_ids` | array | Optional | A list of File IDs that the message should use. There can be a maximum of 10 files attached to a message. Useful for tools like retrieval and code_interpreter that can access and use files. |
+| `metadata` | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
+
+### Returns
+
+A [message](#message-object) object.
+
+### Example create message request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+thread_message = client.beta.threads.messages.create(
+ "thread_abc123",
+ role="user",
+ content="How does AI work? Explain it in simple terms.",
+)
+print(thread_message)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "role": "user",
+ "content": "How does AI work? Explain it in simple terms."
+ }'
+```
+++
+## List messages
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages?api-version=2024-02-15-preview
+```
+
+Returns a list of messages for a given thread.
+
+**Path Parameters**
++
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread that messages belong to. |
+
+**Query Parameters**
+
+|Name | Type | Required | Description |
+| | | | |
+| `limit` | integer | Optional - Defaults to 20 |A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.|
+| `order` | string | Optional - Defaults to desc |Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.|
+| `after` | string | Optional | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.|
+| `before` | string | Optional | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.|
+
+### Returns
+
+A list of [message](#message-object) objects.
+
+### Example list messages request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+thread_messages = client.beta.threads.messages.list("thread_abc123")
+print(thread_messages.data)
+
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## List message files
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}/files?api-version=2024-02-15-preview
+```
+
+Returns a list of message files.
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread that the message and files belong to. |
+|`message_id`| string | Required | The ID of the message that the files belong to. |
+
+**Query Parameters**
+
+|Name | Type | Required | Description |
+| | | | |
+| `limit` | integer | Optional - Defaults to 20 |A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.|
+| `order` | string | Optional - Defaults to desc |Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.|
+| `after` | string | Optional | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.|
+| `before` | string | Optional | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.|
+
+### Returns
+
+A list of [message file](#message-file-object) objects
+
+### Example list message files request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+message_files = client.beta.threads.messages.files.list(
+ thread_id="thread_abc123",
+ message_id="msg_abc123"
+)
+print(message_files)
+
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}/files?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Retrieve message
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-02-15-preview
+```
+
+Retrieves a message.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread that the message belongs to. |
+|`message_id`| string | Required | The ID of the message to retrieve. |
++
+### Returns
+
+The [message](#message-object) object matching the specified ID.
+
+### Example retrieve message request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+message = client.beta.threads.messages.retrieve(
+ message_id="msg_abc123",
+ thread_id="thread_abc123",
+)
+print(message)
+
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Retrieve message file
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}/files/{file_id}?api-version=2024-02-15-preview
+```
+
+Retrieves a message file.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread that the message and file belong to. |
+|`message_id`| string | Required | The ID of the message that the file belongs to. |
+|`file_id` | string | Required | The ID of the file being retrieved. |
+
+**Returns**
+
+The [message file](#message-file-object) object.
+
+### Example retrieve message file request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+message_files = client.beta.threads.messages.files.retrieve(
+ thread_id="thread_abc123",
+ message_id="msg_abc123",
+ file_id="assistant-abc123"
+)
+print(message_files)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}/files/{file_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Modify message
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-02-15-preview
+```
+
+Modifies a message.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread to which the message belongs. |
+|`message_id`| string | Required | The ID of the message to modify. |
+
+**Request body**
+
+|Parameter| Type | Required | Description |
+|||||
+| metadata | map| Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
+
+### Returns
+
+The modified [message](#message-object) object.
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+message = client.beta.threads.messages.update(
+ message_id="msg_abc12",
+ thread_id="thread_abc123",
+ metadata={
+ "modified": "true",
+ "user": "abc123",
+ },
+)
+print(message)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/messages/{message_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "metadata": {
+ "modified": "true",
+ "user": "abc123"
+ }
+ }'
+
+```
+++
+## Message object
+
+Represents a message within a thread.
+
+|Name | Type | Description |
+| | | |
+| `id` | string |The identifier, which can be referenced in API endpoints.|
+| `object` | string |The object type, which is always thread.message.|
+| `created_at` | integer |The Unix timestamp (in seconds) for when the message was created.|
+| `thread_id` | string |The thread ID that this message belongs to.|
+| `role` | string |The entity that produced the message. One of user or assistant.|
+| `content` | array |The content of the message in array of text and/or images.|
+| `assistant_id` | string or null |If applicable, the ID of the assistant that authored this message.|
+| `run_id` | string or null |If applicable, the ID of the run associated with the authoring of this message.|
+| `file_ids` | array |A list of file IDs that the assistant should use. Useful for tools like retrieval and code_interpreter that can access files. A maximum of 10 files can be attached to a message.|
+| `metadata` | map |Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
+
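+As a minimal sketch (not an official sample; it assumes an existing message `msg_abc123` in thread `thread_abc123`), the `content` array of a retrieved message can be walked like this with the Python client. Text parts carry their value under `text.value`; image parts carry a file ID under `image_file.file_id`.
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2024-02-15-preview",
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+# Retrieve a message and walk its content array (text and/or image parts).
+message = client.beta.threads.messages.retrieve(
+    thread_id="thread_abc123",
+    message_id="msg_abc123",
+)
+for part in message.content:
+    if part.type == "text":
+        print(part.text.value)          # text content of this part
+    elif part.type == "image_file":
+        print(part.image_file.file_id)  # ID of an attached image file
+```
+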
+## Message file object
+
+A list of files attached to a message.
+
+|Name | Type | Description |
+| | | |
+| `id`| string | The identifier, which can be referenced in API endpoints.|
+|`object`|string| The object type, which is always `thread.message.file`.|
+|`created_at`|integer | The Unix timestamp (in seconds) for when the message file was created.|
+|`message_id`| string | The ID of the message that the File is attached to.|
ai-services Assistants Reference Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference-runs.md
+
+ Title: Azure OpenAI Service Assistants Python & REST API runs reference
+
+description: Learn how to use Azure OpenAI's Python & REST API runs with Assistants.
+++ Last updated : 02/01/2024++
+recommendations: false
+++
+# Assistants API (Preview) runs reference
+
+This article provides reference documentation for Python and REST for the new Assistants API (Preview). More in-depth step-by-step guidance is provided in the [getting started guide](./how-to/assistant.md).
+
+## Create run
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs?api-version=2024-02-15-preview
+```
+
+Create a run.
+
+**Path parameter**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread to create a message for. |
+
+**Request body**
+
+|Name | Type | Required | Description |
+| | | | |
+| `assistant_id` | string | Required | The ID of the assistant to use to execute this run. |
+| `model` | string or null | Optional | The model deployment name to be used to execute this run. If a value is provided here, it will override the model deployment name associated with the assistant. If not, the model deployment name associated with the assistant will be used. |
+| `instructions` | string or null | Optional | Overrides the instructions of the assistant. This is useful for modifying the behavior on a per-run basis. |
+| `additional_instructions` | string or null | Optional | Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions. |
+| `tools` | array or null | Optional | Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis. |
+| `metadata` | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
+
+### Returns
+
+A run object.
+
+### Example create run request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+run = client.beta.threads.runs.create(
+ thread_id="thread_abc123",
+ assistant_id="asst_abc123"
+)
+print(run)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "assistant_id": "asst_abc123"
+ }'
+```
+++
+## Create thread and run
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/runs?api-version=2024-02-15-preview
+```
+
+Create a thread and run it in a single request.
+
+**Request Body**
+
+|Name | Type | Required | Description |
+| | | | |
+| `assistant_id` | string | Required | The ID of the assistant to use to execute this run.|
+| `thread` | object | Optional | The thread to create and run, including any starting `messages`. |
+| `model` | string or null | Optional | The ID of the Model deployment name to be used to execute this run. If a value is provided here, it will override the model deployment name associated with the assistant. If not, the model deployment name associated with the assistant will be used.|
+| `instructions` | string or null | Optional | Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis.|
+| `tools` | array or null | Optional | Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.|
+| `metadata` | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
+
+### Returns
+
+A run object.
+
+### Example create thread and run request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+run = client.beta.threads.create_and_run(
+ assistant_id="asst_abc123",
+ thread={
+ "messages": [
+ {"role": "user", "content": "Explain deep learning to a 5 year old."}
+ ]
+ }
+)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/runs?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "assistant_id": "asst_abc123",
+ "thread": {
+ "messages": [
+ {"role": "user", "content": "Explain deep learning to a 5 year old."}
+ ]
+ }
+ }'
+```
+++
+## List runs
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs?api-version=2024-02-15-preview
+```
+
+Returns a list of runs belonging to a thread.
+
+**Path parameter**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread that the run belongs to. |
+
+**Query Parameters**
+
+|Name | Type | Required | Description |
+| | | | |
+| `limit` | integer | Optional - Defaults to 20 |A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.|
+| `order` | string | Optional - Defaults to desc |Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.|
+| `after` | string | Optional | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.|
+| `before` | string | Optional | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.|
+
+### Returns
+
+A list of [run](#run-object) objects.
+
+### Example list runs request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+runs = client.beta.threads.runs.list(
+ "thread_abc123"
+)
+print(runs)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## List run steps
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/steps?api-version=2024-02-15-preview
+```
+
+Returns a list of steps belonging to a run.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread that the run belongs to. |
+|`run_id` | string | Required | The ID of the run associated with the run steps to be queried. |
+
+**Query parameters**
+
+|Name | Type | Required | Description |
+| | | | |
+| `limit` | integer | Optional - Defaults to 20 |A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.|
+| `order` | string | Optional - Defaults to desc |Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.|
+| `after` | string | Optional | A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.|
+| `before` | string | Optional | A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.|
+
+### Returns
+
+A list of [run step](#run-step-object) objects.
+
+### Example list run steps request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+run_steps = client.beta.threads.runs.steps.list(
+ thread_id="thread_abc123",
+ run_id="run_abc123"
+)
+print(run_steps)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/steps?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Retrieve run
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}?api-version=2024-02-15-preview
+```
+
+Retrieves a run.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread that was run. |
+|`run_id` | string | Required | The ID of the run to retrieve. |
+
+### Returns
+
+The [run](#run-object) object matching the specified run ID.
+
+### Example retrieve run request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+run = client.beta.threads.runs.retrieve(
+ thread_id="thread_abc123",
+ run_id="run_abc123"
+)
+print(run)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Retrieve run step
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/steps/{step_id}?api-version=2024-02-15-preview
+```
+
+Retrieves a run step.
+
+**Path Parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread to which the run and run step belongs. |
+|`run_id` | string | Required | The ID of the run to which the run step belongs. |
+|`step_id`| string | Required | The ID of the run step to retrieve.|
+
+### Returns
+
+The [run step](#run-step-object) object matching the specified ID.
+
+### Example retrieve run step request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+run_step = client.beta.threads.runs.steps.retrieve(
+ thread_id="thread_abc123",
+ run_id="run_abc123",
+ step_id="step_abc123"
+)
+print(run_step)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/steps/{step_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Modify run
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}?api-version=2024-02-15-preview
+```
+
+Modifies a run.
+
+**Path Parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread that was run. |
+|`run_id` | string | Required | The ID of the run to modify. |
+
+**Request body**
+
+|Name | Type | Required | Description |
+| | | | |
+| `metadata` | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
+
+### Returns
+
+The modified [run](#run-object) object matching the specified ID.
+
+### Example modify run request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+run = client.beta.threads.runs.update(
+ thread_id="thread_abc123",
+ run_id="run_abc123",
+ metadata={"user_id": "user_abc123"},
+)
+print(run)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+  -H 'Content-Type: application/json' \
+ -d '{
+ "metadata": {
+ "user_id": "user_abc123"
+ }
+ }'
+```
+++
+## Submit tool outputs to run
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/submit_tool_outputs?api-version=2024-02-15-preview
+```
+
+When a run has the status `requires_action` and `required_action.type` is `submit_tool_outputs`, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.
+
+**Path Parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread to which this run belongs.|
+|`run_id` | string | Required | The ID of the run that requires the tool output submission. |
+
+**Request body**
+
+|Name | Type | Required | Description |
+| | | | |
+| `tool_outputs` | array | Required | A list of tools for which the outputs are being submitted. |
+
+### Returns
+
+The modified [run](#run-object) object matching the specified ID.
+
+### Example submit tool outputs to run request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+run = client.beta.threads.runs.submit_tool_outputs(
+ thread_id="thread_abc123",
+ run_id="run_abc123",
+ tool_outputs=[
+ {
+ "tool_call_id": "call_abc123",
+ "output": "28C"
+ }
+ ]
+)
+print(run)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/submit_tool_outputs?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "tool_outputs": [
+ {
+ "tool_call_id": "call_abc123",
+ "output": "28C"
+ }
+ ]
+ }'
+
+```
+++
+## Cancel a run
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/cancel?api-version=2024-02-15-preview
+```
+
+Cancels a run that is `in_progress`.
+
+**Path Parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread to which this run belongs.|
+|`run_id` | string | Required | The ID of the run to cancel. |
+
+### Returns
+
+The modified [run](#run-object) object matching the specified ID.
+
+### Example cancel run request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+run = client.beta.threads.runs.cancel(
+ thread_id="thread_abc123",
+ run_id="run_abc123"
+)
+print(run)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}/runs/{run_id}/cancel?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -X POST
+```
+++
+## Run object
+
+Represents an execution run on a thread.
+
+|Name | Type | Description |
+| | | |
+| `id`| string | The identifier, which can be referenced in API endpoints.|
+| `object` | string | The object type, which is always thread.run.|
+| `created_at` | integer | The Unix timestamp (in seconds) for when the run was created.|
+| `thread_id` | string | The ID of the thread that was executed on as a part of this run.|
+| `assistant_id` | string | The ID of the assistant used for execution of this run.|
+| `status` | string | The status of the run, which can be either `queued`, `in_progress`, `requires_action`, `cancelling`, `cancelled`, `failed`, `completed`, or `expired`.|
+| `required_action` | object or null | Details on the action required to continue the run. Will be null if no action is required.|
+| `last_error` | object or null | The last error associated with this run. Will be null if there are no errors.|
+| `expires_at` | integer | The Unix timestamp (in seconds) for when the run will expire.|
+| `started_at` | integer or null | The Unix timestamp (in seconds) for when the run was started.|
+| `cancelled_at` | integer or null | The Unix timestamp (in seconds) for when the run was canceled.|
+| `failed_at` | integer or null | The Unix timestamp (in seconds) for when the run failed.|
+| `completed_at` | integer or null | The Unix timestamp (in seconds) for when the run was completed.|
+| `model` | string | The model deployment name that the assistant used for this run.|
+| `instructions` | string | The instructions that the assistant used for this run.|
+| `tools` | array | The list of tools that the assistant used for this run.|
+| `file_ids` | array | The list of File IDs the assistant used for this run.|
+| `metadata` | map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
++
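+A run moves through the statuses above asynchronously, so a common pattern is to poll it until it reaches a terminal state. The following is a minimal sketch (assuming an existing `thread_abc123` and `run_abc123`), not an official sample:
+
+```python
+import os
+import time
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2024-02-15-preview",
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+# Poll the run until it leaves the queued/in_progress/cancelling states.
+run = client.beta.threads.runs.retrieve(thread_id="thread_abc123", run_id="run_abc123")
+while run.status in ("queued", "in_progress", "cancelling"):
+    time.sleep(1)
+    run = client.beta.threads.runs.retrieve(thread_id="thread_abc123", run_id="run_abc123")
+
+if run.status == "requires_action":
+    print("The run is waiting for tool outputs:", run.required_action)
+elif run.status == "completed":
+    print("Run completed at", run.completed_at)
+else:
+    print("Run ended with status", run.status, "last error:", run.last_error)
+```
+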
+## Run step object
+
+Represent a step in execution of a run.
+
+|Name | Type | Description |
+| | | |
+| `id`| string | The identifier of the run step, which can be referenced in API endpoints.|
+| `object`| string | The object type, which is always thread.run.step.|
+| `created_at`| integer | The Unix timestamp (in seconds) for when the run step was created.|
+| `assistant_id`| string | The ID of the assistant associated with the run step.|
+| `thread_id`| string | The ID of the thread that was run.|
+| `run_id`| string | The ID of the run that this run step is a part of.|
+| `type`| string | The type of run step, which can be either message_creation or tool_calls.|
+| `status`| string | The status of the run step, which can be either `in_progress`, `cancelled`, `failed`, `completed`, or `expired`.|
+| `step_details`| object | The details of the run step.|
+| `last_error`| object or null | The last error associated with this run step. Will be null if there are no errors.|
+| `expired_at`| integer or null | The Unix timestamp (in seconds) for when the run step expired. A step is considered expired if the parent run is expired.|
+| `cancelled_at`| integer or null | The Unix timestamp (in seconds) for when the run step was cancelled.|
+| `failed_at`| integer or null | The Unix timestamp (in seconds) for when the run step failed.|
+| `completed_at`| integer or null | The Unix timestamp (in seconds) for when the run step completed.|
+| `metadata`| map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
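+
+As a hedged example of how these fields surface in the Python client, the steps of a run can be listed and inspected after the run finishes. This is a minimal sketch, assuming the same client setup as the earlier examples and placeholder IDs:
+
+```python
+import os
+
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2024-02-15-preview",
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+# List the steps taken during a run and print what each step did.
+run_steps = client.beta.threads.runs.steps.list(
+    thread_id="thread_abc123",
+    run_id="run_abc123"
+)
+for step in run_steps.data:
+    # type is message_creation or tool_calls; step_details holds the specifics
+    print(step.id, step.type, step.status)
+```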
ai-services Assistants Reference Threads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference-threads.md
+
+ Title: Azure OpenAI Service Assistants Python & REST API threads reference
+
+description: Learn how to use Azure OpenAI's Python & REST API threads with Assistants
+++ Last updated : 02/01/2024++
+recommendations: false
+++
+# Assistants API (Preview) threads reference
+
+This article provides reference documentation for Python and REST for the new Assistants API (Preview). More in-depth step-by-step guidance is provided in the [getting started guide](./how-to/assistant.md).
+
+## Create a thread
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads?api-version=2024-02-15-preview
+```
+
+Create a thread.
+
+**Request body**
+
+|Name | Type | Required | Description |
+| | | | |
+|`messages`|array| Optional | A list of messages to start the thread with. |
+|`metadata`| map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
+
+### Returns
+
+A [thread object](#thread-object).
+
+### Example create thread request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+empty_thread = client.beta.threads.create()
+print(empty_thread)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d ''
+```
+++
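+
+Because `messages` and `metadata` are optional request body fields, a thread can also be seeded at creation time. The following is a minimal sketch, assuming the same client setup as above; the message content and metadata values are illustrative:
+
+```python
+import os
+
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2024-02-15-preview",
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+# Create a thread that already contains an initial user message and metadata.
+thread = client.beta.threads.create(
+    messages=[
+        {"role": "user", "content": "I need help with my account."}  # illustrative content
+    ],
+    metadata={"user": "abc123"}  # illustrative key-value pair
+)
+print(thread.id)
+```
+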
+## Retrieve thread
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview
+```
+
+Retrieves a thread.
+
+**Path parameters**
++
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread to retrieve |
+
+### Returns
+
+The thread object matching the specified ID.
+
+### Example retrieve thread request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+my_thread = client.beta.threads.retrieve("thread_abc123")
+print(my_thread)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Modify thread
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview
+```
+
+Modifies a thread.
+
+**Path Parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread to modify. |
+
+**Request body**
+
+|Name | Type | Required | Description |
+| | | | |
+| metadata| map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
+
+### Returns
+
+The modified [thread object](#thread-object) matching the specified ID.
+
+### Example modify thread request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+my_updated_thread = client.beta.threads.update(
+ "thread_abc123",
+ metadata={
+ "modified": "true",
+ "user": "abc123"
+ }
+)
+print(my_updated_thread)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "metadata": {
+ "modified": "true",
+ "user": "abc123"
+ }
+ }'
+```
+++
+## Delete thread
+
+```http
+DELETE https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview
+```
+
+Deletes a thread.
+
+**Path Parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`thread_id` | string | Required | The ID of the thread to delete. |
+
+### Returns
+
+Deletion status.
+
+### Example delete thread request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+response = client.beta.threads.delete("thread_abc123")
+print(response)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/threads/{thread_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -X DELETE
+```
++
+## Thread object
+
+| Field | Type | Description |
+||||
+| `id` | string | The identifier, which can be referenced in API endpoints.|
+| `object` | string | The object type, which is always thread. |
+| `created_at` | integer | The Unix timestamp (in seconds) for when the thread was created. |
+| `metadata` | map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
ai-services Assistants Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/assistants-reference.md
+
+ Title: Azure OpenAI Service Assistants Python & REST API reference
+
+description: Learn how to use Azure OpenAI's Python & REST API with Assistants.
+++ Last updated : 02/07/2024++
+recommendations: false
+++
+# Assistants API (Preview) reference
+
+This article provides reference documentation for Python and REST for the new Assistants API (Preview). More in-depth step-by-step guidance is provided in the [getting started guide](./how-to/assistant.md).
+
+## Create an assistant
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview
+```
+
+Create an assistant with a model and instructions.
+
+### Request body
+
+|Name | Type | Required | Description |
+| | | | |
+| model | string | Required | The model deployment name to use. |
+| name | string or null | Optional | The name of the assistant. The maximum length is 256 characters.|
+| description| string or null | Optional | The description of the assistant. The maximum length is 512 characters.|
+| instructions | string or null | Optional | The system instructions that the assistant uses. The maximum length is 32768 characters.|
+| tools | array | Optional | Defaults to []. A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can currently be of types `code_interpreter`, or `function`.|
+| file_ids | array | Optional | Defaults to []. A list of file IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.|
+| metadata | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
+
+### Returns
+
+An [assistant](#assistant-object) object.
+
+### Example create assistant request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+assistant = client.beta.assistants.create(
+ instructions="You are an AI assistant that can write code to help answer math questions",
+ model="<REPLACE WITH MODEL DEPLOYMENT NAME>", # replace with model deployment name.
+ tools=[{"type": "code_interpreter"}]
+)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "instructions": "You are an AI assistant that can write code to help answer math questions.",
+ "tools": [
+ { "type": "code_interpreter" }
+ ],
+ "model": "gpt-4-1106-preview"
+ }'
+```
+++
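+
+In addition to `code_interpreter`, the `tools` array accepts `function` definitions. The following is a minimal sketch, assuming the same client setup as above; the `get_current_weather` schema is illustrative rather than a built-in function:
+
+```python
+import os
+
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2024-02-15-preview",
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+# Create an assistant with a function tool it can ask the calling code to run.
+assistant = client.beta.assistants.create(
+    instructions="You are a weather bot. Use the provided function to answer questions.",
+    model="<REPLACE WITH MODEL DEPLOYMENT NAME>",  # replace with model deployment name
+    tools=[{
+        "type": "function",
+        "function": {
+            "name": "get_current_weather",  # hypothetical function name
+            "description": "Get the current weather for a location.",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "location": {"type": "string", "description": "City name"}
+                },
+                "required": ["location"]
+            }
+        }
+    }]
+)
+print(assistant.id)
+```
+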
+## Create assistant file
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files?api-version=2024-02-15-preview
+```
+
+Create an assistant file by attaching a `File` to an `assistant`.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+|`assistant_id`| string | Required | The ID of the assistant that the file should be attached to. |
+
+**Request body**
+
+| Name | Type | Required | Description |
+|||||
+| file_id | string | Required | A File ID (with purpose="assistants") that the assistant should use. Useful for tools like code_interpreter that can access files. |
+
+### Returns
+
+An [assistant file](#assistant-file-object) object.
+
+### Example create assistant file request
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+assistant_file = client.beta.assistants.files.create(
+ assistant_id="asst_abc123",
+ file_id="assistant-abc123"
+)
+print(assistant_file)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "file_id": "assistant-abc123"
+ }'
+```
+++
+## List assistants
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview
+```
+
+Returns a list of all assistants.
+
+**Query parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+| `limit` | integer | Optional | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.|
+| `order` | string | Optional - Defaults to desc | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.|
+| `after` | string | Optional | A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
+|`before`| string | Optional | A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
+
+### Returns
+
+A list of [assistant](#assistant-object) objects.
+
+### Example list assistants
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+my_assistants = client.beta.assistants.list(
+ order="desc",
+ limit="20",
+)
+print(my_assistants.data)
+
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
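+
+The `after` cursor described in the query parameters can be used to page through results when more assistants exist than the `limit`. This is a minimal sketch, assuming the same client setup as above and that a second page exists:
+
+```python
+import os
+
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2024-02-15-preview",
+    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+# Fetch the first page, then use the last object ID as the `after` cursor.
+first_page = client.beta.assistants.list(order="desc", limit=20)
+if first_page.data:
+    next_page = client.beta.assistants.list(
+        order="desc",
+        limit=20,
+        after=first_page.data[-1].id
+    )
+    print([assistant.id for assistant in next_page.data])
+```
+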
+## List assistant files
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files?api-version=2024-02-15-preview
+```
+
+Returns a list of assistant files.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+| assistant_id | string | Required | The ID of the assistant the file belongs to. |
+
+**Query parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+| `limit` | integer | Optional | A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.|
+| `order` | string | Optional - Defaults to desc | Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.|
+| `after` | string | Optional | A cursor for use in pagination. `after` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list. |
+|`before`| string | Optional | A cursor for use in pagination. `before` is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list. |
+
+### Returns
+
+A list of [assistant file](#assistant-file-object) objects.
+
+### Example list assistant files
+
+# [Python 1.x](#tab/python)
+
+```python
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+assistant_files = client.beta.assistants.files.list(
+ assistant_id="asst_abc123"
+)
+print(assistant_files)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Retrieve assistant
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}?api-version=2024-02-15-preview
+```
+
+Retrieves an assistant.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+| `assistant_id` | string | Required | The ID of the assistant to retrieve. |
+
+**Returns**
+
+The [assistant](#assistant-object) object matching the specified ID.
+
+### Example retrieve assistant
+
+# [Python 1.x](#tab/python)
+
+```python
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+my_assistant = client.beta.assistants.retrieve("asst_abc123")
+print(my_assistant)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Retrieve assistant file
+
+```http
+GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files/{file_id}?api-version=2024-02-15-preview
+```
+
+Retrieves an Assistant file.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+| `assistant_id` | string | Required | The ID of the assistant the file belongs to. |
+| `file_id` | string | Required | The ID of the file to retrieve. |
+
+### Returns
+
+The [assistant file](#assistant-file-object) object matching the specified ID.
+
+### Example retrieve assistant file
+
+# [Python 1.x](#tab/python)
+
+```python
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+assistant_file = client.beta.assistants.files.retrieve(
+ assistant_id="asst_abc123",
+ file_id="assistant-abc123"
+)
+print(assistant_file)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files/{file_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json'
+```
+++
+## Modify assistant
+
+```http
+POST https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}?api-version=2024-02-15-preview
+```
+
+Modifies an assistant.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+| assistant_id | string | Required | The ID of the assistant to modify. |
+
+**Request Body**
+
+| Parameter | Type | Required | Description |
+| | | | |
+| `model` | string | Optional | The model deployment name to use. |
+| `name` | string or null | Optional | The name of the assistant. The maximum length is 256 characters. |
+| `description` | string or null | Optional | The description of the assistant. The maximum length is 512 characters. |
+| `instructions` | string or null | Optional | The system instructions that the assistant uses. The maximum length is 32768 characters. |
+| `tools` | array | Optional | Defaults to []. A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, or function. |
+| `file_ids` | array | Optional | Defaults to []. A list of File IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order. If a file was previously attached to the list but does not show up in the list, it will be deleted from the assistant. |
+| `metadata` | map | Optional | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long. |
+
+**Returns**
+
+The modified [assistant object](#assistant-object).
+
+### Example modify assistant
+
+# [Python 1.x](#tab/python)
+
+```python
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+my_updated_assistant = client.beta.assistants.update(
+ "asst_abc123",
+ instructions="You are an HR bot, and you have access to files to answer employee questions about company policies. Always respond with info from either of the files.",
+ name="HR Helper",
+    tools=[{"type": "code_interpreter"}],
+ model="gpt-4", #model = model deployment name
+ file_ids=["assistant-abc123", "assistant-abc456"],
+)
+
+print(my_updated_assistant)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -d '{
+        "instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies. Always respond with info from either of the files.",
+        "tools": [{"type": "code_interpreter"}],
+ "model": "gpt-4",
+ "file_ids": ["assistant-abc123", "assistant-abc456"]
+ }'
+```
+++
+## Delete assistant
+
+```http
+DELETE https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}?api-version=2024-02-15-preview
+```
+
+Delete an assistant.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+| `assistant_id` | string | Required | The ID of the assistant to delete. |
+
+**Returns**
+
+Deletion status.
+
+### Example delete assistant
+
+# [Python 1.x](#tab/python)
+
+```python
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+response = client.beta.assistants.delete("asst_abc123")
+print(response)
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -X DELETE
+```
+++
+## Delete assistant file
+
+```http
+DELETE https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files/{file_id}?api-version=2024-02-15-preview
+```
+
+Delete an assistant file.
+
+**Path parameters**
+
+|Parameter| Type | Required | Description |
+|||||
+| `assistant_id` | string | Required | The ID of the assistant the file belongs to. |
+| `file_id` | string | Required | The ID of the file to delete. |
+
+**Returns**
+
+File deletion status.
+
+### Example delete assistant file
+
+# [Python 1.x](#tab/python)
+
+```python
+client = AzureOpenAI(
+ api_key=os.getenv("AZURE_OPENAI_KEY"),
+ api_version="2024-02-15-preview",
+ azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+ )
+
+deleted_assistant_file = client.beta.assistants.files.delete(
+ assistant_id="asst_abc123",
+ file_id="assistant-abc123"
+)
+print(deleted_assistant_file)
+
+```
+
+# [REST](#tab/rest)
+
+```console
+curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/assistants/{assistant_id}/files/{file_id}?api-version=2024-02-15-preview \
+ -H "api-key: $AZURE_OPENAI_KEY" \
+ -H 'Content-Type: application/json' \
+ -X DELETE
+```
+++
+## Assistant object
+
+| Field | Type | Description |
+||||
+| `id` | string | The identifier, which can be referenced in API endpoints.|
+| `object` | string | The object type, which is always assistant.|
+| `created_at` | integer | The Unix timestamp (in seconds) for when the assistant was created.|
+| `name` | string or null | The name of the assistant. The maximum length is 256 characters.|
+| `description` | string or null | The description of the assistant. The maximum length is 512 characters.|
+| `model` | string | The model deployment name to use.|
+| `instructions` | string or null | The system instructions that the assistant uses. The maximum length is 32768 characters.|
+| `tools` | array | A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, or function.|
+| `file_ids` | array | A list of file IDs attached to this assistant. There can be a maximum of 20 files attached to the assistant. Files are ordered by their creation date in ascending order.|
+| `metadata` | map | Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.|
+
+## Assistant file object
+
+| Field | Type | Description |
+||||
+| `id`| string | The identifier, which can be referenced in API endpoints.|
+|`object`| string | The object type, which is always `assistant.file`. |
+|`created_at` | integer | The Unix timestamp (in seconds) for when the assistant file was created.|
+|`assistant_id` | string | The assistant ID that the file is attached to. |
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Once data is ingested, an [Azure AI Search](/azure/search/search-what-is-azure-s
1. Ingestion assets are created in Azure AI Search resource and Azure storage account. Currently these assets are: indexers, indexes, data sources, a [custom skill](/azure/search/cognitive-search-custom-skill-interface) in the search resource, and a container (later called the chunks container) in the Azure storage account. You can specify the input Azure storage container using the [Azure OpenAI studio](https://oai.azure.com/), or the [ingestion API](../reference.md#start-an-ingestion-job).
-2. Data is read from the input container, contents are opened and chunked into small chunks with a maximum of 1024 tokens each. If vector search is enabled, the service will calculate the vector representing the embeddings on each chunk. The output of this step (called the "preprocessed" or "chunked" data) is stored in the chunks container created in the previous step.
+2. Data is read from the input container, contents are opened and chunked into small chunks with a maximum of 1,024 tokens each. If vector search is enabled, the service calculates the vector representing the embeddings on each chunk. The output of this step (called the "preprocessed" or "chunked" data) is stored in the chunks container created in the previous step.
3. The preprocessed data is loaded from the chunks container, and indexed in the Azure AI Search index.
Upgrade to a higher pricing tier or delete unused assets.
*Could not execute skill because the Web API request failed*
-*Could not execute skill because Web API skill response is invalid*
+*Could not execute skill because Web API skill response is invalid.*
Resolution:
Break down the input documents into smaller documents and try again.
**Permissions Issues**
-*This request is not authorized to perform this operation*
+*This request is not authorized to perform this operation.*
Resolution:
Azure OpenAI on your data provides several search options you can use when you a
| *semantic* | Semantic search | Additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. |Improves the precision and relevance of search results by using a reranker (with AI models) to understand the semantic meaning of query terms and documents returned by the initial search ranker| | *vector* | Vector search | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model. |Enables you to find documents that are similar to a given query input based on the vector embeddings of the content. | | *hybrid (vector + keyword)* | A hybrid of vector search and keyword search | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model. |Performs similarity search over vector fields using vector embeddings, while also supporting flexible query parsing and full text search over alphanumeric fields using term queries.|
-| *hybrid (vector + keyword) + semantic* | A hybrid of vector search, semantic and keyword search for retrieval. | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model, and additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. |Leverages vector embeddings, language understanding and flexible query parsing to create rich search experiences and generative AI apps that can handle complex and diverse information retrieval scenarios. |
+| *hybrid (vector + keyword) + semantic* | A hybrid of vector search, semantic, and keyword search for retrieval. | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model, and additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. |Leverages vector embeddings, language understanding and flexible query parsing to create rich search experiences and generative AI apps that can handle complex and diverse information retrieval scenarios. |
The optimal search option can vary depending on your dataset and use-case. You might need to experiment with multiple options to determine which works best for your use-case.
You can use the following parameter to change how your data is ingested in Azure
|Parameter name | Description | |||
-| **Chunk size** | Azure OpenAI on your data processes your documents by splitting them into chunks before indexing them in Azure Search. The chunk size is the maximum number of tokens for any chunk in the search index. The default chunk size is 1024 tokens. However, given the uniqueness of your data, you may find a different chunk size (such as 256, 512, or 1536 tokens for example) more effective. Adjusting the chunk size can enhance the performance of the chat bot. While finding the optimal chunk size requires some trial and error, start by considering the nature of your dataset. A smaller chunk size is generally better for datasets with direct facts and less context, while a larger chunk size might be beneficial for more contextual information, though it can affect retrieval performance. This is the `chunkSize` parameter in the API.|
+| **Chunk size** | Azure OpenAI on your data processes your documents by splitting them into chunks before indexing them in Azure Search. The chunk size is the maximum number of tokens for any chunk in the search index. The default chunk size is 1024 tokens. However, given the uniqueness of your data, you might find a different chunk size (such as 256, 512, or 1,536 tokens for example) more effective. Adjusting the chunk size can enhance the performance of the chat bot. While finding the optimal chunk size requires some trial and error, start by considering the nature of your dataset. A smaller chunk size is generally better for datasets with direct facts and less context, while a larger chunk size might be beneficial for more contextual information, though it can affect retrieval performance. This is the `chunkSize` parameter in the API.|
## Runtime parameters
-You can modify the following additional settings in the **Data parameters** section in Azure OpenAI Studio and [the API](../reference.md#completions-extensions). You do not need to re-ingest your your data when you update these parameters.
+You can modify the following additional settings in the **Data parameters** section in Azure OpenAI Studio and [the API](../reference.md#completions-extensions). You do not need to re-ingest your data when you update these parameters.
|Parameter name | Description |
You can use Azure OpenAI on your data with an Azure OpenAI resource in the follo
* Japan East * North Central US * Norway East
+* South Africa North
* South Central US * South India * Sweden Central
ai-services Assistant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/assistant.md
print(image_file_id) # Outputs: assistant-1YGVTvNzc2JXajI5JU9F0HMD
### Download image ```python
-from openai import AzureOpenAI
-
-client = AzureOpenAI(
- api_key=os.getenv("AZURE_OPENAI_KEY"),
- api_version="2024-02-15-preview",
- azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
- )
- content = client.files.content(image_file_id) image= content.write_to_file("sinewave.png")
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
Azure OpenAI now supports the API that powers OpenAI's GPTs. Azure OpenAI Assist
- [Code Interpreter](./how-to/code-interpreter.md) - [Function calling](./how-to/assistant-functions.md) - [Assistants model & region availability](./concepts/models.md#assistants-preview)
+- [Assistants Python & REST reference](./assistants-reference.md)
- [Assistants Samples](https://github.com/Azure-Samples/azureai-samples/tree/main/scenarios/Assistants) ### OpenAI text to speech voices public preview
Azure OpenAI Service now supports text to speech APIs with OpenAI's voices. Get
- You can now set the [chunk size](./concepts/use-your-data.md#ingestion-parameters) parameter when your data is ingested. Adjusting the chunk size can enhance the model's responses by setting the maximum number of tokens for any given chunk of your data in the search index.
+### New regional support for Azure OpenAI on your data
+
+You can now use Azure OpenAI on your data in the following Azure region:
+* South Africa North
+ ## December 2023 ### Azure OpenAI on your data
ai-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/translator/sovereign-clouds.md
curl -X POST "https://api.cognitive.microsofttranslator.us/translate?api-version
``` > [!div class="nextstepaction"]
-> [Azure Government: Translator text reference](../../azure-government/documentation-government-cognitiveservices.md#translator)
+> [Azure Government: Translator text reference](../../azure-government/documentation-government-cognitiveservices.md)
### [Azure operated by 21Vianet](#tab/china)
aks App Routing Dns Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-dns-ssl.md
Title: Set up advanced Ingress configurations on Azure Kubernetes Service
-description: Understand the advanced configuration options that are supported with the application routing add-on for Azure Kubernetes Service.
+ Title: Set up a custom domain name and SSL certificate with the application routing add-on for Azure Kubernetes Service (AKS)
+description: Understand the advanced configuration options that are supported with the application routing add-on for Azure Kubernetes Service (AKS).
Last updated 12/04/2023
-# Set up a custom domain name and SSL certificate with the application routing add-on
+# Set up a custom domain name and SSL certificate with the application routing add-on
An Ingress is an API object that defines rules, which allow external access to services in an Azure Kubernetes Service (AKS) cluster. When you create an Ingress object that uses the application routing add-on nginx Ingress classes, the add-on creates, configures, and manages one or more Ingress controllers in your AKS cluster.
aks App Routing Nginx Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-nginx-configuration.md
Title: Advanced ingress and NGINX ingress controller configuration
-description: Understand the advanced configuration options that are supported with the application routing add-on with the NGINX ingress controller for Azure Kubernetes Service.
+ Title: Configure multiple ingress controllers and NGINX ingress annotations with the application routing add-on for Azure Kubernetes Service (AKS)
+description: Understand the advanced configuration options that are supported with the application routing add-on with the NGINX ingress controller for Azure Kubernetes Service (AKS).
Last updated 11/21/2023
-# Advanced NGINX ingress controller and ingress configurations with the application routing add-on
+# Advanced NGINX ingress controller and ingress configurations with the application routing add-on
The application routing add-on supports two ways to configure ingress controllers and ingress objects:+ - [Configuration of the NGINX ingress controller](#configuration-of-the-nginx-ingress-controller) such as creating multiple controllers, configuring private load balancers, and setting static IP addresses. - [Configuration per ingress resource](#configuration-per-ingress-resource-through-annotations) through annotations.
aks Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-hybrid-benefit.md
+
+ Title: Use Azure Hybrid Benefit
+
+description: Learn how to save costs for Windows workloads by using existing Windows Server licenses on Azure Kubernetes Service.
+ Last updated : 02/09/2024++++
+# What is Azure Hybrid Benefit for Azure Kubernetes Service?
+
+Azure Hybrid Benefit is a program that enables you to significantly reduce the costs of running workloads in the cloud. With Azure Hybrid Benefit for Azure Kubernetes Service (AKS), you can maximize the value of your on-premises licenses and modernize your applications at no extra cost. Azure Hybrid Benefit enables you to use your on-premises licenses that also have either active Software Assurance (SA) or a qualifying subscription to get Windows virtual machines (VMs) on Azure at a reduced cost.
+
+For more information on qualifications for Azure Hybrid Benefit, what is included with it, how to stay compliant, and more, check out [Azure Hybrid Benefit for Windows Server](/azure/virtual-machines/windows/hybrid-use-benefit-licensing).
+
+>[!Note]
+>Azure Hybrid Benefit for Azure Kubernetes Service follows the same licensing guidance as Azure Hybrid Benefit for Windows Server VMs on Azure.
+
+## Enable Azure Hybrid Benefit for Azure Kubernetes Service
+
+Azure Hybrid Benefit for Azure Kubernetes Service can be enabled at cluster creation or on an existing AKS cluster. You can enable and disable Azure Hybrid Benefit using either the Azure CLI or Azure PowerShell. In the following examples, be sure to replace the variable definitions with values matching your own cluster.
+
+### Use Azure CLI to manage Azure Hybrid Benefit for AKS
+
+To create a new AKS cluster with Azure Hybrid Benefit enabled:
+
+```azurecli
+PASSWORD='tempPassword1234$'
+RG_NAME='myResourceGroup'
+CLUSTER='myAKSCluster'
+
+az aks create --resource-group $RG_NAME --name $CLUSTER --load-balancer-sku Standard --network-plugin azure --windows-admin-username azure --windows-admin-password $PASSWORD --enable-ahub
+```
+
+To enable Azure Hybrid Benefit on an existing AKS cluster:
+
+```azurecli
+RG_NAME='myResourceGroup'
+CLUSTER='myAKSCluster'
+
+az aks update --resource-group $RG_NAME --name $CLUSTER --enable-ahub
+```
+
+To disable Azure Hybrid Benefit for an AKS cluster:
+
+```azurecli
+RG_NAME='myResourceGroup'
+CLUSTER='myAKSCluster'
+
+az aks update --resource-group $RG_NAME --name $CLUSTER --disable-ahub
+```
+
+### Use Azure PowerShell to manage Azure Hybrid Benefit for AKS
+
+To create a new AKS cluster with Azure Hybrid Benefit enabled:
+
+```powershell
+$password= ConvertTo-SecureString -AsPlainText "Password!!123" -Force
+$rg_name = "myResourceGroup"
+$cluster = "myAKSCluster"
+
+New-AzAksCluster -ResourceGroupName $rg_name -Name $cluster -WindowsProfileAdminUserName azureuser -WindowsProfileAdminUserPassword $password -NetworkPlugin azure -NodeVmSetType VirtualMachineScaleSets -EnableAHUB
+```
+
+To enable Azure Hybrid Benefit on an existing AKS cluster:
+
+```powershell
+$rg_name = "myResourceGroup"
+$cluster = "myAKSCluster"
+
+Get-AzAksCluster -ResourceGroupName $rg_name -Name $cluster | Set-AzAksCluster -EnableAHUB
+```
+
+>[!Note]
+>It is currently not possible to disable Azure Hybrid Benefit for AKS using Azure PowerShell.
+
+## Next steps
+
+To learn more about Windows containers on AKS, see the following resources:
+
+* [Learn how to deploy, manage, and monitor Windows containers on AKS](/training/paths/deploy-manage-monitor-wincontainers-aks).
+* Open an issue or provide feedback in the [Windows containers GitHub repository](https://github.com/microsoft/Windows-Containers/issues).
+* Review the [third-party partner solutions for Windows on AKS](windows-aks-partner-solutions.md).
aks Confidential Containers Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/confidential-containers-overview.md
Title: Confidential Containers (preview) with Azure Kubernetes Service (AKS) description: Learn about Confidential Containers (preview) on an Azure Kubernetes Service (AKS) cluster to maintain security and protect sensitive information. Previously updated : 11/13/2023 Last updated : 02/09/2024 # Confidential Containers (preview) with Azure Kubernetes Service (AKS)
Confidential Containers provide a set of features and capabilities to further se
Confidential Containers builds on Kata Confidential Containers and hardware-based encryption to encrypt container memory. It establishes a new level of data confidentiality by preventing data in memory during computation from being in clear text, readable format. Trust is earned in the container through hardware attestation, allowing access to the encrypted data by trusted entities.
-Together with [Pod Sandboxing][pod-sandboxing-overview], you can run sensitive workloads isolated in Azure to protect your data and workloads. Confidential Containers helps significantly reduce the risk of unauthorized access from:
+Together with [Pod Sandboxing][pod-sandboxing-overview], you can run sensitive workloads isolated in Azure to protect your data and workloads. What makes a container confidential:
-* Your AKS cluster admin
-* The AKS control plane & daemon sets
-* The cloud and host operator
-* The AKS worker node operating system
-* Another pod running on the same VM node
-* Cloud Service Providers (CSPs) and from guest applications through a separate trust model
-
-Confidential Containers also enable application owners to enforce their application security requirements (for example, deny access to Azure tenant admin, Kubernetes admin, etc.).
+* Transparency: You can see and verify the confidential container environment where your sensitive application runs. All components of the Trusted Computing Base (TCB) are open sourced.
+* Auditability: You can verify which version of the CoCo environment package, including the Linux guest OS and all of its components, is current. Microsoft signs the guest OS and container runtime environment so they can be verified through attestation, and releases a secure hash algorithm (SHA) of guest OS builds to support a strong auditability and control story.
+* Full attestation: Everything that is part of the TEE is fully measured by the CPU and can be verified remotely. The hardware report from the AMD SEV-SNP processor reflects container layers and the container runtime configuration hash through the attestation claims. Applications can fetch the hardware report locally, including the report that reflects the guest OS image and container runtime.
+* Code integrity: Runtime enforcement is always available through customer-defined policies for containers and container configuration, such as immutable policies and container signing.
+* Isolation from operator: Security designs assume least privilege and the highest isolation, shielding from all untrusted parties, including customer and tenant admins. This includes hardening existing Kubernetes control plane access (kubelet) to confidential pods.
With other security measures or data protection controls, as part of your overall architecture, these capabilities help you meet regulatory, industry, or governance compliance requirements for securing sensitive information.
automation Automation Linux Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-linux-hrw-install.md
# Deploy an agent-based Linux Hybrid Runbook Worker in Automation
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ > [!IMPORTANT] > Azure Automation Agent-based User Hybrid Runbook Worker (Windows and Linux) will retire on **31 August 2024** and wouldn't be supported after that date. You must complete migrating existing Agent-based User Hybrid Runbook Workers to Extension-based Workers before 31 August 2024. Moreover, starting **1 November 2023**, creating new Agent-based Hybrid Workers wouldn't be possible. [Learn more](migrate-existing-agent-based-hybrid-worker-to-extension-based-workers.md).
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
# Change Tracking and Inventory overview
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ > [!Important] > - Change Tracking and Inventory using Log Analytics agent will retire on **31 August 2024** and we recommend that you use Azure Monitoring Agent as the new supporting agent. Follow the guidelines for [migration from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](guidance-migration-log-analytics-monitoring-agent.md).
automation Dsc Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md
# Configure a VM with Desired State Configuration
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows and Linux servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure Linux VM and deploying a LAMP stack using Azure Automation State Configuration. ## Prerequisites
automation Update Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/update-management.md
# Troubleshoot Update Management issues
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article discusses issues that you might run into when using the Update Management feature to assess and manage updates on your machines. There's an agent troubleshooter for the Hybrid Runbook Worker agent to help determine the underlying problem. To learn more about the troubleshooter, see [Troubleshoot Windows update agent issues](update-agent-issues.md) and [Troubleshoot Linux update agent issues](update-agent-issues-linux.md). For other feature deployment issues, see [Troubleshoot feature deployment issues](onboarding.md). >[!NOTE]
automation Deploy Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/deploy-updates.md
# How to deploy updates and review results
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes how to schedule an update deployment and review the process after the deployment is complete. You can configure an update deployment from a selected Azure virtual machine, from the selected Azure Arc-enabled server, or from the Automation account across all configured machines and servers. Under each scenario, the deployment you create targets that selected machine or server, or in the case of creating a deployment from your Automation account, you can target one or more machines. When you schedule an update deployment from an Azure VM or Azure Arc-enabled server, the steps are the same as deploying from your Automation account, with the following exceptions:
automation Manage Updates For Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/manage-updates-for-vm.md
Last updated 08/25/2021
# Manage updates and patches for your VMs
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Software updates in Azure Automation Update Management provides a set of tools and resources that can help manage the complex task of tracking and applying software updates to machines in Azure and hybrid cloud. An effective software update management process is necessary to maintain operational efficiency, overcome security issues, and reduce the risks of increased cyber security threats. However, because of the changing nature of technology and the continual appearance of new security threats, effective software update management requires consistent and continual attention. > [!NOTE]
After the deployment is complete, review the process to determine the success of
* To learn how to create alerts to notify you about update deployment results, see [create alerts for Update Management](configure-alerts.md).
-* You can [query Azure Monitor logs](query-logs.md) to analyze update assessments, deployments, and other related management tasks. It includes pre-defined queries to help you get started.
+* You can [query Azure Monitor logs](query-logs.md) to analyze update assessments, deployments, and other related management tasks. It includes pre-defined queries to help you get started.
automation Operating System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/operating-system-requirements.md
# Operating systems supported by Update Management
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Update Management. ## Supported operating systems
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
# Update Management overview
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ > [!Important] > - Azure Automation Update Management will retire on **31 August 2024**. Follow the guidelines for [migration to Azure Update Manager](../../update-manager/guidance-migration-automation-update-management-azure-update-manager.md). > - Azure Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA) will be [retired in August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). Azure Automation Update Management solution relies on this agent and may encounter issues once the agent is retired as it does not work with Azure Monitoring Agent (AMA). Therefore, if you are using the Azure Automation Update Management solution, we recommend that you move to Azure Update Manager for your software update needs. All the capabilities of Azure Automation Update management solution will be available on Azure Update Manager before the retirement date. Follow the [guidance](../../update-center/guidance-migration-automation-update-management-azure-update-manager.md) to move your machines and schedules from Automation Update Management to Azure Update Manager.
automation View Update Assessments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/view-update-assessments.md
# View update assessments in Update Management
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ In Update Management, you can view information about your machines, missing updates, update deployments, and scheduled update deployments. You can view the assessment information scoped to the selected Azure virtual machine, from the selected Azure Arc-enabled server, or from the Automation account across all configured machines and servers. ## View update assessment
automation Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new-archive.md
# Archive for What's new in Azure Automation?
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The primary [What's new in Azure Automation?](whats-new.md) article contains updates for the last six months, while this article contains all the older information. What's new in Azure Automation? provides you with information about:
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
For a conceptual overview of this feature, see [Azure RBAC on Azure Arc-enabled
- [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version. > [!NOTE]
-> You can't set up this feature for Red Hat OpenShift, or for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc. For AKS on Azure Stack HCI, see [Use Azure RBAC for AKS hybrid clusters (preview)](/azure/aks/hybrid/azure-rbac-aks-hybrid).
+> You can't set up this feature for Red Hat OpenShift, or for managed Kubernetes offerings of cloud providers like Elastic Kubernetes Service or Google Kubernetes Engine where the user doesn't have access to the API server of the cluster. For Azure Kubernetes Service (AKS) clusters, this [feature is available natively](../../aks/manage-azure-rbac.md) and doesn't require the AKS cluster to be connected to Azure Arc.
<a name='set-up-azure-ad-applications'></a>
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 11/03/2023 Last updated : 02/08/2024 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
description: "See which extensions are currently available for Azure Arc-enabled
The following extensions are currently available for use with Arc-enabled Kubernetes clusters. All of these extensions are [cluster-scoped](conceptual-extensions.md#extension-scope), except for Azure API Management on Azure Arc, which is namespace-scoped.
-> [!NOTE]
-> Installing Azure Arc extensions on [Azure Kubernetes Service (AKS) hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) is currently in preview, with support for the Azure Arc-enabled Open Service Mesh, Azure Key Vault Secrets Provider, Flux (GitOps) and Microsoft Defender for Cloud extensions.
- ## Azure Monitor Container Insights - **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters
For more information, see [Understand Azure Policy for Kubernetes clusters](../.
## Azure Key Vault Secrets Provider -- **Supported distributions**: AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid
+- **Supported distributions**: AKS on Azure Stack HCI, AKS enabled by Azure Arc, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid
The Azure Key Vault Provider for Secrets Store CSI Driver allows for the integration of Azure Key Vault as a secrets store with a Kubernetes cluster via a CSI volume. For Azure Arc-enabled Kubernetes clusters, you can install the Azure Key Vault Secrets Provider extension to fetch secrets.
For more information, see [Use the Azure Key Vault Secrets Provider extension to
## Microsoft Defender for Containers -- **Supported distributions**: AKS hybrid clusters provisioned from Azure, Cluster API Azure, Azure Red Hat OpenShift, Red Hat OpenShift (version 4.6 or newer), Google Kubernetes Engine Standard, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid, Rancher Kubernetes Engine, Canonical Kubernetes Distribution
+- **Supported distributions**: AKS enabled by Azure Arc, Cluster API Azure, Azure Red Hat OpenShift, Red Hat OpenShift (version 4.6 or newer), Google Kubernetes Engine Standard, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid, Rancher Kubernetes Engine, Canonical Kubernetes Distribution
Microsoft Defender for Containers is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications. It gathers information related to security like audit log data from the Kubernetes cluster, and provides recommendations and threat alerts based on gathered data.
For more information, see [Enable Microsoft Defender for Containers](../../defen
## Azure Arc-enabled Open Service Mesh -- **Supported distributions**: AKS, AKS on Azure Stack HCI, AKS hybrid clusters provisioned from Azure, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, Rancher Kubernetes Engine, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid
+- **Supported distributions**: AKS, AKS on Azure Stack HCI, AKS enabled by Azure Arc, Cluster API Azure, Google Kubernetes Engine, Canonical Kubernetes Distribution, Rancher Kubernetes Engine, OpenShift Kubernetes Distribution, Amazon Elastic Kubernetes Service, VMware Tanzu Kubernetes Grid
[Open Service Mesh (OSM)](https://docs.openservicemesh.io/) is a lightweight, extensible, Cloud Native service mesh that allows users to uniformly manage, secure, and get out-of-the-box observability features for highly dynamic microservice environments.
azure-arc Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions.md
Title: "Deploy and manage Azure Arc-enabled Kubernetes cluster extensions" Previously updated : 04/27/2023 Last updated : 02/08/2024 description: "Create and manage extension instances on Azure Arc-enabled Kubernetes clusters."
Before you begin, read the [conceptual overview of Arc-enabled Kubernetes cluste
* If you haven't connected a cluster yet, use our [quickstart](quickstart-connect-cluster.md). * [Upgrade your agents](agent-upgrade.md#manually-upgrade-agents) to the latest version.
-> [!NOTE]
-> Installing Azure Arc extensions on [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) is currently in preview, with support for the Azure Arc-enabled Open Service Mesh, Azure Key Vault Secrets Provider, Flux (GitOps) and Microsoft Defender for Cloud extensions.
- ## Create extension instance To create a new extension instance, use `k8s-extension create`, passing in values for the required parameters.
az k8s-extension create --name azuremonitor-containers --extension-type Microso
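A representative full invocation for the Azure Monitor Container Insights extension might look like the following sketch; the `Microsoft.AzureMonitor.Containers` extension type and the cluster values are assumed placeholders rather than values taken from this article:

```azurecli
# Create the Container Insights extension instance on an Arc-enabled cluster (extension type and names are assumed placeholders)
az k8s-extension create --name azuremonitor-containers --extension-type Microsoft.AzureMonitor.Containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters
```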
> [!NOTE] > The service is unable to retain sensitive information for more than 48 hours. If Azure Arc-enabled Kubernetes agents don't have network connectivity for more than 48 hours and can't determine whether to create an extension on the cluster, the extension transitions to `Failed` state. Once that happens, you'll need to run `k8s-extension create` again to create a fresh extension Azure resource. >
-> Azure Monitor Container Insights is a singleton extension (only one required per cluster). You'll need to clean up any previous Helm chart installations of Azure Monitor Container Insights (without extensions) before installing the same via extensions. Follow the instructions for [deleting the Helm chart](../../azure-monitor/containers/container-insights-optout-hybrid.md) before running `az k8s-extension create`.
+> Azure Monitor Container Insights is a singleton extension (only one required per cluster). You'll need to clean up any previous Helm chart installations of Azure Monitor Container Insights (without extensions) before installing the same via extensions. Follow the instructions for [deleting the Helm chart](/azure/azure-monitor/containers/kubernetes-monitoring-disable#remove-container-insights-with-helm) before running `az k8s-extension create`.
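A hedged sketch of that cleanup, assuming the earlier Container Insights deployment was a standard Helm release whose name contains `azmon` (the release name and namespace are placeholders):

```bash
# Locate any existing Container Insights Helm release, then remove it before installing the extension
helm list --all-namespaces | grep -i azmon
helm uninstall <releaseName> --namespace <releaseNamespace>
```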
### Required parameters
The following parameters are required when using `az k8s-extension create` to cr
| `--resource-group` | The resource group containing the Azure Arc-enabled Kubernetes resource | | `--cluster-type` | The cluster type on which the extension instance has to be created. For most scenarios, use `connectedClusters`, which corresponds to Azure Arc-enabled Kubernetes clusters. |
-> [!NOTE]
-> When working with [AKS hybrid clusters provisioned from Azure](#aks-hybrid-clusters-provisioned-from-azure-preview), you must set `--cluster-type` to use `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
- ### Optional parameters Use one or more of these optional parameters as needed for your scenarios, along with the required parameters.
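For illustration, the following sketch pins an extension version and opts out of automatic minor-version upgrades; the flag names assume the standard `az k8s-extension create` options, and the extension type and version value are placeholders:

```azurecli
# Pin the extension to a specific version and disable automatic minor-version upgrades (placeholder values)
az k8s-extension create --name azuremonitor-containers --extension-type Microsoft.AzureMonitor.Containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type connectedClusters --auto-upgrade-minor-version false --version <extensionVersion>
```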
az k8s-extension delete --name azuremonitor-containers --cluster-name <clusterNa
> [!NOTE] > The Azure resource representing this extension gets deleted immediately. The Helm release on the cluster associated with this extension is only deleted when the agents running on the Kubernetes cluster have network connectivity and can reach out to Azure services again to fetch the desired state.
-> [!IMPORTANT]
-> When working with [AKS hybrid clusters provisioned from Azure](#aks-hybrid-clusters-provisioned-from-azure-preview), you must add `--yes` to the delete command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
-
-## AKS hybrid clusters provisioned from Azure (preview)
-
-You can deploy extensions to AKS hybrid clusters provisioned from Azure. However, there are a few key differences to keep in mind in order to deploy successfully:
-
-* The value for the `--cluster-type` parameter must be `provisionedClusters`.
-* You must add `--cluster-resource-provider microsoft.hybridcontainerservice` to your commands.
-* When deleting an extension instance, you must add `--yes` to the command:
-
- ```azurecli
- az k8s-extension delete --name azuremonitor-containers --cluster-name <clusterName> --resource-group <resourceGroupName> --cluster-type provisionedClusters --cluster-resource-provider microsoft.hybridcontainerservice --yes
- ```
-
-In addition, you must be using the latest version of the Azure CLI `k8s-extension` module (version >= 1.3.3). Use the following commands to add or update to the latest version:
-
-```azurecli
-# add if you do not have this installed
-az extension add --name k8s-extension
-
-# update if you do have the module installed
-az extension update --name k8s-extension
-```
-
-> [!IMPORTANT]
-> Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
## Next steps
azure-arc Gitops Flux2 Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/gitops-flux2-parameters.md
Title: "GitOps (Flux v2) supported parameters" description: "Understand the supported parameters for GitOps (Flux v2) in Azure for use in Azure Arc-enabled Kubernetes and Azure Kubernetes Service (AKS) clusters." Previously updated : 12/11/2023 Last updated : 02/08/2024
This article describes some of the parameters and arguments available for the `a
| Parameter | Format | Notes | | - | - | - | | `--cluster-name` `-c` | String | Name of the cluster resource in Azure. |
-| `--cluster-type` `-t` | Allowed values: `connectedClusters`, `managedClusters`, `provisionedClusters` | Use `connectedClusters` for Azure Arc-enabled Kubernetes clusters, `managedClusters` for AKS clusters, or `provisionedClusters` for [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) (installing extensions on these clusters is currently in preview). |
+| `--cluster-type` `-t` | Allowed values: `connectedClusters`, `managedClusters`| Use `connectedClusters` for Azure Arc-enabled Kubernetes clusters or `managedClusters` for AKS clusters. |
| `--resource-group` `-g` | String | Name of the Azure resource group that holds the cluster resource. | | `--name` `-n`| String | Name of the Flux configuration in Azure. | | `--namespace` `--ns` | String | Name of the namespace to deploy the configuration. Default: `default`. |
-| `--scope` `-s` | String | Permission scope for the operators. Possible values are `cluster` (full access) or `namespace` (restricted access). Default: `cluster`.
+| `--scope` `-s` | String | Permission scope for the operators. Possible values are `cluster` (full access) or `namespace` (restricted access). Default: `cluster`. |
| `--suspend` | flag | Suspends all source and kustomize reconciliations defined in this Flux configuration. Reconciliations active at the time of suspension will continue. | ## Source general arguments
azure-arc Tutorial Akv Secrets Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-akv-secrets-provider.md
Title: Use Azure Key Vault Secrets Provider extension to fetch secrets into Azure Arc-enabled Kubernetes clusters description: Learn how to set up the Azure Key Vault Provider for Secrets Store CSI Driver interface as an extension on Azure Arc enabled Kubernetes cluster Previously updated : 07/27/2023 Last updated : 02/09/2024
Capabilities of the Azure Key Vault Secrets Provider extension include:
- A cluster with a supported Kubernetes distribution that has already been [connected to Azure Arc](quickstart-connect-cluster.md). The following Kubernetes distributions are currently supported for this scenario: - Cluster API Azure - Azure Kubernetes Service (AKS) clusters on Azure Stack HCI
- - AKS hybrid clusters provisioned from Azure
+ - AKS enabled by Azure Arc
- Google Kubernetes Engine - OpenShift Kubernetes Distribution - Canonical Kubernetes Distribution
Capabilities of the Azure Key Vault Secrets Provider extension include:
- Azure Red Hat OpenShift - Ensure you've met the [general prerequisites for cluster extensions](extensions.md#prerequisites). You must use version 0.4.0 or newer of the `k8s-extension` Azure CLI extension.
-> [!TIP]
-> When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) you must set `--cluster-type` to use `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
- ## Install the Azure Key Vault Secrets Provider extension on an Arc-enabled Kubernetes cluster You can install the Azure Key Vault Secrets Provider extension on your connected cluster in the Azure portal, by using Azure CLI, or by deploying an ARM template.
-> [!TIP]
-> If the cluster is behind an outbound proxy server, ensure that you connect it to Azure Arc using the [proxy configuration](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) option before installing the extension.
+Only one instance of the extension can be deployed on each Azure Arc-enabled Kubernetes cluster.
> [!TIP]
-> Only one instance of the extension can be deployed on each Azure Arc-enabled Kubernetes cluster.
+> If the cluster is behind an outbound proxy server, ensure that you connect it to Azure Arc using the [proxy configuration](quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) option before installing the extension.
### Azure portal
Before you move on to the next section, take note of the following properties:
Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed through a service principal. Follow these steps to provide an identity that can access your Key Vault. 1. Follow the steps [to create a service principal in Azure](../../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). Take note of the Client ID and Client Secret generated in this step.
-1. Provide Azure Key Vault GET permission to the created service principal by [following these steps](../../key-vault/general/assign-access-policy.md).
+1. Next, [grant the created service principal GET permission on your Azure Key Vault](../../key-vault/general/assign-access-policy.md#assign-an-access-policy).
1. Use the client ID and Client Secret from the first step to create a Kubernetes secret on the connected cluster: ```bash
Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed
kubectl label secret secrets-store-creds secrets-store.csi.k8s.io/used=true ```
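A sketch of the full step, assuming the secret is named `secrets-store-creds` and that the driver expects `clientid` and `clientsecret` keys (the credential values are placeholders):

```bash
# Create the secret from the service principal credentials, then label it for the Secrets Store CSI Driver
kubectl create secret generic secrets-store-creds --from-literal clientid="<client-id>" --from-literal clientsecret="<client-secret>"
kubectl label secret secrets-store-creds secrets-store.csi.k8s.io/used=true
```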
-1. Create a SecretProviderClass with the following YAML, filling in your values for key vault name, tenant ID, and objects to retrieve from your AKV instance:
+1. Create a `SecretProviderClass` with the following YAML, filling in your values for key vault name, tenant ID, and objects to retrieve from your AKV instance:
```yml # This is a SecretProviderClass example using service principal to access Keyvault
Currently, the Secrets Store CSI Driver on Arc-enabled clusters can be accessed
tenantId: <tenant-Id> # The tenant ID of the Azure Key Vault instance ```
- For use with national clouds, change `cloudName` to `AzureUSGovernmentCloud` for U.S. Government Cloud, or to `AzureChinaCloud` for Azure China Cloud.
+ For use with national clouds, change `cloudName` to `AzureUSGovernmentCloud` for Azure Government, or to `AzureChinaCloud` for Microsoft Azure operated by 21Vianet.
1. Apply the SecretProviderClass to your cluster:
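A minimal sketch, assuming the YAML shown earlier was saved to a local file (the file name is a placeholder):

```bash
# Apply the SecretProviderClass manifest to the connected cluster
kubectl apply -f <secret-provider-class>.yaml
```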
You can also change these settings after installation by using the `az k8s-exten
az k8s-extension update --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --name akvsecretsprovider --configuration-settings secrets-store-csi-driver.enableSecretRotation=true secrets-store-csi-driver.rotationPollInterval=3m secrets-store-csi-driver.syncSecret.enabled=true ```
-You can use other configuration settings as needed for your deployment. For example, to change the kubelet root directory while creating a cluster, modify the az k8s-extension create command:
+You can use other configuration settings as needed for your deployment. For example, to change the kubelet root directory while creating a cluster, modify the `az k8s-extension create` command:
```azurecli-interactive az k8s-extension create --cluster-name $CLUSTER_NAME --resource-group $RESOURCE_GROUP --cluster-type connectedClusters --extension-type Microsoft.AzureKeyVaultSecretsProvider --name akvsecretsprovider --configuration-settings linux.kubeletRootDir=/path/to/kubelet secrets-store-csi-driver.linux.kubeletRootDir=/path/to/kubelet
azure-arc Tutorial Arc Enabled Open Service Mesh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-arc-enabled-open-service-mesh.md
Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure
- The following Kubernetes distributions are currently supported: - AKS (Azure Kubernetes Service) Engine - AKS clusters on Azure Stack HCI
- - AKS hybrid clusters provisioned from Azure
+ - AKS enabled by Azure Arc
- Cluster API Azure - Google Kubernetes Engine - Canonical Kubernetes Distribution
Azure Arc-enabled Open Service Mesh can be deployed through Azure portal, Azure
- VMware Tanzu Kubernetes Grid - Azure Monitor integration with Azure Arc-enabled Open Service Mesh is available [in preview with limited support](#monitoring-application-using-azure-monitor-and-applications-insights-preview).
-> [!TIP]
-> When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) you must set `--cluster-type` to use `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
- ## Basic installation using Azure portal To deploy using Azure portal, once you have an Arc connected cluster, go to the cluster's **Open Service Mesh** section.
azure-arc Tutorial Use Gitops Flux2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/tutorial-use-gitops-flux2.md
Title: "Tutorial: Deploy applications using GitOps with Flux v2" description: "This tutorial shows how to use GitOps with Flux v2 to manage configuration and application deployment in Azure Arc and AKS clusters." Previously updated : 12/01/2023 Last updated : 02/08/2024
Before you dive in, take a moment to [learn how GitOps with Flux works conceptua
> [!IMPORTANT] > The `microsoft.flux` extension released major version 1.0.0. This includes the [multi-tenancy feature](conceptual-gitops-flux2.md#multi-tenancy). If you have existing GitOps Flux v2 configurations that use a previous version of the `microsoft.flux` extension, you can upgrade to the [latest version](extensions-release.md#flux-gitops) manually using the Azure CLI: `az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t <CLUSTER_TYPE>` (use `-t connectedClusters` for Arc clusters and `-t managedClusters` for AKS clusters).
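For an Arc-enabled Kubernetes cluster, the upgrade command quoted in the note takes the following form; the resource group and cluster names are placeholders:

```azurecli
# Manually upgrade the microsoft.flux extension on an Arc-enabled Kubernetes cluster
az k8s-extension create -g <RESOURCE_GROUP> -c <CLUSTER_NAME> -n flux --extension-type microsoft.flux -t connectedClusters
```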
-> [!TIP]
-> When using this extension with [AKS hybrid clusters provisioned from Azure](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview) you must set `--cluster-type` to use `provisionedClusters` and also add `--cluster-resource-provider microsoft.hybridcontainerservice` to the command. Installing Azure Arc extensions on AKS hybrid clusters provisioned from Azure is currently in preview.
- ## Prerequisites To deploy applications using GitOps with Flux v2, you need:
To deploy applications using GitOps with Flux v2, you need:
> Ensure that the AKS cluster is created with MSI (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters. > For new AKS clusters created with `az aks create`, the cluster will be MSI-based by default. For already created SPN-based clusters that need to be converted to MSI, run `az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity`. For more information, see [Use a managed identity in AKS](../../aks/use-managed-identity.md).
-* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type. If using [AKS hybrid clusters provisioned from Azure (preview)](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview), read and write permissions on the `Microsoft.ContainerService/provisionedClusters` resource type).
+* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type.
#### Common to both cluster types
False whl k8s-extension C:\Users\somename\.azure\c
> Ensure that the AKS cluster is created with MSI (not SPN), because the `microsoft.flux` extension won't work with SPN-based AKS clusters. > For new AKS clusters created with `az aks create`, the cluster will be MSI-based by default. For already created SPN-based clusters that need to be converted to MSI, run `az aks update -g $RESOURCE_GROUP -n $CLUSTER_NAME --enable-managed-identity`. For more information, see [Use a managed identity in AKS](../../aks/use-managed-identity.md).
-* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type. If using [AKS hybrid clusters provisioned from Azure (preview)](extensions.md#aks-hybrid-clusters-provisioned-from-azure-preview), read and write permissions on the `Microsoft.ContainerService/provisionedClusters` resource type).
+* Read and write permissions on the `Microsoft.ContainerService/managedClusters` resource type.
#### Common to both cluster types
The following example uses the `az k8s-configuration create` command to apply a
* The resource group that contains the cluster is `flux-demo-rg`. * The name of the Azure Arc cluster is `flux-demo-arc`.
-* The cluster type is Azure Arc (`-t connectedClusters`), but this example also works with AKS (`-t managedClusters`) and AKS hybrid clusters provisioned from Azure (`-t provisionedClusters`).
+* The cluster type is Azure Arc (`-t connectedClusters`), but this example also works with AKS (`-t managedClusters`).
* The name of the Flux configuration is `cluster-config`. * The namespace for configuration installation is `cluster-config`. * The URL for the public Git repository is `https://github.com/Azure/gitops-flux2-kustomize-helm-mt`.
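A sketch of the command assembled from those values, assuming the `az k8s-configuration flux create` form; the branch name and kustomization path are illustrative assumptions rather than values stated above:

```azurecli
# Create the Flux configuration on the Arc-enabled cluster (branch and kustomization path are illustrative)
az k8s-configuration flux create --resource-group flux-demo-rg --cluster-name flux-demo-arc --cluster-type connectedClusters --name cluster-config --namespace cluster-config --url https://github.com/Azure/gitops-flux2-kustomize-helm-mt --branch main --kustomization name=infra path=./infrastructure prune=true
```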
az k8s-extension delete -g flux-demo-rg -c flux-demo-arc -n flux -t connectedClu
``` > [!TIP]
-> These commands use `-t connectedClusters`, which is appropriate for an Azure Arc-enabled Kubernetes cluster. For an AKS cluster, use `-t managedClusters` instead. For AKS hybrid clusters provisioned from Azure, use `-t provisionedClusters`.
+> These commands use `-t connectedClusters`, which is appropriate for an Azure Arc-enabled Kubernetes cluster. For an AKS cluster, use `-t managedClusters` instead.
### [Azure portal](#tab/azure-portal)
azure-arc Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/deploy-cli.md
Title: Azure Arc resource bridge deployment command overview description: Learn about the Azure CLI commands that can be used to manage your Azure Arc resource bridge deployment. Previously updated : 11/03/2023 Last updated : 02/09/2024
- [Connect VMware vCenter Server to Azure with Arc resource bridge](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) - [Connect System Center Virtual Machine Manager (SCVMM) to Azure with Arc resource bridge](../system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md#download-the-onboarding-script) - [Azure Stack HCI VM Management through Arc resource bridge](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites)-- [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server) This topic provides an overview of the [Azure CLI commands](/cli/azure/arcappliance) that are used to manage Arc resource bridge deployment, in the order in which they are typically used for deployment.
Three configuration files are generated: resource.yaml, appliance.yaml and infra
This command also calls the `validate` command to check the configuration files. > [!NOTE]
-> Azure Stack HCI and Hybrid AKS use different commands to create the Arc resource bridge configuration files.
+> Azure Stack HCI uses different commands to create the Arc resource bridge configuration files.
## `az arcappliance validate`
While the Arc resource bridge is connecting the ARM resource to the on-premises
`Status` transitions between `WaitingForHeartbeat` -> `Validating` -> `Connecting` -> `Connected` -> `Running`. -- WaitingForHeartbeat: Azure is waiting to receive a signal from the appliance VM
+- `WaitingForHeartbeat`: Azure is waiting to receive a signal from the appliance VM.
-- Validating: Appliance VM is checking Azure services for connectivity and serviceability
+- `Validating`: Appliance VM is checking Azure services for connectivity and serviceability.
-- Connecting: Appliance VM is syncing on-premises resources to Azure
+- `Connecting`: Appliance VM is syncing on-premises resources to Azure.
-- Connected: Appliance VM completed sync of on-premises resources to Azure
+- `Connected`: Appliance VM completed sync of on-premises resources to Azure.
-- Running: Appliance VM and Azure have completed hybrid sync and Arc resource bridge is now operational.
+- `Running`: Appliance VM and Azure have completed hybrid sync and Arc resource bridge is now operational.
Successful Arc resource bridge creation results in `ProvisioningState = Succeeded` and `Status = Running`.
If a deployment fails, run this command to clean up the environment before you a
- Explore the full list of [Azure CLI commands and required parameters](/cli/azure/arcappliance) for Arc resource bridge. - Get [troubleshooting tips for Arc resource bridge](troubleshoot-resource-bridge.md).--
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge overview description: Learn how to use Azure Arc resource bridge to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager. Previously updated : 11/27/2023 Last updated : 02/09/2024 # What is Azure Arc resource bridge?
-Azure Arc resource bridge is a Microsoft managed product that is part of the core Azure Arc platform. It is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/index.yml)), and System Center Virtual Machine Manager (SCVMM) [Arc-enabled SCVMM](../system-center-virtual-machine-manager/index.yml).
+Azure Arc resource bridge is a Microsoft managed product that is part of the core Azure Arc platform. It is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on Azure Stack HCI ([Azure Arc VM management](/azure-stack/hci/manage/azure-arc-vm-management-overview)), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/overview.md)), and System Center Virtual Machine Manager ([Arc-enabled SCVMM](../system-center-virtual-machine-manager/overview.md)).
Azure Arc resource bridge is a Kubernetes management cluster installed on the customer's on-premises infrastructure. The resource bridge is provided credentials to the infrastructure control plane that allows it to apply guest management services on the on-premises resources. Arc resource bridge enables projection of on-premises resources as ARM resources and management from ARM as "Arc-enabled" Azure resources.
Azure Arc resource bridge can host other Azure services or solutions running on-
* Cluster extension: The Azure service deployed to run on-premises. Currently, it supports three
- * Azure Arc-enabled VMware
* Azure Arc VM management on Azure Stack HCI
+ * Azure Arc-enabled VMware
* Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) * Custom locations: A deployment target where you can create Azure resources. It maps to different resource for different Azure services. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Azure Arc VM management on Azure Stack HCI, it maps to an HCI cluster instance.
In order to use Arc resource bridge in a region, Arc resource bridge and the Arc
Arc resource bridge supports the following Azure regions: -- East US -- East US 2-- West US 2-- West US 3-- Central US-- North Central US-- South Central US-- Canada Central-- Australia East-- West Europe-- North Europe-- UK South-- UK West-- Sweden Central-- Japan East-- Southeast Asia-- East Asia-- Central India
+* East US
+* East US 2
+* West US 2
+* West US 3
+* Central US
+* North Central US
+* South Central US
+* Canada Central
+* Australia East
+* West Europe
+* North Europe
+* UK South
+* UK West
+* Sweden Central
+* Japan East
+* Southeast Asia
+* East Asia
+* Central India
### Regional resiliency
The following private cloud environments and their versions are officially suppo
### Supported versions
-For Arc-enabled private clouds in General Availability, the minimum supported version of Arc resource bridge is 1.0.15.
+The minimum supported version of Arc resource bridge is 1.0.15.
Generally, the latest released version and the previous three versions (n-3) of Arc resource bridge are supported. For example, if the current version is 1.0.18, then the typical n-3 supported versions are:
Arc resource bridge typically releases a new version on a monthly cadence, at th
* Learn how [Azure Arc-enabled VMware vSphere extends Azure's governance and management capabilities to VMware vSphere infrastructure](../vmware-vsphere/overview.md). * Learn how [Azure Arc-enabled SCVMM extends Azure's governance and management capabilities to System Center managed infrastructure](../system-center-virtual-machine-manager/overview.md).
-* Learn about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-enabled-virtual-machines).
+* Learn about [provisioning and managing on-premises Windows and Linux VMs running on Azure Stack HCI clusters](/azure-stack/hci/manage/azure-arc-vm-management-overview).
* Review the [system requirements](system-requirements.md) for deploying and managing Arc resource bridge.
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
Title: Azure Arc resource bridge system requirements description: Learn about system requirements for Azure Arc resource bridge. Previously updated : 11/03/2023 Last updated : 02/09/2024 # Azure Arc resource bridge system requirements
If deploying Arc resource bridge on VMware, Azure CLI 64-bit is required to be i
If deploying on Azure Stack HCI, then Azure CLI 32-bit should be installed on the management machine.
-Arc Appliance CLI extension, 'arcappliance', needs to be installed on the CLI. This can be done by running: `az extension add --name arcappliance`
+Arc Appliance CLI extension, `arcappliance`, needs to be installed on the CLI. This can be done by running: `az extension add --name arcappliance`
## Minimum resource requirements
Arc resource bridge has the following minimum resource requirements:
These minimum requirements enable most scenarios. However, a partner product may support a higher resource connection count to Arc resource bridge, which requires the bridge to have higher resource requirements. Failure to provide sufficient resources may cause errors during deployment, such as disk copy errors. Review the partner product's documentation for specific resource requirements.
-> [!NOTE]
-> To use Azure Kubernetes Service (AKS) on Azure Stack HCI with Arc resource bridge, AKS must be deployed prior to deploying Arc resource bridge. If Arc resource bridge has already been deployed, AKS can't be installed unless you delete Arc resource bridge first. Once AKS is deployed to Azure Stack HCI, you can deploy Arc resource bridge again.
- ## IP address prefix (subnet) requirements The IP address prefix (subnet) where Arc resource bridge will be deployed requires a minimum prefix of /29. The IP address prefix must have enough available IP addresses for the gateway IP, control plane IP, appliance VM IP, and reserved appliance VM IP. Please work with your network engineer to ensure that there is an available subnet with the required available IP addresses and IP address prefix for Arc resource bridge.
The machine used to run the commands to deploy and maintain Arc resource bridge
Management machine requirements: -- [Azure CLI x64](/cli/azure/install-azure-cli-windows?tabs=azure-cli) installed.-- Open communication to Control Plane IP (`controlplaneendpoint` parameter in `createconfig` command).-- Open communication to Appliance VM IP. -- Open communication to the reserved Appliance VM IP.
+- [Azure CLI x64](/cli/azure/install-azure-cli-windows?tabs=azure-cli) installed
+- Open communication to Control Plane IP (`controlplaneendpoint` parameter in `createconfig` command)
+- Open communication to Appliance VM IP
+- Open communication to the reserved Appliance VM IP
- if applicable, communication over port 443 to the private cloud management console (ex: VMware vCenter host machine) - Internal and external DNS resolution. The DNS server must resolve internal names, such as the vCenter endpoint for vSphere or cloud agent service endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses that are [required URLs](network-requirements.md#outbound-connectivity) for deployment. - Internet access ## Appliance VM IP address requirements
-Arc resource bridge consists of an appliance VM that is deployed on-premises. The appliance VM has visibility into the on-premises infrastructure and can tag on-premises resources (guest management) for projection into Azure Resource Manager (ARM).
+Arc resource bridge consists of an appliance VM that is deployed on-premises. The appliance VM has visibility into the on-premises infrastructure and can tag on-premises resources (guest management) for projection into Azure Resource Manager (ARM).
-The appliance VM is assigned an IP address from the `k8snodeippoolstart` parameter in the `createconfig` command; it may be referred to in partner products as Start Range IP, RB IP Start or VM IP 1.
+The appliance VM is assigned an IP address from the `k8snodeippoolstart` parameter in the `createconfig` command; it may be referred to in partner products as Start Range IP, RB IP Start or VM IP 1.
The appliance VM IP is the starting IP address for the appliance VM IP pool range. The VM IP pool range requires a minimum of 2 IP addresses.
Appliance VM IP address requirements:
- If using DHCP, then the address must be reserved and outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (ex: due to an outage) impacts the resource bridge availability. - Must be from within the IP address prefix.-- Internal and external DNS resolution.
+- Internal and external DNS resolution.
- If using a proxy, the proxy server has to be reachable from this IP and all IPs within the VM IP pool. ## Reserved appliance VM IP requirements
-Arc resource bridge reserves an additional IP address to be used for the appliance VM upgrade.
+Arc resource bridge reserves an additional IP address to be used for the appliance VM upgrade.
The reserved appliance VM IP is assigned an IP address via the `k8snodeippoolend` parameter in the `az arcappliance createconfig` command. This IP address may be referred to as End Range IP, RB IP End, or VM IP 2.
The appliance VM hosts a management Kubernetes cluster with a control plane that
Control plane IP requirements: - Open communication with the management machine.
- - Static IP address assigned; the IP address should be outside the DHCP range but still available on the network segment. This IP address can't be assigned to any other machine on the network.
- - If using DHCP, the control plane IP should be a single reserved IP that is outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (ex: due to an outage) impacts the resource bridge availability.
- - If using Azure Kubernetes Service on Azure Stack HCI (AKS hybrid) and installing Arc resource bridge, then the control plane IP for the resource bridge can't be used by the AKS hybrid cluster. For specific instructions on deploying Arc resource bridge with AKS on Azure Stack HCI, see [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
- - If using a proxy, the proxy server has to be reachable from IPs within the IP address prefix, including the reserved appliance VM IP.
+ - Static IP address assigned; the IP address should be outside the DHCP range but still available on the network segment. This IP address can't be assigned to any other machine on the network.
+ - If using DHCP, the control plane IP should be a single reserved IP that is outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (ex: due to an outage) impacts the resource bridge availability.
+
+- If using a proxy, the proxy server has to be reachable from IPs within the IP address prefix, including the reserved appliance VM IP.
## DNS server
-
+ DNS server(s) must have internal and external endpoint resolution. The appliance VM and control plane need to resolve the management machine and vice versa. All three IPs must be able to reach the required URLs for deployment. ## Gateway
-
+ The gateway IP should be an IP from within the subnet designated in the IP address prefix. ## Example minimum configuration for static IP deployment
-
+ The following example shows valid configuration values that can be passed during configuration file creation for Arc resource bridge. It is strongly recommended to use static IP addresses when deploying Arc resource bridge. Notice that the IP addresses for the gateway, control plane, appliance VM and DNS server (for internal resolution) are within the IP address prefix. This key detail helps ensure successful deployment of the appliance VM.
Notice that the IP addresses for the gateway, control plane, appliance VM and DN
Arc resource bridge may require a separate user account with the necessary roles to view and manage resources in the on-premises infrastructure (such as Arc-enabled VMware vSphere). If so, during creation of the configuration files, the `username` and `password` parameters will be required. The account credentials are then stored in a configuration file locally within the appliance VM. > [!WARNING]
-> Arc resource bridge can only use a user account that does not have multifactor authentication enabled.
-If the user account is set to periodically change passwords, [the credentials must be immediately updated on the resource bridge](maintenance.md#update-credentials-in-the-appliance-vm). This user account may also be set with a lockout policy to protect the on-premises infrastructure, in case the credentials aren't updated and the resource bridge makes multiple attempts to use expired credentials to access the on-premises control center.
+> Arc resource bridge can only use a user account that does not have multifactor authentication enabled. If the user account is set to periodically change passwords, [the credentials must be immediately updated on the resource bridge](maintenance.md#update-credentials-in-the-appliance-vm). This user account can also be set with a lockout policy to protect the on-premises infrastructure, in case the credentials aren't updated and the resource bridge makes multiple attempts to use expired credentials to access the on-premises control center.
For example, with Arc-enabled VMware, Arc resource bridge needs a separate user account for vCenter with the necessary roles. If the [credentials for the user account change](troubleshoot-resource-bridge.md#insufficient-permissions), then the credentials stored in Arc resource bridge must be immediately updated by running `az arcappliance update-infracredentials` from the [management machine](#management-machine-requirements). Otherwise, the appliance will make repeated attempts to use the expired credentials to access vCenter, which will result in a lockout of the account.
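A hedged sketch of that credential update for a VMware environment, assuming the `vmware` subcommand and a kubeconfig path parameter (both are assumptions; the command prompts for the new credentials):

```azurecli
# Update the vCenter credentials stored in Arc resource bridge (subcommand and parameter are assumed)
az arcappliance update-infracredentials vmware --kubeconfig <path-to-kubeconfig>
```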
There are several different types of configuration files, based on the on-premis
### Appliance configuration files
-Three configuration files are created when the `createconfig` command completes (or the equivalent commands used by Azure Stack HCI and AKS hybrid): `<appliance-name>-resource.yaml`, `<appliance-name>-appliance.yaml` and `<appliance-name>-infra.yaml`.
+Three configuration files are created when the `createconfig` command completes (or the equivalent commands used by Azure Stack HCI): `<appliance-name>-resource.yaml`, `<appliance-name>-appliance.yaml` and `<appliance-name>-infra.yaml`.
By default, these files are generated in the current CLI directory when `createconfig` completes. These files should be saved in a secure location on the management machine, because they're required for maintaining the appliance VM. Because the configuration files reference each other, all three files must be stored in the same location. If the files are moved from their original location at deployment, open the files to check that the reference paths to the configuration files are accurate.
The appliance VM hosts a management Kubernetes cluster. The kubeconfig is a low-
### HCI login configuration file (Azure Stack HCI only)
-Arc resource bridge uses a MOC login credential called [KVA token](/azure-stack/hci/manage/deploy-arc-resource-bridge-using-command-line#set-up-arc-vm-management) (kvatoken.tok) to interact with Azure Stack HCI. The KVA token is generated with the appliance configuration files when deploying Arc resource bridge. This token is also used when collecting logs for Arc resource bridge, so it should be saved in a secure location with the rest of the appliance configuration files. This file is saved in the directory provided during configuration file creation or the default CLI directory.
-
-## AKS on Azure Stack HCI with Arc resource bridge
-
-When you deploy Arc resource bridge with AKS on Azure Stack HCI (AKS-HCI), the following configurations must be applied:
--- Arc resource bridge and AKS-HCI should share the same `vswitchname` and be in the same subnet, sharing the same value for the parameter, `ipaddressprefix` .--- The IP address prefix (subnet) must contain enough IP addresses for both the Arc resource bridge and AKS-HCI.--- Arc resource bridge should be given a unique `vnetname` that is different from the one used for AKS Hybrid. --- The Arc resource bridge requires different IP addresses for `vippoolstart/end` and `k8snodeippoolstart/end`. These IPs can't be shared between the two.--- Arc resource bridge and AKS-HCI must each have a unique control plane IP.-
-For instructions to deploy Arc resource bridge on AKS Hybrid, see [How to install Azure Arc Resource Bridge on Windows Server - AKS hybrid](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server).
+Arc resource bridge uses a MOC login credential called KVA token (`kvatoken.tok`) to interact with Azure Stack HCI. The KVA token is generated with the appliance configuration files when deploying Arc resource bridge. This token is also used when collecting logs for Arc resource bridge, so it should be saved in a secure location with the rest of the appliance configuration files. This file is saved in the directory provided during configuration file creation or the default CLI directory.
## Next steps - Understand [network requirements for Azure Arc resource bridge](network-requirements.md).- - Review the [Azure Arc resource bridge overview](overview.md) to understand more about features and benefits.- - Learn about [security configuration and considerations for Azure Arc resource bridge](security-overview.md).-----
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Arc resource bridge can be manually upgraded from the management machine. You mu
Manual upgrade generally takes between 30 and 90 minutes, depending on network speeds. The upgrade command takes your Arc resource bridge to the next appliance version, which might not be the latest available appliance version. Multiple upgrades could be needed to reach a [supported version](#supported-versions). You can check your appliance version by checking the Azure resource of your Arc resource bridge.
-To manually upgrade your Arc resource bridge, make sure you're using the latest `az arcappliance` CLI extension by running the extension upgrade command from the management machine:
+Before upgrading, you'll need the latest Azure CLI extension for `arcappliance`:
```azurecli az extension add --upgrade --name arcappliance
az arcappliance upgrade <private cloud> --config-file <file path to ARBname-appl
For example, to upgrade a resource bridge on VMware, run: `az arcappliance upgrade vmware --config-file c:\contosoARB01-appliance.yaml`
-To upgrade a resource bridge on System Center Virtual Machine Manager (SCVMM), run: `az arcappliance upgrade scvmm --config-file c:\contosoARB01-appliance.yaml`
+To upgrade a resource bridge on SCVMM, run: `az arcappliance upgrade scvmm --config-file c:\contosoARB01-appliance.yaml`
To upgrade a resource bridge on Azure Stack HCI, please transition to 23H2 and use the built-in upgrade management tool. More info available [here](/azure-stack/hci/update/about-updates-23h2).
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
# Archive for What's new with Azure Connected Machine agent
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The primary [What's new in Azure Connected Machine agent?](agent-release-notes.md) article contains updates for the last six months, while this article contains all the older information. The Azure Connected Machine agent receives improvements on an ongoing basis. This article provides you with information about:
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
# Managing and maintaining the Connected Machine agent
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ After initial deployment of the Azure Connected Machine agent, you may need to reconfigure the agent, upgrade it, or remove it from the computer. These routine maintenance tasks can be done manually or through automation (which reduces both operational error and expenses). This article describes the operational aspects of the agent. See the [azcmagent CLI documentation](azcmagent.md) for command line reference information. ## Installing a specific version of the agent
azure-arc Manage Vm Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions.md
# Virtual machine extension management with Azure Arc-enabled servers
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script in it, a VM extension can be used. Azure Arc-enabled servers enables you to deploy, remove, and update Azure VM extensions to non-Azure Windows and Linux VMs, simplifying the management of your hybrid machine through their lifecycle. VM extensions can be managed using the following methods on your hybrid machines or servers managed by Arc-enabled servers:
azure-arc Plan Evaluate On Azure Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/plan-evaluate-on-azure-virtual-machine.md
# Evaluate Azure Arc-enabled servers on an Azure virtual machine
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Azure Arc-enabled servers is designed to help you connect servers running on-premises or in other clouds to Azure. Normally, you would not use Azure Arc-enabled servers on an Azure virtual machine because all the same capabilities are natively available for these VMs, including a representation of the VM in Azure Resource Manager, VM extensions, managed identities, and Azure Policy. If you attempt to install Azure Arc-enabled servers on an Azure VM, you'll receive an error message stating that it is unsupported and the agent installation will be canceled. While you cannot install Azure Arc-enabled servers on an Azure VM for production scenarios, it is possible to configure Azure Arc-enabled servers to run on an Azure VM for *evaluation and testing purposes only*. This article will help you set up an Azure VM before you can enable Azure Arc-enabled servers on it.
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
# Connected Machine agent prerequisites
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This topic describes the basic requirements for installing the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers. Some [onboarding methods](deployment-options.md) may have more requirements. ## Supported environments
azure-arc Run Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/run-command.md
The Connected Machine agent supports local configurations that allow you to set
For Windows:
-`azcmagent config set extensions.blocklist " microsoft.cplat.core/runcommandhandlerwindows"`
+`azcmagent config set extensions.blocklist "microsoft.cplat.core/runcommandhandlerwindows"`
For Linux:
-`azcmagent config set extensions.blocklist " microsoft.cplat.core/runcommandhandlerlinux"`
+`azcmagent config set extensions.blocklist "microsoft.cplat.core/runcommandhandlerlinux"`
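To confirm the blocklist took effect, the agent's local configuration can be read back; this sketch assumes the `config get` subcommand of `azcmagent`:

```bash
# Verify the extension blocklist currently set on the machine (assumes the `config get` subcommand)
azcmagent config get extensions.blocklist
```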
## Azure CLI
azure-arc Remove Vcenter From Arc Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/remove-vcenter-from-arc-vmware.md
description: This article explains the steps to cleanly remove your VMware vCent
-+ Last updated 11/30/2023 # Customer intent: As an infrastructure admin, I want to cleanly remove my VMware vCenter environment from Azure Arc-enabled VMware vSphere.+ # Remove your VMware vCenter environment from Azure Arc
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ In this article, you learn how to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere. For VMware vSphere environments that you no longer want to manage with Azure Arc-enabled VMware vSphere, follow the steps in the article to: 1. Remove guest management from VMware virtual machines
In this article, you learn how to cleanly remove your VMware vCenter environment
## 1. Remove guest management from VMware virtual machines To prevent continued billing of Azure management services after you remove the vSphere environment from Azure Arc, you must first cleanly remove guest management from all Arc-enabled VMware vSphere virtual machines where it was enabled.
-When you enable guest management on Arc-enabled VMware vSphere virtual machines, the Arc connected machine agent is installed on them.
+When you enable guest management on Arc-enabled VMware vSphere virtual machines, the Arc connected machine agent is installed on them.
Once guest management is enabled, you can install VM extensions on those machines and use Azure management services such as Log Analytics on them. To cleanly remove guest management, you must follow the steps below to remove any VM extensions from the virtual machine, disconnect the agent, and uninstall the software from your virtual machine. It's important to complete each of the three steps to fully remove all related software components from your virtual machines.
To run the deboarding script, follow these steps:
2. Run the following command to allow the script to run because it's an unsigned script. (If you close the session before you complete all the steps, run this command again for the new session.) ```powershell-interactive
- Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
+ Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
``` 3. Run the script.
To run the deboarding script, follow these steps:
- **ApplianceConfigFilePath (optional)**: Path to kubeconfig, output from deploy command. Providing applianceconfigfilepath also deletes the appliance VM running on the vCenter. -- **Force**: Using the Force flag deletes all the Azure resources without reaching resource bridge. Use this option if resource bridge VM isn't in running state.
+- **Force**: Using the Force flag deletes all the Azure resources without reaching the resource bridge. Use this option if the resource bridge VM isn't in a running state.
### Remove VMware vSphere resources from Azure manually
If you aren't using the deboarding script, follow these steps to remove the VMwa
6. Select **Remove from Azure**.
- This action only removes these resource representations from Azure. The resources continue to remain in your vCenter.
+   This action only removes these resource representations from Azure. The resources remain in your vCenter.
7. Repeat steps 4, 5, and 6 for **Resource pools/clusters/hosts**, **Templates**, **Networks**, and **Datastores**.
azure-arc Troubleshoot Guest Management Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/troubleshoot-guest-management-issues.md
# Customer intent: As a VI admin, I want to understand the troubleshooting process for guest management issues.+ # Troubleshoot Guest Management for Linux VMs
-This article provides information on how to troubleshoot and resolve the issues that can occur while you enable guest management on Arc-enabled VMware vSphere virtual machines.
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+This article provides information on how to troubleshoot and resolve the issues that can occur while you enable guest management on Arc-enabled VMware vSphere virtual machines.
## Troubleshoot issues while enabling Guest Management on a domain-joined Linux VM
Default: The default set of PAM service names includes:
## Troubleshoot issues while enabling Guest Management on RHEL-based Linux VMs
-Applies to:
+Applies to:
- RedHat Linux - CentOS
Before you enable the guest agent, follow these steps on the VM:
1. Create file `vmtools_unconfined_rpm_script_kcs5347781.te` using the following:
- `policy_module(vmtools_unconfined_rpm_script_kcs5347781, 1.0)
+ `policy_module(vmtools_unconfined_rpm_script_kcs5347781, 1.0)
gen_require(` type vmtools_unconfined_t; ')
If you don't see your problem here or you can't resolve your issue, try one of t
- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts. -- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-cache-for-redis Cache Java Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-java-get-started.md
Title: 'Quickstart: Use Azure Cache for Redis in Java'
-description: In this quickstart, you'll create a new Java app that uses Azure Cache for Redis
--
+description: In this quickstart, you create a new Java app that uses Azure Cache for Redis
++ Last updated 01/04/2022
Clone the repo [Java quickstart](https://github.com/Azure-Samples/azure-cache-re
[!INCLUDE [redis-cache-access-keys](includes/redis-cache-access-keys.md)]
-## Setting up the working environment
+## Set up the working environment
-Depending on your operating system, add environment variables for your **Host name** and **Primary access key** that you noted above. Open a command prompt, or a terminal window, and set up the following values:
+Depending on your operating system, add environment variables for your **Host name** and **Primary access key** that you noted previously. Open a command prompt, or a terminal window, and set up the following values:
-```dos
-set REDISCACHEHOSTNAME=<YOUR_HOST_NAME>.redis.cache.windows.net
-set REDISCACHEKEY=<YOUR_PRIMARY_ACCESS_KEY>
-```
+### [Linux](#tab/bash)
```bash
-export REDISCACHEHOSTNAME=<YOUR_HOST_NAME>.redis.cache.windows.net
-export REDISCACHEKEY=<YOUR_PRIMARY_ACCESS_KEY>
+export REDISCACHEHOSTNAME=<your-host-name>.redis.cache.windows.net
+export REDISCACHEKEY=<your-primary-access-key>
+```
+
+### [Windows](#tab/cmd)
+
+```cmd
+set REDISCACHEHOSTNAME=<your-host-name>.redis.cache.windows.net
+set REDISCACHEKEY=<your-primary-access-key>
``` ++ Replace the placeholders with the following values: -- `<YOUR_HOST_NAME>`: The DNS host name, obtained from the *Properties* section of your Azure Cache for Redis resource in the Azure portal.-- `<YOUR_PRIMARY_ACCESS_KEY>`: The primary access key, obtained from the *Access keys* section of your Azure Cache for Redis resource in the Azure portal.
+- `<your-host-name>`: The DNS host name, obtained from the *Properties* section of your Azure Cache for Redis resource in the Azure portal.
+- `<your-primary-access-key>`: The primary access key, obtained from the *Access keys* section of your Azure Cache for Redis resource in the Azure portal.
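+
+If you want to verify the variables before running the sample, the following is a minimal connection check, not part of the quickstart code. It assumes the Jedis client declared in the quickstart's *pom.xml* and a Jedis version that provides the `Jedis(host, port, ssl)` constructor; the class and variable names are illustrative only.
+
+```java
+import redis.clients.jedis.Jedis;
+
+public class ConnectionCheck {
+    public static void main(String[] args) {
+        // Values set as environment variables in the previous step.
+        String hostName = System.getenv("REDISCACHEHOSTNAME");
+        String accessKey = System.getenv("REDISCACHEKEY");
+
+        // Azure Cache for Redis accepts TLS connections on port 6380.
+        try (Jedis jedis = new Jedis(hostName, 6380, true)) {
+            jedis.auth(accessKey);
+            System.out.println("PING response: " + jedis.ping());
+        }
+    }
+}
+```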
-## Understanding the Java sample
+## Understand the Java sample
In this sample, you use Maven to run the quickstart app. 1. Change to the new *redistest* project directory.
-1. Open the *pom.xml* file. In the file, you'll see a dependency for [Jedis](https://github.com/xetorthio/jedis):
+1. Open the *pom.xml* file. In the file, you see a dependency for [Jedis](https://github.com/xetorthio/jedis):
```xml <dependency>
In this sample, you use Maven to run the quickstart app.
## Build and run the app
-1. First, if you haven't already, you must set the environment variables as noted above.
+1. First, if you haven't already, you must set the environment variables as noted previously.
- ```dos
- set REDISCACHEHOSTNAME=<YOUR_HOST_NAME>.redis.cache.windows.net
- set REDISCACHEKEY=<YOUR_PRIMARY_ACCESS_KEY>
- ```
+ ### [Linux](#tab/bash)
+
+ ```bash
+ export REDISCACHEHOSTNAME=<your-host-name>.redis.cache.windows.net
+ export REDISCACHEKEY=<your-primary-access-key>
+ ```
+
+ ### [Windows](#tab/cmd)
+
+ ```cmd
+ set REDISCACHEHOSTNAME=<your-host-name>.redis.cache.windows.net
+ set REDISCACHEKEY=<your-primary-access-key>
+ ```
+
+
1. Execute the following Maven command to build and run the app:
- ```dos
- mvn compile
- mvn exec:java -D exec.mainClass=example.demo.App
- ```
+ ### [Linux](#tab/bash)
+
+ ```bash
+ mvn compile
+ mvn exec:java -D exec.mainClass=example.demo.App
+ ```
+
+ ### [Windows](#tab/cmd)
+
+ ```cmd
+ mvn compile
+ mvn exec:java -D exec.mainClass=example.demo.App
+ ```
+
+
-In the example below, you see the `Message` key previously had a cached value. The value was updated to a new value using `jedis.set`. The app also executed the `PING` and `CLIENT LIST` commands.
+In the following output, you can see that the `Message` key previously had a cached value. The value was updated to a new value using `jedis.set`. The app also executed the `PING` and `CLIENT LIST` commands.
+```output
+Cache Command : Ping
+Cache Response : PONG
+
+Cache Command : GET Message
+Cache Response : Hello! The cache is working from Java!
+
+Cache Command : SET Message
+Cache Response : OK
+
+Cache Command : GET Message
+Cache Response : Hello! The cache is working from Java!
+
+Cache Command : CLIENT LIST
+Cache Response : id=777430 addr= :58989 fd=22 name= age=1 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=0 omem=0 ow=0 owmem=0 events=r cmd=client numops=6
+```
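+
+The app's full source lives in the sample repository rather than in this article. As a rough sketch only, assuming the Jedis client and the environment variables set earlier, the command sequence behind output like this looks roughly as follows:
+
+```java
+import redis.clients.jedis.Jedis;
+
+public class CommandSequenceSketch {
+    public static void main(String[] args) {
+        // Connect over TLS (port 6380) using the environment variables set earlier.
+        try (Jedis jedis = new Jedis(System.getenv("REDISCACHEHOSTNAME"), 6380, true)) {
+            jedis.auth(System.getenv("REDISCACHEKEY"));
+
+            // PING the cache to confirm connectivity.
+            System.out.println("Cache Command : Ping");
+            System.out.println("Cache Response : " + jedis.ping());
+
+            // Read the current value of the Message key, overwrite it, and read it back.
+            System.out.println("Cache Command : GET Message");
+            System.out.println("Cache Response : " + jedis.get("Message"));
+
+            System.out.println("Cache Command : SET Message");
+            System.out.println("Cache Response : " + jedis.set("Message", "Hello! The cache is working from Java!"));
+
+            System.out.println("Cache Command : GET Message");
+            System.out.println("Cache Response : " + jedis.get("Message"));
+
+            // List the clients currently connected to the cache.
+            System.out.println("Cache Command : CLIENT LIST");
+            System.out.println("Cache Response : " + jedis.clientList());
+        }
+    }
+}
+```
+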
## Clean up resources
If you continue to use the quickstart code, you can keep the resources created i
Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges. > [!IMPORTANT]
-> Deleting a resource group is irreversible and that the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually on the left instead of deleting the resource group.
+> Deleting a resource group is irreversible, and the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually instead of deleting the resource group.
> 1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**. 1. In the **Filter by name** textbox, type the name of your resource group. The instructions for this article used a resource group named *TestResources*. On your resource group in the result list, select **...** then **Delete resource group**.
- :::image type="content" source="./media/cache-java-get-started/azure-cache-redis-delete-resource-group.png" alt-text="Azure resource group deleted":::
+ :::image type="content" source="media/cache-java-get-started/azure-cache-redis-delete-resource-group.png" alt-text="Screenshot of the Azure portal that shows the Resource groups page with the Delete resource group button highlighted." lightbox="media/cache-java-get-started/azure-cache-redis-delete-resource-group.png":::
-1. You'll be asked to confirm the deletion of the resource group. Type the name of your resource group to confirm, and select **Delete**.
+1. Type the name of your resource group to confirm deletion and then select **Delete**.
After a few moments, the resource group and all of its contained resources are deleted.
azure-cache-for-redis Cache Java Redisson Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-java-redisson-get-started.md
+
+ Title: "Quickstart: Use Azure Cache for Redis in Java with Redisson Redis client"
+description: In this quickstart, you create a new Java app that uses Azure Cache for Redis and Redisson as the Redis client.
++ Last updated : 01/18/2024++
+ms.devlang: java
+
+#Customer intent: As a Java developer, new to Azure Cache for Redis, I want to create a new Java app that uses Azure Cache for Redis and Redisson as the Redis client.
++
+# Quickstart: Use Azure Cache for Redis in Java with Redisson Redis client
+
+In this quickstart, you incorporate Azure Cache for Redis into a Java app using the [Redisson](https://redisson.org/) Redis client and JCP standard JCache API. These services give you access to a secure, dedicated cache that is accessible from any application within Azure. This article provides two options for selecting the Azure identity to use for the Redis connection.
+
+## Skip to the code on GitHub
+
+This quickstart uses the Maven archetype feature to generate the scaffolding for the app. The quickstart directs you to modify the generated code to arrive at the working sample app. If you want to skip straight to the completed code, see the [Java quickstart](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/quickstart/java-redisson-jcache) on GitHub.
+
+## Prerequisites
+
+- Azure subscription - [create one for free](https://azure.microsoft.com/free/)
+- [Use Microsoft Entra ID for cache authentication](cache-azure-active-directory-for-authentication.md)
+- [Apache Maven](https://maven.apache.org/download.cgi)
+
+## Create an Azure Cache for Redis
+++
+## Set up the working environment
+
+The steps in this section show you two options for how to select the Azure identity used for the Redis connection. The sample code looks at the value of the `AUTH_TYPE` environment variable and takes action depending on the value.
+
+### Identity option 1: Authentication with Redis Key
+
+Depending on your operating system, add environment variables for your cache's host name and primary access key. Open a command prompt, or a terminal window, and set up the following values:
+
+### [Linux](#tab/bash)
+
+```bash
+export REDIS_CACHE_HOSTNAME=<your-host-name>.redis.cache.windows.net
+export REDIS_CACHE_KEY=<your-primary-access-key>
+export AUTH_TYPE=RedisKey
+```
+
+### [Windows](#tab/cmd)
+
+```cmd
+set REDIS_CACHE_HOSTNAME=<your-host-name>.redis.cache.windows.net
+set REDIS_CACHE_KEY=<your-primary-access-key>
+set AUTH_TYPE=RedisKey
+```
+++
+Replace the placeholders with the following values:
+
+- `<your-host-name>`: The DNS host name, obtained from the *Properties* section of your Azure Cache for Redis resource in the Azure portal.
+- `<your-primary-access-key>`: The primary access key, obtained from the *Access keys* section of your Azure Cache for Redis resource in the Azure portal.
+
+### Identity option 2: Authentication with Microsoft Entra ID
+
+Depending on your operating system, add environment variables for your cache's host name and user name. Open a command prompt, or a terminal window, and set up the following values:
+
+### [Linux](#tab/bash)
+
+```bash
+export REDIS_CACHE_HOSTNAME=<your-host-name>.redis.cache.windows.net
+export USER_NAME=<user-name>
+export AUTH_TYPE=MicrosoftEntraID
+```
+
+### [Windows](#tab/cmd)
+
+```cmd
+set REDIS_CACHE_HOSTNAME=<your-host-name>.redis.cache.windows.net
+set USER_NAME=<user-name>
+set AUTH_TYPE=MicrosoftEntraID
+```
+++
+Replace the placeholders with the following values:
+
+- `<your-host-name>`: The DNS host name, obtained from the *Properties* section of your Azure Cache for Redis resource in the Azure portal.
+- `<user-name>`: Object ID of your managed identity or service principal.
+ - You can get the user name by using the following steps:
+
+ 1. In the Azure portal, navigate to your Azure Cache for Redis instance.
+ 1. On the navigation pane, select **Data Access Configuration**.
+ 1. On the **Redis Users** tab, find the **Username** column.
+
+ :::image type="content" source="media/cache-java-redisson-get-started/user-name.png" alt-text="Screenshot of the Azure portal that shows the Azure Cache for Redis Data Access Configuration page with the Redis Users tab and a Username value highlighted." lightbox="media/cache-java-redisson-get-started/user-name.png":::
+
+## Create a new Java app
+
+Using Maven, generate a new quickstart app:
+
+### [Linux](#tab/bash)
+
+```bash
+mvn archetype:generate \
+ -DarchetypeGroupId=org.apache.maven.archetypes \
+ -DarchetypeArtifactId=maven-archetype-quickstart \
+ -DarchetypeVersion=1.3 \
+ -DinteractiveMode=false \
+ -DgroupId=example.demo \
+ -DartifactId=redis-redisson-test \
+ -Dversion=1.0
+```
+
+### [Windows](#tab/cmd)
+
+```cmd
+mvn archetype:generate \
+ -DarchetypeGroupId=org.apache.maven.archetypes \
+ -DarchetypeArtifactId=maven-archetype-quickstart \
+ -DarchetypeVersion=1.3 \
+ -DinteractiveMode=false \
+ -DgroupId=example.demo \
+ -DartifactId=redis-redisson-test \
+ -Dversion=1.0
+```
+++
+Change to the new *redis-redisson-test* project directory.
+
+Open the *pom.xml* file and add dependencies for the Azure Identity client library and [Redisson](https://github.com/redisson/redisson#maven):
+
+```xml
+ <dependency>
+ <groupId>com.azure</groupId>
+ <artifactId>azure-identity</artifactId>
+ <version>1.8.2</version>
+ </dependency>
+
+ <dependency>
+ <groupId>org.redisson</groupId>
+ <artifactId>redisson</artifactId>
+ <version>3.24.3</version>
+ </dependency>
+```
+
+Save the *pom.xml* file.
+
+Open *App.java* and replace the code with the following code:
+
+```java
+package example.demo;
+
+import com.azure.core.credential.TokenRequestContext;
+import com.azure.identity.DefaultAzureCredential;
+import com.azure.identity.DefaultAzureCredentialBuilder;
+import org.redisson.Redisson;
+import org.redisson.api.RedissonClient;
+import org.redisson.config.Config;
+import org.redisson.jcache.configuration.RedissonConfiguration;
+
+import javax.cache.Cache;
+import javax.cache.CacheManager;
+import javax.cache.Caching;
+import javax.cache.configuration.Configuration;
+import javax.cache.configuration.MutableConfiguration;
+import java.time.LocalDateTime;
++
+/**
+ * Redis test
+ *
+ */
+public class App {
+ public static void main(String[] args) {
+
+ Config redissonconfig = getConfig();
+
+ RedissonClient redissonClient = Redisson.create(redissonconfig);
+
+ MutableConfiguration<String, String> jcacheConfig = new MutableConfiguration<>();
+ Configuration<String, String> config = RedissonConfiguration.fromInstance(redissonClient, jcacheConfig);
+
+ // Perform cache operations using JCache
+ CacheManager manager = Caching.getCachingProvider().getCacheManager();
+ Cache<String, String> map = manager.createCache("test", config);
+
+ // Simple get and put of string data into the cache
+ System.out.println("\nCache Command : GET Message");
+ System.out.println("Cache Response : " + map.get("Message"));
+
+ System.out.println("\nCache Command : SET Message");
+ map.put("Message",
+ String.format("Hello! The cache is working from Java! %s", LocalDateTime.now()));
+
+ // Demonstrate "SET Message" executed as expected
+ System.out.println("\nCache Command : GET Message");
+ System.out.println("Cache Response : " + map.get("Message"));
+
+ redissonClient.shutdown();
+ }
+
+ private static Config getConfig(){
+ if ("MicrosoftEntraID".equals(System.getenv("AUTH_TYPE"))) {
+ System.out.println("Auth with Microsoft Entra ID");
+ return getConfigAuthWithAAD();
+ } else if ("RedisKey".equals(System.getenv("AUTH_TYPE"))) {
+ System.out.println("Auth with Redis key");
+ return getConfigAuthWithKey();
+ }
+ System.out.println("Auth with Redis key");
+ return getConfigAuthWithKey();
+ }
+
+ private static Config getConfigAuthWithKey() {
+ // Connect to the Azure Cache for Redis over the TLS/SSL port using the key
+ Config redissonconfig = new Config();
+ redissonconfig.useSingleServer().setPassword(System.getenv("REDIS_CACHE_KEY"))
+ .setAddress(String.format("rediss://%s:6380", System.getenv("REDIS_CACHE_HOSTNAME")));
+ return redissonconfig;
+ }
+
+ private static Config getConfigAuthWithAAD() {
+        // Construct a TokenCredential from the Azure Identity library, for example DefaultAzureCredential, ClientSecretCredential, ClientCertificateCredential, or ManagedIdentityCredential.
+ DefaultAzureCredential defaultAzureCredential = new DefaultAzureCredentialBuilder().build();
+
+ // Fetch a Microsoft Entra token to be used for authentication.
+ String token = defaultAzureCredential
+ .getToken(new TokenRequestContext()
+ .addScopes("acca5fbb-b7e4-4009-81f1-37e38fd66d78/.default")).block().getToken();
+
+        // Connect to the Azure Cache for Redis over the TLS/SSL port using the Microsoft Entra token
+ Config redissonconfig = new Config();
+ redissonconfig.useSingleServer()
+ .setAddress(String.format("rediss://%s:6380", System.getenv("REDIS_CACHE_HOSTNAME")))
+ .setUsername(System.getenv("USER_NAME")) // (Required) Username is Object ID of your managed identity or service principal
+ .setPassword(token); // Microsoft Entra access token as password is required.
+ return redissonconfig;
+ }
+
+}
+```
+
+This code shows you how to connect to an Azure Cache for Redis instance using Microsoft Entra ID with the JCache API support from the Redisson client library. The code also stores and retrieves a string value in the cache. For more information on JCache, see the [JCache specification](https://jcp.org/en/jsr/detail?id=107).
+
+Save *App.java*.
+
+## Build and run the app
+
+Execute the following Maven command to build and run the app:
+
+### [Linux](#tab/bash)
+
+```bash
+mvn compile exec:java -Dexec.mainClass=example.demo.App
+```
+
+### [Windows](#tab/cmd)
+
+```cmd
+mvn compile exec:java -Dexec.mainClass=example.demo.App
+```
+++
+In the following output, you can see that the `Message` key previously had a cached value, which was set in the last run. The app updated that cached value.
+
+```output
+Cache Command : GET Message
+Cache Response : Hello! The cache is working from Java! 2023-12-05T15:13:11.398873
+
+Cache Command : SET Message
+
+Cache Command : GET Message
+Cache Response : Hello! The cache is working from Java! 2023-12-05T15:45:45.748667
+```
+
+## Clean up resources
+
+If you plan to continue with the next tutorial, you can keep the resources created in this quickstart and reuse them.
+
+Otherwise, if you're finished with the quickstart sample application, you can delete the Azure resources created in this quickstart to avoid charges.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible, and the resource group and all the resources in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources. If you created the resources for hosting this sample inside an existing resource group that contains resources you want to keep, you can delete each resource individually instead of deleting the resource group.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and select **Resource groups**.
+
+1. In the **Filter by name** textbox, type the name of your resource group. The instructions for this article used a resource group named `TestResources`. On your resource group in the result list, select **TestResources**, and then select **Delete resource group**.
+
+ :::image type="content" source="media/cache-java-redisson-get-started/redis-cache-delete-resource-group.png" alt-text="Screenshot of the Azure portal that shows the Resource group page with the Delete resource group button highlighted." lightbox="media/cache-java-redisson-get-started/redis-cache-delete-resource-group.png":::
+
+1. Type the name of your resource group to confirm deletion and then select **Delete**.
+
+After a few moments, the resource group and all of its contained resources are deleted.
+
+## Next steps
+
+In this quickstart, you learned how to use Azure Cache for Redis from a Java application with the Redisson Redis client and JCache. Continue to the next quickstart to use Azure Cache for Redis with an ASP.NET web app.
+
+> [!div class="nextstepaction"]
+> [Create an ASP.NET web app that uses an Azure Cache for Redis.](./cache-web-app-howto.md)
+> [!div class="nextstepaction"]
+> [Use Java with Azure Cache for Redis on Azure Kubernetes Service](/azure/developer/java/ee/how-to-deploy-java-liberty-jcache)
azure-government Documentation Government Cognitiveservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-cognitiveservices.md
This article provides developer guidance for using Computer Vision, Face API, Te
<a name='part-1-provision-cognitive-services-accounts'></a>
-## Part 1: Provision Azure AI services accounts
+## Provision Azure AI services accounts
In order to access any of the Azure AI services APIs, you must first provision an Azure AI services account for each of the APIs you want to access. You can create Azure AI services in the [Azure Government portal](https://portal.azure.us/), or you can use Azure PowerShell to access the APIs and services as described in this article.
In order to access any of the Azure AI services APIs, you must first provision a
```powershell Register-AzResourceProvider -ProviderNamespace Microsoft.CognitiveServices ```
-2. In the PowerShell command below, replace `rg-name`, `name-of-your-api`, and `location-of-resourcegroup` with your relevant account information.
+2. In the PowerShell command below, replace `<rg-name>`, `<name-of-your-api>`, and `<location-of-resourcegroup>` with your relevant account information.
Replace the `type of API` tag with any of the following APIs you want to access: - ComputerVision - Face
- - TextAnalytics
+ - Language
- TextTranslation
+ - OpenAI
```powershell
- New-AzCognitiveServicesAccount -ResourceGroupName 'rg-name' -name 'name-of-your-api' -Type <type of API> -SkuName S0 -Location 'location-of-resourcegroup'
+ New-AzCognitiveServicesAccount -ResourceGroupName '<rg-name>' -name '<name-of-your-api>' -Type <type of API> -SkuName S0 -Location '<location-of-resourcegroup>'
``` Example:
In order to access any of the Azure AI services APIs, you must first provision a
3. Copy and save the "Endpoint" attribute somewhere as you will need it when making calls to the API.
-### Retrieve Account Key
+### Retrieve account key
You must retrieve an account key to access the specific API.
Copy and save the first key somewhere as you will need it to make calls to the A
Now you are ready to make calls to the APIs.
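
As an illustration only, here's a minimal sketch of calling an API with the endpoint and key you saved. It uses the Text Analytics v3.1 language-detection path as an example; other Azure AI services use different paths and API versions, and the endpoint and key values shown are placeholders.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LanguageDetectionSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute the "Endpoint" attribute and key you saved earlier.
        String endpoint = "https://<name-of-your-api>.cognitiveservices.azure.us";
        String key = "<account-key>";

        // Simple language-detection request body (Text Analytics v3.1 shape).
        String body = "{\"documents\":[{\"id\":\"1\",\"text\":\"Hello world\"}]}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(endpoint + "/text/analytics/v3.1/languages"))
                .header("Ocp-Apim-Subscription-Key", key)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```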
-## Part 2: API Quickstarts
+## Follow API quickstarts
-The Quickstarts below will help you to get started with the APIs available through Azure AI services in Azure Government.
+The following quickstarts help you get started with the APIs available through Azure AI services in Azure Government.
-
-## Computer Vision
-
-### Prerequisites
--- Get the [Microsoft Computer Vision API Windows SDK](https://github.com/Microsoft/Cognitive-vision-windows).--- Make sure Visual Studio has been installed:
- - [Visual Studio 2019](https://www.visualstudio.com/vs/), including the **Azure development** workload.
-
- >[!NOTE]
- > After you install or upgrade to Visual Studio 2019, you might also need to manually update the Visual Studio 2019 tools for Azure Functions. You can update the tools from the **Tools** menu under **Extensions and Updates...** > **Updates** > **Visual Studio Marketplace** > **Azure Functions and Web Jobs Tools** > **Update**.
- >
- >
-
-### Variations
--- The URI for accessing Computer Vision in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).-
-### Analyze an image with Computer Vision using C#
-
-With the [Analyze Image method](https://westcentralus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa), you can extract visual features based on image content. You can upload an image or specify an image URL and choose which features to return, including:
--- A detailed list of tags related to the image content.-- A description of image content in a complete sentence.-- The coordinates, gender, and age of any faces contained in the image.-- The ImageType (clip art or a line drawing).-- The dominant color, the accent color, or whether an image is black & white.-- The category defined in this [taxonomy](../ai-services/computer-vision/category-taxonomy.md).-- Does the image contain adult or sexually suggestive content?-
-### Analyze an image C# example request
-
-1. Create a new Console solution in Visual Studio.
-2. Replace Program.cs with the following code.
-3. Change the `uriBase` to the "Endpoint" attribute that you saved from Part 1, and keep the "/analyze" after the endpoint.
-4. Replace the `subscriptionKey` value with your valid subscription key.
-5. Run the program.
-
-```csharp
-using System;
-using System.IO;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text;
-
-namespace VisionApp1
-{
- static class Program
- {
- // **********************************************
- // *** Update or verify the following values. ***
- // **********************************************
-
- // Replace the subscriptionKey string value with your valid subscription key.
- const string subscriptionKey = "<subscription key>";
-
- //Copy and paste the "Endpoint" attribute that you saved before into the uriBase string "/analyze" at the end.
- //Example: https://virginia.api.cognitive.microsoft.us/vision/v1.0/analyze
-
- const string uriBase = "<endpoint>/analyze";
-
- static void Main()
- {
- // Get the path and filename to process from the user.
- Console.WriteLine("Analyze an image:");
- Console.Write("Enter the path to an image you wish to analyze: ");
- string imageFilePath = Console.ReadLine();
-
- // Execute the REST API call.
- MakeAnalysisRequest(imageFilePath);
-
- Console.WriteLine("\nPlease wait a moment for the results to appear. Then, press Enter to exit...\n");
- Console.ReadLine();
- }
--
- /// <summary>
- /// Gets the analysis of the specified image file by using the Computer Vision REST API.
- /// </summary>
- /// <param name="imageFilePath">The image file.</param>
- static async void MakeAnalysisRequest(string imageFilePath)
- {
- HttpClient client = new HttpClient();
-
- // Request headers.
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
-
- // Request parameters. A third optional parameter is "details".
- string requestParameters = "visualFeatures=Categories,Description,Color&language=en";
-
- // Assemble the URI for the REST API Call.
- string uri = uriBase + "?" + requestParameters;
-
- HttpResponseMessage response;
-
- // Request body. Posts a locally stored JPEG image.
- byte[] byteData = GetImageAsByteArray(imageFilePath);
-
- using (ByteArrayContent content = new ByteArrayContent(byteData))
- {
- // This example uses content type "application/octet-stream".
- // The other content types you can use are "application/json" and "multipart/form-data".
- content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
-
- // Execute the REST API call.
- response = await client.PostAsync(uri, content);
-
- // Get the JSON response.
- string contentString = await response.Content.ReadAsStringAsync();
-
- // Display the JSON response.
- Console.WriteLine("\nResponse:\n");
- Console.WriteLine(JsonPrettyPrint(contentString));
- }
- }
--
- /// <summary>
- /// Returns the contents of the specified file as a byte array.
- /// </summary>
- /// <param name="imageFilePath">The image file to read.</param>
- /// <returns>The byte array of the image data.</returns>
- static byte[] GetImageAsByteArray(string imageFilePath)
- {
- FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
- BinaryReader binaryReader = new BinaryReader(fileStream);
- return binaryReader.ReadBytes((int)fileStream.Length);
- }
--
- /// <summary>
- /// Formats the given JSON string by adding line breaks and indents.
- /// </summary>
- /// <param name="json">The raw JSON string to format.</param>
- /// <returns>The formatted JSON string.</returns>
- static string JsonPrettyPrint(string json)
- {
- if (string.IsNullOrEmpty(json))
- return string.Empty;
-
- json = json.Replace(Environment.NewLine, "").Replace("\t", "");
-
- StringBuilder sb = new StringBuilder();
- bool quote = false;
- bool ignore = false;
- int offset = 0;
- int indentLength = 3;
-
- foreach (char ch in json)
- {
- switch (ch)
- {
- case '"':
- if (!ignore) quote = !quote;
- break;
- case '\'':
- if (quote) ignore = !ignore;
- break;
- }
-
- if (quote)
- sb.Append(ch);
- else
- {
- switch (ch)
- {
- case '{':
- case '[':
- sb.Append(ch);
- sb.Append(Environment.NewLine);
- sb.Append(new string(' ', ++offset * indentLength));
- break;
- case '}':
- case ']':
- sb.Append(Environment.NewLine);
- sb.Append(new string(' ', --offset * indentLength));
- sb.Append(ch);
- break;
- case ',':
- sb.Append(ch);
- sb.Append(Environment.NewLine);
- sb.Append(new string(' ', offset * indentLength));
- break;
- case ':':
- sb.Append(ch);
- sb.Append(' ');
- break;
- default:
- if (ch != ' ') sb.Append(ch);
- break;
- }
- }
- }
-
- return sb.ToString().Trim();
- }
- }
- }
-```
-### Analyze an Image response
-
-A successful response is returned in JSON. Shown below is an example of a successful response:
-
-```json
-
-{
- "categories": [
- {
- "name": "people_baby",
- "score": 0.52734375
- },
- {
- "name": "people_young",
- "score": 0.4375
- }
- ],
- "description": {
- "tags": [
- "person",
- "indoor",
- "clothing",
- "woman",
- "white",
- "table",
- "food",
- "girl",
- "smiling",
- "posing",
- "holding",
- "black",
- "sitting",
- "young",
- "plate",
- "hair",
- "wearing",
- "cake",
- "large",
- "shirt",
- "dress",
- "eating",
- "standing",
- "blue"
- ],
- "captions": [
- {
- "text": "a woman posing for a picture",
- "confidence": 0.460196158842535
- }
- ]
- },
- "requestId": "7c20cc50-f5eb-453b-abb5-98378917431c",
- "metadata": {
- "width": 721,
- "height": 960,
- "format": "Jpeg"
- },
- "color": {
- "dominantColorForeground": "Black",
- "dominantColorBackground": "White",
- "dominantColors": [
- "White"
- ],
- "accentColor": "7C4F57",
- "isBWImg": false
- }
-}
-```
-For more information, see [public documentation](../ai-services/computer-vision/index.yml) and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/56f91f2d778daf23d8ec6739/operations/56f91f2e778daf14a499e1fa) for Computer Vision.
-
-## Face API
-
-### Prerequisites
--- Get the [Microsoft Face API Windows SDK](https://www.nuget.org/packages/Microsoft.ProjectOxford.Face/).--- Make sure Visual Studio has been installed:
- - [Visual Studio 2019](https://www.visualstudio.com/vs/), including the **Azure development** workload.
-
- >[!NOTE]
- > After you install or upgrade to Visual Studio 2019, you might also need to manually update the Visual Studio 2019 tools for Azure Functions. You can update the tools from the **Tools** menu under **Extensions and Updates...** > **Updates** > **Visual Studio Marketplace** > **Azure Functions and Web Jobs Tools** > **Update**.
- >
- >
-
-### Variations
--- The URI for accessing the Face API in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).-
-### Detect faces in images with Face API using C#
-
-Use the [Face - Detect method](https://westcentralus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) to detect faces in an image and return face attributes including:
--- Face ID: Unique ID used in several Face API scenarios. -- Face Rectangle: The left, top, width, and height indicating the location of the face in the image.-- Landmarks: An array of 27-point face landmarks pointing to the important positions of face components.-- Facial attributes including age, gender, smile intensity, head pose, and facial hair. -
-### Face detect C# example request
-
-The sample is written in C# using the Face API client library.
-
-1. Create a new Console solution in Visual Studio.
-2. Replace Program.cs with the following code.
-3. Replace the `subscriptionKey` value with the key value that you retrieved above.
-4. Change the `uriBase` value to the "Endpoint" attribute you retrieved above.
-5. Run the program.
-6. Enter the path to an image on your hard drive.
-
-```csharp
-
-using System;
-using System.IO;
-using System.Net.Http;
-using System.Net.Http.Headers;
-using System.Text;
-
-namespace FaceApp1
-{
-
- static class Program
- {
- // **********************************************
- // *** Update or verify the following values. ***
- // **********************************************
-
- // Replace the subscriptionKey string value with your valid subscription key.
- const string subscriptionKey = "<subscription key>";
-
- //Copy and paste the "Endpoint" attribute that you saved before into the uriBase string "/detect" at the end.
- //Example: https://virginia.api.cognitive.microsoft.us/face/v1.0/detect
- const string uriBase ="<endpoint>/detect";
-
- static void Main()
- {
- // Get the path and filename to process from the user.
- Console.WriteLine("Detect faces:");
- Console.Write("Enter the path to an image with faces that you wish to analzye: ");
- string imageFilePath = Console.ReadLine();
-
- // Execute the REST API call.
- MakeAnalysisRequest(imageFilePath);
-
- Console.WriteLine("\nPlease wait a moment for the results to appear. Then, press Enter to exit...\n");
- Console.ReadLine();
- }
--
- /// <summary>
- /// Gets the analysis of the specified image file by using the Computer Vision REST API.
- /// </summary>
- /// <param name="imageFilePath">The image file.</param>
- static async void MakeAnalysisRequest(string imageFilePath)
- {
- HttpClient client = new HttpClient();
-
- // Request headers.
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
-
- // Request parameters. A third optional parameter is "details".
- string requestParameters = "returnfaceId=true&returnfaceLandmarks=false&returnfaceAttributes=age,gender,headPose,smile,facialHair,glasses,emotion";
-
- // Assemble the URI for the REST API Call.
- string uri = uriBase + "?" + requestParameters;
-
- HttpResponseMessage response;
-
- // Request body. Posts a locally stored JPEG image.
- byte[] byteData = GetImageAsByteArray(imageFilePath);
-
- using (ByteArrayContent content = new ByteArrayContent(byteData))
- {
- // This example uses content type "application/octet-stream".
- // The other content types you can use are "application/json" and "multipart/form-data".
- content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
-
- // Execute the REST API call.
- response = await client.PostAsync(uri, content);
-
- // Get the JSON response.
- string contentString = await response.Content.ReadAsStringAsync();
-
- // Display the JSON response.
- Console.WriteLine("\nResponse:\n");
- Console.WriteLine(JsonPrettyPrint(contentString));
- }
- }
--
- /// <summary>
- /// Returns the contents of the specified file as a byte array.
- /// </summary>
- /// <param name="imageFilePath">The image file to read.</param>
- /// <returns>The byte array of the image data.</returns>
- static byte[] GetImageAsByteArray(string imageFilePath)
- {
- FileStream fileStream = new FileStream(imageFilePath, FileMode.Open, FileAccess.Read);
- BinaryReader binaryReader = new BinaryReader(fileStream);
- return binaryReader.ReadBytes((int)fileStream.Length);
- }
--
- /// <summary>
- /// Formats the given JSON string by adding line breaks and indents.
- /// </summary>
- /// <param name="json">The raw JSON string to format.</param>
- /// <returns>The formatted JSON string.</returns>
- static string JsonPrettyPrint(string json)
- {
- if (string.IsNullOrEmpty(json))
- return string.Empty;
-
- json = json.Replace(Environment.NewLine, "").Replace("\t", "");
-
- StringBuilder sb = new StringBuilder();
- bool quote = false;
- bool ignore = false;
- int offset = 0;
- int indentLength = 3;
-
- foreach (char ch in json)
- {
- switch (ch)
- {
- case '"':
- if (!ignore) quote = !quote;
- break;
- case '\'':
- if (quote) ignore = !ignore;
- break;
- }
-
- if (quote)
- sb.Append(ch);
- else
- {
- switch (ch)
- {
- case '{':
- case '[':
- sb.Append(ch);
- sb.Append(Environment.NewLine);
- sb.Append(new string(' ', ++offset * indentLength));
- break;
- case '}':
- case ']':
- sb.Append(Environment.NewLine);
- sb.Append(new string(' ', --offset * indentLength));
- sb.Append(ch);
- break;
- case ',':
- sb.Append(ch);
- sb.Append(Environment.NewLine);
- sb.Append(new string(' ', offset * indentLength));
- break;
- case ':':
- sb.Append(ch);
- sb.Append(' ');
- break;
- default:
- if (ch != ' ') sb.Append(ch);
- break;
- }
- }
- }
-
- return sb.ToString().Trim();
- }
- }
-}
-```
-### Face detect response
-
-A successful response is returned in JSON. Shown below is an example of a successful response:
-
-```json
-Response:
-[
- {
- "faceId": "0ed7f4db-1207-40d4-be2e-84694e42d682",
- "faceRectangle": {
- "top": 60,
- "left": 83,
- "width": 361,
- "height": 361
- },
- "faceAttributes": {
- "smile": 0.284,
- "headPose": {
- "pitch": 0.0,
- "roll": -12.2,
- "yaw": -16.7
- },
- "gender": "female",
- "age": 16.5,
- "facialHair": {
- "moustache": 0.0,
- "beard": 0.0,
- "sideburns": 0.0
- },
- "glasses": "NoGlasses",
- "emotion": {
- "anger": 0.003,
- "contempt": 0.001,
- "disgust": 0.001,
- "fear": 0.002,
- "happiness": 0.284,
- "neutral": 0.694,
- "sadness": 0.012,
- "surprise": 0.004
- }
- }
- }
-]
-```
-For more information, see [public documentation](../ai-services/computer-vision/overview-identity.md), and [public API documentation](https://westus.dev.cognitive.microsoft.com/docs/services/563879b61984550e40cbbe8d/operations/563879b61984550f30395236) for Face API.
--
-## Text Analytics
-
-For instructions on how to use Text Analytics, see [Quickstart: Use the Text Analytics client library and REST API](../ai-services/language-service/language-detection/overview.md?tabs=version-3-1&pivots=programming-language-csharp).
-
-### Variations
--- The URI for accessing Text Analytics in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).--
-## Translator
-
-### Prerequisites
--- Make sure Visual Studio has been installed:
- - [Visual Studio 2019](https://www.visualstudio.com/vs/), including the **Azure development** workload.
-
- >[!NOTE]
- > After you install or upgrade to Visual Studio 2019, you might also need to manually update the Visual Studio 2019 tools for Azure Functions. You can update the tools from the **Tools** menu under **Extensions and Updates...** > **Updates** > **Visual Studio Marketplace** > **Azure Functions and Web Jobs Tools** > **Update**.
- >
- >
-
-### Variations
--- The URI for accessing Translator in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).-- [Virtual Network support](../ai-services/cognitive-services-virtual-networks.md) for Translator service is limited to only `US Gov Virginia` region.
- The URI for accessing the API is:
- - `https://<your-custom-domain>.cognitiveservices.azure.us/translator/text/v3.0`
- - You can find your custom domain endpoint in the overview blade on the Azure Government portal once the resource is created.
-- There are 2 regions: `US Gov Virginia` and `US Gov Arizona`.-
-### Text translation method
-
-The below example uses [Text Translation - Translate method](../ai-services/translator/reference/v3-0-translate.md) to translate a string of text from a language into another specified language. There are multiple [language codes](https://api.cognitive.microsofttranslator.com/languages?api-version=3.0&scope=translation) that can be used with Translator.
-
-### Text translation C# example request
-
-The sample is written in C#.
-
-1. Create a new Console solution in Visual Studio.
-2. Replace Program.cs with the corresponding code below.
-3. Replace the `endpoint` value with the URI as explained in the `Variations` section.
-4. Replace the `subscriptionKey` value with the key value that you retrieved above.
-5. Replace the `region` value with the region value where you created your translator resource.
-6. Replace the `text` value with text that you want to translate.
-7. Run the program.
-
-You can also test out different languages and texts by replacing the `text`, `from`, and `to` variables in Program.cs.
-
-```csharp
-using System;
-using System.Collections.Generic;
-using Microsoft.Rest;
-using System.Net.Http;
-using System.Threading;
-using System.Threading.Tasks;
-using System.Net;
-using System.IO;
-using Newtonsoft.Json;
-using System.Text;
-
-namespace TextTranslator
-{
- class Program
- {
- static string host = "PASTE ENDPOINT HERE";
- static string path = "/translate?api-version=3.0";
- // Translate to German.
- static string params_ = "&to=de";
-
- static string uri = host + path + params_;
-
- // NOTE: Replace this example key with a valid subscription key.
- static string key = "PASTE KEY HERE";
-
- // NOTE: Replace this example region with a valid region.
- static string region = "PASTE REGION HERE";
-
- static string text = "Hello world!";
-
- async static void Translate()
- {
- System.Object[] body = new System.Object[] { new { Text = text } };
- var requestBody = JsonConvert.SerializeObject(body);
-
- using (var client = new HttpClient())
- using (var request = new HttpRequestMessage())
- {
- request.Method = HttpMethod.Post;
- request.RequestUri = new Uri(uri);
- request.Content = new StringContent(requestBody, Encoding.UTF8, "application/json");
- request.Headers.Add("Ocp-Apim-Subscription-Key", key);
- request.Headers.Add("Ocp-Apim-Subscription-Region", region);
-
- var response = await client.SendAsync(request);
- var responseBody = await response.Content.ReadAsStringAsync();
- var result = JsonConvert.SerializeObject(JsonConvert.DeserializeObject(responseBody), Formatting.Indented);
-
- Console.OutputEncoding = UnicodeEncoding.UTF8;
- Console.WriteLine(result);
- }
- }
-
- static void Main(string[] args)
- {
- Translate();
- Console.ReadLine();
- }
- }
-}
-```
-For more information, see [public documentation](../ai-services/translator/translator-overview.md) and [public API documentation](../ai-services/translator/reference/v3-0-reference.md) for Translator.
+> [!NOTE]
+> The URI for accessing Azure AI Services resources in Azure Government is different than in Azure. For a list of Azure Government endpoints, see [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md#guidance-for-developers).
+
+- [Azure AI Vision](../ai-services/computer-vision/index.yml) | [quickstart](/azure/ai-services/computer-vision/quickstarts-sdk/image-analysis-client-library-40?tabs=visual-studio%2Cwindows&pivots=programming-language-csharp)
+- [Azure Face](../ai-services/computer-vision/overview-identity.md) | [quickstart](/azure/ai-services/computer-vision/quickstarts-sdk/identity-client-library?tabs=windows%2Cvisual-studio&pivots=programming-language-rest-api)
+- [Azure AI Language](/azure/ai-services/language-service/) | [quickstart](../ai-services/language-service/language-detection/overview.md?tabs=version-3-1&pivots=programming-language-csharp)
+- [Azure AI Translator](../ai-services/translator/translator-overview.md) | [quickstart](/azure/ai-services/translator/quickstart-text-rest-api?tabs=csharp)
+ > [!NOTE]
+ > [Virtual Network support](../ai-services/cognitive-services-virtual-networks.md) for the Translator service is limited to the `US Gov Virginia` region. The URI for accessing the API is:
+ > - `https://<your-custom-domain>.cognitiveservices.azure.us/translator/text/v3.0`
+ > - You can find your custom domain endpoint in the overview blade on the Azure Government portal once the resource is created.
+ > There are two regions: `US Gov Virginia` and `US Gov Arizona`.
+- [Azure OpenAI](/azure/ai-services/openai/) | [quickstart](/azure/ai-services/openai/chatgpt-quickstart?tabs=command-line%2Cpython&pivots=programming-language-studio)
### Next Steps
azure-government Documentation Government Stig Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-stig-linux-vm.md
Last updated 06/14/2023
# Deploy STIG-compliant Linux Virtual Machines (Preview)
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Microsoft Azure Security Technical Implementation Guides (STIGs) solution templates help you accelerate your [DoD STIG compliance](https://public.cyber.mil/stigs/) by delivering an automated solution to deploy virtual machines and apply STIGs through the Azure portal. This quickstart shows how to deploy a STIG-compliant Linux virtual machine (Preview) on Azure or Azure Government using the corresponding portal.
Sign in at the [Azure portal](https://portal.azure.com/) or [Azure Government po
1. The deployed virtual machine can be found in the resource group used for the deployment. Since inbound RDP is disallowed, Azure Bastion must be used to connect to the VM. ## High availability and resiliency
-
+ Our solution template creates a single instance virtual machine using premium or standard operating system disk, which supports [SLA for Virtual Machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
-
+ We recommend you deploy multiple instances of virtual machines configured behind Azure Load Balancer and/or Azure Traffic Manager for higher availability and resiliency.
-
+ ## Business continuity and disaster recovery (BCDR)
-
+ As an organization you need to adopt a business continuity and disaster recovery (BCDR) strategy that keeps your data safe, and your apps and workloads online, when planned and unplanned outages occur.
-
+ [Azure Site Recovery](../site-recovery/site-recovery-overview.md) helps ensure business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines from a primary site to a secondary location. When an outage occurs at your primary site, you fail over to secondary location, and access apps from there. After the primary location is running again, you can fail back to it.
-
+ Site Recovery can manage replication for:
-
+ - Azure VMs replicating between Azure regions. - On-premises VMs, Azure Stack VMs, and physical servers.
-
+ To learn more about backup and restore options for virtual machines in Azure, continue to [Overview of backup options for VMs](../virtual-machines/backup-recovery.md). ## Clean up resources
azure-linux Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/faq.md
# Frequently asked questions about the Azure Linux Container Host for AKS
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article answers common questions about the Azure Linux Container Host. ## General FAQs
If you choose to manually upgrade your node image instead of using automatic nod
### Some packages (CNCF, K8s) have a more aggressive release cycle, and I don't want to be up to a year behind. Does the Azure Linux Container Host have any plans for more frequent upgrades?
-The Azure Linux Container Host adopts newer CNCF packages like K8s with higher cadence and doesn't delay them for annual releases. However, major compiler upgrades or deprecating language stacks like Python 2.7x may be held for major releases.
+The Azure Linux Container Host adopts newer CNCF packages like K8s with higher cadence and doesn't delay them for annual releases. However, major compiler upgrades or deprecating language stacks like Python 2.7x may be held for major releases.
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Workspace-based resources:
> [!IMPORTANT] > * On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation. > -
->
-> * You can enable [diagnostic settings on classic Application Insights]() before you [migrate to a workspace-based Application Insights resource](convert-classic-resource.md).All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry).
->
+> * [Workspace-based Application Insights resources](./create-workspace-resource.md) aren't compatible with continuous export. We recommend migrating to [diagnostic settings](../essentials/diagnostic-settings.md) on classic Application Insights resources before transitioning to a workspace-based Application Insights resource. This approach ensures continuity and compatibility of your diagnostic settings.
+>
> * Diagnostic settings export might increase costs. For more information, see [Diagnostic settings-based export](export-telemetry.md#diagnostic-settings-based-export).
->
## New capabilities
No. There's no impact to [Live Metrics](live-stream.md#live-metrics-monitor-and-
### What happens with continuous export after migration?
-To continue with automated exports, you will need to migrate to [diagnostic settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) before migrating to workspace-based resource. The diagnostic setting will carry over in the migration to workspace-based Application Insights.
+To continue with automated exports, you'll need to migrate to [diagnostic settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) before migrating to workspace-based resource. The diagnostic setting carries over in the migration to workspace-based Application Insights.
### How do I ensure a successful migration of my App Insights resource using Terraform?
-If you are using Terraform to manage your Azure resources, it is important to use the latest version of the Terraform azurerm provider before attempting to upgrade your App Insights resource. Using an older version of the provider, such as version 3.12, may result in the deletion of the classic component before creating the replacement workspace-based Application Insights resource. This can cause the loss of previous data and require updating the configurations in your monitored apps with new connection string and instrumentation key values.
+If you're using Terraform to manage your Azure resources, it's important to use the latest version of the Terraform azurerm provider before attempting to upgrade your App Insights resource. Using an older version of the provider, such as version 3.12, may result in the deletion of the classic component before creating the replacement workspace-based Application Insights resource. It can cause the loss of previous data and require updating the configurations in your monitored apps with new connection string and instrumentation key values.
To avoid this issue, make sure to use the latest version of the Terraform [azurerm provider](https://registry.terraform.io/providers/hashicorp/azurerm/latest), version 3.89 or higher, which performs the proper migration steps by issuing the appropriate ARM call to upgrade the App Insights classic resource to a workspace-based resource while preserving all the old data and connection string/instrumentation key values.
+### Can I still use the old API to create Application Insights resources programmatically?
+Yes, calls to the old API for creating Application Insights resources continue to work as before. The old API version doesn't include a reference to the Log Analytics resource. However, when you trigger a legacy API call, it automatically creates a resource and the required association between Application Insights and Log Analytics.
+
+### Should I migrate diagnostic settings on classic Application Insights before moving to a workspace-based resource?
+Yes, we recommend migrating diagnostic settings on classic Application Insights resources before transitioning to a workspace-based Application Insights resource. Doing so ensures continuity and compatibility of your diagnostic settings.
+
+### What is the migration process for Application Insights resources?
+The migration of Application Insights resources to the new format isn't instantaneous on the day of deprecation. Instead, it occurs over time. We'll gradually migrate all Application Insights resources, ensuring a smooth transition with minimal disruption to your services.
+ ## Troubleshooting This section offers troubleshooting tips for common issues. ### Access mode
-**Error message:** "The selected workspace is configured with workspace-based access mode. Some APM features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI."
+**Error message:** "The selected workspace is configured with workspace-based access mode. Some Application Performance Monitoring (APM) features may be impacted. Select another workspace or allow resource-based access in the workspace settings. You can override this error by using CLI."
For your workspace-based Application Insights resource to operate properly, you need to change the access control mode of your target Log Analytics workspace to the **Resource or workspace permissions** setting. This setting is located in the Log Analytics workspace UI under **Properties** > **Access control mode**. For instructions, see the [Log Analytics configure access control mode guidance](../logs/manage-access.md#access-control-mode). If your access control mode is set to the exclusive **Require workspace permissions** setting, migration via the portal migration experience remains blocked.
If you can't change the access control mode for security reasons for your curren
**Error message:** "Continuous Export needs to be disabled before continuing. After migration, use Diagnostic Settings for export."
-The legacy **Continuous export** functionality isn't supported for workspace-based resources. Prior to migrating, you need to enable diagnostic settings and disable continuous export.
+The legacy **Continuous export** functionality isn't supported for workspace-based resources. Before migrating, you need to enable diagnostic settings and disable continuous export.
-1. [Enable Diagnostic Settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) on you classic Application Insights resource.
+1. [Enable Diagnostic Settings](/previous-versions/azure/azure-monitor/app/continuous-export-diagnostic-setting) on your classic Application Insights resource.
1. From your Application Insights resource view, under the **Configure** heading, select **Continuous export**. :::image type="content" source="./media/convert-classic-resource/continuous-export.png" lightbox="./media/convert-classic-resource/continuous-export.png" alt-text="Screenshot that shows the Continuous export menu item.":::
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
There are typically no code changes when upgrading to 3.x. The 3.x SDK dependenc
| `applicationinsights-web-auto` | Replace with `3.4.3` or later of `applicationinsights-web` | | | `applicationinsights-logging-log4j1_2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 1.2 is autoinstrumented in the 3.x Java agent. | | `applicationinsights-logging-log4j2` | Remove the dependency and remove the Application Insights appender from your log4j configuration. | No longer needed since Log4j 2 is autoinstrumented in the 3.x Java agent. |
-| `applicationinsights-logging-log4j1_2` | Remove the dependency and remove the Application Insights appender from your logback configuration. | No longer needed since Logback is autoinstrumented in the 3.x Java agent. |
+| `applicationinsights-logging-logback` | Remove the dependency and remove the Application Insights appender from your logback configuration. | No longer needed since Logback is autoinstrumented in the 3.x Java agent. |
| `applicationinsights-spring-boot-starter` | Replace with `3.4.3` or later of `applicationinsights-web` | The cloud role name will no longer default to `spring.application.name`, see the [3.x configuration docs](./java-standalone-config.md#cloud-role-name) for configuring the cloud role name. | ## Step 2: Add the 3.x Java agent
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
This article provides onboarding guidance for the following types of clusters. A
- [Azure Kubernetes clusters (AKS)](../../aks/intro-kubernetes.md) - [Arc-enabled Kubernetes clusters](../../azure-arc/kubernetes/overview.md)-- [AKS hybrid clusters (preview)](/azure/aks/hybrid/aks-hybrid-options-overview) ## Prerequisites
This article provides onboarding guidance for the following types of clusters. A
- Prerequisites for [Azure Arc-enabled Kubernetes cluster extensions](../../azure-arc/kubernetes/extensions.md#prerequisites). - Verify the [firewall requirements](kubernetes-monitoring-firewall.md) in addition to the [Azure Arc-enabled Kubernetes network requirements](../../azure-arc/kubernetes/network-requirements.md). - If you previously installed monitoring for AKS, ensure that you have [disabled monitoring](kubernetes-monitoring-disable.md) before proceeding to avoid issues during the extension install.
- - If you previously installed monitoring on a cluster using a script without cluster extensions, follow the instructions at [Disable Container insights on your hybrid Kubernetes cluster](container-insights-optout-hybrid.md) to delete this Helm chart.
+ - If you previously installed monitoring on a cluster using a script without cluster extensions, follow the instructions at [Disable monitoring of your Kubernetes cluster](kubernetes-monitoring-disable.md) to delete this Helm chart.
The following command only deletes the extension instance, but doesn't delete th
az k8s-extension delete --name azuremonitor-containers --cluster-type connectedClusters --cluster-name <cluster-name> --resource-group <resource-group> ```
-#### AKS hybrid cluster
--
-```azurecli
-### Use default Log Analytics workspace
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true
-
-### Use existing Log Analytics workspace
-az k8s-extension create --name azuremonitor-containers --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --extension-type Microsoft.AzureMonitor.Containers --configuration-settings amalogs.useAADAuth=true --configuration-settings logAnalyticsWorkspaceResourceID=<workspace-resource-id>
-
-```
-
-See the [resource requests and limits section of Helm chart](https://github.com/microsoft/Docker-Provider/blob/ci_prod/charts/azuremonitor-containers/values.yaml) for the available configuration settings.
-
-**Example**
-
-```azurecli
-az aks enable-addons -a monitoring -n <cluster-name> -g <cluster-resource-group-name> --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace"
-```
-
-**Delete extension instance**
-
-The following command only deletes the extension instance, but doesn't delete the Log Analytics workspace. The data in the Log Analytics resource is left intact.
-
-```azurecli
-az k8s-extension delete --cluster-name <cluster-name> --resource-group <resource-group> --cluster-type provisionedclusters --cluster-resource-provider "microsoft.hybridcontainerservice" --name azuremonitor-containers --yes
-```
- ### [Azure Resource Manager](#tab/arm) Both ARM and Bicep templates are provided in this section.
azure-monitor Create Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/create-diagnostic-settings.md
You can configure diagnostic settings in the Azure portal either from the Azure
1. **Destination details**: Select the checkbox for each destination. Options appear so that you can add more information.
- :::image type="content" source="media/diagnostic-settings/send-to-log-analytics-event-hubs.png" alt-text="Screenshot that shows Send to Log Analytics and Stream to an event hub." border="false":::
+ :::image type="content" source="media/diagnostic-settings/send-to-log-analytics-event-hubs.png" alt-text="Screenshot that shows the available options under the Destination details section." border="false":::
- 1. **Log Analytics**: Enter the subscription and workspace. If you don't have a workspace, you must [create one before you proceed](../logs/quick-create-workspace.md).
-
- 1. **Event Hubs**: Specify the following criteria:
-
- - **Subscription**: The subscription that the event hub is part of.
- - **Event hub namespace**: If you don't have one, you must [create one](../../event-hubs/event-hubs-create.md).
- - **Event hub name (optional)**: The name to send all data to. If you don't specify a name, an event hub is created for each log category. If you're sending to multiple categories, you might want to specify a name to limit the number of event hubs created. For more information, see [Azure Event Hubs quotas and limits](../../event-hubs/event-hubs-quotas.md).
- - **Event hub policy name** (also optional): A policy defines the permissions that the streaming mechanism has. For more information, see [Event Hubs features](../../event-hubs/event-hubs-features.md#publisher-policy).
+ 1. **Send to Log Analytics workspace**: Select your **Subscription** and the **Log Analytics workspace** where you want to send the data. If you don't have a workspace, you must [create one before you proceed](../logs/quick-create-workspace.md).
1. **Archive to a storage account**: Select your **Subscription** and the **Storage account** where you want to store the data.
You can configure diagnostic settings in the Azure portal either from the Azure
> [!TIP] > Use the [Azure Storage Lifecycle Policy](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal) to manage the length of time that your logs are retained. The retention policy as set in the diagnostic setting is now deprecated.
- 1. **Partner integration**: You must first install partner integration into your subscription. Configuration options vary by partner. For more information, see [Azure Monitor partner integrations](../../partner-solutions/overview.md).
+ 1. **Stream to an event hub**: Specify the following criteria:
+
+ - **Subscription**: The subscription that the event hub is part of.
+ - **Event hub namespace**: If you don't have one, you must [create one](../../event-hubs/event-hubs-create.md).
+ - **Event hub name (optional)**: The name to send all data to. If you don't specify a name, an event hub is created for each log category. If you're sending to multiple categories, you might want to specify a name to limit the number of event hubs created. For more information, see [Azure Event Hubs quotas and limits](../../event-hubs/event-hubs-quotas.md).
+ - **Event hub policy name** (also optional): A policy defines the permissions that the streaming mechanism has. For more information, see [Event Hubs features](../../event-hubs/event-hubs-features.md#publisher-policy).
+
+ 1. **Send to partner solution**: You must first install Azure Native ISV Services into your subscription. Configuration options vary by partner. For more information, see [Azure Native ISV Services overview](../../partner-solutions/overview.md).
1. If the service supports both [resource-specific](resource-logs.md#resource-specific) and [Azure diagnostics](resource-logs.md#azure-diagnostics-mode) mode, then an option to select the [destination table](resource-logs.md#select-the-collection-mode) displays when you select **Log Analytics workspace** as a destination. You should usually select **Resource specific** since the table structure allows for more flexibility and more efficient queries.
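If you'd rather script these destinations than click through the portal, the equivalent setting can be created with the Azure CLI. The following is a minimal sketch only: the setting name, resource ID, workspace ID, and event hub values are placeholders, and the categories shown assume your resource supports the `allLogs` category group and `AllMetrics`.

```azurecli
# Create a diagnostic setting that sends logs and metrics to a Log Analytics
# workspace and also streams them to an event hub.
az monitor diagnostic-settings create \
  --name myDiagnosticSetting \
  --resource "<resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --event-hub "<event-hub-name>" \
  --event-hub-rule "<event-hubs-namespace-authorization-rule-id>" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```

Omit the event hub parameters, or the workspace parameter, if you only need one of the destinations.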
When you deploy a diagnostic setting, you receive an error message similar to "M
The problem occurs when you use a Resource Manager template, REST API, the CLI, or Azure PowerShell. Diagnostic settings created via the Azure portal aren't affected because only the supported category names are presented.
-The problem occurs because of a recent change in the underlying API. Metric categories other than **AllMetrics** aren't supported and never were except for a few specific Azure services. In the past, other category names were ignored when deploying a diagnostic setting. The Azure Monitor back end redirected these categories to **AllMetrics**. As of February 2021, the back end was updated to specifically confirm the metric category provided is accurate. This change has caused some deployments to fail.
+The problem occurs because of a recent change in the underlying API. Metric categories other than **AllMetrics** aren't supported and never were except for a few specific Azure services. In the past, other category names were ignored when deploying a diagnostic setting. The Azure Monitor back end redirected these categories to **AllMetrics**. As of February 2021, the back end was updated to specifically confirm the metric category provided is accurate. This change can cause some deployments to fail.
If you receive this error, update your deployments to replace any metric category names with **AllMetrics** to fix the issue. If the deployment was previously adding multiple categories, only keep one with the **AllMetrics** reference. If you continue to have the problem, contact Azure support through the Azure portal.
Diagnostic settings don't support resource IDs with non-ASCII characters. For ex
### Possibility of duplicated or dropped data
-Every effort is made to ensure all log data is sent correctly to your destinations, however it's not possible guarantee 100% data transfer of logs between endpoints. Retries and other mechanisms are in place to work around these issues and attempt to ensure log data arrives at the endpoint.
+Every effort is made to ensure all log data is sent correctly to your destinations, however it's not possible to guarantee 100% data transfer of logs between endpoints. Retries and other mechanisms are in place to work around these issues and attempt to ensure log data arrives at the endpoint.
## Next steps
azure-monitor Diagnostic Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings.md
Currently, there are two category groups:
- **All**: Every resource log offered by the resource. - **Audit**: All resource logs that record customer interactions with data or the settings of the service. Audit logs are an attempt by each resource provider to provide the most relevant audit data, but might not be considered sufficient from an auditing standards perspective depending on your use case. As mentioned above, what's collected is dynamic, and Microsoft may change it over time as new resource log categories become available.
-The "Audit" category is a subset of "All", but the Azure portal and REST API consider them separate settings. Selecting "All" does collect all audit logs regardless of if the "Audit" category is also selected.
+The "Audit" category group is a subset of the "All" category group, but the Azure portal and REST API consider them separate settings. Selecting the "All" category group does collect all audit logs even if the "Audit" category group is also selected.
-The following image shows the logs category groups on the add diagnostics settings page.
+The following image shows the logs category groups on the **Add diagnostics settings** page.
:::image type="content" source="./media/diagnostic-settings/audit-category-group.png" alt-text="A screenshot showing the logs category groups.":::
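When you create the setting programmatically instead of in the portal, the category groups map to `categoryGroup` entries in the logs array. A hedged Azure CLI sketch follows; the resource and workspace IDs are placeholders, and `categoryGroup` support in the `--logs` JSON assumes a reasonably recent CLI version.

```azurecli
# Collect only the "audit" category group. Selecting "allLogs" instead would
# collect every category, including the audit ones.
az monitor diagnostic-settings create \
  --name auditToWorkspace \
  --resource "<resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"categoryGroup": "audit", "enabled": true}]'
```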
To ensure the security of data in transit, all destination endpoints are configu
| [Log Analytics workspace](../logs/workspace-design.md) | Metrics are converted to log form. This option might not be available for all resource types. Sending them to the Azure Monitor Logs store (which is searchable via Log Analytics) helps you to integrate them into queries, alerts, and visualizations with existing log data. | [Azure Storage account](../../storage/blobs/index.yml) | Archiving logs and metrics to a Storage account is useful for audit, static analysis, or back up. Compared to using Azure Monitor Logs or a Log Analytics workspace, Storage is less expensive, and logs can be kept there indefinitely. | | [Azure Event Hubs](../../event-hubs/index.yml) | When you send logs and metrics to Event Hubs, you can stream data to external systems such as third-party SIEMs and other Log Analytics solutions. |
-| [Azure Monitor partner integrations](../../partner-solutions/overview.md)| Specialized integrations can be made between Azure Monitor and other non-Microsoft monitoring platforms. Integration is useful when you're already using one of the partners. |
+| [Azure Monitor partner solutions](../../partner-solutions/overview.md)| Specialized integrations can be made between Azure Monitor and other non-Microsoft monitoring platforms. Integration is useful when you're already using one of the partners. |
## Activity log settings
This section discusses requirements and limitations.
### Time before telemetry gets to destination
-After you set up a diagnostic setting, data should start flowing to your selected destination(s) within 90 minutes. When sending logs to a Log Analytics workspace, the table will be created automatically if it doesn't already exist. The table is only created when the first log records are received. If you get no information within 24 hours, then you might be experiencing one of the following issues:
+After you set up a diagnostic setting, data should start flowing to your selected destination(s) within 90 minutes. When sending logs to a Log Analytics workspace, the table is created automatically if it doesn't already exist. The table is only created when the first log records are received. If you get no information within 24 hours, then you might be experiencing one of the following issues:
- No logs are being generated. - Something is wrong in the underlying routing mechanism.
The following table provides unique requirements for each destination including
| Destination | Requirements | |:|:| | Log Analytics workspace | The workspace doesn't need to be in the same region as the resource being monitored.|
-| Storage account | Don't use an existing storage account that has other, non-monitoring data stored in it. Spliting the types of data up allow you better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To prevent modification of the data, send it to immutable storage. Set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blobs writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.<br><br>[Azure DNS zone endpoints (preview)](../../storage/common/storage-account-overview.md#azure-dns-zone-endpoints-preview) and [Azure Premium LRS](../../storage/common/storage-redundancy.md#locally-redundant-storage) (locally redundant storage) storage accounts aren't supported as a log or metric destination.|
+| Storage account | Don't use an existing storage account that has other, nonmonitoring data stored in it. Splitting up the types of data allows you to better control access to the data. If you're archiving the activity log and resource logs together, you might choose to use the same storage account to keep all monitoring data in a central location.<br><br>To prevent modification of the data, send it to immutable storage. Set the immutable policy for the storage account as described in [Set and manage immutability policies for Azure Blob Storage](../../storage/blobs/immutable-policy-configure-version-scope.md). You must follow all steps in this linked article including enabling protected append blob writes.<br><br>The storage account needs to be in the same region as the resource being monitored if the resource is regional.<br><br> Diagnostic settings can't access storage accounts when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in storage accounts so that the Azure Monitor diagnostic settings service is granted access to your storage account.<br><br>[Azure DNS zone endpoints (preview)](../../storage/common/storage-account-overview.md#azure-dns-zone-endpoints-preview) and [Azure Premium LRS](../../storage/common/storage-redundancy.md#locally-redundant-storage) (locally redundant storage) storage accounts aren't supported as a log or metric destination.|
| Event Hubs | The shared access policy for the namespace defines the permissions that the streaming mechanism has. Streaming to Event Hubs requires Manage, Send, and Listen permissions. To update the diagnostic setting to include streaming, you must have the ListKey permission on that Event Hubs authorization rule.<br><br>The event hub namespace needs to be in the same region as the resource being monitored if the resource is regional. <br><br> Diagnostic settings can't access Event Hubs resources when virtual networks are enabled. You must enable **Allow trusted Microsoft services** to bypass this firewall setting in Event Hubs so that the Azure Monitor diagnostic settings service is granted access to your Event Hubs resources.|
-| Partner integrations | The solutions vary by partner. Check the [Azure Monitor partner integrations documentation](../../partner-solutions/overview.md) for details.
+| Partner solutions | The solutions vary by partner. Check the [Azure Native ISV Services documentation](../../partner-solutions/overview.md) for details.|
> [!CAUTION] > If you want to store diagnostic logs in a Log Analytics workspace, there are two points to consider to avoid seeing duplicate data in Application Insights:
azure-monitor Prometheus Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-metrics-overview.md
Azure Monitor managed service for Prometheus can currently collect data from any
## Enable The only requirement to enable Azure Monitor managed service for Prometheus is to create an [Azure Monitor workspace](azure-monitor-workspace-overview.md), which is where Prometheus metrics are stored. Once this workspace is created, you can onboard services that collect Prometheus metrics. -- To collect Prometheus metrics from your Kubernetes cluster, see [Enable monitoring for Kubernetes clusters](../containers/kubernetes-monitoring-enable.md).
+- To collect Prometheus metrics from your Kubernetes cluster, see [Enable monitoring for Kubernetes clusters](../containers/kubernetes-monitoring-enable.md#enable-prometheus-and-grafana).
- To configure remote-write to collect data from your self-managed Prometheus server, see [Azure Monitor managed service for Prometheus remote write - managed identity](prometheus-remote-write-managed-identity.md). ## Grafana integration
azure-netapp-files Nfs Access Control Lists https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/nfs-access-control-lists.md
# Understand NFSv4.x access control lists in Azure NetApp Files
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The NFSv4.x protocol can provide access control in the form of [access control lists (ACLs)](/windows/win32/secauthz/access-control-lists), which are conceptually similar to ACLs used in [SMB via Windows NTFS permissions](network-attached-file-permissions-smb.md). An NFSv4.x ACL consists of individual [Access Control Entries (ACEs)](/windows/win32/secauthz/access-control-entries), each of which provides an access control directive to the server. :::image type="content" source="./media/nfs-access-control-lists/access-control-entity-to-client-diagram.png" alt-text="Diagram of access control entity to Azure NetApp Files." lightbox="./media/nfs-access-control-lists/access-control-entity-to-client-diagram.png":::
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/install.md
Title: Set up Bicep development and deployment environments description: How to configure Bicep development and deployment environments Previously updated : 11/03/2023 Last updated : 02/08/2024
Let's make sure your environment is set up for working with Bicep files. To auth
| | [VS Code and Bicep extension](#visual-studio-code-and-bicep-extension) | [manual](#install-manually) | | | [Air-gapped cloud](#install-on-air-gapped-cloud) | download |
+> [!WARNING]
+> The Bicep CLI's stability in emulated environments isn't guaranteed, as emulation tools like Rosetta2 and QEMU typically don't perfectly emulate the architecture.
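Beyond the options in the preceding table, one low-friction approach is to let the Azure CLI install and manage the Bicep CLI for you. A minimal sketch:

```azurecli
# Install (or upgrade) the Bicep CLI managed by the Azure CLI, then verify it.
az bicep install
az bicep upgrade
az bicep version
```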
+ ## Visual Studio Code and Bicep extension To create Bicep files, you need a good Bicep editor. We recommend:
azure-vmware Remove Arc Enabled Azure Vmware Solution Vsphere Resources From Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/remove-arc-enabled-azure-vmware-solution-vsphere-resources-from-azure.md
# Remove Arc-enabled Azure VMware Solution vSphere resources from Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ In this article, learn how to cleanly remove your VMware vCenter environment from Azure Arc-enabled VMware vSphere. For VMware vSphere environments that you no longer want to manage with Azure Arc-enabled VMware vSphere, use the information in this article to perform the following actions: - Remove guest management from VMware virtual machines (VMs).
backup Azure Kubernetes Service Cluster Manage Backups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-cluster-manage-backups.md
- devx-track-azurecli - ignite-2023 Previously updated : 04/26/2023 Last updated : 02/09/2024
To enable Trusted Access between Backup vault and AKS cluster, use the following
-g <aksclusterrg> \ --cluster-name <aksclustername> \ -n <randomRoleBindingName> \
- --source-resource-id $(az dataprotection backup-vault show -g <vaultrg> -v <VaultName> --query id -o tsv) \
+ --source-resource-id $(az dataprotection backup-vault show -g <vaultrg> --vault <VaultName> --query id -o tsv) \
--roles Microsoft.DataProtection/backupVaults/backup-operator ```
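To confirm that the role binding was created, you can list the Trusted Access role bindings on the cluster. A minimal sketch, reusing the same placeholder names as the preceding command:

```azurecli
az aks trustedaccess rolebinding list \
  --resource-group <aksclusterrg> \
  --cluster-name <aksclustername>
```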
backup Backup Azure Database Postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-overview.md
END LOOP;
END; $do$ ```
+ )
+
+ > [!NOTE]
+ > If backup was already configured for a database and it's failing with **UserErrorMissingDBPermissions**, see the [troubleshooting guide](backup-azure-database-postgresql-troubleshoot.md) for help resolving the issue.
## Use the PG admin tool
backup Backup Azure Database Postgresql Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-database-postgresql-troubleshoot.md
The Azure Backup service uses the credentials mentioned in the key-vault to acce
## UserErrorMissingDBPermissions
-The Azure Backup service uses the credentials mentioned in the key-vault to access the database as a database user. The relevant key vault and the secret are [provided during configuration of backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases). Grant appropriate permissions to the relevant backup or the database user to perform this operation on the database.
+
+The Azure Backup service uses the credentials mentioned in the key vault to access the database as a database user. The relevant key vault and secret are [provided during configuration of backup](backup-azure-database-postgresql.md#configure-backup-on-azure-postgresql-databases). You can find the key vault associated with this backup instance by opening the backup instance and selecting the JSON view. The key vault name and secret details are listed under the **datasourceAuthCredentials** section, as shown in the following screenshot.
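If you prefer the CLI to the portal's JSON view, the same information is available on the backup instance resource. A hedged sketch follows; the vault, resource group, and backup instance names are placeholders, and parameter names can vary by CLI extension version.

```azurecli
# Look for the datasourceAuthCredentials section in the output to find the
# key vault and secret that this backup instance uses.
az dataprotection backup-instance show \
  --resource-group <vaultrg> \
  --vault-name <VaultName> \
  --backup-instance-name <backup-instance-name>
```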
+ ## UserErrorSecretValueInUnsupportedFormat
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
description: Learn how to run and scale apps from container images on Azure Batc
Last updated 01/19/2024 ms.devlang: csharp
-# ms.devlang: csharp, python
# Use Azure Batch to run container workloads
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Azure Batch lets you run and scale large numbers of batch computing jobs on Azure. Batch tasks can run directly on virtual machines (nodes) in a Batch pool, but you can also set up a Batch pool to run tasks in Docker-compatible containers on the nodes. This article shows you how to create a pool of compute nodes that support running container tasks, and then run container tasks on the pool. The code examples here use the Batch .NET and Python SDKs. You can also use other Batch SDKs and tools, including the Azure portal, to create container-enabled Batch pools and to run container tasks.
batch Batch Pool Compute Intensive Sizes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-compute-intensive-sizes.md
Last updated 05/01/2023
# Use RDMA or GPU instances in Batch pools
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ To run certain Batch jobs, you can take advantage of Azure VM sizes designed for large-scale computation. For example: * To run multi-instance [MPI workloads](batch-mpi.md), choose H-series or other sizes that have a network interface for Remote Direct Memory Access (RDMA). These sizes connect to an InfiniBand network for inter-node communication, which can accelerate MPI applications.
batch Batch Pool Node Error Checking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-pool-node-error-checking.md
# Azure Batch pool and node errors
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Some Azure Batch pool creation and management operations happen immediately. Detecting failures for these operations is straightforward, because errors usually return immediately from the API, command line, or user interface. However, some operations are asynchronous, run in the background, and take several minutes to complete. This article describes ways to detect and avoid failures that can occur in the background operations for pools and nodes. Make sure to set your applications to implement comprehensive error checking, especially for asynchronous operations. Comprehensive error checking can help you promptly identify and diagnose issues.
batch Batch Rendering Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-applications.md
# Pre-installed applications on Batch rendering VM images
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ It's possible to use any rendering applications with Azure Batch. However, Azure Marketplace VM images are available with common applications pre-installed. Where applicable, pay-for-use licensing is available for the pre-installed rendering applications. When a Batch pool is created, the required applications can be specified and both the cost of VM and applications will be billed per minute. Application prices are listed on the [Azure Batch pricing page](https://azure.microsoft.com/pricing/details/batch/#graphic-rendering).
batch Batch Rendering Functionality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-rendering-functionality.md
# Azure Batch rendering capabilities
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Standard Azure Batch capabilities are used to run rendering workloads and applications. Batch also includes specific features to support rendering workloads. For an overview of Batch concepts, including pools, jobs, and tasks, see [this article](./batch-service-workflow-features.md).
batch Virtual File Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/virtual-file-mount.md
Last updated 08/22/2023
# Mount a virtual file system on a Batch pool
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Azure Batch supports mounting cloud storage or an external file system on Windows or Linux compute nodes in Batch pools. When a compute node joins the pool, the virtual file system mounts and acts as a local drive on that node. This article shows you how to mount a virtual file system on a pool of compute nodes by using the [Batch Management Library for .NET](/dotnet/api/overview/azure/batch). Mounting the file system to the pool makes accessing data easier and more efficient than requiring tasks to get their own data from a large shared data set. Consider a scenario where multiple tasks need access to a common set of data, like rendering a movie. Each task renders one or more frames at once from the scene files. By mounting a drive that contains the scene files, it's easier for each compute node to access the shared data.
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
# Azure Chaos Studio fault and action library
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The faults listed in this article are currently available for use. To understand which resource types are supported, see [Supported resource types and role assignments for Azure Chaos Studio](./chaos-studio-fault-providers.md). ## Time delay
The parameters **destinationFilters** and **inboundDestinationFilters** use the
| Prerequisites | Agent must run as administrator. If the agent is installed as a VM extension, it runs as administrator by default. | | Urn | urn:csci:microsoft:agent:networkDisconnectViaFirewall/1.0 | | Parameters (key, value) | |
-| destinationFilters | Delimited JSON array of packet filters that define which outbound packets to target for fault injection. Maximum of three. |
+| destinationFilters | Delimited JSON array of packet filters that define which outbound packets to target for fault injection. |
| address | IP address that indicates the start of the IP range. | | subnetMask | Subnet mask for the IP address range. | | portLow | (Optional) Port number of the start of the port range. |
Currently, only virtual machine scale sets configured with the **Uniform** orche
| - | | | Capability name | IncrementCertificateVersion-1.0 | | Target type | Microsoft-KeyVault |
-| Description | Generates a new certificate version and thumbprint by using the Key Vault Certificate client library. Current working certificate is upgraded to this version. |
+| Description | Generates a new certificate version and thumbprint by using the Key Vault Certificate client library. Current working certificate is upgraded to this version. Certificate version is not reverted after the fault duration. |
| Prerequisites | None. | | Urn | urn:csci:microsoft:keyvault:incrementCertificateVersion/1.0 | | Fault type | Discrete. |
chaos-studio Chaos Studio Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-versions.md
# Azure Chaos Studio version compatibility
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The following reference shows relevant version support and compatibility for features within Chaos Studio. ## Operating systems supported by the agent
The *Chaos Studio fault version* column refers to the individual fault version f
## Browser compatibility
-Review the Azure portal documentation on [Supported devices](../azure-portal/azure-portal-supported-browsers-devices.md) for more information on browser support.
+Review the Azure portal documentation on [Supported devices](../azure-portal/azure-portal-supported-browsers-devices.md) for more information on browser support.
cloud-services Cloud Services Choose Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-choose-me.md
The PaaS nature of Azure Cloud Services has other implications, too. One of the
## Next steps * [Create a cloud service app in .NET](cloud-services-dotnet-get-started.md) * [Create a cloud service app in Node.js](cloud-services-nodejs-develop-deploy-app.md)
-* [Create a cloud service app in PHP](../cloud-services-php-create-web-role.md)
+* [Create a cloud service app in PHP](cloud-services-php-create-web-role.md)
* [Create a cloud service app in Python](cloud-services-python-ptvs.md)
cloud-services Cloud Services Php Create Web Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-php-create-web-role.md
+
+ Title: Create Azure web and worker roles for PHP
+description: A guide to creating PHP web and worker roles in an Azure cloud service, and configuring the PHP runtime.
+
+ms.assetid: 9f7ccda0-bd96-4f7b-a7af-fb279a9e975b
+
+ms.devlang: php
+ Last updated : 04/11/2018+++
+# Create PHP web and worker roles
+## Overview
++
+This guide will show you how to create PHP web or worker roles in a Windows development environment, choose a specific version of PHP from the "built-in" versions available, change the PHP configuration, enable extensions, and finally, deploy to Azure. It also describes how to configure a web or worker role to use a PHP runtime (with custom configuration and extensions) that you provide.
+
+Azure provides three compute models for running applications: Azure App Service, Azure Virtual Machines, and Azure Cloud Services. All three models support PHP. Cloud Services, which includes web and worker roles, provides *platform as a service (PaaS)*. Within a cloud service, a web role provides a dedicated Internet Information Services (IIS) web server to host front-end web applications. A worker role can run asynchronous, long-running or perpetual tasks independent of user interaction or input.
+
+For more information about these options, see [Compute hosting options provided by Azure](cloud-services-choose-me.md).
+
+## Download the Azure SDK for PHP
+
+The [Azure SDK for PHP](https://github.com/Azure/azure-sdk-for-php) consists of several components. This article will use two of them: Azure PowerShell and the Azure emulators. These two components can be installed via the Microsoft Web Platform Installer. For more information, see [How to install and configure Azure PowerShell](/powershell/azure/).
+
+## Create a Cloud Services project
+
+The first step in creating a PHP web or worker role is to create an Azure Service project. An Azure Service project serves as a logical container for web and worker roles, and it contains the project's [service definition (.csdef)] and [service configuration (.cscfg)] files.
+
+To create a new Azure Service project, run Azure PowerShell as an administrator, and execute the following command:
+
+```powershell
+PS C:\>New-AzureServiceProject myProject
+```
+
+This command will create a new directory (`myProject`) to which you can add web and worker roles.
+
+## Add PHP web or worker roles
+
+To add a PHP web role to a project, run the following command from within the project's root directory:
+
+```powershell
+PS C:\myProject> Add-AzurePHPWebRole roleName
+```
+
+For a worker role, use this command:
+
+```powershell
+PS C:\myProject> Add-AzurePHPWorkerRole roleName
+```
+
+> [!NOTE]
+> The `roleName` parameter is optional. If it is omitted, the role name will be automatically generated. The first web role created will be `WebRole1`, the second will be `WebRole2`, and so on. The first worker role created will be `WorkerRole1`, the second will be `WorkerRole2`, and so on.
+>
+>
+
+## Use your own PHP runtime
+
+In some cases, instead of selecting a built-in PHP runtime and configuring it as described above, you may want to provide your own PHP runtime. For example, you can use the same PHP runtime in a web or worker role that you use in your development environment. This makes it easier to ensure that the application will not change behavior in your production environment.
+
+### Configure a web role to use your own PHP runtime
+
+To configure a web role to use a PHP runtime that you provide, follow these steps:
+
+1. Create an Azure Service project and add a PHP web role as described previously in this topic.
+2. Create a `php` folder in the `bin` folder that is in your web role's root directory, and then add your PHP runtime (all binaries, configuration files, subfolders, etc.) to the `php` folder.
+3. (OPTIONAL) If your PHP runtime uses the [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you will need to configure your web role to install [SQL Server Native Client 2012][sql native client] when it is provisioned. To do this, add the [sqlncli.msi x64 installer] to the `bin` folder in your web role's root directory. The startup script described in the next step will silently run the installer when the role is provisioned. If your PHP runtime does not use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step:
+
+ ```console
+ msiexec /i sqlncli.msi /qn IACCEPTSQLNCLILICENSETERMS=YES
+ ```
+
+4. Define a startup task that configures [Internet Information Services (IIS)][iis.net] to use your PHP runtime to handle requests for `.php` pages. To do this, open the `setup_web.cmd` file (in the `bin` folder of your web role's root directory) in a text editor and replace its contents with the following script:
+
+ ```cmd
+ @ECHO ON
+ cd "%~dp0"
+
+ if "%EMULATED%"=="true" exit /b 0
+
+ msiexec /i sqlncli.msi /qn IACCEPTSQLNCLILICENSETERMS=YES
+
+ SET PHP_FULL_PATH=%~dp0php\php-cgi.exe
+ SET NEW_PATH=%PATH%;%RoleRoot%\base\x86
+
+ %WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /+"[fullPath='%PHP_FULL_PATH%',maxInstances='12',idleTimeout='60000',activityTimeout='3600',requestTimeout='60000',instanceMaxRequests='10000',protocol='NamedPipe',flushNamedPipe='False']" /commit:apphost
+ %WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /+"[fullPath='%PHP_FULL_PATH%'].environmentVariables.[name='PATH',value='%NEW_PATH%']" /commit:apphost
+ %WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /+"[fullPath='%PHP_FULL_PATH%'].environmentVariables.[name='PHP_FCGI_MAX_REQUESTS',value='10000']" /commit:apphost
+ %WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/handlers /+"[name='PHP',path='*.php',verb='GET,HEAD,POST',modules='FastCgiModule',scriptProcessor='%PHP_FULL_PATH%',resourceType='Either',requireAccess='Script']" /commit:apphost
+ %WINDIR%\system32\inetsrv\appcmd.exe set config -section:system.webServer/fastCgi /"[fullPath='%PHP_FULL_PATH%'].queueLength:50000"
+ ```
+5. Add your application files to your web role's root directory. This will be the web server's root directory.
+6. Publish your application as described in the [Publish your application](#publish-your-application) section below.
+
+> [!NOTE]
+> The `download.ps1` script (in the `bin` folder of the web role's root directory) can be deleted after you follow the steps described above for using your own PHP runtime.
+>
+>
+
+### Configure a worker role to use your own PHP runtime
+
+To configure a worker role to use a PHP runtime that you provide, follow these steps:
+
+1. Create an Azure Service project and add a PHP worker role as described previously in this topic.
+2. Create a `php` folder in the worker role's root directory, and then add your PHP runtime (all binaries, configuration files, subfolders, etc.) to the `php` folder.
+3. (OPTIONAL) If your PHP runtime uses [Microsoft Drivers for PHP for SQL Server][sqlsrv drivers], you will need to configure your worker role to install [SQL Server Native Client 2012][sql native client] when it is provisioned. To do this, add the [sqlncli.msi x64 installer] to the worker role's root directory. The startup script described in the next step will silently run the installer when the role is provisioned. If your PHP runtime does not use the Microsoft Drivers for PHP for SQL Server, you can remove the following line from the script shown in the next step:
+
+ ```console
+ msiexec /i sqlncli.msi /qn IACCEPTSQLNCLILICENSETERMS=YES
+ ```
+
+4. Define a startup task that adds your `php.exe` executable to the worker role's PATH environment variable when the role is provisioned. To do this, open the `setup_worker.cmd` file (in the worker role's root directory) in a text editor and replace its contents with the following script:
+
+ ```cmd
+ @echo on
+
+ cd "%~dp0"
+
+ echo Granting permissions for Network Service to the web root directory...
+ icacls ..\ /grant "Network Service":(OI)(CI)W
+ if %ERRORLEVEL% neq 0 goto error
+ echo OK
+
+ if "%EMULATED%"=="true" exit /b 0
+
+ msiexec /i sqlncli.msi /qn IACCEPTSQLNCLILICENSETERMS=YES
+
+ setx Path "%PATH%;%~dp0php" /M
+
+ if %ERRORLEVEL% neq 0 goto error
+
+ echo SUCCESS
+ exit /b 0
+
+ :error
+
+ echo FAILED
+ exit /b -1
+ ```
+5. Add your application files to your worker role's root directory.
+6. Publish your application as described in the [Publish your application](#publish-your-application) section below.
+
+## Run your application in the compute and storage emulators
+
+The Azure emulators provide a local environment in which you can test your Azure application before you deploy it to the cloud. There are some differences between the emulators and the Azure environment. To understand this better, see [Use the Azure Storage Emulator for development and testing](../storage/common/storage-use-emulator.md).
+
+Note that you must have PHP installed locally to use the compute emulator. The compute emulator will use your local PHP installation to run your application.
+
+To run your project in the emulators, execute the following command from your project's root directory:
+
+```powershell
+PS C:\MyProject> Start-AzureEmulator
+```
+
+You will see output similar to this:
+
+```output
+Creating local package...
+Starting Emulator...
+Role is running at http://127.0.0.1:81
+Started
+```
+
+You can see your application running in the emulator by opening a web browser and browsing to the local address shown in the output (`http://127.0.0.1:81` in the example output above).
+
+To stop the emulators, execute this command:
+
+```powershell
+PS C:\MyProject> Stop-AzureEmulator
+```
+
+## Publish your application
+
+To publish your application, you need to first import your publish settings by using the [Import-AzurePublishSettingsFile](/powershell/module/servicemanagement/azure/import-azurepublishsettingsfile) cmdlet. Then you can publish your application by using the [Publish-AzureServiceProject](/powershell/module/servicemanagement/azure/publish-azureserviceproject) cmdlet. For information about signing in, see [How to install and configure Azure PowerShell](/powershell/azure/).
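Put together, a publish run from the project's root directory looks roughly like the following. This is a sketch only: the `.publishsettings` path, service name, and location are placeholders, and the parameter names assume the classic Azure PowerShell service management cmdlets described above.

```powershell
PS C:\MyProject> Import-AzurePublishSettingsFile "C:\temp\my-subscription.publishsettings"
PS C:\MyProject> Publish-AzureServiceProject -ServiceName myPhpService -Location "West US" -Launch
```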
+
+## Next steps
+
+For more information, see the [PHP Developer Center](https://azure.microsoft.com/develop/php/).
+
+[install ps and emulators]: https://go.microsoft.com/fwlink/p/?linkid=320376&clcid=0x409
+[service definition (.csdef)]: /previous-versions/azure/reference/ee758711(v=azure.100)
+[service configuration (.cscfg)]: /previous-versions/azure/reference/ee758710(v=azure.100)
+[iis.net]: https://www.iis.net/
+[sql native client]: /sql/sql-server/sql-server-technical-documentation
+[sqlsrv drivers]: https://php.net/sqlsrv
+[sqlncli.msi x64 installer]: https://go.microsoft.com/fwlink/?LinkID=239648
cloud-services Cloud Services Python How To Use Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-python-how-to-use-service-management.md
# Use service management from Python
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [!INCLUDE [Cloud Services (classic) deprecation announcement](includes/deprecation-announcement.md)] This guide shows you how to programmatically perform common service management tasks from Python. The **ServiceManagementService** class in the [Azure SDK for Python](https://github.com/Azure/azure-sdk-for-python) supports programmatic access to much of the service management-related functionality that is available in the [Azure portal]. You can use this functionality to create, update, and delete cloud services, deployments, data management services, and virtual machines. This functionality can be useful in building applications that need programmatic access to service management.
communication-services Voice And Video Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/analytics/logs/voice-and-video-logs.md
The call summary log contains data to help you identify key properties of all ca
| `sdkVersion` | The version string for the Communication Services Calling SDK version that each relevant endpoint uses (for example, `"1.1.00.20212500"`). | | `osVersion` | A string that represents the operating system and version of each endpoint device. | | `participantTenantId` | The ID of the Microsoft tenant associated with the identity of the participant. The tenant can either be the Azure tenant that owns the Azure Communication Services resource or the Microsoft tenant of an M365 identity. This field is used to guide cross-tenant redaction.
-|`participantType` | Description of the participant as a combination of its client (Azure Communication Services or Teams), and its identity, (Azure Communication Services or Microsoft 365). Possible values include: Azure Communication Services (Azure Communication Services identity and Azure Communication Services SDK), Teams (Teams identity and Teams client), Azure Communication Services as Teams external user (Azure Communication Services identity and Azure Communication Services SDK in Teams call or meeting), and Azure Communication Services as Microsoft 365 user (M365 identity and Azure Communication Services client).
+|`participantType` | Description of the participant as a combination of its client (Azure Communication Services or Teams), and its identity, (Azure Communication Services or Microsoft 365). Possible values include: Azure Communication Services (Azure Communication Services identity and Azure Communication Services SDK), Teams (Teams identity and Teams client), Azure Communication Services as Teams external user (Azure Communication Services identity and Azure Communication Services SDK in Teams call or meeting), Azure Communication Services as Microsoft 365 user (M365 identity and Azure Communication Services client), and Teams Voice Apps.
| `pstnPartcipantCallType `|It represents the type and direction of PSTN participants including Emergency calling, direct routing, transfer, forwarding, etc.| ### Call diagnostic log schema
communication-services Call Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-flows.md
The section below gives an overview of the call flows in Azure Communication Ser
## About signaling and media protocols
-When you establish a peer-to-peer or group call, two protocols are used behind the scenes - HTTP (REST) for signaling and SRTP for media.
+When you establish a peer-to-peer or group call, two protocols are used behind the scenes - HTTPS (REST) for signaling and SRTP for media.
-Signaling between the SDKs or between SDKs and Communication Services Signaling Controllers is handled with HTTP REST (TLS). Azure Communication Services uses TLS 1.2. For Real-Time Media Traffic (RTP), the User Datagram Protocol (UDP) is preferred. If the use of UDP is prevented by your firewall, the SDK will use the Transmission Control Protocol (TCP) for media.
+Signaling between the SDKs or between SDKs and Communication Services Signaling Controllers is handled with HTTPS REST (TLS). Azure Communication Services uses TLS 1.2. For Real-Time Media Traffic (RTP), the User Datagram Protocol (UDP) is preferred. If the use of UDP is prevented by your firewall, the SDK will use the Transmission Control Protocol (TCP) for media.
Let's review the signaling and media protocols in various scenarios.
communication-services Direct Routing Sip Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/telephony/direct-routing-sip-specification.md
Call context headers are currently available only for Call Automation SDK. Call
### User-To-User header
-SIP User-To-User (UUI) header is an industry standard to pass contextual information during a call setup process. The maximum length of a UUI header key is 64 chars. The maximum length of UUI header value is 256 chars.
+SIP User-To-User (UUI) header is an industry standard to pass contextual information during a call setup process. The maximum length of a UUI header key is 64 chars. The maximum length of UUI header value is 256 chars. The UUI header value might consist of alphanumeric characters and a few selected symbols, including "=", ";", ".", "!", "%", "*", "_", "+", "~", "-".
### Custom header
-Azure Communication Services also supports up to five custom SIP headers. Custom SIP header key must start with a mandatory `X-MS-Custom-` prefix. The maximum length of a SIP header key is 64 chars, including the `X-MS-Custom-` prefix. The maximum length of SIP header value is 256 chars.
+Azure Communication Services also supports up to five custom SIP headers. Custom SIP header key must start with a mandatory `X-MS-Custom-` prefix. The maximum length of a SIP header key is 64 chars, including the `X-MS-Custom-` prefix. The SIP header key might consist of alphanumeric characters and a few selected symbols, including ".", "!", "%", "*", "_", "+", "~", "-". The maximum length of the SIP header value is 256 characters. The SIP header value might consist of alphanumeric characters and a few selected symbols, including "=", ";", ".", "!", "%", "*", "_", "+", "~", "-".
For implementation details refer to [How to pass contextual data between calls](../../how-tos/call-automation/custom-context.md).
communications-gateway Monitor Azure Communications Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitor-azure-communications-gateway.md
Previously updated : 08/23/2023 Last updated : 01/25/2024 # Monitoring Azure Communications Gateway
Azure Communications Gateway collects metrics. See [Monitoring Azure Communicati
You can analyze metrics for Azure Communications Gateway, along with metrics from other Azure services, by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
-All Azure Communications Gateway metrics support the **Region** dimension, allowing you to filter any metric by the Service Locations defined in your Azure Communications Gateway resource.
+Azure Communications Gateway metrics support the **Region** dimension, allowing you to filter any metric by the Service Locations defined in your Azure Communications Gateway resource. Connectivity metrics also support the **OPTIONS or INVITE** dimension.
-You can also split a metric by the **Region** dimension to visualize how different segments of the metric compare with each other.
+You can also split a metric by these dimensions to visualize how different segments of the metric compare with each other.
For more information on filtering and splitting, see [Advanced features of Azure Monitor](../azure-monitor/essentials/metrics-charts.md).
communications-gateway Monitoring Azure Communications Gateway Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/monitoring-azure-communications-gateway-data-reference.md
Previously updated : 01/25/2023 Last updated : 02/01/2024
This section lists all the automatically collected metrics collected for Azure C
| Active Calls | Count | Count of the total number of active calls. | | Active Emergency Calls | Count | Count of the total number of active emergency calls.|
+### Connectivity metrics
+
+The metrics in the following table refer to the connection between your network and the Azure Communications Gateway resource.
+
+| Metric | Unit | Description |
+|:-|:-|:|
+| SIP 2xx Responses Received | Count | Count of the total number of SIP 2xx responses received for OPTIONS and INVITEs.|
+| SIP 2xx Responses Sent | Count | Count of the total number of SIP 2xx responses sent for OPTIONS and INVITEs.|
+| SIP 3xx Responses Received | Count | Count of the total number of SIP 3xx responses received for OPTIONS and INVITEs.|
+| SIP 3xx Responses Sent | Count | Count of the total number of SIP 3xx responses sent for OPTIONS and INVITEs.|
+| SIP 4xx Responses Received | Count | Count of the total number of SIP 4xx responses received for OPTIONS and INVITEs.|
+| SIP 4xx Responses Sent | Count | Count of the total number of SIP 4xx responses sent for OPTIONS and INVITEs.|
+| SIP 5xx Responses Received | Count | Count of the total number of SIP 5xx responses received for OPTIONS and INVITEs.|
+| SIP 5xx Responses Sent | Count | Count of the total number of SIP 5xx responses sent for OPTIONS and INVITEs.|
+| SIP 6xx Responses Received | Count | Count of the total number of SIP 6xx responses received for OPTIONS and INVITEs.|
+| SIP 6xx Responses Sent | Count | Count of the total number of SIP 6xx responses sent for OPTIONS and INVITEs.|
## Metric Dimensions
Azure Communications Gateway has the following dimensions associated with its me
| Dimension Name | Description | | - | -- | | **Region** | The Service Locations defined in your Azure Communications Gateway resource. |
+| **OPTIONS or INVITE** | The SIP method for responses being sent and received:<br>- SIP OPTIONS responses sent and received by your Azure Communications Gateway resource to monitor its connectivity to its peers<br>- SIP INVITE responses sent and received by your Azure Communications Gateway resource. |
## Next steps
communications-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/whats-new.md
Previously updated : 11/17/2023 Last updated : 02/01/2024 # What's new in Azure Communications Gateway? This article covers new features and improvements for Azure Communications Gateway.
+## February 2024
+
+### Connectivity metrics
+
+From February 2024, you can monitor the health of the connection between your network and Azure Communications Gateway with new metrics for responses to SIP INVITE and OPTIONS exchanges. You can view statistics for all INVITE and OPTIONS requests, or narrow your view down to individual regions, request types or response codes. For more information on the available metrics, see [Connectivity metrics](monitoring-azure-communications-gateway-data-reference.md#connectivity-metrics). For an overview of working with metrics, see [Analyzing, filtering and splitting metrics in Azure Monitor](monitor-azure-communications-gateway.md#analyzing-filtering-and-splitting-metrics-in-azure-monitor).
+ ## November 2023 ### Support for Zoom Phone Cloud Peering
confidential-ledger Create Client Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/create-client-certificate.md
# Creating a Client Certificate
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The Azure confidential ledger APIs require client certificate-based authentication. Only those certificates added to an allowlist during ledger creation or a ledger update can be used to call the confidential ledger Functional APIs. You need a certificate in PEM format. You can create more than one certificate and add or delete them using ledger Update API.
connectors Connectors Azure Monitor Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-azure-monitor-logs.md
tags: connectors
# Connect to Log Analytics or Application Insights from workflows in Azure Logic Apps > [!NOTE] >
To build workflows in Azure Logic Apps that retrieve data from a Log Analytics w
For example, you can create a logic app workflow that sends Azure Monitor log data in an email message from your Office 365 Outlook account, create a bug in Azure DevOps, or post a Slack message. This connector provides only actions, so to start a workflow, you can use a Recurrence trigger to specify a simple schedule or any trigger from another service.
-This how-to guide describes how to build a [Consumption logic app workflow](../logic-apps/logic-apps-overview.md#resource-environment-differences) that sends the results of an Azure Monitor log query by email.
+This guide describes how to build a logic app workflow that sends the results of an Azure Monitor log query by email.
## Connector technical reference
Both of the following actions can run a log query against a Log Analytics worksp
- The [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md) or [Application Insights resource](../azure-monitor/app/app-insights-overview.md) that you want to connect. -- The [Consumption logic app workflow](../logic-apps/logic-apps-overview.md#resource-environment-differences) from where you want to access your Log Analytics workspace or Application Insights resource. To use an Azure Monitor Logs action, start your workflow with any trigger. This guide uses the [**Recurrence** trigger](connectors-native-recurrence.md).
+- The [Standard or Consumption logic app workflow](../logic-apps/logic-apps-overview.md#resource-environment-differences) from where you want to access your Log Analytics workspace or Application Insights resource. To use an Azure Monitor Logs action, start your workflow with any trigger. This guide uses the [**Recurrence** trigger](connectors-native-recurrence.md).
- An Office 365 Outlook account to complete the example in this guide. Otherwise, you can use any email provider that has an available connector in Azure Logic Apps. ## Add an Azure Monitor Logs action
-1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and workflow in the designer.
+### [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and workflow in the designer.
-1. In your workflow where you want to add the Azure Monitor Logs action, follow these general steps to add an Azure Monitor Logs action](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+1. In your workflow where you want to add the Azure Monitor Logs action, [follow these general steps to add an Azure Monitor Logs action](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
This example continues with the action named **Run query and visualize results**.
-1. In the connection box, from the **Tenant** list, select your Microsoft Entra tenant, and then select **Create**.
+1. In the connection box, provide the following information:
+
+ | Property | Description |
+ |-|-|
+ | **Connection Name** | A name for the connection |
+ | **Authentication Type** | The authentication type to use for the connection. For more information, see [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-to-outbound-calls). |
+ | **Tenant ID** | Your Microsoft Entra tenant. **Note**: The account associated with the current connection is used later to send the email. |
+
+1. When you're done, select **Sign in** or **Create New**, based on the selected authentication type.
+
+1. In the **Run query and visualize results** action box, provide the following information:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription*> | The Azure subscription for your Log Analytics workspace or Application Insights application. |
+ | **Resource Group** | Yes | <*Azure-resource-group*> | The Azure resource group for your Log Analytics workspace or Application Insights application. |
+ | **Resource Type** | Yes | **Log Analytics Workspace** or **Application Insights** | The resource type to connect from your workflow. This example continues by selecting **Log Analytics Workspace**. |
+ | **Resource Name** | Yes | <*Azure-resource-name*> | The name for your Log Analytics workspace or Application Insights resource. |
+
+1. In the **Query** box, enter a Kusto query to retrieve log data from either of the following sources:
> [!NOTE]
- >
- > The account associated with the current connection is used later to send the email.
- > To use a different account, select **Change connection**.
+ >
+ > When you create your own queries, make sure they work correctly in Log Analytics before you add them to your Azure Monitor Logs action.
+
+ * Log Analytics workspace
+
+ The following example query selects errors that occurred within the last day, counts the total per computer, and sorts the results by computer name in ascending order.
+
+ ```Kusto
+ Event
+ | where EventLevelName == "Error"
+ | where TimeGenerated > ago(1day)
+ | summarize TotalErrors=count() by Computer
+ | sort by Computer asc
+ ```
+
+ * Application Insights resource
+
+ The following example query selects the failed requests within the last day and correlates them with exceptions that occurred as part of the operation, based on the `operation_Id` identifier. The query then segments the results by using the `autocluster()` algorithm.
+
+ ```kusto
+ requests
+ | where timestamp > ago(1d)
+ | where success == "False"
+ | project name, operation_Id
+ | join ( exceptions
+ | project problemId, outerMessage, operation_Id
+ ) on operation_Id
+ | evaluate autocluster()
+ ```
+
+1. For **Time Range**, select **Set in query**.
+
+ The following table describes the options for **Time Range**:
+
+ | Time Range | Description |
+ ||-|
+ | **Exact** | Dynamically provide the start time and end time. |
+ | **Relative** | Set the relative value such as the last hour, last 12 hours, and so on. |
+ | **Set in query** | Applies when the **TimeGenerated** filter is included in the query. |
+
+1. For **Chart Type**, select **Html Table**.
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and workflow in the designer.
+
+1. In your workflow where you want to add the Azure Monitor Logs action, [follow these general steps to add an Azure Monitor Logs action](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+ This example continues with the action named **Run query and visualize results**.
+
+1. In the connection box, provide the following information:
+
+ | Property | Description |
+ |-|-|
+ | **Connection Name** | A name for the connection |
+ | **Authentication Type** | The authentication type to use for the connection. For more information, see [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-to-outbound-calls). |
+ | **Tenant ID** | Your Microsoft Entra tenant. **Note**: The account associated with the current connection is used later to send the email. To use a different account, after the Azure Monitor Logs action appears, select **Change connection**. |
+
+1. When you're done, select **Sign in** or **Create**, based on the selected authentication type.
1. In the **Run query and visualize results** action box, provide the following information:
Both of the following actions can run a log query against a Log Analytics worksp
1. Save your workflow. On the designer toolbar, select **Save**. ++ ## Add an email action
-1. In your workflow where you want to add the Office 365 Outlook action, follow one of these steps:
+### [Standard](#tab/standard)
- - To add an action under the last step, select **New step**.
+1. In your workflow where you want to add the Office 365 Outlook action, [follow these general steps to add the **Office 365 Outlook** action named **Send an email (V2)**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
- - To add an action between steps, move your pointer use over the connecting arrow. Select the plus sign (**+**) that appears, and then select **Add an action**.
+1. In the **To** box, enter the recipient's email address. For this example, use your own email address.
+
+1. In the **Subject** box, enter a subject for the email, for example, **Top daily errors or failures**.
+
+1. Click inside the **Body** box, and then select the **Dynamic content** option (lightning icon), so that you can select outputs from previous steps in the workflow.
+
+1. In the dynamic content list, under **Run query and visualize results**, select **Body**, which represents the results of the query that you previously entered in the Log Analytics action.
+
+1. From the **Advanced parameters** list, select **Attachments**.
+
+ The **Send an email** action now includes the **Attachments** section with the **Attachment name** and **Attachment content** properties.
+
+1. For the added properties, follow these steps:
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **Office 365 send email**.
+ 1. In the **Attachment name** box, open the dynamic content list. Under **Run query and visualize results**, select **Attachment Name**.
-1. From the actions list, select the action named **Send an email (V2)**.
+ 1. In the **Attachment content** box, open the dynamic content list. Under **Run query and visualize results**, select **Attachment Content**.
+
+1. Save your workflow. On the designer toolbar, select **Save**.
+
+### [Consumption](#tab/consumption)
+
+1. In your workflow where you want to add the Office 365 Outlook action, [follow these general steps to add the **Office 365 Outlook** action named **Send an email (V2)**](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
1. In the **To** box, enter the recipient's email address. For this example, use your own email address.
Both of the following actions can run a log query against a Log Analytics worksp
1. For the added properties, follow these steps:
- 1. In the **Attachment Name** box, from the dynamic content list that appears, under **Run query and visualize results**, select the **Attachment Name** output.
+ 1. Click inside the **Attachment Name** box to open the dynamic content list. Under **Run query and visualize results**, select **Attachment Name**.
- 1. In the **Attachment Content** box, from the dynamic content list that appears, under **Run query and visualize results**, select the **Attachment Content** output.
+ 1. Click inside the **Attachment Content** box to open the dynamic content list. Under **Run query and visualize results**, select **Attachment Content**.
1. Save your workflow. On the designer toolbar, select **Save**.
-### Test your workflow
++
+## Test your workflow
+
+### [Standard](#tab/standard)
+
+1. On the workflow menu, select **Overview**.
+
+1. On the **Overview** toolbar, select **Run** > **Run**.
+
+1. When the workflow completes, check your email.
+
+ > [!NOTE]
+ >
+ > The workflow generates an email with a JPG file that shows the query result set.
+ > If your query doesn't return any results, the workflow won't create a JPG file.
+
+ For the Log Analytics workspace example, the email that you receive has a body that looks similar to the following example:
+
+ ![Screenshot shows data report from a Log Analytics workspace in an example email.](media/connectors-azure-monitor-logs/sample-mail-log-analytics-workspace.png)
+
+ For an Application Insights resource, the email that you receive has a body that looks similar to the following example:
+
+ ![Screenshot shows data report from an Application Insights resource in an example email.](media/connectors-azure-monitor-logs/sample-email-application-insights-resource.png)
+
+### [Consumption](#tab/consumption)
1. On the designer toolbar, select **Run Trigger** > **Run**.
Both of the following actions can run a log query against a Log Analytics worksp
For the Log Analytics workspace example, the email that you receive has a body that looks similar to the following example:
- ![Screenshot that shows the data report from a Log Analytics workspace in an example email.](media/connectors-azure-monitor-logs/sample-mail-log-analytics-workspace.png)
+ ![Screenshot shows the data report from a Log Analytics workspace in an example email.](media/connectors-azure-monitor-logs/sample-mail-log-analytics-workspace.png)
For an Application Insights resource, the email that you receive has a body that looks similar to the following example:
- ![Screenshot that shows the data report from an Application Insights resource in an example email.](media/connectors-azure-monitor-logs/sample-email-application-insights-resource.png)
+ ![Screenshot shows the data report from an Application Insights resource in an example email.](media/connectors-azure-monitor-logs/sample-email-application-insights-resource.png)
++ ## Next steps
connectors Connectors Native Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-webhook.md
Title: Wait and respond to events
-description: Automate workflows that trigger, pause, and resume based on events at a service endpoint by using Azure Logic Apps.
+ Title: Subscribe and wait for events in workflows
+description: Add a trigger or action that subscribes to an endpoint and waits for events before running your workflow in Azure Logic Apps.
ms.suite: integration Previously updated : 01/04/2024 Last updated : 02/09/2024 tags: connectors
-# Create and run automated event-based workflows by using HTTP webhooks in Azure Logic Apps
+# Subscribe and wait for events to run workflows using HTTP webhooks in Azure Logic Apps
-With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the HTTP Webhook built-in connector, you can create an automated workflow that subscribes to a service endpoint, waits for specific events, and runs specific actions, rather than regularly check or *poll* the service endpoint.
+Rather than use a trigger that regularly checks or *polls* a service endpoint, or an action that calls that endpoint, you can use an **HTTP Webhook** trigger or action that subscribes to a service endpoint, waits for specific events, and runs specific actions in your workflow.
Here are some example webhook-based workflows: * Wait for an event to arrive from [Azure Event Hubs](https://github.com/logicappsio/EventHubAPI) before triggering a workflow run. * Wait for an approval before continuing a workflow.
-This how-to guide shows how to use the HTTP Webhook trigger and Webhook action so that your logic app workflow can receive and respond to events at a service endpoint.
+This guide shows how to use the HTTP Webhook trigger and Webhook action so that your workflow can receive and respond to events at a service endpoint.
## How do webhooks work?
Similar to the webhook trigger, a webhook action is also event-based. Afte
For example, the Office 365 Outlook connector's [**Send approval email**](connectors-create-api-office365-outlook.md) action is a webhook action that follows this pattern. You can extend this pattern into any service by using the webhook action.
-For more information, see these topics:
+For more information, see the following documentation:
* [Webhooks and subscriptions](../logic-apps/logic-apps-workflow-actions-triggers.md#webhooks-and-subscriptions) * [Create custom APIs that support a webhook](../logic-apps/logic-apps-create-api-app.md) For information about encryption, security, and authorization for inbound calls to your logic app, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), or [Microsoft Entra ID Open Authentication (Microsoft Entra ID OAuth)](../active-directory/develop/index.yml), see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
+## Connector technical reference
+
+For more information about trigger and action parameters, see [HTTP Webhook parameters](../logic-apps/logic-apps-workflow-actions-triggers.md#http-webhook-trigger).
+ ## Prerequisites * An Azure account and subscription. If you don't have an Azure subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* The URL for an already deployed endpoint or API that supports the webhook subscribe and unsubscribe pattern for [webhook triggers in logic apps](../logic-apps/logic-apps-create-api-app.md#webhook-triggers) or [webhook actions in logic apps](../logic-apps/logic-apps-create-api-app.md#webhook-actions) as appropriate
+* The URL for an already deployed endpoint or API that supports the webhook subscribe and unsubscribe pattern for [webhook triggers in workflows](../logic-apps/logic-apps-create-api-app.md#webhook-triggers) or [webhook actions in workflows](../logic-apps/logic-apps-create-api-app.md#webhook-actions) as appropriate
-* The logic app where you want to wait for specific events at the target endpoint. To start with the HTTP Webhook trigger, create a blank logic app workflow. To use the HTTP Webhook action, start your logic app with any trigger that you want. This example uses the HTTP trigger as the first step.
+* The Standard or Consumption logic app workflow where you want to wait for specific events at the target endpoint. To start with the HTTP Webhook trigger, create a logic app with a blank workflow. To use the HTTP Webhook action, start your workflow with any trigger that you want. This example uses the HTTP trigger as the first step.
## Add an HTTP Webhook trigger
-This built-in trigger calls the subscribe endpoint on the target service and registers a callback URL with the target service. Your logic app then waits for the target service to send an `HTTP POST` request to the callback URL. When this event happens, the trigger fires and passes any data in the request along to the workflow.
+This built-in trigger calls the subscribe endpoint on the target service and registers a callback URL with the target service. Your workflow then waits for the target service to send an `HTTP POST` request to the callback URL. When this event happens, the trigger fires and passes any data in the request along to the workflow.
-1. In the [Azure portal](https://portal.azure.com), pen your blank logic app workflow in the designer.
+### [Standard](#tab/standard)
-1. In the designer's search box, enter `http webhook` as your filter. From the **Triggers** list, select the **HTTP Webhook** trigger.
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and blank workflow in the designer.
- ![Select HTTP Webhook trigger](./media/connectors-native-webhook/select-http-webhook-trigger.png)
+1. [Follow these general steps to add the trigger named **HTTP Webhook** to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-trigger).
- This example renames the trigger to `HTTP Webhook trigger` so that the step has a more descriptive name. Also, the example later adds an HTTP Webhook action, and both names must be unique.
+ This example renames the trigger to **HTTP Webhook trigger** so that the step has a more descriptive name. Also, the example later adds an HTTP Webhook action, and both names must be unique.
1. Provide the values for the [HTTP Webhook trigger parameters](../logic-apps/logic-apps-workflow-actions-triggers.md#http-webhook-trigger) that you want to use for the subscribe and unsubscribe calls.
- In this example, the trigger includes the methods, URIs, and message bodies to use when performing the subscribe and unsubscribe operations.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Subscribe - Method** | Yes | The method to use when subscribing to the target endpoint |
+ | **Subscribe - URI** | Yes | The URL to use for subscribing to the target endpoint |
+ | **Subscribe - Body** | No | Any message body to include in the subscribe request. This example includes the callback URL that uniquely identifies the subscriber, which is your logic app, by using the `@listCallbackUrl()` expression to retrieve your logic app's callback URL. |
+ | **Unsubscribe - Method** | No | The method to use when unsubscribing from the target endpoint |
+ | **Unsubscribe - URI** | No | The URL to use for unsubscribing from the target endpoint |
+ | **Unsubscribe - Body** | No | An optional message body to include in the unsubscribe request <br><br>**Note**: This property doesn't support using the `listCallbackUrl()` function. However, the trigger automatically includes and sends the headers, `x-ms-client-tracking-id` and `x-ms-workflow-operation-name`, which the target service can use to uniquely identify the subscriber. |
+
+ > [!NOTE]
+ >
+ > For the **Unsubscribe - Method** and **Unsubscribe - URI** properties, add them
+ > to your action by opening the **Advanced parameters** list.
+
+ For example, the following trigger includes the methods, URIs, and message bodies to use when performing the subscribe and unsubscribe operations.
+
+ :::image type="content" source="media/connectors-native-webhook/webhook-trigger-parameters-standard.png" alt-text="Screenshot shows Standard workflow with HTTP Webhook trigger parameters." lightbox="media/connectors-native-webhook/webhook-trigger-parameters-standard.png":::
+
+ If you need to use authentication, you can add the **Subscribe - Authentication** and **Unsubscribe - Authentication** properties. For more information about authentication types available for HTTP Webhook, see [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-outbound).
- ![Enter HTTP Webhook trigger parameters](./media/connectors-native-webhook/http-webhook-trigger-parameters.png)
+1. Continue building your workflow with actions that run when the trigger fires.
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and blank workflow in the designer.
+
+1. [Follow these general steps to add the trigger named **HTTP Webhook** to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+
+ This example renames the trigger to **HTTP Webhook trigger** so that the step has a more descriptive name. Also, the example later adds an HTTP Webhook action, and both names must be unique.
+
+1. Provide the values for the [HTTP Webhook trigger parameters](../logic-apps/logic-apps-workflow-actions-triggers.md#http-webhook-trigger) that you want to use for the subscribe and unsubscribe calls.
| Property | Required | Description | |-|-|-|
This built-in trigger calls the subscribe endpoint on the target service and reg
| **Subscribe - Body** | No | Any message body to include in the subscribe request. This example includes the callback URL that uniquely identifies the subscriber, which is your logic app, by using the `@listCallbackUrl()` expression to retrieve your logic app's callback URL. | | **Unsubscribe - Method** | No | The method to use when unsubscribing from the target endpoint | | **Unsubscribe - URI** | No | The URL to use for unsubscribing from the target endpoint |
- | **Unsubscribe - Body** | No | An optional message body to include in the unsubscribe request <p><p>**Note**: This property doesn't support using the `listCallbackUrl()` function. However, the trigger automatically includes and sends the headers, `x-ms-client-tracking-id` and `x-ms-workflow-operation-name`, which the target service can use to uniquely identify the subscriber. |
- ||||
+ | **Unsubscribe - Body** | No | An optional message body to include in the unsubscribe request <br><br>**Note**: This property doesn't support using the `listCallbackUrl()` function. However, the trigger automatically includes and sends the headers, `x-ms-client-tracking-id` and `x-ms-workflow-operation-name`, which the target service can use to uniquely identify the subscriber. |
+
+ For example, the following trigger includes the methods, URIs, and message bodies to use when performing the subscribe and unsubscribe operations.
+
+ :::image type="content" source="media/connectors-native-webhook/webhook-trigger-parameters-consumption.png" alt-text="Screenshot shows Consumption workflow with HTTP Webhook trigger parameters." lightbox="media/connectors-native-webhook/webhook-trigger-parameters-consumption.png":::
1. To add other trigger properties, open the **Add new parameter** list.
- ![Add more trigger properties](./media/connectors-native-webhook/http-webhook-trigger-add-properties.png)
+ :::image type="content" source="media/connectors-native-webhook/webhook-trigger-add-properties-consumption.png" alt-text="Screenshot shows Consumption workflow with HTTP Webhook trigger and more properties." lightbox="media/connectors-native-webhook/webhook-trigger-add-properties-consumption.png":::
For example, if you need to use authentication, you can add the **Subscribe - Authentication** and **Unsubscribe - Authentication** properties. For more information about authentication types available for HTTP Webhook, see [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-outbound).
-1. Continue building your logic app's workflow with actions that run when the trigger fires.
+1. Continue building your workflow with actions that run when the trigger fires.
-1. When you're finished, done, remember to save your logic app. On the designer toolbar, select **Save**.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
- Saving your logic app calls the subscribe endpoint on the target service and registers the callback URL. Your logic app then waits for the target service to send an `HTTP POST` request to the callback URL. When this event happens, the trigger fires and passes any data in the request along to the workflow. If this operation completes successfully, the trigger unsubscribes from the endpoint, and your logic app continues the remaining workflow.
++
+Saving your workflow calls the subscribe endpoint on the target service and registers the callback URL. Your workflow then waits for the target service to send an `HTTP POST` request to the callback URL. When this event happens, the trigger fires and passes any data in the request along to the workflow. If this operation completes successfully, the trigger unsubscribes from the endpoint, and your workflow continues to the next action.
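To illustrate the callback half of this exchange, the following hypothetical `curl` call shows how a target service might notify the workflow; the URL and payload are placeholders, and the real URL is the `@listCallbackUrl()` value sent in the subscribe request:

```bash
# Hypothetical callback from the target service to the workflow's registered callback URL.
# The URL is a placeholder for the value that @listCallbackUrl() returned during subscribe.
curl -X POST "https://<callback-url-from-listCallbackUrl>" \
  -H "Content-Type: application/json" \
  -d '{"eventType": "approvalCompleted", "status": "approved"}'
```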
## Add an HTTP Webhook action
-This built-in action calls the subscribe endpoint on the target service and registers a callback URL with the target service. Your logic app then pauses and waits for target service to send an `HTTP POST` request to the callback URL. When this event happens, the action passes any data in the request along to the workflow. If the operation completes successfully, the action unsubscribes from the endpoint, and your logic app continues running the remaining workflow.
+This built-in action calls the subscribe endpoint on the target service and registers a callback URL with the target service. Your workflow then pauses and waits for the target service to send an `HTTP POST` request to the callback URL. When this event happens, the action passes any data in the request along to the workflow. If the operation completes successfully, the action unsubscribes from the endpoint, and your workflow continues to the next action.
-1. Sign in to the [Azure portal](https://portal.azure.com). Open your logic app in Logic App Designer.
+This example uses the **HTTP Webhook** trigger as the first step.
- This example uses the HTTP Webhook trigger as the first step.
+### [Standard](#tab/standard)
-1. Under the step where you want to add the HTTP Webhook action, select **New step**.
+1. In the [Azure portal](https://portal.azure.com), open your Standard logic app and workflow in the designer.
- To add an action between steps, move your pointer over the arrow between steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
+1. [Follow these general steps to add the action named **HTTP Webhook** to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=standard#add-action).
-1. In the designer's search box, enter `http webhook` as your filter. From the **Actions** list, select the **HTTP Webhook** action.
+ This example renames the action to **HTTP Webhook action** so that the step has a more descriptive name.
- ![Select HTTP Webhook action](./media/connectors-native-webhook/select-http-webhook-action.png)
+1. Provide the values for the HTTP Webhook action parameters, which are similar to the [HTTP Webhook trigger parameters](../logic-apps/logic-apps-workflow-actions-triggers.md#http-webhook-trigger), that you want to use for the subscribe and unsubscribe calls.
- This example renames the action to "HTTP Webhook action" so that the step has a more descriptive name.
+ | Property | Required | Description |
+ |-|-|-|
+ | **Subscribe - Method** | Yes | The method to use when subscribing to the target endpoint |
+ | **Subscribe - URI** | Yes | The URL to use for subscribing to the target endpoint |
+ | **Subscribe - Body** | No | Any message body to include in the subscribe request. This example includes the callback URL that uniquely identifies the subscriber, which is your logic app, by using the `@listCallbackUrl()` expression to retrieve your logic app's callback URL. |
+ | **Unsubscribe - Method** | No | The method to use when unsubscribing from the target endpoint |
+ | **Unsubscribe - URI** | No | The URL to use for unsubscribing from the target endpoint |
+ | **Unsubscribe - Body** | No | An optional message body to include in the unsubscribe request <br><br>**Note**: This property doesn't support using the `listCallbackUrl()` function. However, the action automatically includes and sends the headers, `x-ms-client-tracking-id` and `x-ms-workflow-operation-name`, which the target service can use to uniquely identify the subscriber. |
-1. Provide the values for the HTTP Webhook action parameters, which are similar to the [HTTP Webhook trigger parameters](../logic-apps/logic-apps-workflow-actions-triggers.md#http-webhook-trigger), that you want to use for the subscribe and unsubscribe calls.
+ > [!NOTE]
+ >
+ > For the **Unsubscribe - Method** and **Unsubscribe - URI** properties, add them
+ > to your action by opening the **Advanced parameters** list.
+
+ For example, the following action includes the methods, URIs, and message bodies to use when performing the subscribe and unsubscribe operations.
+
+ :::image type="content" source="media/connectors-native-webhook/webhook-action-parameters-standard.png" alt-text="Screenshot shows Standard workflow with HTTP Webhook action parameters." lightbox="media/connectors-native-webhook/webhook-action-parameters-standard.png":::
+
+1. To add other action properties, open the **Advanced parameters** list.
+
+ For example, if you need to use authentication, you can add the **Subscribe - Authentication** and **Unsubscribe - Authentication** properties. For more information about authentication types available for HTTP Webhook, see [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-outbound).
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+
+### [Consumption](#tab/consumption)
- In this example, the action includes the methods, URIs, and message bodies to use when performing the subscribe and unsubscribe operations.
+1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app and workflow in the designer.
- ![Enter HTTP Webhook action parameters](./media/connectors-native-webhook/http-webhook-action-parameters.png)
+1. [Follow these general steps to add the action named **HTTP Webhook** to your workflow](../logic-apps/create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
+
+ This example renames the action to **HTTP Webhook action** so that the step has a more descriptive name.
+
+1. Provide the values for the HTTP Webhook action parameters, which are similar to the [HTTP Webhook trigger parameters](../logic-apps/logic-apps-workflow-actions-triggers.md#http-webhook-trigger), that you want to use for the subscribe and unsubscribe calls.
| Property | Required | Description | |-|-|-|
This built-in action calls the subscribe endpoint on the target service and regi
| **Subscribe - Body** | No | Any message body to include in the subscribe request. This example includes the callback URL that uniquely identifies the subscriber, which is your logic app, by using the `@listCallbackUrl()` expression to retrieve your logic app's callback URL. | | **Unsubscribe - Method** | No | The method to use when unsubscribing from the target endpoint | | **Unsubscribe - URI** | No | The URL to use for unsubscribing from the target endpoint |
- | **Unsubscribe - Body** | No | An optional message body to include in the unsubscribe request <p><p>**Note**: This property doesn't support using the `listCallbackUrl()` function. However, the action automatically includes and sends the headers, `x-ms-client-tracking-id` and `x-ms-workflow-operation-name`, which the target service can use to uniquely identify the subscriber. |
- ||||
+ | **Unsubscribe - Body** | No | An optional message body to include in the unsubscribe request <br><br>**Note**: This property doesn't support using the `listCallbackUrl()` function. However, the action automatically includes and sends the headers, `x-ms-client-tracking-id` and `x-ms-workflow-operation-name`, which the target service can use to uniquely identify the subscriber. |
+
+ For example, the following action includes the methods, URIs, and message bodies to use when performing the subscribe and unsubscribe operations.
+
+ :::image type="content" source="media/connectors-native-webhook/webhook-action-parameters-consumption.png" alt-text="Screenshot shows Consumption workflow with HTTP Webhook action parameters." lightbox="media/connectors-native-webhook/webhook-action-parameters-consumption.png":::
1. To add other action properties, open the **Add new parameter** list.
- ![Add more action properties](./media/connectors-native-webhook/http-webhook-action-add-properties.png)
+ :::image type="content" source="media/connectors-native-webhook/webhook-action-add-properties-consumption.png" alt-text="Screenshot shows Consumption workflow with HTTP Webhook action and more properties." lightbox="media/connectors-native-webhook/webhook-action-add-properties-consumption.png":::
For example, if you need to use authentication, you can add the **Subscribe - Authentication** and **Unsubscribe - Authentication** properties. For more information about authentication types available for HTTP Webhook, see [Add authentication to outbound calls](../logic-apps/logic-apps-securing-a-logic-app.md#add-authentication-outbound).
-1. When you're finished, remember to save your logic app. On the designer toolbar, select **Save**.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
++
- Now, when this action runs, your logic app calls the subscribe endpoint on the target service and registers the callback URL. The logic app then pauses the workflow and waits for the target service to send an `HTTP POST` request to the callback URL. When this event happens, the action passes any data in the request along to the workflow. If the operation completes successfully, the action unsubscribes from the endpoint, and your logic app continues running the remaining workflow.
+When this action runs, your workflow calls the subscribe endpoint on the target service and registers the callback URL. The workflow then pauses and waits for the target service to send an `HTTP POST` request to the callback URL. When this event happens, the action passes any data in the request along to the workflow. If the operation completes successfully, the action unsubscribes from the endpoint, and your workflow continues to the next action.
## Trigger and action outputs
Here is more information about the outputs from an HTTP Webhook trigger or actio
| headers | object | The headers from the request | | body | object | The object with the body content from the request | | status code | int | The status code from the request |
-||||
| Status code | Description | |-|-|
Here is more information about the outputs from an HTTP Webhook trigger or actio
| 403 | Forbidden | | 404 | Not Found | | 500 | Internal server error. Unknown error occurred. |
-|||
-
-## Connector reference
-
-For more information about trigger and action parameters, which are similar to each other, see [HTTP Webhook parameters](../logic-apps/logic-apps-workflow-actions-triggers.md#http-webhook-trigger).
## Next steps
container-registry Container Registry Soft Delete Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-soft-delete-policy.md
Title: Enable soft delete policy
-description: Learn how to enable a soft delete policy in your Azure Container Registry for recovering accidentally deleted artifacts for a set retention period.
-- Previously updated : 10/31/2023
+ Title: "Recover deleted artifacts with soft delete policy in Azure Container Registry (Preview)"
+description: Learn how to enable the soft delete policy in Azure Container Registry to manage and recover the accidentally deleted artifacts as soft deleted artifacts with a set retention period.
+ + Last updated : 01/22/2024+
-# Enable soft delete policy in Azure Container Registry (Preview)
+# Recover deleted artifacts with soft delete policy in Azure Container Registry (Preview)
-Azure Container Registry (ACR) allows you to enable the *soft delete policy* to recover any accidentally deleted artifacts for a set retention period.
+Azure Container Registry (ACR) allows you to enable the *soft delete policy* to recover any accidentally deleted artifacts for a set retention period.
:::image type="content" source="./media/container-registry-delete/02-soft-delete.png" alt-text="Diagram of soft delete artifacts lifecycle.":::
-This feature is available in all the service tiers (also known as SKUs). For information about registry service tiers, see [Azure Container Registry service tiers](container-registry-skus.md).
+## Aspects of soft delete policy
-> [!NOTE]
->The soft deleted artifacts are billed as per active sku pricing for storage.
+The soft delete policy can be enabled or disabled at any time. Once you enable the soft delete policy in ACR, deleted artifacts are managed as soft deleted artifacts with a set retention period, which gives you the ability to list, filter, and restore them.
-The article gives you an overview of the soft delete policy and walks you through the step by step process to enable the soft delete policy using Azure CLI and Azure portal.
+### Retention period
-You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+The default retention period for soft deleted artifacts is seven days, but it's possible to set the retention period value between one and 90 days. You can set and update the retention policy value. The soft deleted artifacts expire once the retention period is complete.
-## Prerequisites
+### Autopurge
-* The user will require following permissions (at registry level) to perform soft delete operations:
+The autopurge runs every 24 hours and always considers the current value of retention days before permanently deleting the soft deleted artifacts. For example, if you change the retention value from seven to 14 days five days after soft deleting an artifact, the artifact expires only 14 days after the initial soft delete.
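As a hedged Azure CLI sketch of that scenario, updating the retention value on an existing registry might look like the following; check `az acr config soft-delete update --help` for the exact parameters in your CLI version:

```azurecli
# Sketch: raise the soft delete retention period to 14 days on the MyRegistry registry.
az acr config soft-delete update -r MyRegistry --days 14 --status enabled
```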
- | Permission | Description |
- |||
- | Microsoft.ContainerRegistry/registries/deleted/read | List soft-deleted artifacts |
- | Microsoft.ContainerRegistry/registries/deleted/restore/action | Restore soft-deleted artifact |
-## About soft delete policy
-The soft delete policy can be enabled/disabled at your convenience.
-Once you enable the soft delete policy, ACR manages the deleted artifacts as the soft deleted artifacts with a set retention period. Thereby you have ability to list, filter, and restore the soft deleted artifacts. Once the retention period is complete, all the soft deleted artifacts are auto-purged.
-## Retention period
-The default retention period is seven days. It's possible to set the retention period value between one to 90 days. The user can set, update and change the retention policy value. The soft deleted artifacts will expire once the retention period is complete.
+## Availability and pricing information
-## Auto-purge
+This feature is available in all the service tiers (also known as SKUs). For information about registry service tiers, see [Azure Container Registry service tiers](container-registry-skus.md).
-The auto-purge runs every 24 hours. The auto-purge always considers the current value of `retention days` before permanently deleting the soft deleted artifacts.
-For example, after five days of soft deleting the artifact, if the user changes the value of retention days from seven to 14 days, the artifact will only expire after 14 days from the initial soft delete.
+> [!NOTE]
+>The soft deleted artifacts are billed according to the active SKU pricing for storage.
## Preview limitations
+> [!IMPORTANT]
+> The soft delete policy is currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+ * ACR currently doesn't support manually purging soft deleted artifacts. * The soft delete policy doesn't support a geo-replicated registry. * ACR doesn't allow enabling both the retention policy and the soft delete policy. See [retention policy for untagged manifests.](container-registry-retention-policy.md)
-## Known issues
->* Enabling the soft delete policy with Availability Zones through ARM template leaves the registry stuck in the `creation` state. If you see this error, please delete and recreate the registry disabling Geo-replication on the registry.
->* Accessing the manage deleted artifacts blade after disabling the soft delete policy will throw an error message with 405 status.
->* The customers with restrictions on permissions to restore, will see an issue as File not found.
+## Prerequisites
+
+* The user requires the following permissions (at registry level) to perform soft delete operations:
+
+| Permission | Description |
+| - | -- |
+| Microsoft.ContainerRegistry/registries/deleted/read | List soft-deleted artifacts |
+| Microsoft.ContainerRegistry/registries/deleted/restore/action | Restore soft-deleted artifact |
+
+* You can use the Azure Cloud Shell or a local installation of the Azure CLI to run the command examples in this article. If you'd like to use it locally, version 2.0.74 or later is required. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+
+* Sign in to the [Azure portal](https://ms.portal.azure.com/).
+ ## Enable soft delete policy for registry - CLI 1. Update soft delete policy for a given `MyRegistry` ACR with a retention period set between 1 to 90 days.
For example, after five days of soft deleting the artifact, if the user changes
az acr config soft-delete show -r MyRegistry ```
-### List the soft-delete artifacts- CLI
+### List the soft deleted artifacts- CLI
The `az acr repository list-deleted` commands enable fetching and listing of the soft deleted repositories. For more information use `--help`.
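For example, a minimal sketch of listing the soft deleted repositories in a registry:

```azurecli
# Sketch: list soft deleted repositories in the MyRegistry registry as a table.
az acr repository list-deleted -r MyRegistry --output table
```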
The `az acr manifest list-deleted-tags` commands enable fetching and listing of
az acr manifest list-deleted-tags -r MyRegistry -n hello-world:latest ```
-### Restore the soft delete artifacts - CLI
+### Restore the soft deleted artifacts - CLI
The `az acr manifest restore` commands restore a single image by tag and digest.
The `az acr manifest restore` commands restore a single image by tag and digest.
az acr manifest restore -r MyRegistry -n hello-world:latest ```
-Force restore will overwrite the existing tag with the same name in the repository. If the soft delete policy is enabled during force restore. The overwritten tag will be soft deleted. You can force restore with specific arguments `--force, -f`.
+Force restore overwrites the existing tag with the same name in the repository. If the soft delete policy is enabled during force restore, the overwritten tag is soft deleted. You can force restore with the `--force, -f` argument.
3. Force restore the image of a `hello-world` repository by tag `latest`and digest `sha256:abc123` in a given `MyRegistry` ACR.
Force restore will overwrite the existing tag with the same name in the reposito
``` > [!IMPORTANT]
->* Restoring a [manifest list](push-multi-architecture-images.md#manifest-list) won't recursively restore any underlying soft deleted manifests.
->* If you're restoring soft deleted [ORAS artifacts](container-registry-oras-artifacts.md), then restoring a subject doesn't recursively restore the referrer chain. Also, the subject has to be restored first, only then a referrer manifest is allowed to restore. Otherwise it throws an error.
+> Restoring a [manifest list](push-multi-architecture-images.md#manifest-list) won't recursively restore any underlying soft deleted manifests.
+> If you're restoring soft deleted [ORAS artifacts](container-registry-oras-artifacts.md), restoring a subject doesn't recursively restore the referrer chain. Also, the subject must be restored first; only then can a referrer manifest be restored. Otherwise, an error is thrown.
## Enable soft delete policy for registry - Portal
You can also enable a registry's soft delete policy in the [Azure portal](https:
4. Select the checkbox to **Enable Soft Delete**.
-5. Select the number of days between `0` and `90` days to retain the soft deleted artifacts.
+5. Select the number of days, between `0` and `90`, to retain the soft deleted artifacts.
6. Select **Save** to save your changes.
You can also enable a registry's soft delete policy in the [Azure portal](https:
1. Navigate to your Azure Container Registry. 2. In the **Menu** section, Select **Services**, and Select **Repositories**. 3. In the **Repositories**, Select your preferred **Repository**.
-4. Click on the **Manage deleted artifacts** to see all the soft deleted artifacts.
+4. Select **Manage deleted artifacts** to see all the soft deleted artifacts.
> [!NOTE] > Once you enable the soft delete policy and perform actions such as untagging a manifest or deleting an artifact, you can find these tags and artifacts in **Manage deleted artifacts** before the number of retention days expires.
You can also enable a registry's soft delete policy in the [Azure portal](https:
-5. Filter the deleted artifact you have to restore
-6. Select the artifact, and Click on the **Restore** in the right column.
+5. Filter for the deleted artifact that you want to restore.
+6. Select the artifact, and then select **Restore** in the right column.
7. A **Restore Artifact** window pops up.
You can also enable a registry's soft delete policy in the [Azure portal](https:
8. Select the tag to restore. Here, you can also choose to recover any additional tags.
-9. Click on **Restore**.
+9. Select **Restore**.
You can also enable a registry's soft delete policy in the [Azure portal](https:
1. Navigate to your Azure Container Registry. 2. In the **Menu** section, Select **Services**, 3. In the **Services** tab, Select **Repositories**.
-4. In the **Repositories** tab, Click on **Manage Deleted Repositories**.
+4. In the **Repositories** tab, select **Manage Deleted Repositories**.
You can also enable a registry's soft delete policy in the [Azure portal](https:
6. Select the deleted repository, filter the deleted artifact from on the **Manage deleted artifacts**.
-7. Select the artifact, and Click on the **Restore** in the right column.
+7. Select the artifact, and then select **Restore** in the right column.
8. A **Restore Artifact** window pops up.
You can also enable a registry's soft delete policy in the [Azure portal](https:
-9. Select the tag to restore, here you have an option to choose, and recover any additional tags.
-10. Click on **Restore**.
+9. Select the tag to restore, here you have an option to choose, and recover any other tags.
+10. Select **Restore**.
You can also enable a registry's soft delete policy in the [Azure portal](https:
> [!IMPORTANT]
->* Importing a soft deleted image at both source and target resources is blocked.
->* Pushing an image to the soft deleted repository will restore the soft deleted repository.
->* Pushing an image that shares a same manifest digest with the soft deleted image is not allowed. Instead restore the soft deleted image.
+> Importing a soft deleted image at both source and target resources is blocked.
+> Pushing an image to the soft deleted repository will restore the soft deleted repository.
+> Pushing an image that shares the same manifest digest with the soft deleted image is not allowed. Instead, restore the soft deleted image.
## Next steps
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
You can choose to restore any combination of provisioned throughput containers,
The following configurations aren't restored after the point-in-time recovery:
+* A subset of containers under a shared throughput database cannot be restored. The entire database can be restored as a whole.
* Firewall, VNET, Data plane RBAC or private endpoint settings. * All the Regions from the source account. * Stored procedures, triggers, UDFs.
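For reference, a point-in-time restore to a new account might look like the following hedged Azure CLI sketch; the account names, region, and timestamp are placeholders, and the exact parameters can differ by CLI version:

```azurecli
# Sketch: restore a continuous-backup account to a new account at a specific point in time.
az cosmosdb restore \
  --resource-group MyResourceGroup \
  --account-name my-source-account \
  --target-database-account-name my-restored-account \
  --restore-timestamp "2024-02-01T12:00:00Z" \
  --location westus2
```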
cosmos-db Monitor Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/monitor-resource-logs.md
Here, we walk through the process of creating diagnostic settings for your accou
| Category | API | Definition | Key Properties | | | | | |
- | **DataPlaneRequests** | All APIs | Logs back-end requests as data plane operations, which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
+ | **DataPlaneRequests** | Recommended for API for NoSQL | Logs back-end requests as data plane operations, which are requests executed to create, update, delete or retrieve data within the account. | `Requestcharge`, `statusCode`, `clientIPaddress`, `partitionID`, `resourceTokenPermissionId` `resourceTokenPermissionMode` |
| **MongoRequests** | API for MongoDB | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for MongoDB. When you enable this category, make sure to disable DataPlaneRequests. | `Requestcharge`, `opCode`, `retryCount`, `piiCommandText` |
- | **CassandraRequests** | API for Apache Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
- | **GremlinRequests** | API for Apache Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
- | **QueryRuntimeStatistics** | API for NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
+ | **CassandraRequests** | API for Apache Cassandra | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Cassandra. | `operationName`, `requestCharge`, `piiCommandText` |
+ | **GremlinRequests** | API for Apache Gremlin | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Gremlin. | `operationName`, `requestCharge`, `piiCommandText`, `retriedDueToRateLimiting` |
+ | **QueryRuntimeStatistics** | API for NoSQL | This table details query operations executed against an API for NoSQL account. By default, the query text and its parameters are obfuscated to avoid logging personal data with full text query logging available by request. | `databasename`, `partitionkeyrangeid`, `querytext` |
| **PartitionKeyStatistics** | All APIs | Logs the statistics of logical partition keys by representing the estimated storage size (KB) of the partition keys. This table is useful when troubleshooting storage skews. This PartitionKeyStatistics log is only emitted if the following conditions are true: 1. At least 1% of the documents in the physical partition have same logical partition key. 2. Out of all the keys in the physical partition, the PartitionKeyStatistics log captures the top three keys with largest storage size. </li></ul> If the previous conditions aren't met, the partition key statistics data isn't available. It's okay if the above conditions aren't met for your account, which typically indicates you have no logical partition storage skew. **Note**: The estimated size of the partition keys is calculated using a sampling approach that assumes the documents in the physical partition are roughly the same size. If the document sizes aren't uniform in the physical partition, the estimated partition key size might not be accurate. | `subscriptionId`, `regionName`, `partitionKey`, `sizeKB` | | **PartitionKeyRUConsumption** | API for NoSQL or API for Apache Gremlin | Logs the aggregated per-second RU/s consumption of partition keys. This table is useful for troubleshooting hot partitions. Currently, Azure Cosmos DB reports partition keys for API for NoSQL accounts only and for point read/write, query, and stored procedure operations. | `subscriptionId`, `regionName`, `partitionKey`, `requestCharge`, `partitionKeyRangeId` | | **ControlPlaneRequests** | All APIs | Logs details on control plane operations, which include, creating an account, adding or removing a region, updating account replication settings etc. | `operationName`, `httpstatusCode`, `httpMethod`, `region` |
- | **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table. When you enable this category, make sure to disable DataPlaneRequests. | `operationName`, `requestCharge`, `piiCommandText` |
+ | **TableApiRequests** | API for Table | Logs user-initiated requests from the front end to serve requests to Azure Cosmos DB for Table.| `operationName`, `requestCharge`, `piiCommandText` |
1. After you select your **Categories details**, send your logs to your preferred destination. If you're sending logs to a **Log Analytics Workspace**, make sure to select **Resource specific** as the Destination table. A CLI sketch of the same configuration follows this step.
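Outside the portal, the same diagnostic setting might be scripted with the Azure CLI as in the following hedged sketch; the resource IDs are placeholders, and the `--export-to-resource-specific` flag is assumed to be available in your CLI version:

```azurecli
# Sketch: send DataPlaneRequests and QueryRuntimeStatistics logs to a Log Analytics
# workspace using resource-specific destination tables. Resource IDs are placeholders.
az monitor diagnostic-settings create \
  --name cosmos-diagnostics \
  --resource "<cosmos-db-account-resource-id>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --export-to-resource-specific true \
  --logs '[{"category":"DataPlaneRequests","enabled":true},{"category":"QueryRuntimeStatistics","enabled":true}]'
```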
cost-management-billing Quick Create Budget Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/quick-create-budget-template.md
description: Quickstart showing how to Create a budget with an Azure Resource Ma
-tags: azure-resource-manager
cost-management-billing Tutorial Acm Create Budgets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-create-budgets.md
Title: Tutorial - Create and manage budgets
description: This tutorial helps you plan and account for the costs of Azure services that you consume. Previously updated : 06/07/2023 Last updated : 02/09/2024 -+
Budgets in Cost Management help you plan for and drive organizational accountability. They help you proactively inform others about their spending to manage costs and monitor how spending progresses over time.
-You can configure alerts based on your actual cost or forecasted cost to ensure that your spending is within your organizational spending limit. Notifications are triggered when the budget thresholds you've created are exceeded. Resources are not affected, and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs.
+You can configure alerts based on your actual cost or forecasted cost to ensure that your spending is within your organizational spending limit. Notifications are triggered when the budget thresholds are exceeded. Resources aren't affected, and your consumption isn't stopped. You can use budgets to compare and track spending as you analyze costs.
Cost and usage data is typically available within 8-24 hours and budgets are evaluated against these costs every 24 hours. Be sure to get familiar with [Cost and usage data updates](./understand-cost-mgt-data.md#cost-and-usage-data-updates-and-retention) specifics. When a budget threshold is met, email notifications are normally sent within an hour of the evaluation.
-Budgets reset automatically at the end of a period (monthly, quarterly, or annually) for the same budget amount when you select an expiration date in the future. Because they reset with the same budget amount, you need to create separate budgets when budgeted currency amounts differ for future periods. When a budget expires, it's automatically deleted.
+Budgets reset automatically at the end of a period (monthly, quarterly, or annually) for the same budget amount when you select an expiration date in the future. Because they reset with the same budget amount, you need to create separate budgets when budgeted currency amounts differ for future periods. When a budget expires, it automatically gets deleted.
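As a hedged sketch, a monthly budget at subscription scope might be created with the Azure CLI as follows; the name, amount, and dates are placeholders, and `az consumption budget create --help` shows the exact parameters available in your CLI version:

```azurecli
# Sketch: create a monthly cost budget that resets each month until the end date.
az consumption budget create \
  --budget-name MonthlyTeamBudget \
  --amount 1000 \
  --category cost \
  --time-grain monthly \
  --start-date 2024-03-01 \
  --end-date 2025-03-01
```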
The examples in this tutorial walk you through creating and editing a budget for an Azure Enterprise Agreement (EA) subscription.
Budgets are supported for the following types of Azure account types and scopes:
- Individual agreements
  - Billing account
- Microsoft Customer Agreement scopes
- - Billing account
+ - Billing account - Budget evaluation only supports USD currency, not the billing currency. An exception is that customers in the China 21V cloud have their budgets evaluated in CNY currency.
  - Billing profile
  - Invoice section
  - Customer
To view budgets, you need at least read access for your Azure account.
If you have a new subscription, you can't immediately create a budget or use other Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
-For Azure EA subscriptions, you must have read access to view budgets. To create and manage budgets, you must have contributor permission.
+You must have read access to view budgets for Azure EA subscriptions. To create and manage budgets, you must have contributor permission.
The following Azure permissions, or scopes, are supported per subscription for budgets by user and group.
Select **Add**.
In the **Create budget** window, make sure that the scope shown is correct. Choose any filters that you want to add. Filters allow you to create budgets on specific costs, such as resource groups in a subscription or a service like virtual machines. For more information about the common filter properties that you can use in budgets and cost analysis, see [Group and filter properties](group-filter.md#group-and-filter-properties).
-After you identify your scope and filters, type a budget name. Then, choose a monthly, quarterly, or annual budget reset period. The reset period determines the time window that's analyzed by the budget. The cost evaluated by the budget starts at zero at the beginning of each new period. When you create a quarterly budget, it works in the same way as a monthly budget. The difference is that the budget amount for the quarter is evenly divided among the three months of the quarter. An annual budget amount is evenly divided among all 12 months of the calendar year.
+After you identify your scope and filters, type a budget name. Then, choose a monthly, quarterly, or annual budget reset period. The reset period determines the time window that gets analyzed by the budget. The cost evaluated by the budget starts at zero at the beginning of each new period. When you create a quarterly budget, it works in the same way as a monthly budget. The difference is that the budget amount for the quarter is evenly divided among the three months of the quarter. An annual budget amount is evenly divided among all 12 months of the calendar year.
-If you have a Pay-As-You-Go, MSDN, or Visual Studio subscription, your invoice billing period might not align to the calendar month. For those subscription types and resource groups, you can create a budget that's aligned to your invoice period or to calendar months. To create a budget aligned to your invoice period, select a reset period of **Billing month**, **Billing quarter**, or **Billing year**. To create a budget aligned to the calendar month, select a reset period of **Monthly**, **Quarterly**, or **Annually**.
+If you have a pay-as-you-go, MSDN, or Visual Studio subscription, your invoice billing period might not align to the calendar month. For those subscription types and resource groups, you can create a budget aligned to your invoice period or to calendar months. To create a budget aligned to your invoice period, select a reset period of **Billing month**, **Billing quarter**, or **Billing year**. To create a budget aligned to the calendar month, select a reset period of **Monthly**, **Quarterly**, or **Annually**.
Next, identify the expiration date when the budget becomes invalid and stops evaluating your costs.
After you configure the budget amount, select **Next** to configure budget alert
## Configure actual costs budget alerts
-Budgets require at least one cost threshold (% of budget) and a corresponding email address. You can optionally include up to five thresholds and five email addresses in a single budget. When a budget threshold is met, email notifications are normally sent within an hour of the evaluation. Actual costs budget alerts are generated for the actual cost you've accrued in relation to the budget thresholds configured.
+Budgets require at least one cost threshold (% of budget) and a corresponding email address. You can optionally include up to five thresholds and five email addresses in a single budget. When a budget threshold is met, email notifications are normally sent within an hour of the evaluation. Actual costs budget alerts are generated for the actual cost accrued in relation to the budget thresholds configured.
## Configure forecasted budget alerts
If you want to receive emails, add azure-noreply@microsoft.com to your approved
In the following example, an email alert gets generated when 90% of the budget is reached. If you create a budget with the Budgets API, you can also assign roles to people to receive alerts. Assigning roles to people isn't supported in the Azure portal. For more about the Budgets API, see [Budgets API](/rest/api/consumption/budgets). If you want to have an email alert sent in a different language, see [Supported locales for budget alert emails](../automate/automate-budget-creation.md#supported-locales-for-budget-alert-emails).
-Alert limits support a range of 0.01% to 1000% of the budget threshold that you've provided.
+Alert limits support a range of 0.01% to 1000% of the budget threshold.
:::image type="content" source="./media/tutorial-acm-create-budgets/budget-set-alert.png" alt-text="Screenshot showing alert conditions." lightbox="./media/tutorial-acm-create-budgets/budget-set-alert.png" :::
-After you create a budget, it's shown in cost analysis. Viewing your budget against your spending trend is one of the first steps when you start to [analyze your costs and spending](./quick-acm-cost-analysis.md).
+After you create a budget, it appears in cost analysis. Viewing your budget against your spending trend is one of the first steps when you start to [analyze your costs and spending](./quick-acm-cost-analysis.md).
:::image type="content" source="./media/tutorial-acm-create-budgets/cost-analysis.png" alt-text="Screenshot showing an example budget with spending shown in cost analysis." lightbox="./media/tutorial-acm-create-budgets/cost-analysis.png" :::
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
description: This article shows you how you can create and manage exported Cost Management data so that you can use it in external systems. Previously updated : 01/24/2024 Last updated : 02/09/2024
For Azure Storage accounts:
- Your Azure storage account must be configured for blob or file storage.
- Don't configure exports to a storage container when configured as a destination in an [object replication rule](../../storage/blobs/object-replication-overview.md#object-replication-policies-and-rules).
- To export to storage accounts with configured firewalls, you need other privileges on the storage account. The other privileges are only required during export creation or modification. They are:
- - Owner role on the storage account.
+ - Owner role on the storage account.
Or
- - Any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions.
+ - Any custom role with `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/permissions/read` permissions.
Additionally, ensure that you enable [Allow trusted Azure service access](../../storage/common/storage-network-security.md#grant-access-to-trusted-azure-services) to the storage account when you configure the firewall. If you want to use the [Exports REST API](/rest/api/cost-management/exports) to generate exports to a storage account located behind a firewall, use API version 2023-08-01 or later. All newer API versions continue to support exports behind the firewall.
-- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
+- The storage account configuration must have the **Permitted scope for copy operations (preview)** option set to **From any storage account**.
:::image type="content" source="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" alt-text="Screenshot showing From any storage account option set." lightbox="./media/tutorial-export-acm-data/permitted-scope-copy-operations.png" ::: If you have a new subscription, you can't immediately use Cost Management features. It might take up to 48 hours before you can use all Cost Management features.
Remove-AzCostManagementExport -Name DemoExport -Scope 'subscriptions/00000000-00
If you need to export to a storage account behind the firewall for security and compliance requirements, ensure that you have all [prerequisites](#prerequisites) met.
+> [!NOTE]
+> If you have an existing scheduled export and you change your storage network configuration, you must update the export and save it to reflect the changes.
+ Enable **Allow trusted Azure services access** on the storage account. You can turn that on while configuring the firewall of the storage account, from the Networking page. Here's a screenshot showing the page.
:::image type="content" source="./media/tutorial-export-acm-data/allow-trusted-access.png" alt-text="Screenshot showing Allow Azure services on the trusted services list exception option." lightbox="./media/tutorial-export-acm-data/allow-trusted-access.png" :::
Add exports to the list of trusted services. For more information, see [Trusted
### Export schedule
-Scheduled exports get affected by the time and day of week of when you initially create the export. When you create a scheduled export, the export runs at the same frequency for each export that runs later. For example, for a daily export of month-to-date costs export set at a daily frequency, the export runs during once each UTC day. Similarly for a weekly export, the export runs every week on the same UTC day as it is scheduled. Individual export runs can occur at different times throughout the day. So, avoid taking a firm dependency on the exact time of the export runs. Run timing depends on the active load present in Azure during a given UTC day. When an export run begins, your data should be available within 4 hours.
+Scheduled exports are affected by the time and day of week when you initially create the export. When you create a scheduled export, the export runs at the same frequency for each export that runs later. For example, a daily export of month-to-date costs runs once each UTC day. Similarly, a weekly export runs every week on the same UTC day as it's scheduled. Individual export runs can occur at different times throughout the day. So, avoid taking a firm dependency on the exact time of the export runs. Run timing depends on the active load present in Azure during a given UTC day. When an export run begins, your data should be available within 4 hours.
Exports are scheduled using Coordinated Universal Time (UTC). The Exports API always uses and displays UTC.
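As a rough, non-authoritative illustration of the Exports REST API mentioned earlier, the following sketch creates a daily month-to-date export with API version 2023-08-01. The scope, storage account resource ID, container name, and request body schema are assumptions drawn from the general Exports API pattern; confirm them against the API reference before use.

```python
# Hypothetical sketch: create a daily month-to-date cost export with the
# Cost Management Exports REST API (api-version 2023-08-01, as noted above).
# The scope, storage resource ID, and body schema are assumptions to verify.
import requests
from azure.identity import DefaultAzureCredential

scope = "subscriptions/<subscription-id>"
export_name = "DemoExport"
storage_account_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Storage/storageAccounts/<storage-account>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/{scope}/providers/Microsoft.CostManagement/"
    f"exports/{export_name}?api-version=2023-08-01"
)
body = {
    "properties": {
        "schedule": {
            "status": "Active",
            "recurrence": "Daily",
            "recurrencePeriod": {"from": "2024-03-01T00:00:00Z", "to": "2025-03-01T00:00:00Z"},
        },
        "format": "Csv",
        "deliveryInfo": {
            "destination": {
                "resourceId": storage_account_id,
                "container": "exports",
                "rootFolderPath": "costdata",
            }
        },
        "definition": {
            "type": "ActualCost",
            "timeframe": "MonthToDate",
            "dataSet": {"granularity": "Daily"},
        },
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```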
There are two runs per day for the first five days of each month after you creat
## Access exported data from other systems
-One of the purposes of exporting your Cost Management data is to access the data from external systems. You might use a dashboard system or other financial system. Such systems vary widely so showing an example wouldn't be practical. However, you can get started with accessing your data from your applications at [Introduction to Azure Storage](../../storage/common/storage-introduction.md).
+One of the purposes of exporting your Cost Management data is to access the data from external systems. You might use a dashboard system or other financial system. Such systems vary widely so showing an example wouldn't be practical. However, you can get started with accessing your data from your applications at [Introduction to Azure Storage](../../storage/common/storage-introduction.md).
## Exports FAQ
For new versions of Excel:
1. Open Excel.
1. Select the **Data** tab at the top.
-1. Select the **From Text/CSV** option.
+1. Select the **From Text/CSV** option.
:::image type="content" source="./media/tutorial-export-acm-data/new-excel-from-text.png" alt-text="Screenshot showing the Excel From Text/CSV option." lightbox="./media/tutorial-export-acm-data/new-excel-from-text.png" ::: 1. Select the CSV file that you want to import.
-1. In the next box, set **File origin** to **65001: Unicode (UTF-8)**.
+1. In the next box, set **File origin** to **65001: Unicode (UTF-8)**.
:::image type="content" source="./media/tutorial-export-acm-data/new-excel-file-origin.png" alt-text="Screenshot showing the Excel File origin option." lightbox="./media/tutorial-export-acm-data/new-excel-file-origin.png" ::: 1. Select **Load**.
cost-management-billing Account Admin Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/account-admin-tasks.md
Title: Account Administrator tasks in the Azure portal
description: Describes how to perform payment operations in Azure portal
-tags: billing
cost-management-billing Add Change Subscription Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/add-change-subscription-administrator.md
Title: Add or change Azure subscription administrators
description: Describes how to add or change an Azure subscription administrator using Azure role-based access control (Azure RBAC).
-tags: billing
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
Title: Assign Enterprise Agreement roles to service principals
description: This article helps you assign EA roles to service principals by using PowerShell and REST APIs.
-tags: billing
cost-management-billing Avoid Charges Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/avoid-charges-free-account.md
Title: Avoid charges with your Azure free account
description: Understand why you see charges for your Azure free account. Learn ways to avoid these charges.
-tags: billing
cost-management-billing Azurestudents Subscription Disabled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/azurestudents-subscription-disabled.md
Title: Reactivate disabled Azure for Students subscription
description: Explains why your Azure for Students subscription is disabled and how to reactivate it.
-tags: billing
# Why is my Azure for Students subscription disabled and how do I reactivate it?
-Your Azure for Students subscription might get disabled because you've used all of your credit, your credit has expired, or you've accidentally canceled your subscription. See what issue applies to you and learn how you can get your subscription reactivated.
+Your Azure for Students subscription might get disabled because you used all of your credit. It might also get disabled if your credit expired or you accidentally canceled your subscription. See what issue applies to you and learn how you can get your subscription reactivated.
-## You've used all of your credit
+## You used all of your credit
-Azure for Students account gives you $100 in credit and a limited quantity of free services for 12 months. Any usage beyond the free services and quantities is deducted from your credit. Once your credit runs out, Azure disables your services and subscription. To continue using Azure services, you must upgrade your subscription to a pay-as-you-go subscription by contacting [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). After you upgrade, your subscription still has access to free services for 12 months from your sign-up date. You only get charged for usage beyond the free services and quantities.
+The Azure for Students account gives you $100 in credit and a limited quantity of free services for 12 months. Any usage beyond the free services and quantities is deducted from your credit. Once your credit runs out, Azure disables your services and subscription. To continue using Azure services, you must upgrade your subscription to a pay-as-you-go subscription by contacting [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). After upgrade, your subscription still has access to free services for 12 months from your sign-up date. You only get charged for usage beyond the free services and quantities.
-You can check your remaining credit on the [Microsoft Azure Sponsorships portal](https://www.microsoftazuresponsorships.com/balance)
+You can check your remaining credit on the [Microsoft Azure Sponsorships portal](https://www.microsoftazuresponsorships.com/balance).
1. Sign in using your Azure for Students account credentials.
2. The balance page gives information about used and remaining credit. You can find your credit expiration date below the credit chart.
The table contains the following columns:
* **Service Resource:** Unit of measurement for the service being consumed.
* **Spend:** Amount of credit in USD($) spent on the service.
-## Your credit has expired
+## Your credit expired
-Your Azure for Students credit expires at the end of 12 months. Once your credit expires, Azure disables your subscription. To continue using Azure services, you must upgrade your subscription to a Pay-As-You-Go subscription by contacting [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). After you upgrade, Azure charges you pay-as-you-go rates for any services you're using.
+Your Azure for Students credit expires at the end of 12 months. Once your credit expires, Azure disables your subscription. To continue using Azure services, you must upgrade your subscription to a pay-as-you-go subscription by contacting [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). After upgrade, Azure charges you pay-as-you-go rates for any services you're using.
-## You've accidentally canceled your subscription
+## You accidentally canceled your subscription
-If you've accidentally canceled your Azure for Students subscription, you can reactivate it by contacting [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). Once you reactivate, you still have access to the remaining credit and free services for 12 months from your sign-up date.
+If you accidentally canceled your Azure for Students subscription, you can reactivate it by contacting [Azure support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade). Once you reactivate, you still have access to the remaining credit and free services for 12 months from your sign-up date.
## Need help? Contact us.
cost-management-billing Billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-subscription-transfer.md
description: Describes how to transfer billing ownership of an MOSP Azure subscr
keywords: transfer azure subscription, azure transfer subscription, move azure subscription to another account,azure change subscription owner, transfer azure subscription to another account, azure transfer billing
-tags: billing,top-support-issue
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
Title: Cancel your Azure subscription
description: Describes how to cancel your Azure subscription, like the Free Trial subscription
-tags: billing
cost-management-billing Change Azure Account Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-azure-account-profile.md
Title: Change contact information for an Azure billing account
description: Describes how to change the contact information of your Azure billing account
-tags: billing
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
Title: Add, update, or delete a payment method
description: This article describes how to add, update, or delete a payment method used to pay for an Azure subscription.
-tags: billing
cost-management-billing Check Free Service Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/check-free-service-usage.md
Title: Monitor and track Azure free service usage
description: Learn how to check free service usage in the Azure portal. There's no charge for services included in a free account unless you go over the service limits.
-tags: billing
cost-management-billing Cost Management Automation Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cost-management-automation-scenarios.md
Title: Automation scenarios for Azure billing and cost management
description: Learn how common billing and cost management scenarios are mapped to different APIs.
-tags: billing
cost-management-billing Cost Management Budget Scenario https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cost-management-budget-scenario.md
Title: Azure billing and cost management budget scenario
-description: Learn how to use Azure automation to shut down VMs based on specific budget thresholds.
+description: Learn how to use Azure Automation to shut down VMs based on specific budget thresholds.
-tags: billing
Cost control is a critical component to maximizing the value of your investment
Budgets are commonly used as part of cost control. Budgets can be scoped in Azure. For instance, you could narrow your budget view based on subscription, resource groups, or a collection of resources. In addition to using the budgets API to notify you via email when a budget threshold is reached, you can use [Azure Monitor action groups](../../azure-monitor/alerts/action-groups.md) to trigger an orchestrated set of actions resulting from a budget event.
-A common budgets scenario for a customer running a non-critical workload could occur when they want to manage against a budget and also get to a predictable cost when looking at the monthly invoice. This scenario requires some cost-based orchestration of resources that are part of the Azure environment. In this scenario, a monthly budget of $1000 for the subscription is set. Also, notification thresholds are set to trigger a few orchestrations. This scenario starts with an 80% cost threshold, which will stop all VMs in the resource group **Optional**. Then, at the 100% cost threshold, all VM instances will be stopped.
+A common budgets scenario for a customer running a noncritical workload could occur when they want to manage against a budget and also get to a predictable cost when looking at the monthly invoice. This scenario requires some cost-based orchestration of resources that are part of the Azure environment. In this scenario, a monthly budget of $1,000 for the subscription is set. Also, notification thresholds are set to trigger a few orchestrations. This scenario starts with an 80% cost threshold, which stops all virtual machines (VMs) in the resource group **Optional**. Then, at the 100% cost threshold, all VM instances are stopped.
To configure this scenario, you'll complete the following actions by using the steps provided in each section of this tutorial.
These actions included in this tutorial allow you to:
- Create an Azure Automation Runbook to stop VMs by using webhooks.
- Create an Azure Logic App to be triggered based on the budget threshold value and call the runbook with the right parameters.
-- Create an Azure Monitor Action Group that will be configured to trigger the Azure Logic App when the budget threshold is met.
+- Create an Azure Monitor Action Group that is configured to trigger the Azure Logic App when the budget threshold is met.
- Create the budget with the wanted thresholds and wire it to the action group.

## Create an Azure Automation Runbook
-[Azure Automation](../../automation/automation-intro.md) is a service that enables you to script most of your resource management tasks and run those tasks as either scheduled or on-demand. As part of this scenario, you'll create an [Azure Automation runbook](../../automation/automation-runbook-types.md) that will be used to stop VMs. You'll use the [Stop Azure V2 VMs](https://gallery.technet.microsoft.com/scriptcenter/Stop-Azure-ARM-VMs-1ba96d5b) graphical runbook from the [gallery](../../automation/automation-runbook-gallery.md) to build this scenario. By importing this runbook into your Azure account and publishing it, you can stop VMs when a budget threshold is reached.
+[Azure Automation](../../automation/automation-intro.md) is a service that enables you to script most of your resource management tasks and run those tasks as either scheduled or on-demand. As part of this scenario, you'll create an [Azure Automation runbook](../../automation/automation-runbook-types.md) that will be used to stop VMs. You'll use the [Stop Azure V2 VMs](https://github.com/azureautomation/stop-azure-v2-vms) graphical runbook from the [Azure Automation gallery](https://github.com/azureautomation) to build this scenario. By importing this runbook into your Azure account and publishing it, you can stop VMs when a budget threshold is reached.
### Create an Azure Automation account
These actions included in this tutorial allow you to:
### Import the Stop Azure V2 VMs runbook
-Using an [Azure Automation runbook](../../automation/automation-runbook-types.md), import the [Stop Azure V2 VMs](https://gallery.technet.microsoft.com/scriptcenter/Stop-Azure-ARM-VMs-1ba96d5b) graphical runbook from the gallery.
+Using an [Azure Automation runbook](../../automation/automation-runbook-types.md), import the [Stop Azure V2 VMs](https://github.com/azureautomation/stop-azure-v2-vms) graphical runbook from the gallery.
1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account credentials.
1. Open your Automation account by selecting **All services** > **Automation Accounts**. Then, select your Automation Account.
1. Select **Runbooks gallery** from the **Process Automation** section.
1. Set the **Gallery Source** to **Script Center** and select **OK**.
-1. Locate and select the [Stop Azure V2 VMs](https://gallery.technet.microsoft.com/scriptcenter/Stop-Azure-ARM-VMs-1ba96d5b) gallery item within the Azure portal.
+1. Locate and select the [Stop Azure V2 VMs](https://github.com/azureautomation/stop-azure-v2-vms) gallery item within the Azure portal.
1. Select **Import** to display the **Import** area and select **OK**. The runbook overview area will be displayed.
1. Once the runbook has completed the import process, select **Edit** to display the graphical runbook editor and publishing option.
   ![Azure - Edit graphical runbook](./media/cost-management-budget-scenario/billing-cost-management-budget-scenario-01.png)
Using an [Azure Automation runbook](../../automation/automation-runbook-types.md
## Create webhooks for the runbook
-Using the [Stop Azure V2 VMs](https://gallery.technet.microsoft.com/scriptcenter/Stop-Azure-ARM-VMs-1ba96d5b) graphical runbook, you create two Webhooks to start the runbook in Azure Automation through a single HTTP request. The first webhook invokes the runbook at an 80% budget threshold with the resource group name as a parameter, allowing the optional VMs to be stopped. Then, the second webhook invokes the runbook with no parameters (at 100%), which stops all remaining VM instances.
+Using the [Stop Azure V2 VMs](https://github.com/azureautomation/stop-azure-v2-vms) graphical runbook, you create two Webhooks to start the runbook in Azure Automation through a single HTTP request. The first webhook invokes the runbook at an 80% budget threshold with the resource group name as a parameter, allowing the optional VMs to be stopped. Then, the second webhook invokes the runbook with no parameters (at 100%), which stops all remaining VM instances.
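For illustration only, a caller (such as the Logic App described later) can start the runbook by sending a single HTTP POST to the webhook URL that Azure displays once at creation time. This is a hedged sketch; the URL and response handling are placeholders.

```python
# Hypothetical sketch: start the StopAzureV2Vm runbook by POSTing to its webhook URL.
# The URL is shown only once when the webhook is created; store it securely.
import requests

webhook_url = "https://<automation-webhook-url-captured-at-creation>"

resp = requests.post(webhook_url)  # no body needed; parameters were bound when the webhook was created
resp.raise_for_status()
print("Runbook job queued:", resp.json())  # the response typically includes the queued job IDs
```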
1. From the **Runbooks** page in the [Azure portal](https://portal.azure.com/), select the **StopAzureV2Vm** runbook that displays the runbook's overview area.
1. Select **Webhook** at the top of the page to open the **Add Webhook** area.
Use a conditional statement to check whether the threshold amount has reached 80
1. Select **is greater than or equal to** in the dropdown box of the **Condition**.
1. In the **Choose a value** box of the condition, enter `.8`.
   ![Screenshot shows the Condition dialog box with values selected.](./media/cost-management-budget-scenario/billing-cost-management-budget-scenario-12.png)
-1. Select **Add** > **Add row** within the Condition box to add an additional part of the condition.
+1. Select **Add** > **Add row** within the Condition box to add another part of the condition.
1. In the **Condition** box, select the textbox containing `Choose a value`.
1. Select **Expression** at the top of the list and enter the following expression in the expression editor: `float()`
Next, you'll configure **Postman** to create a budget by calling the Azure Consu
```
1. Press **Send** to send the request.
-You now have all the pieces you need to call the [budgets API](/rest/api/consumption/budgets). The budgets API reference has additional details on the specific requests, including:
+You now have all the pieces you need to call the [budgets API](/rest/api/consumption/budgets). The budgets API reference has more details on the specific requests, including:
-- **budgetName** - Multiple budgets are supported. Budget names must be unique.
+- **budgetName** - Multiple budgets are supported. Budget names must be unique.
- **category** - Must be either **Cost** or **Usage**. The API supports both cost and usage budgets.
- **timeGrain** - A monthly, quarterly, or yearly budget. The amount resets at the end of the period.
- **filters** - Filters allow you to narrow the budget to a specific set of resources within the selected scope. For example, a filter could be a collection of resource groups for a subscription level budget.
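As a hedged, non-authoritative sketch of what such a request can look like for this scenario's $1,000 monthly budget, the following snippet issues the same kind of PUT the Postman steps describe. The API version, notification names, action group ID, and email address are placeholder assumptions; check them against the budgets API reference.

```python
# Hypothetical sketch: create the scenario's $1,000 monthly budget with 80% and 100%
# thresholds wired to an action group. API version and field values are assumptions.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
action_group_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<rg>/providers/"
    "microsoft.insights/actionGroups/<action-group-name>"
)

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (
    f"https://management.azure.com/subscriptions/{subscription_id}/providers/"
    "Microsoft.Consumption/budgets/MonthlyBudget?api-version=2021-10-01"
)
body = {
    "properties": {
        "category": "Cost",
        "amount": 1000,
        "timeGrain": "Monthly",
        "timePeriod": {"startDate": "2024-03-01T00:00:00Z", "endDate": "2025-03-01T00:00:00Z"},
        "notifications": {
            # 80% threshold: stops the VMs in the "Optional" resource group via the action group.
            "StopOptionalVMs": {
                "enabled": True,
                "operator": "GreaterThanOrEqualTo",
                "threshold": 80,
                "contactEmails": ["admin@contoso.com"],
                "contactGroups": [action_group_id],
            },
            # 100% threshold: stops all remaining VM instances.
            "StopAllVMs": {
                "enabled": True,
                "operator": "GreaterThanOrEqualTo",
                "threshold": 100,
                "contactEmails": ["admin@contoso.com"],
                "contactGroups": [action_group_id],
            },
        },
    }
}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```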
cost-management-billing Direct Ea Billing Invoice Documents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/direct-ea-billing-invoice-documents.md
Title: Direct EA billing invoice documents
description: Learn how to understand the invoice files associated with your direct enterprise agreement.
-tags: billing
cost-management-billing Download Azure Invoice Daily Usage Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/download-azure-invoice-daily-usage-date.md
description: Describes how to download or view your Azure billing invoice.
keywords: billing invoice,invoice download,azure invoice
-tags: billing
cost-management-billing Ea Direct Portal Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-direct-portal-get-started.md
Title: Get started with your Enterprise Agreement billing account
description: This article explains how Azure Enterprise Agreement (Azure EA) customers can use the Azure portal to manage their billing.
-tags: billing
cost-management-billing Ea Portal Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-get-started.md
Title: Get started with the Azure Enterprise portal
description: This article explains how Azure Enterprise Agreement (Azure EA) customers use the Azure Enterprise portal.
-tags: billing
cost-management-billing Ea Pricing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-pricing.md
Title: View and download your organization's Azure pricing
description: Learn how to view and download pricing or estimate costs with your organization's pricing.
-tags: billing
cost-management-billing Ea Understand Pricesheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-understand-pricesheet.md
Title: Terms in your Enterprise Agreement price sheet - Azure description: Learn how to read and understand your usage and bill for an Enterprise Agreement.
-tags: billing
cost-management-billing Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/elevate-access-global-admin.md
description: Describes how to elevate access for a Global Administrator to manage billing accounts using the Azure portal or REST API.
-tags: billing
cost-management-billing Enable Marketplace Purchases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/enable-marketplace-purchases.md
+
+ Title: Enable marketplace purchases in Azure
+description: This article covers the steps used to enable marketplace private offer purchases.
+++++ Last updated : 02/08/2024+++
+# Enable marketplace purchases in Azure
+
+In the Azure portal, you can buy non-Microsoft (third-party) software for use in Azure through the Microsoft commercial marketplace. To use the marketplace, you must first set up and configure marketplace policy settings and then assign required user access permissions to billing accounts and subscriptions. This article explains the tasks needed to set up and enable marketplace purchases, with an emphasis on setup steps required for private offers.
+
+This article covers the following steps that are used to enable marketplace private offer purchases:
+
+1. Enable the Azure Marketplace in the Azure portal
+1. Set user permissions to allow individuals to make Marketplace purchases
+1. Set user permissions to allow individuals to accept Marketplace private offers
+1. Optionally, if you have the private marketplace enabled, enable private offer purchases in the private marketplace
+
+## Prerequisites
+
+Before you begin, make sure you know your billing account type because the steps needed to enable marketplace purchases vary based on your account type.
+
+If you don't know your billing account type, [check the type of your billing account](manage-billing-access.md#check-the-type-of-your-billing-account).
+
+## Enable marketplace purchase
+
+To allow marketplace purchases, you enable the marketplace policy setting. How you navigate to the setting depends on your billing account type, and the required permissions also differ. Marketplace purchases are supported for the following account types:
+
+- [Microsoft Customer Agreement (MCA)](#mca--enable-the-marketplace-policy-setting)
+- [Enterprise Agreement (EA)](#ea--enable-the-marketplace-policy-setting)
+
+At a high level, here's how the process to enable purchases works.
++
+### MCA – Enable the marketplace policy setting
+
+People with the following permission can enable the policy setting:
+
+- Billing Account owner or contributor
+- Billing Profile owner or contributor
+
+The policy setting applies to all users with access to all Azure subscriptions under the billing account's billing profile.
+
+To enable the policy setting on the Billing Account Profile:
+
+1. Sign in to the Azure portal.
+1. Navigate to or search for **Cost Management + Billing**.
+1. In the left menu, select **Billing scopes**.
+1. Select the appropriate billing account scope.
+1. In the left menu, select **Billing profile**.
+1. In the left menu, select **Policies**.
+1. Set the Azure Marketplace policy to **On**.
+1. Select the **Save** option.
+
+For more information about the Azure Marketplace policy setting, see [purchase control through the billing profile under a Microsoft Customer Agreement (MCA)](/marketplace/purchase-control-options#purchase-control-through-the-billing-profile).
+
+### EA – Enable the marketplace policy setting
+
+Only an Enterprise administrator can enable the policy setting. Enterprise administrators with read-only permissions can't enable the proper policies to buy from the marketplace.
+
+The policy setting applies to all users with access to the Azure subscriptions in the billing account (EA enrollment).
+
+To enable the policy setting on the billing account (EA enrollment):
+
+1. Sign in to the Azure portal.
+1. Navigate to or search for **Cost Management + Billing**.
+1. In the left menu, select **Billing scopes**.
+1. Select the billing account scope.
+1. In the left menu, select **Policies**.
+1. Under Azure Marketplace, set the policy to **On**.
+1. Select **Save**.
+
+For more information about the Azure Marketplace policy setting, see [Purchase control through EA billing administration under an Enterprise Agreement (EA)](/marketplace/purchase-control-options#purchase-control-through-ea-billing-administration-under-an-enterprise-agreement-ea).
+
+## Set user permissions on the Azure subscription
+
+Setting permission for a subscription is needed for both EA and MCA customers to purchase a marketplace private offer, a private plan, or a public plan. The permission granted applies to only the individual users that you select.
+
+To set permission for a subscription:
+
+1. Sign in to the Azure portal.
+1. Navigate to **Subscriptions** and then search for the name of the subscription.
+1. Search for and then select the subscription that you want to manage access for.
+1. Select **Access control (IAM)** from the left-hand pane.
+1. To give access to a user, select **Add** from the top of the page.
+1. In the **Role** drop-down list, select the owner or contributor role.
+1. Enter the email address of the user to whom you want to give access.
+1. Select **Save** to assign the role.
+
+For more information about assigning roles, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md) and [Privileged administrator roles](../../role-based-access-control/role-assignments-steps.md#privileged-administrator-roles).
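If you prefer to script the subscription role assignment instead of using the portal, a minimal sketch with the Role Assignments REST API could look like the following. The role definition GUID, principal object ID, and API version are assumptions to confirm in the role-based access control documentation.

```python
# Hypothetical sketch: grant the owner or contributor role on a subscription with the
# Role Assignments REST API. Role GUID, principal ID, and API version are placeholders.
import uuid
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
scope = f"/subscriptions/{subscription_id}"
role_definition_id = f"{scope}/providers/Microsoft.Authorization/roleDefinitions/<built-in-role-guid>"
principal_object_id = "<user-object-id>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
assignment_name = str(uuid.uuid4())  # each role assignment gets a new GUID name
url = (
    f"https://management.azure.com{scope}/providers/Microsoft.Authorization/"
    f"roleAssignments/{assignment_name}?api-version=2022-04-01"
)
body = {"properties": {"roleDefinitionId": role_definition_id, "principalId": principal_object_id}}
resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```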
++
+## Set user permission to accept private offers
+
+The permission (billing role) required to accept private offers and how you grant the permission varies, based on your agreement type.
+
+### MCA – Set permission to accept private offers for a user
+
+Only the billing account owner can set user permission. The permission granted applies to only the individual users that you select.
+
+To set user permission for a user:
+
+1. Sign in to the Azure portal.
+1. Navigate to or search for **Cost Management + Billing**.
+1. Select the billing account that you want to manage access for.
+1. Select **Access control (IAM)** from the left-hand pane.
+1. To give access to a user, select **Add** from the top of the page.
+1. In the **Role** list, select either **Billing account owner** or **contributor**.
+1. Enter the email address of the user to whom you want to give access.
+1. Select **Save** to assign the role.
+
+For more information about setting user permission for a billing role, see [Manage billing roles in the Azure portal](understand-mca-roles.md#manage-billing-roles-in-the-azure-portal).
+
+### EA – Set permission to accept private offers for a user
+
+Only the EA administrator can set user permission. Enterprise administrators with read-only permissions can't set user permission. The permission granted applies to only the individual users that you select.
+
+To set user permission for a user:
+
+1. Sign in to the Azure portal.
+1. Navigate to or search for **Cost Management + Billing**.
+1. In the left menu, select **Billing scopes** and then select the billing account that contains the Azure subscription used for Marketplace purchase.
+1. In the left menu, select **Access Control (IAM)**.
+1. In the top menu, select **+ Add**, and then select **Enterprise administrator**.
+1. Complete the Add role assignment form and then select **Add**.
+
+For more information about adding another enterprise administrator, see [Add another enterprise administrator](direct-ea-administration.md#add-another-enterprise-administrator).
+
+## Optionally enable private offer purchases in the private Azure Marketplace
+
+If you have the private Azure Marketplace enabled, then a private Marketplace admin is required to enable and configure the private Marketplace. To enable Azure private Marketplace in the Azure portal, a global administrator assigns the Marketplace admin role to specific users. The steps to assign the Marketplace admin role are the same for EA and MCA customers.
+
+To assign the Marketplace admin role:
+
+1. Sign in to the Azure portal.
+1. Navigate to or search for **Marketplace**.
+1. Select **Private Marketplace** from the left navigation menu.
+1. Select **Access control (IAM)** to assign the Marketplace admin role.
+1. Select **+ Add** > **Add role assignment**.
+1. Under **Role**, choose **Marketplace Admin**.
+1. Select the desired user from the dropdown list, then select **Done**.
+
+For more information about assigning the Marketplace admin role, see [Assign the Marketplace admin role](/marketplace/create-manage-private-azure-marketplace-new#assign-the-marketplace-admin-role).
+
+### Enable the private offer purchase in the private Marketplace
+
+The Marketplace admin enables the private offer and private plan purchases in the private Marketplace. The Marketplace admin can also enable individual private offers or private plans.
+
+After the private offer purchase is enabled in the private Marketplace, all users in the organization (the Microsoft Entra tenant) can purchase products in enabled collections.
+
+#### To enable private offers and private plans
+
+1. Sign in to the Azure portal.
+1. Navigate to or search for **Marketplace**.
+1. Select **Private Marketplace** from the left-nav menu.
+1. Select **Get Started** to create the private Azure Marketplace. You only have to do this action once.
+1. Select **Settings** from the left-nav menu.
+1. Select the radio button for the desired status (Enabled or Disabled).
+1. Select **Apply** on the bottom of the page.
+1. Update Private Marketplace **Rules** to enable private offers and private plans.
+
+#### To add individual private products to private Marketplace collection
+
+>[!NOTE]
+> - We generally recommend that a Marketplace admin should enable private offers in the Private Marketplace for all users in the organization, using the previous procedure.
+> - Although not recommended, and only if necessary, a Marketplace admin can use the following procedures instead of enabling private offers in the Private Marketplace for all users in the organization. With them, the Marketplace admin adds individual private offers on a purchase-by-purchase basis.
+
+#### Set up a collection
+
+1. Sign in to the Azure portal.
+1. Navigate to or search for **Marketplace**.
+1. Select **Private Marketplace** from the left menu.
+1. If no collections were created, select **Get started**.
+1. If collections exist, then select an existing collection or add a new collection.
+
+#### Add a private offer or a private plan to a collection
+
+1. Select the collection name.
+1. Select **Add items**.
+1. Browse the Gallery or use the search field to find the item you want.
+1. Select **Done**.
+
+For more information about setting up and configuring Marketplace product collections, see [Collections overview](/marketplace/create-manage-private-azure-marketplace-new#collections-overview).
++
+## Next steps
+
+- To learn more about setting up and configuring Marketplace product collections, see [Collections overview](/marketplace/create-manage-private-azure-marketplace-new#collections-overview).
+- To read more about the Marketplace, see the [Microsoft commercial marketplace customer documentation](/marketplace/).
cost-management-billing Enterprise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/enterprise-api.md
Title: Azure Enterprise Reporting APIs
description: Learn about the Azure Enterprise Reporting APIs that enable customers to pull consumption data programmatically.
-tags: billing
# Overview of the Azure Enterprise Reporting APIs
-> [!Note]
+> [!NOTE]
> Microsoft no longer updates the Azure Enterprise Reporting APIs. Instead, you should use Cost Management APIs. To learn more, see [Migrate from Azure Enterprise Reporting to Microsoft Cost Management APIs overview](../automate/migrate-ea-reporting-arm-apis-overview.md).
-The Azure Enterprise Reporting APIs enable Enterprise Azure customers to programmatically pull consumption and billing data into preferred data analysis tools. Enterprise customers have signed an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) with Azure to make negotiated Azure Prepayment (previously called monetary commitment) and gain access to custom pricing for Azure resources.
+The Azure Enterprise Reporting APIs enable Enterprise Azure customers to programmatically pull consumption and billing data into preferred data analysis tools. Enterprise customers signed an [Enterprise Agreement (EA)](https://azure.microsoft.com/pricing/enterprise-agreement/) with Azure to make negotiated Azure Prepayment (previously called monetary commitment) and gain access to custom pricing for Azure resources.
All date and time parameters required for APIs must be represented as combined Coordinated Universal Time (UTC) values. Values returned by APIs are shown in UTC format.

## Enabling data access to the API
-* **Generate or retrieve the API key** - Log in to the Enterprise portal, and navigate to Reports > Download Usage > API Access Key to generate or retrieve the API key.
-* **Passing keys in the API** - The API key needs to be passed for each call for Authentication and Authorization. The following property needs to be to the HTTP headers
+* **Generate or retrieve the API key** - Sign in to the Enterprise portal, and navigate to Reports > Download Usage > API Access Key to generate or retrieve the API key.
+* **Passing keys in the API** - The API key needs to be passed for each call for Authentication and Authorization. The following property needs to be added to the HTTP headers.
|Request Header Key | Value|
|-|-|
|Authorization| Specify the value in this format: **bearer {API_KEY}** <br/> Example: bearer eyr....09|

## Consumption-based APIs
-A Swagger endpoint is available [here](https://consumption.azure.com/swagger/ui/index) for the APIs described below which should enable easy introspection of the API and the ability to generate client SDKs using [AutoRest](https://github.com/Azure/AutoRest) or [Swagger CodeGen](https://swagger.io/swagger-codegen/). Data beginning May 1, 2014 is available through this API.
+A Swagger endpoint is available [here](https://consumption.azure.com/swagger/ui/index) for the following APIs. It enables easy introspection of the APIs and the ability to generate client software development kits (SDKs) using [AutoRest](https://github.com/Azure/AutoRest) or [Swagger CodeGen](https://swagger.io/swagger-codegen/). Data beginning May 1, 2014 is available through this API.
-* **Balance and Summary** - The [Balance and Summary API](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary) offers a monthly summary of information on balances, new purchases, Azure Marketplace service charges, adjustments and overage charges.
+* **Balance and Summary** - The [Balance and Summary API](/rest/api/billing/enterprise/billing-enterprise-api-balance-summary) offers a monthly summary of information on balances, new purchases, Azure Marketplace service charges, adjustments, and overage charges.
-* **Usage Details** - The [Usage Detail API](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail) offers a daily breakdown of consumed quantities and estimated charges by an Enrollment. The result also includes information on instances, meters and departments. The API can be queried by Billing period or by a specified start and end date.
+* **Usage Details** - The [Usage Detail API](/rest/api/billing/enterprise/billing-enterprise-api-usage-detail) offers a daily breakdown of consumed quantities and estimated charges by an Enrollment. The result also includes information on instances, meters, and departments. Query the API by Billing period or by a specified start and end date.
-* **Marketplace Store Charge** - The [Marketplace Store Charge API](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge) returns the usage-based marketplace charges breakdown by day for the specified Billing Period or start and end dates (one time fees are not included).
+* **Marketplace Store Charge** - The [Marketplace Store Charge API](/rest/api/billing/enterprise/billing-enterprise-api-marketplace-storecharge) returns the usage-based marketplace charges breakdown by day for the specified Billing Period or start and end dates (one time fees aren't included).
* **Price Sheet** - The [Price Sheet API](/rest/api/billing/enterprise/billing-enterprise-api-pricesheet) provides the applicable rate for each Meter for the given Enrollment and Billing Period.

* **Reserved Instance Details** - The [Reserved Instance usage API](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-usage) returns the usage of the Reserved Instance purchases. The [Reserved Instance charges API](/rest/api/billing/enterprise/billing-enterprise-api-reserved-instance-usage) shows the billing transactions made.

## Data Freshness
-Etags will be returned in the response of all the above API. A change in Etag indicates the data has been refreshed. In subsequent calls to the same API using the same parameters, pass the captured Etag with the key "If-None-Match" in the header of http request. The response status code would be "NotModified" if the data has not been refreshed any further and no data will be returned. API will return the full dataset for the required period whenever there is an etag change.
+Etags are returned in the response of all the preceding APIs. A change in Etag indicates that the data was refreshed. In subsequent calls to the same API using the same parameters, pass the captured Etag with the key `If-None-Match` in the header of the HTTP request. The response status code is `NotModified` if the data isn't refreshed further, and no data is returned. The API returns the full dataset for the required period whenever there's an Etag change.
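To tie the authentication and data-freshness guidance together, here's a hedged sketch of calling the Balance and Summary API with the bearer key and an `If-None-Match` header. The route format and enrollment number are assumptions; the linked API reference is authoritative.

```python
# Hypothetical sketch: call a consumption-based Enterprise Reporting API with the API key
# as a bearer token and reuse a previously captured Etag to detect refreshed data.
import requests

enrollment = "<enrollment-number>"
api_key = "<api-key-from-the-enterprise-portal>"
url = f"https://consumption.azure.com/v2/enrollments/{enrollment}/balancesummary"  # assumed route

headers = {"Authorization": f"bearer {api_key}"}
etag = None  # set this to the Etag captured from a previous response, if any
if etag:
    headers["If-None-Match"] = etag

resp = requests.get(url, headers=headers)
if resp.status_code == 304:
    print("Data not refreshed since the last call (NotModified).")
else:
    resp.raise_for_status()
    etag = resp.headers.get("ETag")  # keep for the next call
    data = resp.json()
```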
## Helper APIs

**List Billing Periods** - The [Billing Periods API](/rest/api/billing/enterprise/billing-enterprise-api-billing-periods) returns a list of billing periods that have consumption data for the specified Enrollment in reverse chronological order. Each Period contains a property pointing to the API route for the four sets of data - BalanceSummary, UsageDetails, Marketplace Charges, and Price Sheet.
cost-management-billing Filter View Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/filter-view-subscriptions.md
Title: Filter and view subscriptions
description: This article explains how to filter and view subscriptions in the Azure portal.
-tags: billing
cost-management-billing Find Tenant Id Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/find-tenant-id-domain.md
description: Describes how to find ID and primary domain for your Microsoft Entra tenant.
-tags: billing
cost-management-billing Manage Billing Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-access.md
Title: Manage access to Azure billing
description: Learn how to give access to your Azure billing information to members of your team.
-tags: billing
cost-management-billing Manage Billing Across Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-billing-across-tenants.md
description: Describes how to use associated billing tenants to manage billing across tenants and move subscriptions in different tenants.
-tags: billing
cost-management-billing Manage Tax Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/manage-tax-information.md
Title: Update tax details for an Azure billing account
description: This article describes how to update your Azure billing account tax details.
-tags: billing
cost-management-billing Markup China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/markup-china.md
Title: Markup - Microsoft Azure operated by 21Vianet
description: This article explains how to configure markup rules for use in Microsoft Azure operated by 21Vianet.
-tags: billing
cost-management-billing Mca Check Azure Credits Balance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-check-azure-credits-balance.md
Title: Track Azure credit balance for a Microsoft Customer Agreement
description: Learn how to check the Azure credit balance for a Microsoft Customer Agreement.
-tags: billing
cost-management-billing Mca Enterprise Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-enterprise-operations.md
Title: EA tasks in a Microsoft Customer Agreement - Azure
description: Learn how to complete Enterprise Agreement tasks in your new billing account.
-tags: billing
cost-management-billing Mca Request Billing Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-request-billing-ownership.md
Title: Transfer Azure product billing ownership to a Microsoft Customer Agreemen
description: Learn how to transfer billing ownership of Azure subscriptions, reservations, and savings plans.
-tags: billing
cost-management-billing Mca Role Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-role-migration.md
description: Describes how to Copy billing roles from one MCA to another MCA across tenants using a PowerShell script.
-tags: billing
cost-management-billing Mca Section Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-section-invoice.md
Title: Organize your invoice based on your needs - Azure
description: Learn how to organize costs on your invoice. You can customize your billing account by creating billing profiles and invoice sections.
-tags: billing
cost-management-billing Mca Setup Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-setup-account.md
Title: Set up billing for Microsoft Customer Agreement - Azure
description: Learn how to set up your billing account for a Microsoft Customer Agreement. See prerequisites for the setup and view other available resources.
-tags: billing
cost-management-billing Mca Understand Pricesheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-understand-pricesheet.md
Title: Terms in your Microsoft Customer Agreement price sheet - Azure description: Learn how to read and understand your usage and bill for a Microsoft Customer Agreement.
-tags: billing
cost-management-billing Mosp Ea Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mosp-ea-transfer.md
Title: Transfer an Azure subscription to an Enterprise Agreement
description: This article helps you understand the steps to transfer a Microsoft Customer Agreement subscription or MOSP subscription to an Enterprise Agreement.
-tags: billing
cost-management-billing Mpa Request Ownership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mpa-request-ownership.md
Title: Transfer Azure product billing ownership to your Microsoft Partner Agreem
description: Learn how to request billing ownership of Azure billing products from other users for a Microsoft Partner Agreement (MPA).
-tags: billing
cost-management-billing Open Banking Strong Customer Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/open-banking-strong-customer-authentication.md
Title: Open Banking (PSD2) and Strong Customer Authentication (SCA) for Azure customers
description: This article explains why multi-factor authentication is required for some Azure purchases and how to complete authentication.
-tags: billing
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
Title: Pay for Azure subscriptions by wire transfer
description: Learn how to pay for Azure subscriptions by wire transfer.
-tags: billing
cost-management-billing Resolve Past Due Balance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/resolve-past-due-balance.md
Title: Past due balance email from Azure
description: Describes how to make payment if your Azure subscription has a past due balance.
-tags: billing
cost-management-billing Spending Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/spending-limit.md
Title: Azure spending limit
description: This article describes how an Azure spending limit works and how to remove it.
-tags: billing
cost-management-billing Subscription Disabled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-disabled.md
description: Describes when you might have an Azure subscription disabled and how to reactivate it.
keywords: azure subscription disabled
-tags: billing
cost-management-billing Subscription States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-states.md
description: This article describes the different states and status of an Azure subscription.
keywords: azure subscription state status
-tags: billing
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Title: Azure product transfer hub
description: This article helps you understand what's needed to transfer Azure subscriptions, reservations, and savings plans and provides links to other articles for more detailed information.
-tags: billing
cost-management-billing Switch Azure Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/switch-azure-offer.md
Title: Change Azure subscription offer
description: Learn about how to change your Azure subscription and switch to a different offer.
-tags: billing,top-support-issue
cost-management-billing Track Consumption Commitment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/track-consumption-commitment.md
Title: Track your Microsoft Azure Consumption Commitment (MACC)
description: Learn how to track your Microsoft Azure Consumption Commitment (MACC) for a Microsoft Customer Agreement.
-tags: billing
cost-management-billing Upgrade Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/upgrade-azure-subscription.md
description: Learn how to upgrade your Azure free or Azure for Students Starter account.
keywords: pay as you go upgrade
-tags: billing
cost-management-billing View All Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-all-accounts.md
Title: View your billing accounts in Azure portal
description: Learn how to view your billing accounts in the Azure portal. See scope information for Enterprise, Microsoft Customer, and Microsoft Partner Agreements.
-tags: billing
cost-management-billing View Payment History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/view-payment-history.md
Title: View payment history
description: This article describes how to view your payment history for a Microsoft Customer Agreement.
-tags: billing
cost-management-billing Withholding Tax Credit India https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/withholding-tax-credit-india.md
Title: Request a credit for Withholding Tax on your account (India customers) - Azure
description: Learn how to request a credit on your account for Withholding Tax you paid. This article only applies to customers in India.
-tags: billing
cost-management-billing Manage Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/manage-tenants.md
Title: Manage tenants in your Microsoft Customer Agreement billing account - Azure
description: The article helps you understand and manage tenants associated with your Microsoft Customer Agreement billing account.
-tags: billing
cost-management-billing Microsoft Customer Agreement Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/microsoft-customer-agreement-get-started.md
Title: Key next steps after accepting your Microsoft Customer Agreement - Azure
description: This article helps you get started as you begin to manage Azure billing and subscriptions under your new Microsoft Customer Agreement.
-tags: billing
cost-management-billing Onboard Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/onboard-microsoft-customer-agreement.md
Title: Onboard to the Microsoft Customer Agreement (MCA)
description: This guide helps customers who buy Microsoft software and services through a Microsoft account manager to set up an MCA contract.
-tags: billing
cost-management-billing Troubleshoot Subscription Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/microsoft-customer-agreement/troubleshoot-subscription-access.md
Title: Troubleshoot subscription access after you sign a Microsoft Customer Agreement - Azure
description: This article helps you troubleshoot subscription access after you sign a new Microsoft Customer Agreement.
-tags: billing
cost-management-billing Reservation Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-apis.md
Title: APIs for Azure reservation automation
description: Learn about the Azure APIs that you can use to programmatically get reservation information.
-tags: billing
cost-management-billing Reserved Instance Windows Software Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reserved-instance-windows-software-costs.md
Title: Reservations software costs for Azure
description: Learn which software meters are not included in Azure Reserved VM Instance costs.
-tags: billing
cost-management-billing Understand Reserved Instance Usage Ea https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reserved-instance-usage-ea.md
Title: Understand Azure reservations usage for Enterprise Agreement and Microsoft Customer Agreement
description: Learn how to read your usage information to understand how an Azure reservation applies to Enterprise Agreement and Microsoft Customer Agreement usage.
-tags: billing
cost-management-billing Understand Reserved Instance Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/understand-reserved-instance-usage.md
Title: Azure reservation usage for an individual subscription
description: Learn how to read your usage to understand how the Azure reservation for your individual subscription with pay-as-you-go rates is applied.
-tags: billing
cost-management-billing Billing Troubleshoot Azure Payment Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/billing-troubleshoot-azure-payment-issues.md
Title: Troubleshoot Azure payment issues
description: Resolving an issue when updating the payment information for an account in the Azure portal.
-tags: billing
cost-management-billing Troubleshoot Account Not Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-account-not-found.md
Title: Troubleshoot viewing your billing account in the Azure portal
description: This article helps you troubleshoot problems when trying to view your billing account in the Azure portal.
-tags: billing
cost-management-billing Troubleshoot Cant Find Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-cant-find-invoice.md
description: Resolving an issue when trying to view your invoice in the Azure portal.
-tags: billing
cost-management-billing Troubleshoot Csp Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-csp-billing-issues-usage-file-pivot-tables.md
Title: Troubleshoot Azure CSP billing issues with usage file pivot tables
description: This article helps you troubleshoot Azure Cloud Solution Provider (CSP) billing issues using pivot tables created from your CSV usage files.
-tags: billing
cost-management-billing Troubleshoot Customer Agreement Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-customer-agreement-billing-issues-usage-file-pivot-tables.md
Title: Troubleshoot Azure MCA billing issues with usage file pivot tables
description: This article helps you troubleshoot Microsoft Customer Agreement (MCA) billing issues using pivot tables created from your CSV usage files.
-tags: billing
cost-management-billing Troubleshoot Ea Billing Issues Usage File Pivot Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-billing/troubleshoot-ea-billing-issues-usage-file-pivot-tables.md
Title: Troubleshoot Azure EA billing issues with usage file pivot tables
description: This article helps you troubleshoot Enterprise Agreement (EA) billing issues using pivot tables created from your CSV usage files.
-tags: billing
cost-management-billing Create Subscriptions Deploy Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/create-subscriptions-deploy-resources.md
description: Provides help for the message you might see when you try to create multiple subscriptions.
-tags: billing
cost-management-billing No Subscriptions Found https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/no-subscriptions-found.md
Title: No subscriptions found error - Azure portal sign in
description: Provides the solution for a problem in which a "No subscriptions found" error occurs during Azure portal sign-in.
-tags: billing
cost-management-billing Troubleshoot Azure Sign Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/troubleshoot-azure-sign-up.md
description: Resolving an issue when trying to sign up for a new account in the Azure portal.
-tags: billing
cost-management-billing Troubleshoot Sign In Issue https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/troubleshoot-subscription/troubleshoot-sign-in-issue.md
description: Helps you resolve issues in which you can't sign in to the Azure portal.
-tags: billing
cost-management-billing Download Azure Daily Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-daily-usage.md
keywords: billing usage, usage charges, usage download, view usage, azure invoice
-tags: billing
cost-management-billing Download Azure Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-invoice.md
description: Learn how to view and download your Azure invoice. You can download your invoice from the Azure portal.
keywords: billing invoice,invoice download,azure invoice,azure usage
-tags: billing
cost-management-billing Mca Download Tax Document https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-download-tax-document.md
Title: View tax documents for your Azure invoice
description: Learn how to view and download tax receipts for your billing profile.
-tags: billing
Last updated 04/05/2023
cost-management-billing Mca Understand Your Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-understand-your-invoice.md
Title: Understand your Microsoft Customer Agreement invoice in Azure
description: Learn how to read and understand your Microsoft Customer Agreement bill in Azure
-tags: billing
cost-management-billing Mca Understand Your Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-understand-your-usage.md
Title: Microsoft Customer Agreement Azure usage and charges file terms
description: Learn how to read and understand the sections of the Azure usage and charges CSV for your billing profile.
-tags: billing
cost-management-billing Mpa Invoice Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mpa-invoice-terms.md
Title: Understand your Microsoft Partner Agreement invoice in Azure
description: Learn how to read and understand your Microsoft Partner Agreement bill in Azure
-tags: billing
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
description: Learn how to pay your bill in the Azure portal. You must be a billi
keywords: billing, past due, balance, pay now,
-tags: billing, past due, pay now, bill, invoice, pay
cost-management-billing Plan Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/plan-manage-costs.md
Title: Plan to manage Azure costs
description: Learn how to plan to manage Azure costs and use cost-tracking and management features for your Azure account.
-tags: billing
cost-management-billing Review Customer Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-customer-agreement-bill.md
Title: Review your Microsoft Customer Agreement bill - Azure
description: Learn how to review your bill and resource usage and to verify charges for your Microsoft Customer Agreement invoice.
-tags: billing
cost-management-billing Review Enterprise Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-enterprise-agreement-bill.md
Title: Review your Azure Enterprise Agreement bill
description: Learn how to read and understand your usage and bill for Azure Enterprise Agreements.
-tags: billing
cost-management-billing Review Individual Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-individual-bill.md
Title: Review your individual Azure subscription bill
description: Learn how to understand your bill and resource usage and to verify charges for your individual Azure subscription, including pay-as-you-go.
-tags: billing
cost-management-billing Review Partner Agreement Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/review-partner-agreement-bill.md
Title: Review your Microsoft Partner Agreement invoice - Azure
description: Learn how to review your bill and resource usage and to verify charges for your Microsoft Partner Agreement invoice.
-tags: billing
cost-management-billing Understand Azure Marketplace Charges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-azure-marketplace-charges.md
Title: Understand your Azure external service charges
description: Learn about billing of external service charges (formerly known as Marketplace charges) in Azure.
-tags: billing
cost-management-billing Understand Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-invoice.md
Title: Understand your Azure invoice
description: Learn how to read and understand the usage and bill for your Azure subscription
-tags: billing
cost-management-billing Understand Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/understand-usage.md
Title: Understand your detailed usage and charges
description: Learn how to read and understand your detailed usage and charges file. View a list of terms and descriptions used in the file.
-tags: billing
data-factory Concepts Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md
The default cluster size is four driver nodes and four worker nodes (small). As
| | | -- | -- |
| 4 | 4 | 8 | Small |
| 8 | 8 | 16 | Medium |
-| 16 | 16 | 32 | |
+| 16 | 16 | 32 | Large|
| 32 | 16 | 48 | |
-| 64 | 16 | 80 | Large |
+| 64 | 16 | 80 | |
| 128 | 16 | 144 | |
| 256 | 16 | 272 | |
data-factory How To Manage Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-settings.md
Last updated 01/05/2024
You can change the default settings of your Azure Data Factory to meet your own preferences. Azure Data Factory settings are available in the Settings menu in the top right section of the global page header as indicated in the screenshot below. Clicking the **Settings** gear button will open a flyout. Here you can find the settings and preferences that you can set for your data factory.
Here you can find the settings and preferences that you can set for your data fa
Choose your theme to change the look of the Azure Data Factory studio. Use the toggle button to select your data factory theme. This setting controls the look of your data factory. To apply changes, select your **Theme** and make sure to hit the **Ok** button. Your page will reflect the changes made.
Choose your language and the regional format that will influence how data such a
Use the drop-down list to select from the list of available languages. This setting controls the language you see for text throughout your data factory. There are 18 languages supported in addition to English. To apply changes, select a language and make sure to hit the **Apply** button. Your page will refresh and reflect the changes made. > [!NOTE] > Applying language changes will discard any unsaved changes in your data factory.
Use the drop-down list to select from the list of available regional formats. Th
The default shown in **Regional format** will automatically change based on the option you selected for **Language**. You can still use the drop-down list to select a different format. For example, if you select **English** as your language and select **English (United States)** as the regional format, currency will be shown in U.S. (United States) dollars. If you select **English** as your language and select **English (Europe)** as the regional format, currency will be shown in euros. To apply changes, select a **Regional format** and make sure to hit the **Apply** button. Your page will refresh and reflect the changes made. > [!NOTE]
-> Applying regional format changes will discard any unsaved changes in your data factory.
+> Applying regional format changes will discard any unsaved changes in your data factory.
+
+## Factory Settings
+
+Additionally, you can configure settings specific to your data factory. In the **Navigate** tab, you'll find **Factory settings** under **General**. In **Factory settings**, you can adjust the following options.
++
+* **Show billing report**
+
+You can select your preferences for your billing report under **Show billing report**. Choose to see your billing **by pipeline** or **by factory**. By default, this setting will be set to **by factory**.
+
+* **Factory environment**
+
+You can set different environment labels for your factory. Choose from **Development**, **Test**, or **Production**. By default, this setting will be set to **None**.
+
+* **Staging**
+
+You can set your **default staging linked service** and **default staging storage folder**. This can be overridden in your factory resource.
## Related content - [Manage the ADF preview experience](how-to-manage-studio-preview-exp.md)
data-factory How To Manage Studio Preview Exp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-manage-studio-preview-exp.md
There are two ways to enable preview experiences.
1. In the banner seen at the top of the screen, you can click **Open settings to learn more and opt in**.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-1.png" alt-text="Screenshot of Azure Data Factory home page with an Opt-in option in a banner at the top of the screen.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-experience-1.png" alt-text="Screenshot of Azure Data Factory home page with an Opt-in option in a banner at the top of the screen.":::
2. Alternatively, you can click the **Settings** button.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-2.png" alt-text="Screenshot of Azure Data Factory home page highlighting Settings gear in top right corner.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-experience-2.png" alt-text="Screenshot of Azure Data Factory home page highlighting Settings gear in top right corner.":::
After opening **Settings**, you'll see an option to turn on **Azure Data Factory Studio preview update**.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-3.png" alt-text="Screenshot of Settings panel highlighting button to turn on Azure Data Factory Studio preview update.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-experience-3.png" alt-text="Screenshot of Settings panel highlighting button to turn on Azure Data Factory Studio preview update.":::
Toggle the button so that it shows **On** and click **Apply**.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-4.png" alt-text="Screenshot of Settings panel showing Azure Data Factory Studio preview update turned on and the Apply button in the bottom left corner.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-experience-4.png" alt-text="Screenshot of Settings panel showing Azure Data Factory Studio preview update turned on and the Apply button in the bottom left corner.":::
- Your data factory will refresh to show the preview features.
+ Your data factory refreshes to show the preview features.
Similarly, you can disable preview features with the same steps. Click **Open settings to opt out** or click the **Settings** button and unselect **Azure Data Factory Studio preview update**.
- :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-exp-5.png" alt-text="Screenshot of Azure Data Factory home page with an Opt-out option in a banner at the top of the screen and Settings gear in the top right corner of the screen.":::
+ :::image type="content" source="media/how-to-manage-studio-preview-exp/data-factory-preview-experience-5.png" alt-text="Screenshot of Azure Data Factory home page with an Opt-out option in a banner at the top of the screen and Settings gear in the top right corner of the screen.":::
> [!NOTE] > Enabling/disabling preview updates will discard any unsaved changes.
There are two ways to enable preview experiences.
[**Pipeline experimental view**](#pipeline-experimental-view) * [Dynamic content flyout](#dynamic-content-flyout)
+ * [Linked service for Web activity](#linked-service-web-activity)
[**Monitoring experimental view**](#monitoring-experimental-view) * [Error message relocation to Status column](#error-message-relocation-to-status-column)
To see the data-first experimental view, you need to follow these steps to enabl
In your data flow editor, you can find several canvas tools on the right side like the **Search** tool, **Zoom** tool, and **Multi-select** tool. You'll see a new icon under the **Multi-select** tool. This is how you can toggle between the **Classic** and the **Data-first** views. #### Configuration panel The configuration panel for transformations has now been simplified. Previously, the configuration panel showed settings specific to the selected transformation. Now, for each transformation, the configuration panel will only have **Data Preview** that will automatically refresh when changes are made to transformations. If no transformation is selected, the panel will show the pre-existing data flow configurations: **Parameters** and **Settings**.
If no transformation is selected, the panel will show the pre-existing data flow
Settings specific to a transformation will now show in a pop-up instead of the configuration panel. With each new transformation, a corresponding pop-up will automatically appear. You can also find the settings by clicking the gear button in the top right corner of the transformation activity. #### Data preview
If debug mode is on, **Data Preview** in the configuration panel will give you a
**Data preview** now includes Elapsed time (seconds) to show how long your data preview took to load. Columns can be rearranged by dragging a column by its header. You can also sort columns using the arrows next to the column titles and you can export data preview data using **Export to CSV** on the banner above column headers. ### CI/CD experimental view
You now have the option to enable **Auto Save** when you have a Git repository c
To enable **Auto save**, click the toggle button found in the top banner of your screen. Review the pop-up and click **Yes**. When **Auto Save** is enabled, the toggle button shows as blue. ### Pipeline experimental view
A new flyout has been added to make it easier to set dynamic content in your pip
| ForEach | Items | | If/Switch/Until | Expression |
-In supported activities, you'll see an icon next to the setting. Clicking this will open up the flyout where you can choose your dynamic content.
+In supported activities, you'll see an icon next to the setting. Clicking this icon opens up the flyout where you can choose your dynamic content.
++
+#### Linked service for Web activity
+
+There are new settings available for the Web activity.
+
+By default, the **Connection type** will be set to **Inline**, but you can choose to select **Linked service**. Doing so allows you to reference a REST linked service for authentication purposes.
++
+After selecting **Linked service**, use the drop-down menu to select an existing linked service or click **New** to create a new linked service.
++ ### Monitoring experimental view UI (user interfaces) changes have been made to the monitoring page. These changes were made to simplify and streamline your monitoring experience.
-The monitoring experience remains the same as detailed [here](monitor-visually.md), except for items detailed below.
+The monitoring experience remains the same as detailed [here](monitor-visually.md), except for items detailed in the following section.
#### Error message relocation to Status column
To make it easier for you to view errors when you see a **Failed** pipeline run,
Find the error icon in the pipeline monitoring page and in the pipeline **Output** tab after debugging your pipeline. #### Container view > [!NOTE] > This feature is now generally available in the ADF studio.
-When monitoring your pipeline run, you have the option to enable the container view, which will provide a consolidated view of the activities that ran.
+When monitoring your pipeline run, you have the option to enable the container view, which provides a consolidated view of the activities that ran.
This view is available in the output of your pipeline debug run and in the detailed monitoring view found in the monitoring tab. ##### How to enable the container view in pipeline debug output In the **Output** tab in your pipeline, there's a new dropdown to select your monitoring view. Select **Hierarchy** to see the new hierarchy view. If you have iteration or conditional activities, the nested activities are grouped under the parent activity. Click the button next to the iteration or conditional activity to collapse the nested activities for a more consolidated view. ##### How to enable the container view in pipeline monitoring In the detailed view of your pipeline run, there's a new dropdown to select your monitoring view next to the Status filter.
-Select **Container** to see the new container view. If you have iteration or conditional activities, the nested activities will be grouped under the parent activity.
+Select **Container** to see the new container view. If you have iteration or conditional activities, the nested activities are grouped under the parent activity.
Click the button next to the iteration or conditional activity to collapse the nested activities for a more consolidated view. #### Simplified default monitoring view The default monitoring view has been simplified with fewer default columns. You can add/remove columns if you'd like to personalize your monitoring view. Changes to the default will be cached. **Default columns**
The default monitoring view has been simplified with fewer default columns. You
You can edit your default view by clicking **Edit Columns**. Add columns by clicking **Add column** or remove columns by clicking the trashcan icon. You can also now view **Pipeline run details** in a new pane in the detailed pipeline monitoring view by clicking **View run detail**. ## Provide feedback
-We want to hear from you! If you see this pop-up, please let us know your thoughts by providing feedback on the updates you've tested.
+We want to hear from you! If you see this pop-up, let us know your thoughts by providing feedback on the updates you tested.
## Related content
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
Last updated 05/24/2022 #Customer intent: As an IT admin, I need to understand how to create and upload Azure VM images that I can use with my Azure Stack Edge Pro device so that I can deploy VMs on the device.+ # Use Azure Marketplace image to create VM image for your Azure Stack Edge Pro GPU
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [!INCLUDE [applies-to-GPU-and-pro-r-and-mini-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-r-mini-r-sku.md)] To deploy VMs on your Azure Stack Edge Pro GPU device, you need to create a VM image that you can use to create VMs. This article describes the steps that are required to create a VM image starting from an Azure Marketplace image. You can then use this VM image to deploy VMs on your Azure Stack Edge Pro GPU device.
Before you can use Azure Marketplace images for Azure Stack Edge, make sure you'
## Search for Azure Marketplace images
-You'll now identify a specific Azure Marketplace image that you wish to use. Azure Marketplace hosts thousands of VM images.
+You'll now identify a specific Azure Marketplace image that you wish to use. Azure Marketplace hosts thousands of VM images.
-To find some of the most commonly used Marketplace images that match your search criteria, run the following command.
+To find some of the most commonly used Marketplace images that match your search criteria, run the following command.
```azurecli az vm image list --all [--publisher <Publisher>] [--offer <Offer>] [--sku <SKU>]
Some example queries are:
```azurecli #Returns all images of type "Windows Server"
-az vm image list --all --publisher "MicrosoftWindowsserver" --offer "WindowsServer"
+az vm image list --all --publisher "MicrosoftWindowsserver" --offer "WindowsServer"
#Returns all Windows Server 2019 Datacenter images from West US published by Microsoft
az vm image list --all --location "westus" --publisher "MicrosoftWindowsserver" --offer "WindowsServer" --sku "2019-Datacenter"
-#Returns all VM images from a publisher
-az vm image list --all --publisher "Canonical"
+#Returns all VM images from a publisher
+az vm image list --all --publisher "Canonical"
``` Here is an example output when VM images of a certain publisher, offer, and SKU were queried.
PS /home/user> az vm image list --all --publisher "Canonical" --offer "UbuntuSer
PS /home/user> ```
-In this example, we will select Windows Server 2019 Datacenter Core, version 2019.0.20190410. We will identify this image by its Universal Resource Number ("URN").
-
+In this example, we will select Windows Server 2019 Datacenter Core, version 2019.0.20190410. We will identify this image by its Universal Resource Number ("URN").
+ :::image type="content" source="media/azure-stack-edge-create-virtual-machine-marketplace-image/marketplace-image-1.png" alt-text="List of marketplace images"::: ### Commonly used Marketplace images
-Below is a list of URNs for some of the most commonly used images. If you just want the latest version of a particular OS, the version number can be replaced with "latest" in the URN. For example, "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest".
+Below is a list of URNs for some of the most commonly used images. If you just want the latest version of a particular OS, the version number can be replaced with "latest" in the URN. For example, "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:Latest".
| OS | SKU | Version | URN |
Below is a list of URNs for some of the most commonly used images. If you just w
## Create a new managed disk from the Marketplace image
-Create an Azure Managed Disk from your chosen Marketplace image.
+Create an Azure Managed Disk from your chosen Marketplace image.
1. Set some parameters.
Create an Azure Managed Disk from your chosen Marketplace image.
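This fragment omits the parameter-setting step called out above; here is a minimal sketch, assuming placeholder values and the example image selected earlier, of the variables the next command expects:

```azurecli
# Sketch only: placeholder values for the variables used by the az disk commands below
$urn = "MicrosoftWindowsServer:WindowsServer:2019-Datacenter:2019.0.20190410"
$diskRG = "<resource group for the managed disk>"
$diskName = "<name for the new managed disk>"
```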
```azurecli az disk create -g $diskRG -n $diskName --image-reference $urn $sas = az disk grant-access --duration-in-seconds 36000 --access-level Read --name $diskName --resource-group $diskRG
- $diskAccessSAS = ($sas | ConvertFrom-Json)[0].accessSas
+ $diskAccessSAS = ($sas | ConvertFrom-Json)[0].accessSas
``` Here is an example output:
PS /home/user> $diskAccessSAS = ($sas | ConvertFrom-Json)[0].accessSas
PS /home/user> ```
-## Export a VHD from the managed disk to Azure Storage
+## Export a VHD from the managed disk to Azure Storage
This step will export a VHD from the managed disk to your preferred Azure blob storage account. This VHD can then be used to create VM images on Azure Stack Edge. 1. Set the destination storage account where the VHD will be copied.
-
+ ```azurecli $storageAccountName = <destination storage account name> $containerName = <destination container name>
This step will export a VHD from the managed disk to your preferred Azure blob s
``` The VHD copy will take several minutes to complete. Ensure the copy has completed before proceeding by running the following command. The status field will show "Success" when complete.
-
+ ```azurecli
- Get-AzureStorageBlobCopyState -Container $containerName -Context $destContext -Blob $destBlobName
+ Get-AzureStorageBlobCopyState -Container $containerName -Context $destContext -Blob $destBlobName
``` Here is an example output:
DestinationSnapshotTime :
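The command that actually starts the VHD copy isn't shown in this fragment. A minimal sketch using the classic Azure.Storage cmdlets, where `$storageAccountKey` and `$destBlobName` are assumed placeholders and `$diskAccessSAS`, `$storageAccountName`, and `$containerName` come from the earlier steps:

```powershell
# Sketch only: copy the managed-disk VHD (via its SAS URI) into the destination container
$storageAccountKey = "<destination storage account key>"
$destBlobName = "<name for the exported VHD, for example ws2019.vhd>"
$destContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
Start-AzureStorageBlobCopy -AbsoluteUri $diskAccessSAS -DestContainer $containerName -DestBlob $destBlobName -DestContext $destContext
```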
To delete the managed disk you created, follow these steps: ```azurecli
-az disk revoke-access --name $diskName --resource-group $diskRG
-az disk delete --name $diskName --resource-group $diskRG --yes
+az disk revoke-access --name $diskName --resource-group $diskRG
+az disk delete --name $diskName --resource-group $diskRG --yes
``` The deletion takes a couple minutes to complete.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md
# Install GPU extension on VMs for your Azure Stack Edge Pro GPU device +
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [!INCLUDE [applies-to-gpu-pro-pro2-and-pro-r-skus](../../includes/azure-stack-edge-applies-to-gpu-pro-pro-2-pro-r-sku.md)] This article describes how to install GPU driver extension to install appropriate Nvidia drivers on the GPU VMs running on your Azure Stack Edge device. The article covers installation steps for installing a GPU extension using Azure Resource Manager templates on both Windows and Linux VMs.
To deploy Nvidia GPU drivers for an existing VM, edit the `addGPUExtWindowsVM.pa
The file `addGPUExtWindowsVM.parameters.json` takes the following parameters: ```json
-"parameters": {
+"parameters": {
"vmName": {
- "value": "<name of the VM>"
+ "value": "<name of the VM>"
}, "extensionName": {
- "value": "<name for the extension. Example: windowsGpu>"
+ "value": "<name for the extension. Example: windowsGpu>"
}, "publisher": {
- "value": "Microsoft.HpcCompute"
+ "value": "Microsoft.HpcCompute"
}, "type": {
- "value": "NvidiaGpuDriverWindows"
+ "value": "NvidiaGpuDriverWindows"
}, "typeHandlerVersion": {
- "value": "1.5"
+ "value": "1.5"
}, "settings": { "value": {
The file `addGPUExtWindowsVM.parameters.json` takes the following parameters:
``` #### Versions lower than 2205
-
+ The file `addGPUExtWindowsVM.parameters.json` takes the following parameters: ```json
-"parameters": {
+"parameters": {
"vmName": {
- "value": "<name of the VM>"
+ "value": "<name of the VM>"
}, "extensionName": {
- "value": "<name for the extension. Example: windowsGpu>"
+ "value": "<name for the extension. Example: windowsGpu>"
}, "publisher": {
- "value": "Microsoft.HpcCompute"
+ "value": "Microsoft.HpcCompute"
}, "type": {
- "value": "NvidiaGpuDriverWindows"
+ "value": "NvidiaGpuDriverWindows"
}, "typeHandlerVersion": {
- "value": "1.3"
+ "value": "1.3"
}, "settings": { "value": {
To deploy Nvidia GPU drivers for an existing Linux VM, edit the `addGPUExtWindow
If using Ubuntu or Red Hat Enterprise Linux (RHEL), the `addGPUExtLinuxVM.parameters.json` file takes the following parameters: ```powershell
-"parameters": {
+"parameters": {
"vmName": {
- "value": "<name of the VM>"
+ "value": "<name of the VM>"
}, "extensionName": {
- "value": "<name for the extension. Example: linuxGpu>"
+ "value": "<name for the extension. Example: linuxGpu>"
}, "publisher": {
- "value": "Microsoft.HpcCompute"
+ "value": "Microsoft.HpcCompute"
}, "type": {
- "value": "NvidiaGpuDriverLinux"
+ "value": "NvidiaGpuDriverLinux"
}, "typeHandlerVersion": {
- "value": "1.8"
+ "value": "1.8"
}, "settings": { }
If using Ubuntu or Red Hat Enterprise Linux (RHEL), the `addGPUExtLinuxVM.parame
If using Ubuntu or Red Hat Enterprise Linux (RHEL), the `addGPUExtLinuxVM.parameters.json` file takes the following parameters: ```powershell
-"parameters": {
+"parameters": {
"vmName": {
- "value": "<name of the VM>"
+ "value": "<name of the VM>"
}, "extensionName": {
- "value": "<name for the extension. Example: linuxGpu>"
+ "value": "<name for the extension. Example: linuxGpu>"
}, "publisher": {
- "value": "Microsoft.HpcCompute"
+ "value": "Microsoft.HpcCompute"
}, "type": {
- "value": "NvidiaGpuDriverLinux"
+ "value": "NvidiaGpuDriverLinux"
}, "typeHandlerVersion": {
- "value": "1.3"
+ "value": "1.3"
}, "settings": { }
Here's a sample Ubuntu parameter file that was used in this article:
"contentVersion": "1.0.0.0", "parameters": { "vmName": {
- "value": "VM1"
+ "value": "VM1"
}, "extensionName": {
- "value": "gpuLinux"
+ "value": "gpuLinux"
}, "publisher": {
- "value": "Microsoft.HpcCompute"
+ "value": "Microsoft.HpcCompute"
}, "type": {
- "value": "NvidiaGpuDriverLinux"
+ "value": "NvidiaGpuDriverLinux"
}, "typeHandlerVersion": {
- "value": "1.3"
+ "value": "1.3"
}, "settings": { }
Here's a sample Ubuntu parameter file that was used in this article:
If you created your VM using a Red Hat Enterprise Linux Bring Your Own Subscription image (RHEL BYOS), make sure that: -- You've followed the steps in [using RHEL BYOS image](azure-stack-edge-gpu-create-virtual-machine-image.md).
+- You've followed the steps in [using RHEL BYOS image](azure-stack-edge-gpu-create-virtual-machine-image.md).
- After you created the GPU VM, register and subscribe the VM with the Red Hat Customer portal. If your VM isn't properly registered, installation doesn't proceed as the VM isn't entitled. See [Register and automatically subscribe in one step using the Red Hat Subscription Manager](https://access.redhat.com/solutions/253273). This step allows the installation script to download relevant packages for the GPU driver.-- You either manually install the `vulkan-filesystem` package or add CentOS7 repo to your yum repo list. When you install the GPU extension, the installation script looks for a `vulkan-filesystem` package that is on CentOS7 repo (for RHEL7).
+- You either manually install the `vulkan-filesystem` package or add CentOS7 repo to your yum repo list. When you install the GPU extension, the installation script looks for a `vulkan-filesystem` package that is on CentOS7 repo (for RHEL7).
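As noted in the last bullet, one option is to install the package directly on the VM. A minimal sketch, assuming a repo that provides `vulkan-filesystem` is already reachable from the RHEL 7 VM:

```bash
# Sketch only: install the vulkan-filesystem package on the RHEL 7 GPU VM
sudo yum install -y vulkan-filesystem
```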
-## Deploy template
+## Deploy template
### [Windows](#tab/windows)
Deploy the template `addGPUextensiontoVM.json` to install the extension on an ex
Run the following command: ```powershell
-$templateFile = "<Path to addGPUextensiontoVM.json>"
-$templateParameterFile = "<Path to addGPUExtWindowsVM.parameters.json>"
+$templateFile = "<Path to addGPUextensiontoVM.json>"
+$templateParameterFile = "<Path to addGPUExtWindowsVM.parameters.json>"
$RGName = "<Name of your resource group>"
New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Name for your deployment>" ```
Here's a sample output:
### [Linux](#tab/linux)
-Deploy the template `addGPUextensiontoVM.json` to install the extension to an existing VM.
+Deploy the template `addGPUextensiontoVM.json` to install the extension to an existing VM.
Run the following command: ```powershell
-$templateFile = "Path to addGPUextensiontoVM.json"
-$templateParameterFile = "Path to addGPUExtLinuxVM.parameters.json"
-$RGName = "<Name of your resource group>"
+$templateFile = "Path to addGPUextensiontoVM.json"
+$templateParameterFile = "Path to addGPUExtLinuxVM.parameters.json"
+$RGName = "<Name of your resource group>"
New-AzureRmResourceGroupDeployment -ResourceGroupName $RGName -TemplateFile $templateFile -TemplateParameterFile $templateParameterFile -Name "<Name for your deployment>"
-```
+```
> [!NOTE] > The extension deployment is a long running job and takes about 10 minutes to complete.
ForceUpdateTag :
PS C:\WINDOWS\system32> ```
-Extension execution output is logged to the following file. Refer to this file `C:\Packages\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\1.3.0.0\Status` to track the status of installation.
+Extension execution output is logged to the following file. Refer to this file `C:\Packages\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\1.3.0.0\Status` to track the status of installation.
A successful install is indicated by a `message` as `Enable Extension` and `status` as `success`.
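For example, you could inspect the status file at the path above with a command like the following; the `*.status` file name pattern is an assumption:

```powershell
# Sketch only: view the GPU extension status file referenced above
Get-Content -Path "C:\Packages\Plugins\Microsoft.HpcCompute.NvidiaGpuDriverWindows\1.3.0.0\Status\*.status" -Raw
```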
A successful install is indicated by a `message` as `Enable Extension` and `stat
### [Linux](#tab/linux)
-To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator), and then run the following command:
+To check the deployment state of extensions for a given VM, open another PowerShell session (run as administrator), and then run the following command:
```powershell Get-AzureRmVMExtension -ResourceGroupName myResourceGroup -VMName <VM Name> -Name <Extension Name> ```
-Here's a sample output:
+Here's a sample output:
```powershell Copyright (C) Microsoft Corporation. All rights reserved.
The extension execution output is logged to the following file: `/var/log/azure/
### [Windows](#tab/windows)
-Sign in to the VM and run the nvidia-smi command-line utility installed with the driver.
+Sign in to the VM and run the nvidia-smi command-line utility installed with the driver.
#### Version 2205 and higher
For more information, see [Nvidia GPU driver extension for Windows](../virtual-m
Follow these steps to verify the driver installation:
-1. Connect to the GPU VM. Follow the instructions in [Connect to a Linux VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-linux-vm).
+1. Connect to the GPU VM. Follow the instructions in [Connect to a Linux VM](azure-stack-edge-gpu-deploy-virtual-machine-powershell.md#connect-to-a-linux-vm).
Here's a sample output:
Follow these steps to verify the driver installation:
* Management: https://landscape.canonical.com * Support: https://ubuntu.com/advantage System information as of Thu Dec 10 22:57:01 UTC 2020
-
+ System load: 0.0 Processes: 133 Usage of /: 24.8% of 28.90GB Users logged in: 0 Memory usage: 2% IP address for eth0: 10.57.50.60 Swap usage: 0%
-
+ 249 packages can be updated. 140 updates are security updates.
-
- Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 5.0.0-1031-azure x86_64)
+
+ Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 5.0.0-1031-azure x86_64)
* Documentation: https://help.ubuntu.com * Management: https://landscape.canonical.com
- * Support: https://ubuntu.com/advantage
- System information as of Thu Dec 10 22:57:01 UTC 2020
+ * Support: https://ubuntu.com/advantage
+ System information as of Thu Dec 10 22:57:01 UTC 2020
System load: 0.0 Processes: 133 Usage of /: 24.8% of 28.90GB Users logged in: 0 Memory usage: 2% IP address for eth0: 10.57.50.60 Swap usage: 0%
-
+ 249 packages can be updated. 140 updates are security updates.
-
+ New release '20.04.1 LTS' available. Run 'do-release-upgrade' to upgrade to it.
-
+ *** System restart required *** Last login: Thu Dec 10 21:49:29 2020 from 10.90.24.23 To run a command as administrator (user "root"), use "sudo <command>". See "man sudo_root" for details.
-
+ Administrator@VM1:~$ ```
Follow these steps to verify the driver installation:
| N/A 48C P0 27W / 70W | 0MiB / 15109MiB | 5% Default | | | | N/A | +-+-+-+
-
+ +--+ | Processes: | | GPU GI CI PID Type Process name GPU Memory |
PS C:\azure-stack-edge-deploy-vms> Remove-AzureRmVMExtension -ResourceGroupName
Virtual machine extension removal operation This cmdlet will remove the specified virtual machine extension. Do you want to continue? [Y] Yes [N] No [S] Suspend [?] Help (default is "Y"): y Requestld IsSuccessStatusCode StatusCode ReasonPhrase
- - -
+ - -
True OK OK ```
Learn how to:
- [Monitor VM activity on your device](azure-stack-edge-gpu-monitor-virtual-machine-activity.md). - [Manage VM disks](azure-stack-edge-gpu-manage-virtual-machine-disks-portal.md). - [Manage VM network interfaces](azure-stack-edge-gpu-manage-virtual-machine-network-interfaces-portal.md).-- [Manage VM sizes](azure-stack-edge-gpu-manage-virtual-machine-resize-portal.md).
+- [Manage VM sizes](azure-stack-edge-gpu-manage-virtual-machine-resize-portal.md).
databox Data Box Deploy Copy Data Via Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-copy-data-via-rest.md
Last updated 12/29/2022 #Customer intent: As an IT admin, I need to be able to copy data to Data Box to upload on-premises data from my server onto Azure.+
-# Tutorial: Use REST APIs to Copy data to Azure Data Box Blob storage
+# Tutorial: Use REST APIs to Copy data to Azure Data Box Blob storage
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
This tutorial describes procedures to connect to Azure Data Box Blob storage via REST APIs over *http* or *https*. Once connected, the steps required to copy the data to Data Box Blob storage and prepare the Data Box to ship, are also described.
Each of these steps is described in the following sections.
Connection to Azure Blob storage REST APIs over https requires the following steps:
-* Download the certificate from Azure portal. This certificate is used for connecting to the web UI and Azure Blob storage REST APIs.
+* Download the certificate from Azure portal. This certificate is used for connecting to the web UI and Azure Blob storage REST APIs.
* Import the certificate on the client or remote host * Add the device IP and blob service endpoint to the client or remote host * Configure third-party software and verify the connection
Follow these steps to import the `.cer` file into the root store of a Windows or
The method to import a certificate varies by distribution.
-Several, such as Ubuntu and Debian, use the `update-ca-certificates` command.
+Several, such as Ubuntu and Debian, use the `update-ca-certificates` command.
* Rename the Base64-encoded certificate file to have a `.crt` extension and copy it into the `/usr/local/share/ca-certificates` directory.
* Run the command `update-ca-certificates`.
Recent versions of RHEL, Fedora, and CentOS use the `update-ca-trust` command.
Consult the documentation specific to your distribution for details.
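Here is a minimal sketch of the import steps above; the certificate file name is a placeholder, and the RHEL path follows the standard `update-ca-trust` layout:

```bash
# Sketch only: import the downloaded certificate into the trusted root store
# Ubuntu/Debian
sudo cp DataBoxBlobStorage.cer /usr/local/share/ca-certificates/DataBoxBlobStorage.crt
sudo update-ca-certificates
# RHEL/Fedora/CentOS
sudo cp DataBoxBlobStorage.cer /etc/pki/ca-trust/source/anchors/
sudo update-ca-trust
```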
-### Add device IP address and blob service endpoint
+### Add device IP address and blob service endpoint
Follow the same steps to [add device IP address and blob service endpoint when connecting over *http*](#add-device-ip-address-and-blob-service-endpoint).
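As a rough illustration, the hosts-file entry on the client pairs the device IP with the blob service endpoint described in the *http* section; all values below are placeholders:

```
<device IP address>    <storage account name>.blob.<device serial number>.microsoftdatabox.com
```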
databox Data Box Disk Deploy Set Up https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-set-up.md
Last updated 10/26/2022
# Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.+ ::: zone target="docs" # Tutorial: Unpack, connect, and unlock Azure Data Box Disk
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This tutorial describes how to unpack, connect, and unlock your Azure Data Box Disk. In this tutorial, you learn how to:
Before you begin, make sure that:
2. You have received your disks and the job status in the portal is updated to **Delivered**. 3. You have a client computer on which you can install the Data Box Disk unlock tool. Your client computer must: - Run a [Supported operating system](data-box-disk-system-requirements.md#supported-operating-systems-for-clients).
- - Have other [required software](data-box-disk-system-requirements.md#other-required-software-for-windows-clients) installed if it is a Windows client.
+ - Have other [required software](data-box-disk-system-requirements.md#other-required-software-for-windows-clients) installed if it is a Windows client.
## Unpack your disks
Before you begin, make sure that:
1. Use the included cable to connect the disk to the client computer running a supported OS as stated in the prerequisites. ![Data Box Disk connect](media/data-box-disk-deploy-set-up/data-box-disk-connect-unlock.png)
-
+ 2. In the Azure portal, navigate to your Data Box Disk Order. Search for it by navigating to **General > All resources**, then select your Data Box Disk Order. Use the copy icon to copy the passkey. This passkey will be used to unlock the disks. ![Data Box Disk unlock passkey](media/data-box-disk-deploy-set-up/data-box-disk-get-passkey.png)
Depending on whether you are connected to a Windows or Linux client, the steps t
## Unlock disks on Windows client
-Perform the following steps to connect and unlock your disks.
+Perform the following steps to connect and unlock your disks.
1. In the Azure portal, navigate to your Data Box Disk Order. Search for it by navigating to **General > All resources**, then select your Data Box Disk Order. 2. Download the Data Box Disk toolset corresponding to the Windows client. This toolset contains 3 tools: Data Box Disk Unlock tool, Data Box Disk Validation tool, and Data Box Disk Split Copy tool.
Perform the following steps to connect and unlock your disks.
PS C:\DataBoxDiskUnlockTool\DiskUnlock> .\DataBoxDiskUnlock.exe /SystemCheck Successfully verified that the system can run the tool. PS C:\DataBoxDiskUnlockTool\DiskUnlock>
- ```
+ ```
6. Run `DataBoxDiskUnlock.exe` and supply the passkey you obtained in [Connect to disks and get the passkey](#connect-to-disks-and-get-the-passkey). The drive letter assigned to the disk is displayed. A sample output is shown below.
Perform the following steps to connect and unlock your disks.
/Help: This option provides help on cmdlet usage and examples. PS C:\DataBoxDiskUnlockTool\DiskUnlock>
- ```
+ ```
8. Once the disk is unlocked, you can view the contents of the disk.
If you run into any issues while unlocking the disks, see how to [troubleshoot u
## Unlock disks on Linux client
-Perform the following steps to connect and unlock your disks.
+Perform the following steps to connect and unlock your disks.
1. In the Azure portal, go to **General > Device details**.
-2. Download the Data Box Disk toolset corresponding to the Linux client.
+2. Download the Data Box Disk toolset corresponding to the Linux client.
> [!div class="nextstepaction"] > [Download Data Box Disk toolset for Linux](https://aka.ms/databoxdisktoolslinux) 3. On your Linux client, open a terminal. Navigate to the folder where you downloaded the software. Change the file permissions so that you can execute these files. Type the following command:
- `chmod +x DataBoxDiskUnlock_x86_64`
+ `chmod +x DataBoxDiskUnlock_x86_64`
+
+ `chmod +x DataBoxDiskUnlock_Prep.sh`
- `chmod +x DataBoxDiskUnlock_Prep.sh`
+ A sample output is shown below. Once the chmod command is run, you can verify that the file permissions are changed by running the `ls` command.
- A sample output is shown below. Once the chmod command is run, you can verify that the file permissions are changed by running the `ls` command.
-
```
- [user@localhost Downloads]$ chmod +x DataBoxDiskUnlock_x86_64
- [user@localhost Downloads]$ chmod +x DataBoxDiskUnlock_Prep.sh
- [user@localhost Downloads]$ ls -l
- -rwxrwxr-x. 1 user user 1152664 Aug 10 17:26 DataBoxDiskUnlock_x86_64
+ [user@localhost Downloads]$ chmod +x DataBoxDiskUnlock_x86_64
+ [user@localhost Downloads]$ chmod +x DataBoxDiskUnlock_Prep.sh
+ [user@localhost Downloads]$ ls -l
+ -rwxrwxr-x. 1 user user 1152664 Aug 10 17:26 DataBoxDiskUnlock_x86_64
-rwxrwxr-x. 1 user user 795 Aug 5 23:26 DataBoxDiskUnlock_Prep.sh ```
Perform the following steps to connect and unlock your disks.
`sudo ./DataBoxDiskUnlock_Prep.sh`
- The script will first check whether your client computer is running a supported operating system. A sample output is shown below.
-
+ The script will first check whether your client computer is running a supported operating system. A sample output is shown below.
+ ```
- [user@localhost Documents]$ sudo ./DataBoxDiskUnlock_Prep.sh
- OS = CentOS Version = 6.9
- Release = CentOS release 6.9 (Final)
- Architecture = x64
-
- The script will install the following packages and dependencies.
- epel-release
- dislocker
- ntfs-3g
- fuse-dislocker
+ [user@localhost Documents]$ sudo ./DataBoxDiskUnlock_Prep.sh
+ OS = CentOS Version = 6.9
+ Release = CentOS release 6.9 (Final)
+ Architecture = x64
+
+ The script will install the following packages and dependencies.
+ epel-release
+ dislocker
+ ntfs-3g
+ fuse-dislocker
Do you wish to continue? y|n :| ```
-
-5. Type `y` to continue the install. The packages that the script installs are:
- - **epel-release** - Repository that contains the following three packages.
- - **dislocker and fuse-dislocker** - These utilities help decrypt BitLocker encrypted disks.
- - **ntfs-3g** - Package that helps mount NTFS volumes.
-
- Once the packages are successfully installed, the terminal will display a notification to that effect.
+
+5. Type `y` to continue the install. The packages that the script installs are:
+ - **epel-release** - Repository that contains the following three packages.
+ - **dislocker and fuse-dislocker** - These utilities help decrypt BitLocker encrypted disks.
+ - **ntfs-3g** - Package that helps mount NTFS volumes.
+
+ Once the packages are successfully installed, the terminal will display a notification to that effect.
```
- Dependency Installed: compat-readline5.x86 64 0:5.2-17.I.el6 dislocker-libs.x86 64 0:0.7.1-8.el6 mbedtls.x86 64 0:2.7.4-l.el6        ruby.x86 64 0:1.8.7.374-5.el6
- ruby-libs.x86 64 0:1.8.7.374-5.el6
- Complete!
- Loaded plugins: fastestmirror, refresh-packagekit, security
- Setting up Remove Process
- Resolving Dependencies
- --> Running transaction check
- > Package epel-release.noarch 0:6-8 will be erased --> Finished Dependency Resolution
- Dependencies Resolved
- Package        Architecture        Version        Repository        Size
- Removing: epel-release        noarch         6-8        @extras        22 k
- Transaction Summary                                
- Remove        1 Package(s)
- Installed size: 22 k
- Downloading Packages:
- Running rpmcheckdebug
- Running Transaction Test
- Transaction Test Succeeded
- Running Transaction
- Erasing : epel-release-6-8.noarch
- Verifying : epel-release-6-8.noarch
- Removed:
- epel-release.noarch 0:6-8
- Complete!
- Dislocker is installed by the script.
+ Dependency Installed: compat-readline5.x86 64 0:5.2-17.I.el6 dislocker-libs.x86 64 0:0.7.1-8.el6 mbedtls.x86 64 0:2.7.4-l.el6        ruby.x86 64 0:1.8.7.374-5.el6
+ ruby-libs.x86 64 0:1.8.7.374-5.el6
+ Complete!
+ Loaded plugins: fastestmirror, refresh-packagekit, security
+ Setting up Remove Process
+ Resolving Dependencies
+ --> Running transaction check
+ > Package epel-release.noarch 0:6-8 will be erased --> Finished Dependency Resolution
+ Dependencies Resolved
+ Package        Architecture        Version        Repository        Size
+ Removing: epel-release        noarch         6-8        @extras        22 k
+ Transaction Summary                                
+ Remove        1 Package(s)
+ Installed size: 22 k
+ Downloading Packages:
+ Running rpmcheckdebug
+ Running Transaction Test
+ Transaction Test Succeeded
+ Running Transaction
+ Erasing : epel-release-6-8.noarch
+ Verifying : epel-release-6-8.noarch
+ Removed:
+ epel-release.noarch 0:6-8
+ Complete!
+ Dislocker is installed by the script.
OpenSSL is already installed.
```
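To confirm that the prerequisites were installed, one option on the CentOS client shown above is to query the RPM database. This is only a sketch; the package names are taken from the script output earlier in this step:

```bash
# Check that the packages installed by DataBoxDiskUnlock_Prep.sh are present.
rpm -q epel-release dislocker fuse-dislocker ntfs-3g
```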
-6. Run the Data Box Disk Unlock tool. Supply the passkey from the Azure portal you obtained in [Connect to disks and get the passkey](#connect-to-disks-and-get-the-passkey). Optionally specify a list of BitLocker encrypted volumes to unlock. The passkey and volume list should be specified within single quotes.
+6. Run the Data Box Disk Unlock tool. Supply the passkey from the Azure portal you obtained in [Connect to disks and get the passkey](#connect-to-disks-and-get-the-passkey). Optionally specify a list of BitLocker encrypted volumes to unlock. The passkey and volume list should be specified within single quotes.
Type the following command.
-
+ ```bash
+ sudo ./DataBoxDiskUnlock_x86_64 /PassKey:'<Your passkey from Azure portal>'
+ ```
- The sample output is shown below.
-
+ The sample output is shown below.
+ ```output
- [user@localhost Downloads]$ sudo ./DataBoxDiskUnlock_x86_64 /Passkey:'qwerqwerqwer'
-
- START: Mon Aug 13 14:25:49 2018
- Volumes: /dev/sdbl
- Passkey: qwerqwerqwer
-
- Volumes for data copy :
- /dev/sdbl: /mnt/DataBoxDisk/mountVoll/
+ [user@localhost Downloads]$ sudo ./DataBoxDiskUnlock_x86_64 /Passkey:'qwerqwerqwer'
+
+ START: Mon Aug 13 14:25:49 2018
+ Volumes: /dev/sdbl
+ Passkey: qwerqwerqwer
+
+ Volumes for data copy :
+ /dev/sdbl: /mnt/DataBoxDisk/mountVoll/
END: Mon Aug 13 14:26:02 2018
```
The mount point for the volume that you can copy your data to is displayed.
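If you want to double-check the mount before copying, a quick check is shown below. The mount point name follows the sample output above and may differ on your client:

```bash
# Verify that the unlocked volume is mounted and check its available space.
df -h /mnt/DataBoxDisk/mountVol1/
```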
-7. Repeat unlock steps for any future disk reinserts. Use the `help` command if you need help with the Data Box Disk unlock tool.
-
- `sudo ./DataBoxDiskUnlock_x86_64 /Help`
+7. Repeat the unlock steps for any future disk reinserts. Use the `help` command if you need help with the Data Box Disk unlock tool.
+
+ `sudo ./DataBoxDiskUnlock_x86_64 /Help`
+
+ The sample output is shown below.
- The sample output is shown below.
-
```
- [user@localhost Downloads]$ sudo ./DataBoxDiskUnlock_x86_64 /Help
- START: Mon Aug 13 14:29:20 2018
- USAGE:
- sudo DataBoxDiskUnlock /PassKey:'<passkey from Azure_portal>'
-
- Example: sudo DataBoxDiskUnlock /PassKey:'passkey'
- Example: sudo DataBoxDiskUnlock /PassKey:'passkey' /Volumes:'/dev/sdbl'
- Example: sudo DataBoxDiskUnlock /Help Example: sudo DataBoxDiskUnlock /Clean
-
- /PassKey: This option takes a passkey as input and unlocks all of your disks.
- Get the passkey from your Data Box Disk order in Azure portal.
- /Volumes: This option is used to input a list of BitLocker encrypted volumes.
- /Help: This option provides help on the tool usage and examples.
- /Unmount: This option unmounts all the volumes mounted by this tool.
-
+ [user@localhost Downloads]$ sudo ./DataBoxDiskUnlock_x86_64 /Help
+ START: Mon Aug 13 14:29:20 2018
+ USAGE:
+ sudo DataBoxDiskUnlock /PassKey:'<passkey from Azure_portal>'
+
+ Example: sudo DataBoxDiskUnlock /PassKey:'passkey'
+ Example: sudo DataBoxDiskUnlock /PassKey:'passkey' /Volumes:'/dev/sdbl'
+ Example: sudo DataBoxDiskUnlock /Help
+ Example: sudo DataBoxDiskUnlock /Clean
+
+ /PassKey: This option takes a passkey as input and unlocks all of your disks.
+ Get the passkey from your Data Box Disk order in Azure portal.
+ /Volumes: This option is used to input a list of BitLocker encrypted volumes.
+ /Help: This option provides help on the tool usage and examples.
+ /Unmount: This option unmounts all the volumes mounted by this tool.
+ END: Mon Aug 13 14:29:20 2018
+ [user@localhost Downloads]$
+ ```
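The help output also lists an `/Unmount` option. As a rough sketch that mirrors the option names shown above (run it from the folder containing the unlock tool once your copy is complete):

```bash
# Unmount all volumes that were mounted by the Data Box Disk unlock tool.
sudo ./DataBoxDiskUnlock_x86_64 /Unmount
```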
-
-8. Once the disk is unlocked, you can go to the mount point and view the contents of the disk. You are now ready to copy the data to *BlockBlob* or *PageBlob* folders.
+
+8. Once the disk is unlocked, you can go to the mount point and view the contents of the disk. You are now ready to copy the data to *BlockBlob* or *PageBlob* folders.
![Data Box Disk contents 2](media/data-box-disk-deploy-set-up/data-box-disk-content-linux.png)
> [!NOTE]
> Don't format or modify the contents or existing file structure of the disk.
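To make step 8 concrete, the following is a minimal sketch of copying data into the *BlockBlob* folder. The mount point name follows the sample output in step 6 and may differ on your client; `mycontainer` and `/data/to-upload` are hypothetical placeholders:

```bash
# List the contents of the unlocked disk.
ls /mnt/DataBoxDisk/mountVol1/

# Copy local data into a folder under BlockBlob; the folder name typically becomes the destination container.
mkdir -p /mnt/DataBoxDisk/mountVol1/BlockBlob/mycontainer
cp -R /data/to-upload/* /mnt/DataBoxDisk/mountVol1/BlockBlob/mycontainer/
```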
-If you run into any issues while unlocking the disks, see how to [troubleshoot unlock issues](data-box-disk-troubleshoot-unlock.md).
+If you run into any issues while unlocking the disks, see how to [troubleshoot unlock issues](data-box-disk-troubleshoot-unlock.md).
::: zone-end
If you run into any issues while unlocking the disks, see how to [troubleshoot u
or > [!div class="nextstepaction"]
- > [Download Data Box Disk toolset for Linux](https://aka.ms/databoxdisktoolslinux)
+ > [Download Data Box Disk toolset for Linux](https://aka.ms/databoxdisktoolslinux)
3. To unlock the disks on a Windows client, open a Command Prompt window or run Windows PowerShell as administrator on the same computer:
   - Type the following command in the same folder where the Data Box Disk Unlock tool is installed.
- ```
+ ```
.\DataBoxDiskUnlock.exe
```
- - Get the passkey from **General > Device details** in the Azure portal and provide it here. The drive letter assigned to the disk is displayed.
-4. To unlock the disks on a Linux client, open a terminal. Go to the folder where you downloaded the software. Type the following commands to change the file permissions so that you can execute these files:
+ - Get the passkey from **General > Device details** in the Azure portal and provide it here. The drive letter assigned to the disk is displayed.
+4. To unlock the disks on a Linux client, open a terminal. Go to the folder where you downloaded the software. Type the following commands to change the file permissions so that you can execute these files:
```
chmod +x DataBoxDiskUnlock_x86_64
chmod +x DataBoxDiskUnlock_Prep.sh
- ```
+ ```
Execute the script to install all the required binaries.
```
sudo ./DataBoxDiskUnlock_Prep.sh
```
If you run into any issues while unlocking the disks, see how to [troubleshoot u
```
sudo ./DataBoxDiskUnlock_x86_64 /PassKey:'<Your passkey from Azure portal>'
- ```
+ ```
5. Repeat the unlock steps for any future disk reinserts. Use the help command if you need help with the Data Box Disk unlock tool. After the disk is unlocked, you can view the contents of the disk.
databox Data Box Disk System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-system-requirements.md
# Azure Data Box Disk system requirements
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes the important system requirements for your Microsoft Azure Data Box Disk solution and for the clients connecting to the Data Box Disk. We recommend that you review the information carefully before you deploy your Data Box Disk, and then refer back to it as necessary during the deployment and subsequent operation. The system requirements include the supported platforms for clients connecting to disks, supported storage accounts, and storage types.
databox Data Box Heavy Deploy Copy Data Via Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-deploy-copy-data-via-rest.md
Last updated 07/03/2019 #Customer intent: As an IT admin, I need to be able to copy data to Data Box Heavy to upload on-premises data from my server onto Azure.+
-# Tutorial: Copy data to Azure Data Box Blob storage via REST APIs
+# Tutorial: Copy data to Azure Data Box Blob storage via REST APIs
+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
This tutorial describes procedures to connect to Azure Data Box Blob storage via REST APIs over *http* or *https*. Once connected, the steps required to copy the data to Data Box Blob storage are described.
Before you begin, make sure that:
3. You've reviewed the [system requirements for Data Box Blob storage](data-box-system-requirements-rest.md) and are familiar with supported versions of APIs, SDKs, and tools. 4. You've access to a host computer that has the data that you want to copy over to Data Box Heavy. Your host computer must - Run a [Supported operating system](data-box-system-requirements.md).
- - Be connected to a high-speed network. For fastest copy speeds, two 40-GbE connections (one per node) can be utilized in parallel. If you do not have 40-GbE connection available, we recommend that you have at least two 10-GbE connections (one per node).
+ - Be connected to a high-speed network. For the fastest copy speeds, two 40-GbE connections (one per node) can be used in parallel. If you don't have a 40-GbE connection available, we recommend that you have at least two 10-GbE connections (one per node).
5. [Download AzCopy 7.1.0](https://aka.ms/azcopyforazurestack20170417) on your host computer. You'll use AzCopy to copy data to Azure Data Box Blob storage from your host computer.
Use the Azure portal to download certificate.
3. Under **Device credentials**, go to **API access** to device. Click **Download**. This action downloads a **\<your order name>.cer** certificate file. **Save** this file. You will install this certificate on the client or host computer that you will use to connect to the device. ![Download certificate in Azure portal](media/data-box-deploy-copy-data-via-rest/download-cert-1.png)
-
-### Import certificate
+
+### Import certificate
Accessing Data Box Blob storage over HTTPS requires a TLS/SSL certificate for the device. The way in which this certificate is made available to the client application varies from application to application and across operating systems and distributions. Some applications can access the certificate after it is imported into the system's certificate store, while other applications do not make use of that mechanism.
The method to import a certificate varies by distribution.
> [!IMPORTANT] > For Data Box Heavy, you'll need to repeat all the connection instructions to connect to the second node.
-Several, such as Ubuntu and Debian, use the `update-ca-certificates` command.
+Several distributions, such as Ubuntu and Debian, use the `update-ca-certificates` command.
- Rename the Base64-encoded certificate file to have a `.crt` extension and copy it into the `/usr/local/share/ca-certificates` directory.
- Run the command `update-ca-certificates`.
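As a minimal sketch of those two steps on Ubuntu or Debian, assuming the downloaded certificate file is named `myorder.cer` (a hypothetical placeholder for the **\<your order name>.cer** file):

```bash
# Copy the Base64-encoded certificate into the trusted CA directory with a .crt extension.
sudo cp myorder.cer /usr/local/share/ca-certificates/myorder.crt

# Refresh the system certificate store so client applications can trust the device endpoint.
sudo update-ca-certificates
```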
Recent versions of RHEL, Fedora, and CentOS use the `update-ca-trust` command.
Consult the documentation specific to your distribution for details.
-### Add device IP address and blob service endpoint
+### Add device IP address and blob service endpoint
Follow the same steps to [add device IP address and blob service endpoint when connecting over *http*](#add-device-ip-address-and-blob-service-endpoint).
defender-for-iot Eiot Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-defender-for-endpoint.md
This section lists sample advanced hunting queries that you can use in Microsoft
Use the following query to identify devices that exist in your corporate network by type of device, such as routers:
```kusto
-| DeviceInfoΓÇ»
-| summarize arg_max(Timestamp, *) by DeviceIdΓÇ»
-| where DeviceType == "NetworkDevice" and DeviceSubtype ΓÇ»== "Router"ΓÇ»
+DeviceInfo
+| summarize arg_max(Timestamp, *) by DeviceId
+| where DeviceType == "NetworkDevice" and DeviceSubtype == "Router"
```
### Find and export vulnerabilities for your IoT devices
Use the following query to identify devices that exist in your corporate network
Use the following query to list all vulnerabilities on your IoT devices:
```kusto
+DeviceInfo
| where DeviceCategory =~ "iot"
-| join kind=inner DeviceTvmSoftwareVulnerabilities on DeviceId
+| join kind=inner DeviceTvmSoftwareVulnerabilities on DeviceId
```
For more information, see [Advanced hunting](/microsoft-365/security/defender/advanced-hunting-overview) and [Understand the advanced hunting schema](/microsoft-365/security/defender/advanced-hunting-schema-tables).
defender-for-iot Tutorial Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-clearpass.md
# Integrate ClearPass with Microsoft Defender for IoT
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes how to integrate Aruba ClearPass with Microsoft Defender for IoT, in order to view both ClearPass and Defender for IoT information in a single place.
This section describes how to integrate Defender for IoT and ClearPass Policy Ma
> [!IMPORTANT]
> The legacy Aruba ClearPass integration is supported through October 2024 using sensor version 23.1.3, and won't be supported in upcoming major software versions. For customers using the legacy integration, we recommend moving to one of the following methods:
->
-> - If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](#cloud-based-integrations).
+>
+> - If you're integrating your security solution with cloud-based systems, we recommend that you use data connectors through [Microsoft Sentinel](#cloud-based-integrations).
> - For on-premises integrations, we recommend that you either configure your OT sensor to [forward syslog events, or use Defender for IoT APIs](#on-premises-integrations). >
dev-box How To Configure Network Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-configure-network-connections.md
To create a network connection, you need an existing virtual network and subnet.
:::image type="content" source="./media/how-to-manage-network-connection/example-basics-tab.png" alt-text="Screenshot of the Basics tab on the pane for creating a virtual network in the Azure portal." lightbox="./media/how-to-manage-network-connection/example-basics-tab.png"::: > [!IMPORTANT]
- > The region you select for the virtual network is the where Azure deploys the dev boxes.
+ > The region you select for the virtual network is where Azure deploys the dev boxes.
1. On the **IP Addresses** tab, accept the default settings.
event-grid Event Schema Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-policy.md
events. For an introduction to event schemas, see
[Azure Event Grid event schema](./event-schema.md). It also gives you a list of quick starts and tutorials to use Azure Policy as an event source. ## Next steps
tutorials to use Azure Policy as an event source.
[React to Azure Policy events by using Event Grid](../governance/policy/concepts/event-overview.md). - For an introduction to Azure Event Grid, see [What is Event Grid?](./overview.md) - For more information about creating an Azure Event Grid subscription, see
- [Event Grid subscription schema](./subscription-creation-schema.md).
+ [Event Grid subscription schema](./subscription-creation-schema.md).
external-attack-surface-management Understanding Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-dashboards.md
Microsoft Defender External Attack Surface Management (Defender EASM) offers a series of four dashboards designed to help users quickly surface valuable insights derived from their Approved inventory. These dashboards help organizations prioritize the vulnerabilities, risks and compliance issues that pose the greatest threat to their Attack Surface, making it easy to quickly mitigate key issues.
-Defender EASM provides five dashboards:
+Defender EASM provides seven dashboards:
- **Overview**: this dashboard is the default landing page when you access Defender EASM. It provides the key context that can help you familiarize yourself with your attack surface. - **Attack surface summary**: this dashboard summarizes the key observations derived from your inventory. It provides a high-level overview of your Attack Surface and the asset types that comprise it, and surfaces potential vulnerabilities by severity (high, medium, low). This dashboard also provides key context on the infrastructure that comprises your Attack Surface. This context includes insight into cloud hosting, sensitive services, SSL certificate and domain expiry, and IP reputation.
Defender EASM provides five dashboards:
- **GDPR compliance**: this dashboard surfaces key areas of compliance risk based on the General Data Protection Regulation (GDPR) requirements for online infrastructure thatΓÇÖs accessible to European nations. This dashboard provides insight on the status of your websites, SSL certificate issues, exposed personal identifiable information (PII), login protocols, and cookie compliance. - **OWASP Top 10**: this dashboard surfaces any assets that are vulnerable according to OWASPΓÇÖs list of the most critical web application security risks. On this dashboard, organizations can quickly identify assets with broken access control, cryptographic failures, injections, insecure designs, security misconfigurations and other critical risks as defined by OWASP. - **CWE top 25 software weaknesses**: this dashboard is based on the Top 25 Common Weakness Enumeration (CWE) list provided annually by MITRE. These CWEs represent the most common and impactful software weaknesses that are easy to find and exploit. -- **CISA known exploits**: this dashboard displays any assets that are potentially impacted by vulnerabilities that have led to known exploits as defined by CISA. This dashboard helps you prioritize remediation efforts based on vulnerabilities that have been exploited in the past, indicating a higher level of risk for your organization.
+- **CISA known exploits**: this dashboard displays any assets that are potentially impacted by vulnerabilities that led to known exploits as defined by CISA. This dashboard helps you prioritize remediation efforts based on vulnerabilities that were exploited in the past, indicating a higher level of risk for your organization.
## Accessing dashboards
Microsoft identifies organizations' attack surfaces through proprietary technolo
At the top of this dashboard, Defender EASM provides a list of security priorities organized by severity (high, medium, low). Large organizations' attack surfaces can be incredibly broad, so prioritizing the key findings derived from our expansive data helps users quickly and efficiently address the most important exposed elements of their attack surface. These priorities can include critical CVEs, known associations to compromised infrastructure, use of deprecated technology, infrastructure best practice violations, or compliance issues.
-Insight Priorities are determined by MicrosoftΓÇÖs assessment of the potential impact of each insight. For instance, high severity insights can include vulnerabilities that are new, exploited frequently, particularly damaging, or easily exploited by hackers with a lower skill level. Low severity insights can include use of deprecated technology that is no longer supported, infrastructure that will soon expire, or compliance issues that do not align with security best practices. Each insight contains suggested remediation actions to protect against potential exploits.
+Insight Priorities are determined by Microsoft's assessment of the potential impact of each insight. For instance, high severity insights can include vulnerabilities that are new, exploited frequently, particularly damaging, or easily exploited by hackers with a lower skill level. Low severity insights can include use of deprecated technology that is no longer supported, infrastructure that is expiring soon, or compliance issues that do not align with security best practices. Each insight contains suggested remediation actions to protect against potential exploits.
Insights that were recently added to the Defender EASM platform are flagged with a "NEW" label on this dashboard. When we add new insights that impact assets in your Confirmed Inventory, the system also delivers a push notification that routes you to a detailed view of this new insight with a list of the impacted assets.
-Some insights are flagged with "Potential" in the title. A "Potential" insight occurs when Defender EASM is unable to confirm that an asset is impacted by a vulnerability. This is common when our scanning system detects the presence of a specific service but cannot detect the version number. For example, some services enable administrators to hide version information. Vulnerabilities are often associated with specific versions of the software, so manual investigation is required to determine whether the asset is impacted. Other vulnerabilities can be remediated by steps that Defender EASM is unable to detect. For instance, users can make recommended changes to service configurations or run backported patches. If an insight is prefaced with "Potential", the system has reason to believe that the asset is impacted by the vulnerability but is unable to confirm it for one of the above listed reasons. To manually investigate, click the insight name to review remediation guidance that can help you determine whether your assets are impacted.
+Some insights are flagged with "Potential" in the title. A "Potential" insight occurs when Defender EASM is unable to confirm that an asset is impacted by a vulnerability. Potential insights occur when our scanning system detects the presence of a specific service but cannot detect the version number. For example, some services enable administrators to hide version information. Vulnerabilities are often associated with specific versions of the software, so manual investigation is required to determine whether the asset is impacted. Other vulnerabilities can be remediated by steps that Defender EASM is unable to detect. For instance, users can make recommended changes to service configurations or run backported patches. If an insight is prefaced with "Potential", the system has reason to believe that the asset is impacted by the vulnerability but is unable to confirm it for one of the above listed reasons. To manually investigate, click the insight name to review remediation guidance that can help you determine whether your assets are impacted.
![Screenshot of attack surface priorities with clickable options highlighted.](media/Dashboards-2.png)
-A user will usually decide to first investigate any High Severity Observations. You can click the top-listed observation to be directly routed to a list of impacted assets, or instead select ΓÇ£View All __ InsightsΓÇ¥ to see a comprehensive, expandable list of all potential observations within that severity group.
+A user usually decides to first investigate any High Severity Observations. You can click the top-listed observation to be directly routed to a list of impacted assets, or instead select "View All __ Insights" to see a comprehensive, expandable list of all potential observations within that severity group.
The Observations page features a list of all potential insights in the left-hand column. This list is sorted by the number of assets that are impacted by each security risk, displaying the issues that impact the greatest number of assets first. To view the details of any security risk, simply click on it from this list.
This section of the Attack Surface Summary dashboard provides insight on the clo
![Screenshot of cloud chart.](media/Dashboards-6.png)
-For instance, your organization might have recently decided to migrate all cloud infrastructure to a single provider to simplify and consolidate their Attack Surface. This chart can help you identify assets that still need to be migrated. Each bar of the chart is clickable, routing users to a filtered list that displays the assets that comprise the chart value.
+For instance, your organization may decide to migrate all cloud infrastructure to a single provider to simplify and consolidate its Attack Surface. This chart can help you identify assets that still need to be migrated. Each bar of the chart is clickable, routing users to a filtered list that displays the assets that comprise the chart value.
### Sensitive services
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/overview.md
# Understanding Azure Machine Configuration
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Azure Policy's machine configuration feature provides native capability to audit or configure operating system settings as code for machines running in Azure and hybrid [Arc-enabled machines][01]. You can use the feature directly per-machine, or orchestrate it at
governance Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/event-overview.md
pushed through [Azure Event Grid](../../../event-grid/index.yml) to subscribers
Critically, you only pay for what you use. Azure Policy events are sent to the Azure Event Grid, which provides reliable delivery services to
-your applications through rich retry policies and dead-letter delivery. Event Grid takes
+your applications through rich retry policies and dead-letter delivery. Event Grid takes
care of the proper routing, filtering, and multicasting of the events to destinations via Event Grid subscriptions.
-To learn more, see [Event Grid message delivery and retry](../../../event-grid/delivery-and-retry.md).
+To learn more, see [Event Grid message delivery and retry](../../../event-grid/delivery-and-retry.md).
> [!NOTE] > Azure Policy state change events are sent to Event Grid after an
To learn more, see [Event Grid message delivery and retry](../../../event-grid/d
> evaluation. ## Event Grid Benefits
-Event Grid has a few benefits for customers and services in the Azure ecosystem:
+Event Grid has a few benefits for customers and services in the Azure ecosystem:
- Automation: To stay current with your policy environment, Event Grid offers an automated mechanism to generate alerts and trigger tasks depending on compliance states. - Durable delivery: In order for services and user applications to respond in real-time to policy compliance events,
Event Grid has a few benefits for customers and services in the Azure ecosystem:
endpoint fails to acknowledge receipt of it or if it doesn't, according to a predetermined retry schedule and retry policy. - Custom event producer: Event Grid event producers and consumers don't need to be Azure or Microsoft services. External applications can receive an alert, show the creation of a remediation task or collect messages on who responds to the
- state change.
+ state change.
See [Route policy state change events to Event Grid with Azure CLI](../tutorials/route-state-change-events.md) for a full tutorial.
-There are two primary entities when using Event Grid:
+There are two primary entities when using Event Grid:
- Events: These events can be anything a user may want to react to, including when the policy compliance state of a resource such as a VM or storage account is created, changed, or deleted.
- Event Grid Subscriptions: These event subscriptions are user-configured entities that direct the proper set of events
A common Azure Policy event scenario is tracking when the compliance state of a
during policy evaluation. Event-based architecture is an efficient way to react to these changes and aids in the event based reaction to compliance state changes.
-Another scenario is to automatically trigger remediation tasks without manually ticking off _create
-remediation task_ on the policy page. Event Grid checks for compliance state and resources that are currently
+Another scenario is to automatically trigger remediation tasks without manually ticking off _create
+remediation task_ on the policy page. Event Grid checks the compliance state, and resources that are currently
noncompliant can be remediated. Learn more about [remediation structure](../concepts/remediation-structure.md). Remediation requires a managed identity, and policies must use the Modify or DeployIfNotExists effect. [Learn more about effect types](../how-to/remediate-resources.md).
-Additionally, Event Grid is helpful as an audit system to store state changes and understand cause of noncompliance over
-time. The scenarios for Event Grid are endless and based on the motivation, Event Grid is configurable.
+Additionally, Event Grid is helpful as an audit system to store state changes and understand cause of noncompliance over
+time. The scenarios for Event Grid are endless and based on the motivation, Event Grid is configurable.
:::image type="content" source="../../../event-grid/media/overview/functional-model.png" alt-text="Screenshot of Event Grid model of sources and handlers." lightbox="../../../event-grid/media/overview/functional-model-big.png"::: ## Practices for consuming events
governance Guest Configuration Baseline Docker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-docker.md
# Docker security baseline
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article details the configuration settings for Docker hosts as applicable in the following implementations:
governance Guest Configuration Baseline Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/guest-configuration-baseline-linux.md
# Linux security baseline
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article details the configuration settings for Linux guests as applicable in the following implementations:
internet-analyzer Internet Analyzer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-analyzer/internet-analyzer-cli.md
# Create an Internet Analyzer test using CLI (Preview)
+
There are two ways to create an Internet Analyzer resource - using the [Azure portal](internet-analyzer-create-test-portal.md) or using CLI. This section helps you create a new Azure Internet Analyzer resource using our CLI experience.
internet-analyzer Internet Analyzer Create Test Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-analyzer/internet-analyzer-create-test-portal.md
# Create an Internet Analyzer test using Portal (Preview)
+
There are two ways to create an Internet Analyzer resource - using the Azure portal or using [CLI](internet-analyzer-cli.md). This section helps you create a new Azure Internet Analyzer resource using our portal experience.
> [!IMPORTANT]
internet-analyzer Internet Analyzer Custom Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-analyzer/internet-analyzer-custom-endpoint.md
# Measure custom endpoints to evaluate in your Internet Analyzer tests
+
This article demonstrates how to set up a custom endpoint to measure as part of your Internet Analyzer tests. Custom endpoints help evaluate on-premises workloads, workloads running on other cloud providers, and custom Azure configurations. Comparing two custom endpoints in one test is possible if one endpoint is an Azure resource. For more information on Internet Analyzer, see the [overview](internet-analyzer-overview.md).
> [!IMPORTANT]
internet-analyzer Internet Analyzer Embed Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-analyzer/internet-analyzer-embed-client.md
# Embed the Internet Analyzer client
+
This article shows you how to embed the JavaScript client in your application. Installation of this client is necessary to run tests and receive scorecard analytics. **The profile-specific JavaScript client is provided after the first test has been configured.** From there, you may continue to add or remove tests to that profile without having to embed a new script. For more information on Internet Analyzer, see the [overview](internet-analyzer-overview.md).
> [!IMPORTANT]
internet-analyzer Internet Analyzer Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-analyzer/internet-analyzer-faq.md
# Azure Internet Analyzer FAQ (Preview)
-This is the FAQ for Azure Internet Analyzer- if you have additional questions, go to the [feedback forum](https://aka.ms/internetAnalyzerFeedbackForum) and post your question. When a question is frequently asked, we add it to this article so it can be found quickly and easily.
-
-## How do I participate in the preview?
-
-The preview is available to select customers. If you are interested in joining the preview, please do the following:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Navigate to the **Subscriptions** page.
-3. Click on the Azure subscription that you plan to use Internet Analyzer with.
-4. Go to the **Resource providers** settings for the subscription.
-5. Search for **Microsoft.Network** and click on the **Register** (or **Re-register**) button.
-![access request](./media/ia-faq/request-preview-access.png)
-
-6. [Request approval](https://aka.ms/internetAnalyzerContact) by providing us your email address and the Azure subscription ID that was used to make the access request.
-7. Once your request has been approved, you will receive an email confirmation and will be able to create/update/modify Internet Analyzer resources from the newly allowed Azure subscription.
+This is the FAQ for Azure Internet Analyzer. If you have additional questions, go to the [feedback forum](https://aka.ms/internetAnalyzerFeedbackForum) and post your question. When a question is frequently asked, we add it to this article so it can be found quickly and easily.
## Do I need to embed the client to run a test?
internet-analyzer Internet Analyzer Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-analyzer/internet-analyzer-overview.md
# What is Internet Analyzer? (Preview)
+
Internet Analyzer is a client-side measurement platform to test how networking infrastructure changes impact your customers' performance. Whether you're migrating from on-premises to Azure or evaluating a new Azure service, Internet Analyzer allows you to learn from your users' data and Microsoft's rich analytics to better understand and optimize your network architecture with Azure before you migrate. Internet Analyzer uses a small JavaScript client embedded in your web application to measure the latency from your end users to your selected set of network destinations, which we call _endpoints_. Internet Analyzer allows you to set up multiple side-by-side tests, so you can evaluate a variety of scenarios as your infrastructure and customer needs evolve. Internet Analyzer provides custom and preconfigured endpoints, giving you both the convenience and flexibility to make trusted performance decisions for your end users.
internet-analyzer Internet Analyzer Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-analyzer/internet-analyzer-scorecard.md
# Interpreting your scorecard
+
The scorecard tab contains the aggregated and analyzed results from your tests. Each test has its own scorecards. Scorecards provide quick and meaningful summaries of measurement results, giving you data-driven answers for your networking requirements. Internet Analyzer takes care of the analysis, allowing you to focus on the decision. The scorecard tab can be found in the Internet Analyzer resource menu.
The scorecard tab can be found in the Internet Analyzer resource menu.
## Filters
-* ***Test:*** Select the test that youΓÇÖd like to view results for - each test has its own scorecard. Test data will appear once there is enough data to complete the analysis ΓÇô in most cases, this should be within 24 hours.
+* ***Test:*** Select the test that you'd like to view results for - each test has its own scorecard. Test data will appear once there's enough data to complete the analysis - in most cases, this should be within 24 hours.
* ***Time period & end date:*** Three scorecards are generated daily - each scorecard reflects a different aggregation period - the 24 hours prior (day), the seven days prior (week), and the 30 days prior (month). Use the "End Date" filter to select the last day of the time period you want to see.
* ***Country:*** For each country where you have end users, a scorecard is generated. The global filter contains all end users.
## Measurement count
-The number of measurements impacts the confidence of the analysis. The higher the count, the more accurate the result. At minimum, tests should aim for a minimum of 100 measurements per endpoint per day. If measurement counts are too low, please configure the JavaScript client to execute more frequently in your application. The measurement counts for endpoints A and B should be very similar although small differences are expected and okay. In the case of large differences, the results should not be trusted.
+The number of measurements impacts the confidence of the analysis. The higher the count, the more accurate the result. Tests should aim for a minimum of 100 measurements per endpoint per day. If measurement counts are too low, configure the JavaScript client to execute more frequently in your application. The measurement counts for endpoints A and B should be very similar, although small differences are expected and okay. In the case of large differences, the results shouldn't be trusted.
## Percentiles
-Latency, measured in milliseconds, is a popular metric for measuring speed between a source and destination on the Internet. Latency data is not normally distributed (i.e. does not follow a "Bell Curve") because there is a "long-tail" of large latency values that skew results when using statistics such as the arithmetic mean. As an alternative, percentiles provide a "distribution-free" way to analyze data. As an example, the median, or 50th percentile, summarizes the middle of the distribution - half the values are above it and half are below it. A 75th percentile value means it is larger than 75% of all values in the distribution. Internet Analyzer refers to percentiles in shorthand as P50, P75, and P95.
+Latency, measured in milliseconds, is a popular metric for measuring speed between a source and destination on the Internet. Latency data isn't normally distributed (i.e. doesn't follow a "Bell Curve") because there's a "long-tail" of large latency values that skew results when using statistics such as the arithmetic mean. As an alternative, percentiles provide a "distribution-free" way to analyze data. As an example, the median, or 50th percentile, summarizes the middle of the distribution - half the values are above it and half are below it. A 75th percentile value means it's larger than 75% of all values in the distribution. Internet Analyzer refers to percentiles in shorthand as P50, P75, and P95.
Internet Analyzer percentiles are _sample metrics_. This is in contrast to the true _population metric_. For example, the daily true population median latency between students at the University of Southern California and Microsoft is the median latency value of all requests during that day. In practice, measuring the value of all requests is impractical, so we assume that a reasonably large sample is representative of the true population.
-For analysis purposes, P50 (median), is useful as an expected value for a latency distribution. Higher percentiles, such as P95, are useful for identifying how high latency is in the worst cases. If you are interested in understanding customer latency in general, P50 is the correct metric to focus on. If you are concerned with understanding performance for the worst-performing customers, then P95 should be the focus. P75 is a balance between these two.
+For analysis purposes, P50 (median), is useful as an expected value for a latency distribution. Higher percentiles, such as P95, are useful for identifying how high latency is in the worst cases. If you're interested in understanding customer latency in general, P50 is the correct metric to focus on. If you're concerned with understanding performance for the worst-performing customers, then P95 should be the focus. P75 is a balance between these two.
## Deltas
-A delta is the difference in metric values for endpoints A and B. Deltas are computed to show the benefit of B over A. Positive values indicate B performed better than A, whereas negative values indicate B's performance is worse. Deltas can be absolute (e.g. 10 milliseconds) or relative (5%).
+A delta is the difference in metric values for endpoints A and B. Deltas are computed to show the benefit of B over A. Positive values indicate B performed better than A, whereas negative values indicate B's performance is worse. Deltas can be absolute (for example, 10 milliseconds) or relative (5%).
## Confidence interval
internet-analyzer Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-analyzer/troubleshoot.md
# Azure Internet Analyzer troubleshooting
+
This article contains troubleshooting steps for common Internet Analyzer issues.
## Things to keep in mind
- The client script must be embedded into an **HTTPS** website. Measurements won't be collected if the script runs in a plaintext (**http://**) or local (**file://**) website.
-- Measurement data will only be collected if the Internet Analyzer profile's client script has been embedded into an application that is receiving real user traffic. Synthetic traffic (for example, Azure WebApp Performance Tests) does not typically execute embedded JavaScript code, so no measurements will be generated by that type of traffic.
+- Measurement data will only be collected if the Internet Analyzer profile's client script has been embedded into an application that is receiving real user traffic. Synthetic traffic (for example, Azure WebApp Performance Tests) doesn't typically execute embedded JavaScript code, so no measurements will be generated by that type of traffic.
## Azure portal **"A scorecard hasn't been generated for the selected filter combination" in the Scorecards section**
This article contains troubleshooting steps for common Internet Analyzer issues.
**"Total Measurement Count" is zero for one or both endpoints in a test** - Time series and measurement counts are computed once an hour, so you'll need to wait at least that amount of time for new measurement data to show up.-- Internet Analyzer only counts successful measurements (i.e., HTTP 200 responses) for its analysis. If one or both endpoints in a test are unreachable or returning a non-200 HTTP code, they will show up with zero total measurements.
+- Internet Analyzer only counts successful measurements (i.e., HTTP 200 responses) for its analysis. If one or both endpoints in a test are unreachable or returning a non-200 HTTP code, they'll show up with zero total measurements.
## Next steps Read the [Internet Analyzer FAQ](internet-analyzer-faq.md)
internet-peering Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/prerequisites.md
Title: Prerequisites to set up peering with Microsoft
-description: Learn about the prerequisites to set up peering with Microsoft.
-+
+description: Learn about the required prerequisites to set up internet peering with Microsoft.
+ - Previously updated : 01/23/2023--+ Last updated : 02/09/2024+
+#CustomerIntent: As an administrator, I want to learn what the prerequisites are to set up internet peering with Microsoft so I can plan correctly for the set up.
# Prerequisites to set up peering with Microsoft
-Ensure the prerequisites below are met before you request for a new peering or convert a legacy peering to Azure resource.
+In this article, you learn about the required prerequisites that you must meet before you request a new peering or convert a legacy peering to Azure resource.
## Azure related prerequisites
-* **Microsoft Azure account:**
+
+- **Microsoft Azure account:**
If you don't have a Microsoft Azure account, create a [Microsoft Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). A valid and active Microsoft Azure subscription is required to set up peering, as the peerings are modeled as resources within Azure subscriptions. It's important to note that:
  * The Azure resource types used to set up peering are always-free Azure products, so you aren't charged for creating an Azure account, creating a subscription, or accessing the Azure resources **PeerAsn** and **Peering** to set up peering. This isn't to be confused with the peering agreement for Direct peering between you and Microsoft, the terms for which are explicitly discussed with our peering team. Contact [Microsoft peering](mailto:peering@microsoft.com) if you have any questions in this regard.
  * You can use the same Azure subscription to access other Azure products or cloud services, which may be free or paid. When you access a paid product, you'll incur charges.
  * If you're creating a new Azure account or subscription, you may be eligible for free Azure credit during a trial period that you can use to try Azure cloud services. If interested, visit [Microsoft Azure account](https://azure.microsoft.com/free) for more info.
-* **Associate Peer ASN:**
+- **Associate Peer ASN:**
Before requesting for peering, first associate your ASN and contact info to your subscription. Follow the instructions in [Associate Peer ASN to Azure Subscription](howto-subscription-association-powershell.md). ## Other prerequisites
-* **PeeringDB profile:**
+
+- **PeeringDB profile:**
Peers are expected to have a complete and up-to-date profile on [PeeringDB](https://www.peeringdb.com). We use this information in our registration system to validate the peer's details such as NOC information, technical contact information, and their presence at the peering facilities etc.
-## Next steps
+## Related content
-* [Create or modify a Direct peering using the Azure portal](howto-direct-portal.md).
-* [Convert a legacy Direct peering to Azure resource using the Azure portal](howto-legacy-direct-portal.md)
-* [Create or modify Exchange peering using the Azure portal](howto-exchange-portal.md)
-* [Convert a legacy Exchange peering to Azure resource using the Azure portal](howto-legacy-exchange-portal.md)
+- [Create or modify a Direct peering using the Azure portal](howto-direct-portal.md).
+- [Convert a legacy Direct peering to Azure resource using the Azure portal](howto-legacy-direct-portal.md).
+- [Create or modify Exchange peering using the Azure portal](howto-exchange-portal.md).
+- [Convert a legacy Exchange peering to Azure resource using the Azure portal](howto-legacy-exchange-portal.md).
internet-peering Walkthrough Direct All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-direct-all.md
Title: Direct peering walkthrough
-description: Get started with Direct peering.
-
+description: Get started with Direct peering. Learn about the steps that you need to follow to provision and manage a Direct peering.
+ Previously updated : 02/24/2023-- Last updated : 02/09/2024+
+#CustomerIntent: As an administrator, I want to learn about the requirements to create a Direct peering so I can provision and manage Direct peerings.
# Direct peering walkthrough
In this article, you learn how to set up and manage a Direct peering.
:::image type="content" source="./media/walkthrough-direct-all/direct-peering.png" alt-text="Diagram showing Direct peering workflow and connection states." lightbox="./media/walkthrough-direct-all/direct-peering.png":::
-The following steps must be followed to provision a Direct peering:
+To provision a Direct peering, complete the following steps:
1. Review Microsoft [peering policy](policy.md) to understand requirements for Direct peering. 1. Follow the instructions in [Create or modify a Direct peering](howto-direct-powershell.md) to submit a peering request. 1. After you submit a peering request, Microsoft will contact using your registered email address to provide LOA (Letter Of Authorization) or for other information.
-1. Once peering request is approved, connection state changes to *ProvisioningStarted*.
-1. You need to:
- 1. complete wiring according to the LOA
- 1. (optionally) perform link test using 169.254.0.0/16
+1. Once peering request is approved, connection state changes to ***ProvisioningStarted***. Then, you need to:
+ 1. complete wiring according to the LOA.
+ 1. (optionally) perform link test using 169.254.0.0/16.
1. configure BGP session and then notify Microsoft. 1. Microsoft provisions BGP session with DENY ALL policy and validate end-to-end.
-1. If successful, you receive a notification that peering connection state is *Active*.
-1. Traffic will then be allowed through the new peering.
+1. If successful, you receive a notification that peering connection state is ***Active***.
+1. Traffic is then allowed through the new peering.
> [!NOTE] > Connection states are different from standard BGP session states.
-## Convert a legacy Direct peering to Azure resource
+## Convert a legacy Direct peering to an Azure resource
+
+To convert a legacy Direct peering to an Azure resource, complete the following steps:
-The following steps must be followed to convert a legacy Direct peering to Azure resource:
1. Follow the instructions in [Convert a legacy Direct peering to Azure resource](howto-legacy-direct-portal.md)
-1. After you submit the conversion request, Microsoft will review the request and contact you if necessary.
-1. Once approved, you see your Direct peering with a connection state as *Active*.
+1. After you submit the conversion request, Microsoft reviews the request and contacts you if necessary.
+1. Once approved, you see your Direct peering with a connection state as ***Active***.
## Deprovision Direct peering Contact [Microsoft peering](mailto:peering@microsoft.com) team to deprovision a Direct peering.
-When a Direct peering is set for deprovision, you see the connection state as *PendingRemove*.
+When a Direct peering is set for deprovision, the connection state changes to ***PendingRemove***.
> [!NOTE]
-> If you run PowerShell cmdlet to delete the Direct peering when the ConnectionState is *ProvisioningStarted* or *ProvisioningCompleted*, the operation will fail.
+> If you run PowerShell cmdlet to delete the Direct peering when the ConnectionState is ***ProvisioningStarted*** or ***ProvisioningCompleted***, the operation will fail.
-## Next steps
+## Related content
-* Learn about the [Prerequisites to set up peering with Microsoft](prerequisites.md).
+- Learn about the [Prerequisites to set up peering with Microsoft](prerequisites.md).
+- Learn about the [Peering policy](policy.md).
internet-peering Walkthrough Exchange All https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/walkthrough-exchange-all.md
Title: Exchange peering walkthrough
-description: Get started with Exchange peering.
-
+description: Get started with Exchange peering. Learn about the steps that you need to follow to provision and manage an Exchange peering.
+ Previously updated : 02/23/2023-- Last updated : 02/09/2024+
+#CustomerIntent: As an administrator, I want to learn about the requirements to create an Exchange peering so I can provision and manage Exchange peerings.
# Exchange peering walkthrough
In this article, you learn how to set up and manage an Exchange peering.
:::image type="content" source="./media/walkthrough-exchange-all/exchange-peering.png" alt-text="Diagram showing Exchange peering workflow and connection states." lightbox="./media/walkthrough-exchange-all/exchange-peering.png":::
-The following steps must be followed in order to provision an Exchange peering:
+To provision an Exchange peering, complete the following steps:
+ 1. Review Microsoft [peering policy](policy.md) to understand requirements for Exchange peering. 1. Find Microsoft peering location and peering facility ID in [PeeringDB](https://www.peeringdb.com/net/694) 1. Request Exchange peering for a peering location using the instructions in [Create and modify an Exchange peering](howto-exchange-portal.md). 1. After you submit a peering request, Microsoft will review the request and contact you if necessary.
-1. Once peering request is approved, connection state changes to *Approved*.
+1. Once peering request is approved, connection state changes to ***Approved***.
1. Configure BGP session at your end and notify Microsoft. 1. Microsoft provisions BGP session with DENY ALL policy and validate end-to-end.
-1. If successful, you receive a notification that peering connection state is *Active*.
-1. Traffic will then be allowed through the new peering.
+1. If successful, you receive a notification that peering connection state is ***Active***.
+1. Traffic is then allowed through the new peering.
> [!NOTE]
-> Connection states aren't to be confused with standard BGP session states.
+> Connection states are different than standard BGP session states.
## Convert a legacy Exchange peering to Azure resource
-The following steps must be followed in order to convert a legacy Exchange peering to Azure resource:
+
+To convert a legacy Exchange peering to an Azure resource, complete the following steps:
+ 1. Follow the instructions in [Convert a legacy Exchange peering to Azure resource](howto-legacy-exchange-portal.md) 1. After you submit the conversion request, Microsoft will review the request and contact you if necessary.
-1. Once approved, you see your Exchange peering with a connection state as *Active*.
+1. Once approved, you see your Exchange peering with a connection state as ***Active***.
-## Deprovision Exchange peering
+## Deprovision an Exchange peering
-Contact [Microsoft peering](mailto:peering@microsoft.com) to deprovision Exchange peering.
+Contact [Microsoft peering](mailto:peering@microsoft.com) to deprovision an Exchange peering.
-When an Exchange peering is set for deprovision, you see the connection state as *PendingRemove*.
+When an Exchange peering is set for deprovision, the connection state changes to ***PendingRemove***.
-> [!NOTE]
-> If you run PowerShell cmdlet to delete the Exchange peering when the connection state is *ProvisioningStarted* or *ProvisioningCompleted*, the operation will fail.
+> [!IMPORTANT]
+> If you run PowerShell cmdlet to delete the Exchange peering when the connection state is ***ProvisioningStarted*** or ***ProvisioningCompleted***, the operation will fail.
-## Next steps
+## Related content
-* Learn about the [Prerequisites to set up peering with Microsoft](prerequisites.md).
+- Learn about the [Prerequisites to set up peering with Microsoft](prerequisites.md).
+- Learn about the [Peering policy](policy.md).
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
# Azure IoT Edge supported platforms
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [!INCLUDE [iot-edge-version-1.4](includes/iot-edge-version-1.4.md)] This article explains what operating system platforms, IoT Edge runtimes, container engines, and components are supported by IoT Edge whether generally available or in preview.
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
Title: Troubleshoot Azure IoT Edge common errors
+ Title: Troubleshoot Azure IoT Edge common errors
description: Resolve common issues encountered when using an IoT Edge solution
# Solutions to common issues for Azure IoT Edge
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [!INCLUDE [iot-edge-version-1.4](includes/iot-edge-version-1.4.md)] Use this article to identify and resolve common issues when using IoT Edge solutions. If you need information on how to find logs and errors from your IoT Edge device, see [Troubleshoot your IoT Edge device](troubleshoot.md).
Some networks have packet overhead, which makes the default docker network MTU (
#### Solution
-1. Check the MTU setting for your docker network.
-
+1. Check the MTU setting for your docker network.
+    `docker network inspect <network name>`
1. Check the MTU setting for the physical network adaptor on your device.
-
+    `ip addr show eth0`

>[!NOTE]
Or
```output
info: edgelet_docker::runtime -- Starting module edgeHub...
warn: edgelet_utils::logging -- Could not start module edgeHub
-warn: edgelet_utils::logging -- caused by: failed to create endpoint edgeHub on network nat: hnsCall failed in Win32:
+warn: edgelet_utils::logging -- caused by: failed to create endpoint edgeHub on network nat: hnsCall failed in Win32:
The process cannot access the file because it is being used by another process. (0x20)
```
For the IoT Edge hub, set an environment variable **OptimizeForPerformance** to
In the Azure portal:
-1. In your IoT Hub, select your IoT Edge device and from the device details page and select **Set Modules** > **Runtime Settings**.
-1. Create an environment variable for the IoT Edge hub module called *OptimizeForPerformance* with type *True/False* that is set to *False*.
+1. In your IoT Hub, select your IoT Edge device, and from the device details page, select **Set Modules** > **Runtime Settings**.
+1. Create an environment variable for the IoT Edge hub module called *OptimizeForPerformance* with type *True/False* that is set to *False*.
:::image type="content" source="./media/troubleshoot/optimizeforperformance-false.png" alt-text="Screenshot that shows where to add the OptimizeForPerformance environment variable in the Azure portal.":::
-1. Select **Apply** to save changes, then select **Review + create**.
+1. Select **Apply** to save changes, then select **Review + create**.
The environment variable is now in the `edgeHub` property of the deployment manifest:
-
+    ```json
    "edgeHub": {
        "env": {
The security daemon fails to start and module containers aren't created. The `ed
#### Cause
-For all Linux distros except CentOS 7, IoT Edge's default configuration is to use `systemd` socket activation. A permission error happens if you change the configuration file to not use socket activation but leave the URLs as `/var/run/iotedge/*.sock`, since the `iotedge` user can't write to `/var/run/iotedge` meaning it can't unlock and mount the sockets itself.
+For all Linux distros except CentOS 7, IoT Edge's default configuration is to use `systemd` socket activation. A permission error happens if you change the configuration file to not use socket activation but leave the URLs as `/var/run/iotedge/*.sock`, since the `iotedge` user can't write to `/var/run/iotedge` meaning it can't unlock and mount the sockets itself.
#### Solution
Make sure the parent IoT Edge device can receive incoming requests from the down
#### Symptoms
-When attempting to migrate a hierarchy of IoT Edge devices from one IoT hub to another, the top level parent IoT Edge device can connect to IoT Hub, but downstream IoT Edge devices can't. The logs report `Unable to authenticate client downstream-device/$edgeAgent with module credentials`.
+When attempting to migrate a hierarchy of IoT Edge devices from one IoT hub to another, the top level parent IoT Edge device can connect to IoT Hub, but downstream IoT Edge devices can't. The logs report `Unable to authenticate client downstream-device/$edgeAgent with module credentials`.
#### Cause
The credentials for the downstream devices weren't updated properly when the mig
#### Solution

When migrating to the new IoT hub (assuming not using DPS), follow these steps in order:
-1. Follow [this guide to export and then import device identities](../iot-hub/iot-hub-bulk-identity-mgmt.md) from the old IoT hub to the new one
+1. Follow [this guide to export and then import device identities](../iot-hub/iot-hub-bulk-identity-mgmt.md) from the old IoT hub to the new one
1. Reconfigure all IoT Edge deployments and configurations in the new IoT hub
1. Reconfigure all parent-child device relationships in the new IoT hub
1. Update each device to point to the new IoT hub hostname (`iothub_hostname` under `[provisioning]` in `config.toml`); a quick way to check this value is sketched after this list
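
To spot devices that still point at the old IoT hub, you can read the value directly from the device's configuration file. This is a minimal sketch, not from the article, that assumes IoT Edge 1.4's configuration lives at `/etc/aziot/config.toml` and that Python 3.11+ (for `tomllib`) is available on the device:

```python
import tomllib

# Read the IoT Edge configuration and print the hub this device currently points at.
with open("/etc/aziot/config.toml", "rb") as f:
    config = tomllib.load(f)

print(config.get("provisioning", {}).get("iothub_hostname"))
```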
iot-operations Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/known-issues.md
This article contains known issues for Azure IoT Operations Preview.
- Some clusters that have slow Kubernetes API calls may result in selftest ping failures: `Status {Failed}. Probe failed: Ping: 1/2` from running `az iot ops check` command.
+- You might encounter an error in the KafkaConnector StatefulSet event logs such as `Invalid value: "mq-to-eventhub-connector-<token>--connectionstring": must be no more than 63 characters`. Ensure that your KafkaConnector name is no more than 5 characters long.
+ - You may encounter timeout errors in the Kafka connector and Event Grid connector logs. Despite this, the connector will continue to function and forward messages.
key-vault Rbac Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-guide.md
> Key Vault resource provider supports two resource types: **vaults** and **managed HSMs**. Access control described in this article only applies to **vaults**. To learn more about access control for managed HSM, see [Managed HSM access control](../managed-hsm/access-control.md). > [!NOTE]
-> Azure App Service certificate configuration through Azure Portal does not support Key Vault RBAC permission model. You can use Azure PowerShell, Azure CLI, ARM template deployments with **Key Vault Certificate User** role assignment for App Service global identity, for example Microsoft Azure App Service' in public cloud.
+> Azure App Service certificate configuration through the Azure portal does not support the Key Vault RBAC permission model. You can use Azure PowerShell, the Azure CLI, or ARM template deployments with a **Key Vault Certificates User** role assignment for the App Service global identity, for example 'Microsoft Azure App Service' in public cloud. A sketch of such a role assignment appears after the roles table below.
Azure role-based access control (Azure RBAC) is an authorization system built on [Azure Resource Manager](../../azure-resource-manager/management/overview.md) that provides fine-grained access management of Azure resources.
More about Azure Key Vault management guidelines, see:
| Key Vault Administrator| Perform all data plane operations on a key vault and all objects in it, including certificates, keys, and secrets. Cannot manage key vault resources or manage role assignments. Only works for key vaults that use the 'Azure role-based access control' permission model. | 00482a5a-887f-4fb3-b363-3b7fe8e74483 |
| Key Vault Reader | Read metadata of key vaults and its certificates, keys, and secrets. Cannot read sensitive values such as secret contents or key material. Only works for key vaults that use the 'Azure role-based access control' permission model. | 21090545-7ca7-4776-b22c-e363652d74d2 |
| Key Vault Certificates Officer | Perform any action on the certificates of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | a4417e6f-fecd-4de8-b567-7b0420556985 |
-| Key Vault Certificate User | Read entire certificate contents including secret and key portion. Only works for key vaults that use the 'Azure role-based access control' permission model. | db79e9a7-68ee-4b58-9aeb-b90e7c24fcba |
+| Key Vault Certificates User | Read entire certificate contents including secret and key portion. Only works for key vaults that use the 'Azure role-based access control' permission model. | db79e9a7-68ee-4b58-9aeb-b90e7c24fcba |
| Key Vault Crypto Officer | Perform any action on the keys of a key vault, except manage permissions. Only works for key vaults that use the 'Azure role-based access control' permission model. | 14b46e9e-c2b7-41b4-b07b-48a6ebf60603 |
| Key Vault Crypto Service Encryption User | Read metadata of keys and perform wrap/unwrap operations. Only works for key vaults that use the 'Azure role-based access control' permission model. | e147488a-f6f5-4113-8e2d-b22465e65bf6 |
| Key Vault Crypto User | Perform cryptographic operations using keys. Only works for key vaults that use the 'Azure role-based access control' permission model. | 12338af0-0e69-4776-bea7-57ae8d297424 |
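
As an illustration of the App Service note above, the following minimal sketch (not from the article) assigns the **Key Vault Certificates User** role, using its role definition ID from the table, to a principal at key vault scope. It assumes the Python `azure-identity` and `azure-mgmt-authorization` packages; the subscription ID, resource IDs, and principal object ID are placeholders:

```python
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<subscription-id>"
credential = DefaultAzureCredential()
auth_client = AuthorizationManagementClient(credential, subscription_id)

# Scope the assignment to a single key vault (placeholder resource ID).
vault_scope = (
    f"/subscriptions/{subscription_id}/resourceGroups/myResourceGroup"
    "/providers/Microsoft.KeyVault/vaults/myvault"
)

# Key Vault Certificates User, taken from the built-in roles table above.
role_definition_id = (
    f"/subscriptions/{subscription_id}/providers/Microsoft.Authorization"
    "/roleDefinitions/db79e9a7-68ee-4b58-9aeb-b90e7c24fcba"
)

auth_client.role_assignments.create(
    scope=vault_scope,
    role_assignment_name=str(uuid.uuid4()),  # role assignment names are GUIDs
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=role_definition_id,
        principal_id="<app-service-principal-object-id>",
    ),
)
```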
lab-services Connect Virtual Machine Mac Remote Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-mac-remote-desktop.md
Last updated 02/16/2023
# Connect to a VM using Remote Desktop Protocol on a Mac
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ In this article, you learn how to connect to a lab VM in Azure Lab Services from a Mac by using Remote Desktop Protocol (RDP). ## Install Microsoft Remote Desktop on a Mac
lab-services How To Attach External Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-attach-external-storage.md
Last updated 04/25/2023
# Use external file storage in Azure Lab Services
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article covers some of the options for using external file storage in Azure Lab Services. [Azure Files](https://azure.microsoft.com/services/storage/files/) offers fully managed file shares in the cloud, [accessible via SMB 2.1 and SMB 3.0](/azure/storage/files/storage-how-to-use-files-windows). An Azure Files share can be connected either publicly or privately within a virtual network. You can also configure the share to use a lab user's Active Directory credentials for connecting to the file share. If you're on a Linux machine, you can also use Azure NetApp Files with NFS volumes for external file storage with Azure Lab Services.

## Which solution to use
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
- Title: Cross-region load balancer-
-description: Overview of cross region load balancer tier for Azure Load Balancer.
---- Previously updated : 06/23/2023----
-# Cross-region (Global) Load Balancer
-
-Azure Standard Load Balancer supports cross-region load balancing enabling geo-redundant high availability scenarios such as:
-
-* Incoming traffic originating from multiple regions.
-* [Instant global failover](#regional-redundancy) to the next optimal regional deployment.
-* Load distribution across regions to the closest Azure region with [ultra-low latency](#ultra-low-latency).
-* Ability to [scale up/down](#ability-to-scale-updown-behind-a-single-endpoint) behind a single endpoint.
-* Static anycast global IP address
-* [Client IP preservation](#client-ip-preservation)
-* [Build on existing load balancer](#build-cross-region-solution-on-existing-azure-load-balancer) solution with no learning curve
-
-The frontend IP configuration of your cross-region load balancer is static and advertised across [most Azure regions](#participating-regions).
--
-> [!NOTE]
-> The backend port of your load balancing rule on cross-region load balancer should match the frontend port of the load balancing rule/inbound nat rule on regional standard load balancer.
-
-### Regional redundancy
-
-Configure regional redundancy by seamlessly linking a cross-region load balancer to your existing regional load balancers.
-
-If one region fails, the traffic is routed to the next closest healthy regional load balancer.
-
-The health probe of the cross-region load balancer gathers information about availability of each regional load balancer every 5 seconds. If one regional load balancer drops its availability to 0, cross-region load balancer detects the failure. The regional load balancer is then taken out of rotation.
--
-### Ultra-low latency
-
-The geo-proximity load-balancing algorithm is based on the geographic location of your users and your regional deployments.
-
-Traffic started from a client hits the closest participating region and travel through the Microsoft global network backbone to arrive at the closest regional deployment.
-
-For example, you have a cross-region load balancer with standard load balancers in Azure regions:
-
-* West US
-* North Europe
-
-If a flow is started from Seattle, traffic enters West US. This region is the closest participating region from Seattle. The traffic is routed to the closest region load balancer, which is West US.
-
-Azure cross-region load balancer uses geo-proximity load-balancing algorithm for the routing decision.
-
-The configured load distribution mode of the regional load balancers is used for making the final routing decision when multiple regional load balancers are used for geo-proximity.
-
-For more information, see [Configure the distribution mode for Azure Load Balancer](./load-balancer-distribution-mode.md).
-
-Egress traffic follows the routing preference set on the regional load balancers.
-
-### Ability to scale up/down behind a single endpoint
-
-When you expose the global endpoint of a cross-region load balancer to customers, you can add or remove regional deployments behind the global endpoint without interruption.
-
-<!To learn about how to add or remove a regional deployment from the backend, read more [here](TODO: Insert CLI doc here).>
-
-### Static anycast global IP address
-
-Cross-region load balancer comes with a static public IP, which ensures the IP address remains the same. To learn more about static IP, read more [here](../virtual-network/ip-services/public-ip-addresses.md#ip-address-assignment)
-
-### Client IP Preservation
-
-Cross-region load balancer is a Layer-4 pass-through network load balancer. This pass-through preserves the original IP of the packet. The original IP is available to the code running on the virtual machine. This preservation allows you to apply logic that is specific to an IP address.
-
-### Floating IP
-
-Floating IP can be configured at both the global IP level and regional IP level. For more information, visit [Multiple frontends for Azure Load Balancer](./load-balancer-multivip-overview.md)
-
-It is important to note that floating IP configured on the Azure cross-region Load Balancer operates independently of floating IP configurations on backend regional load balancers. If floating IP is enabled on the cross-region load balancer, the appropriate loopback interface needs to be added to the backend VMs.
-
-### Health Probes
-
-Azure cross-region Load Balancer utilizes the health of the backend regional load balancers when deciding where to distribute traffic to. Health checks by cross-region load balancer are done automatically every 5 seconds, given that a user has set up health probes on their regional load balancer.  
-
-## Build cross region solution on existing Azure Load Balancer
-
-The backend pool of cross-region load balancer contains one or more regional load balancers.
-
-Add your existing load balancer deployments to a cross-region load balancer for a highly available, cross-region deployment.
-
-**Home region** is where the cross-region load balancer or Public IP Address of Global tier is deployed.
-This region doesn't affect how the traffic is routed. If a home region goes down, traffic flow is unaffected.
-
-### Home regions
-* Central US
-* East Asia
-* East US 2
-* North Europe
-* Southeast Asia
-* UK South
-* US Gov Virginia
-* West Europe
-* West US
-
-> [!NOTE]
-> You can only deploy your cross-region load balancer or Public IP in Global tier in one of the listed Home regions.
-
-A **participating region** is where the Global public IP of the load balancer is being advertised.
-
-Traffic started by the user travels to the closest participating region through the Microsoft core network.
-
-Cross-region load balancer routes the traffic to the appropriate regional load balancer.
--
-### Participating regions
-* Australia East
-* Australia Southeast
-* Central India
-* Central US
-* East Asia
-* East US
-* East US 2
-* Japan East
-* North Central US
-* North Europe
-* South Central US
-* Southeast Asia
-* UK South
-* US DoD Central
-* US DoD East
-* US Gov Arizona
-* US Gov Texas
-* US Gov Virginia
-* West Central US
-* West Europe
-* West US
-* West US 2
-
-> [!NOTE]
-> The backend regional load balancers can be deployed in any publicly available Azure Region and is not limited to just participating regions.
-
-## Limitations
-
-* Cross-region frontend IP configurations are public only. An internal frontend is currently not supported.
-
-* Private or internal load balancer can't be added to the backend pool of a cross-region load balancer
-
-* NAT64 translation isn't supported at this time. The frontend and backend IPs must be of the same type (v4 or v6).
-
-* UDP traffic isn't supported on Cross-region Load Balancer for IPv6.
-
-* UDP traffic on port 3 isn't supported on Cross-Region Load Balancer
-
-* Outbound rules aren't supported on Cross-region Load Balancer. For outbound connections, utilize [outbound rules](./outbound-rules.md) on the regional load balancer or [NAT gateway](../nat-gateway/nat-overview.md).
-
-## Pricing and SLA
-Cross-region load balancer shares the [SLA](https://azure.microsoft.com/support/legal/sla/load-balancer/v1_0/) of standard load balancer.
-
- ## Next steps
--- See [Tutorial: Create a cross-region load balancer using the Azure portal](tutorial-cross-region-portal.md) to create a cross-region load balancer.-- Learn more about [cross-region load balancer](https://www.youtube.com/watch?v=3awUwUIv950).-- Learn more about [Azure Load Balancer](load-balancer-overview.md).
load-balancer Load Balancer Ipv6 For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-for-linux.md
# Configure DHCPv6 for Linux VMs
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
Some of the Linux virtual-machine images in the Azure Marketplace don't have Dynamic Host Configuration Protocol version 6 (DHCPv6) configured by default. To support IPv6, DHCPv6 must be configured in the Linux OS distribution that you're using. The various Linux distributions configure DHCPv6 in various ways because they use different packages.
load-balancer Load Balancer Standard Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-standard-availability-zones.md
- Title: Azure Load Balancer and Availability Zones-
-description: With this learning path, get started with Azure Standard Load Balancer and Availability Zones.
---- Previously updated : 05/03/2023----
-# Load Balancer and Availability Zones
-
-Azure Load Balancer supports availability zones scenarios. You can use Standard Load Balancer to increase availability throughout your scenario by aligning resources with, and distribution across zones. Review this document to understand these concepts and fundamental scenario design guidance.
-
-A Load Balancer can either be **zone redundant, zonal,** or **non-zonal**. The load balancer's availability zone selection is synonymous with its frontend IP's zone selection. For public load balancers, if the public IP in the Load balancer's frontend is zone redundant then the load balancer is also zone-redundant. If the public IP in the load balancer's frontend is zonal, then the load balancer will also be designated to the same zone. To configure the zone-related properties for your load balancer, select the appropriate type of frontend needed.
-
-## Zone redundant
-
-In a region with Availability Zones, a Standard Load Balancer can be zone-redundant with traffic served by a single IP address. A single frontend IP address survives zone failure. The frontend IP may be used to reach all (non-impacted) backend pool members no matter the zone. Up to one availability zone can fail and the data path survives as long as the remaining zones in the region remain healthy.
-
-The frontend's IP address is served simultaneously by multiple independent infrastructure deployments in multiple availability zones. Any retries or reestablishment will succeed in other zones not affected by the zone failure.
-
-<p align="center">
- <img src="./media/az-zonal/zone-redundant-lb-1.svg" alt="Figure depicts a zone-redundant standard load balancer directing traffic in three different zones to three different subnets in a zone redundant configuration." width="512" title="Virtual Network NAT">
-</p>
-
-*Figure: Zone redundant load balancer*
-
-## Zonal
-
-You can choose to have a frontend guaranteed to a single zone, which is known as a *zonal*. With this scenario, a single zone in a region serves all inbound or outbound flow. Your frontend shares fate with the health of the zone. The data path is unaffected by failures in zones other than where it was guaranteed. You can use zonal frontends to expose an IP address per Availability Zone.
-
-Additionally, the use of zonal frontends directly for load-balanced endpoints within each zone is supported. You can use this configuration to expose per zone load-balanced endpoints to individually monitor each zone. For public endpoints, you can integrate them with a DNS load-balancing product like [Traffic Manager](../traffic-manager/traffic-manager-overview.md) and use a single DNS name.
-
-<p align="center">
- <img src="./media/az-zonal/zonal-lb-1.svg" alt="Figure depicts three zonal standard load balancers each directing traffic in a zone to three different subnets in a zonal configuration." width="512" title="Virtual Network NAT">
-</p>
-
-*Figure: Zonal load balancer*
-
-For a public load balancer frontend, you add a **zones** parameter to the public IP. This public IP is referenced by the frontend IP configuration used by the respective rule.
-
-For an internal load balancer frontend, add a **zones** parameter to the internal load balancer frontend IP configuration. A zonal frontend guarantees an IP address in a subnet to a specific zone.
-
-## Non-Zonal
-
-Load Balancers can also be created in a non-zonal configuration by use of a "no-zone" frontend. In these scenarios, a public load balancer would use a public IP or public IP prefix, an internal load balancer would use a private IP. This option doesn't give a guarantee of redundancy.
-
->[!NOTE]
->All public IP addresses that are upgraded from Basic SKU to Standard SKU will be of type "no-zone". Learn how to [Upgrade a public IP address in the Azure portal](../virtual-network/ip-services/public-ip-upgrade-portal.md).
-
-## <a name="design"></a> Design considerations
-
-Now that you understand the zone-related properties for Standard Load Balancer, the following design considerations might help as you design for high availability.
-
-### Tolerance to zone failure
--- A **zone redundant** frontend can serve a zonal resource in any zone with a single IP address. The IP can survive one zone failure as long as the remaining zones are healthy within the region.-- A **zonal** frontend is a reduction of the service to a single zone and shares fate with the respective zone. If the deployment in your zone goes down, your load balancer won't survive this failure.-
-Members in the backend pool of a load balancer are normally associated with a single zone such as with zonal virtual machines. A common design for production workloads would be to have multiple zonal resources. For example, placing virtual machines from zone 1, 2, and 3 in the backend of a load balancer with a zone-redundant frontend meets this design principle.
-
-### Multiple frontends
-
-Using multiple frontends allow you to load balance traffic on more than one port and/or IP address. When designing your architecture, ensure you account for how zone redundancy interacts with multiple frontends. If your goal is to always have every frontend resilient to failure, then all IP addresses assigned as frontends must be zone-redundant. If a set of frontends is intended to be associated with a single zone, then every IP address for that set must be associated with that specific zone. A load balancer isn't required in each zone. Instead, each zonal front end, or set of zonal frontends, could be associated with virtual machines in the backend pool that are part of that specific availability zone.
-
-### Transition between regional zonal models
-
-In the case where a region is augmented to have [availability zones](../availability-zones/az-overview.md), any existing IPs would remain non-zonal like IPs used for load balancer frontends. To ensure your architecture can take advantage of the new zones, creation of new frontend IPs is recommended. Once created, you can replace the existing non-zonal frontend with a new zone-redundant frontend using the method described [here](../virtual-network/ip-services/configure-public-ip-load-balancer.md#change-or-remove-public-ip-address). All existing load balancing and NAT rules transition to the new frontend.
-
-### Control vs data plane implications
-
-Zone-redundancy doesn't imply hitless data plane or control plane. Zone-redundant flows can use any zone and your flows will use all healthy zones in a region. In a zone failure, traffic flows using healthy zones aren't affected.
-
-Traffic flows using a zone at the time of zone failure may be affected but applications can recover. Traffic continues in the healthy zones within the region upon retransmission when Azure has converged around the zone failure.
-
-Review [Azure cloud design patterns](/azure/architecture/patterns/) to improve the resiliency of your application to failure scenarios.
-
-## Limitations
-
-* Zones can't be changed, updated, or created for the resource after creation.
-* Resources can't be updated from zonal to zone-redundant or vice versa after creation.
-
-## Next steps
-- Learn more about [Availability Zones](../availability-zones/az-overview.md)-- Learn more about [Standard Load Balancer](./load-balancer-overview.md)-- Learn how to [load balance VMs within a zone using a zonal Standard Load Balancer](./quickstart-load-balancer-standard-public-cli.md)-- Learn how to [load balance VMs across zones using a zone redundant Standard Load Balancer](./quickstart-load-balancer-standard-public-cli.md)-- Learn about [Azure cloud design patterns](/azure/architecture/patterns/) to improve the resiliency of your application to failure scenarios.
machine-learning Migrate To V2 Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-to-v2-deploy-pipelines.md
Title: Upgrade pipeline endpoints to SDK v2
-description: Upgrade pipeline endpoints from v1 to v2 of Azure Machine Learning SDK
+description: Upgrade pipeline endpoints from v1 to v2 of Azure Machine Learning SDK.
Once you have a pipeline up and running, you can publish a pipeline so that it r
## What has changed?
-[Batch Endpoint](concept-endpoints-batch.md) proposes a similar yet more powerful way to handle multiple assets running under a durable API which is why the Published pipelines functionality has been moved to [Pipeline component deployments in batch endpoints](concept-endpoints-batch.md#pipeline-component-deployment).
+[Batch Endpoint](concept-endpoints-batch.md) provides a similar yet more powerful way to handle multiple assets running under a durable API, which is why the Published pipelines functionality was moved to [Pipeline component deployments in batch endpoints](concept-endpoints-batch.md#pipeline-component-deployment).
[Batch endpoints](concept-endpoints-batch.md) decouple the interface (endpoint) from the actual implementation (deployment) and allow the user to decide which deployment serves the default implementation of the endpoint. [Pipeline component deployments in batch endpoints](concept-endpoints-batch.md#pipeline-component-deployment) allow users to deploy pipeline components instead of pipelines, which makes better use of reusable assets for those organizations looking to streamline their MLOps practice.
Compare how publishing a pipeline has changed from v1 to v2:
# [SDK v2](#tab/v2)
-1. First, we need to get the pipeline we want to publish. However, batch endpoints can't deploy pipelines but pipeline components. We need to convert the pipeline to a component.
+1. First, we need to define the pipeline we want to publish.
    ```python
    @pipeline()
Compare how publishing a pipeline has changed from v1 to v2:
return { (..) }
+ ```
+
+1. Batch endpoints don't deploy pipelines but pipeline components. Components provide a more reliable way to keep the assets that are deployed under an endpoint under source control. We can convert any pipeline definition into a pipeline component as follows:
- pipeline_component = pipeline.pipeline_builder.build()
+ ```python
+ pipeline_component = pipeline().component
    ```

1. As a best practice, we recommend registering pipeline components so you can keep versioning of them in a centralized way inside the workspace or even the shared registries.
Compare how publishing a pipeline has changed from v1 to v2:
    ml_client.components.create_or_update(pipeline_component)
    ```
-1. Then, we need to create the endpoint that will host all the pipeline deployments:
+1. Then, we need to create the endpoint hosting all the pipeline deployments:
    ```python
    endpoint_name = "PipelineEndpointTest"
job = ml_client.batch_endpoints.invoke(
    endpoint_name=batch_endpoint,
)
```
+
+Use `inputs` to indicate the inputs of the job if needed. See [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md) for a more detailed explanation about how to indicate inputs and outputs.
+
+```python
+job = ml_client.batch_endpoints.invoke(
+ endpoint_name=batch_endpoint,
+ inputs={
+ "input_data": Input(type=AssetTypes.URI_FOLDER, path="./my_local_data")
+ }
+)
+```
You can also submit a job to a specific version:
run_id = pipeline_endpoint.submit(endpoint_name, pipeline_version="0")
# [SDK v2](#tab/v2)
-In batch endpoints, deployments are not versioned. However, you can deploy multiple pipeline components versions under the same endpoint. In this sense, each pipeline version in v1 will correspond to a different pipeline component version and its corresponding deployment under the endpoint.
+In batch endpoints, deployments aren't versioned. However, you can deploy multiple pipeline components versions under the same endpoint. In this sense, each pipeline version in v1 corresponds to a different pipeline component version and its corresponding deployment under the endpoint.
-Then, you can deploy a specific deployment running under the endpoint if that deployment runs the version you are interested in.
+Then, you can deploy a specific deployment running under the endpoint if that deployment runs the version you're interested in.
```python
job = ml_client.batch_endpoints.invoke(
response = requests.post(
# [SDK v2](#tab/v2)
-Batch endpoints support multiple inputs types. The following example shows how to indicate two different inputs of type `string` and `numeric`:
+Batch endpoints support multiple input types. The following example shows how to indicate two different inputs of type `string` and `numeric`. See [Create jobs and input data for batch endpoints (REST)](how-to-access-data-batch-endpoints-jobs.md?tabs=rest) for more detailed examples:
```python
batch_endpoint = ml_client.batch_endpoints.get(endpoint_name)
response = requests.post(
)
```
-To know how to indicate inputs and outputs in batch endpoints and all the supported types see [Create jobs and input data for batch endpoints](how-to-access-data-batch-endpoints-jobs.md).
- ## Next steps
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-link-synapse-ml-workspaces.md
Previously updated : 11/04/2022 Last updated : 02/09/2024 #Customer intent: As a workspace administrator, I want to link Azure Synapse workspaces and Azure Machine Learning workspaces and attach Apache Spark pools for a unified data wrangling experience.
managed-grafana How To Manage Plugins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-manage-plugins.md
Grafana supports data source, panel, and app plugins. When you create a new Graf
To install Grafana plugins, follow the process below.
+> [!IMPORTANT]
+> Before adding plugins to your Grafana instance, we recommend that you evaluate these plugins to ensure that they meet your organizational standards for quality, compliance, and security. Third-party plugins have their own release frequency, security implications, and testing and update processes that are outside of Microsoft's control. Ultimately, it is up to you to determine which plugins meet your requirements and security needs.
+ 1. Open your Azure Managed Grafana instance in the Azure portal.
1. Select **Plugin management**. This page shows a table with three columns containing checkboxes, plugin names, and plugin IDs. Review the checkboxes. A checked box indicates that the corresponding plugin is already installed and can be removed; an unchecked box indicates that the corresponding plugin isn't installed and can be added.
migrate Common Questions Server Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/common-questions-server-migration.md
# Migration and modernization: Common questions
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article answers common questions about the Migration and modernization tool. If you have other questions, check these resources:

- [General questions](resources-faq.md) about Azure Migrate
migrate Concepts Assessment Calculation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-assessment-calculation.md
# Assessment overview (migrate to Azure VMs)
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article provides an overview of assessments in the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. The tool can assess on-premises servers in VMware virtual and Hyper-V environments, and physical servers for migration to Azure.

## What's an assessment?
migrate Migrate Support Matrix Hyper V Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v-migration.md
# Support matrix for Hyper-V migration
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article summarizes support settings and limitations for migrating Hyper-V VMs with [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool). If you're looking for information about assessing Hyper-V VMs for migration to Azure, review the [assessment support matrix](migrate-support-matrix-hyper-v.md).

## Migration limitations
Connect after migration-Linux | To connect to Azure VMs after migration using SS
## Next steps
-[Migrate Hyper-V VMs](tutorial-migrate-hyper-v.md) for migration.
+[Migrate Hyper-V VMs](tutorial-migrate-hyper-v.md) for migration.
migrate Migrate Support Matrix Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-hyper-v.md
ms.custom: engagement-fy24
# Support matrix for Hyper-V assessment
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article summarizes prerequisites and support requirements when you discover and assess on-premises servers running in a Hyper-V environment for migration to Azure, using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. If you want to migrate servers running on Hyper-V to Azure, review the [migration support matrix](migrate-support-matrix-hyper-v-migration.md). To set up discovery and assessment of servers running on Hyper-V, you create a project, and add the Azure Migrate: Discovery and assessment tool to the project. After the tool is added, you deploy the [Azure Migrate appliance](migrate-appliance.md). The appliance continuously discovers on-premises servers and sends server metadata and performance data to Azure. After discovery is complete, you gather discovered servers into groups, and run an assessment for a group.
migrate Migrate Support Matrix Physical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-physical.md
# Support matrix for physical server discovery and assessment
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article summarizes prerequisites and support requirements when you assess physical servers for migration to Azure, using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool. If you want to migrate physical servers to Azure, review the [migration support matrix](migrate-support-matrix-physical-migration.md). To assess physical servers, you create a project, and add the Azure Migrate: Discovery and assessment tool to the project. After adding the tool, you deploy the [Azure Migrate appliance](migrate-appliance.md). The appliance continuously discovers on-premises servers, and sends servers metadata and performance data to Azure. After discovery is complete, you gather discovered servers into groups, and run an assessment for a group.
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
# Support matrix for VMware vSphere migration
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article summarizes support settings and limitations for migrating VMware vSphere VMs with [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) . If you're looking for information about assessing VMware vSphere VMs for migration to Azure, review the [assessment support matrix](migrate-support-matrix-vmware.md).
migrate Migrate Support Matrix Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware.md
# Support matrix for VMware discovery
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article summarizes prerequisites and support requirements for using the [Azure Migrate: Discovery and assessment](migrate-services-overview.md#azure-migrate-discovery-and-assessment-tool) tool to discover and assess servers in a VMware environment for migration to Azure. To assess servers, first, create an Azure Migrate project. The Azure Migrate: Discovery and assessment tool is automatically added to the project. Then, deploy the Azure Migrate appliance. The appliance continuously discovers on-premises servers and sends configuration and performance metadata to Azure. When discovery is completed, gather the discovered servers into groups and run assessments per group.
migrate Prepare For Agentless Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-agentless-migration.md
# Prepare for VMware agentless migration
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article provides an overview of the changes performed when you [migrate VMware VMs to Azure via the agentless migration](./tutorial-migrate-vmware.md) method using the Migration and modernization tool. Before you migrate your on-premises VM to Azure, you may require a few changes to make the VM ready for Azure. These changes are important to ensure that the migrated VM can boot successfully in Azure and connectivity to the Azure VM can be established.
migrate Prepare For Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/prepare-for-migration.md
# Prepare on-premises machines for migration to Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes how to prepare on-premises machines before you migrate them to Azure using the [Migration and modernization](migrate-services-overview.md#migration-and-modernization-tool) tool. In this article, you:
migrate Troubleshoot Appliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-appliance.md
# Troubleshoot the Azure Migrate appliance
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article helps you troubleshoot issues when you deploy the [Azure Migrate](migrate-services-overview.md) appliance and use the appliance to discover on-premises servers.

## What's supported?
migrate Tutorial App Containerization Java App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-app-service.md
Last updated 5/2/2022
# Java web app containerization and migration to Azure App Service
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ In this article, you'll learn how to containerize Java web applications (running on Apache Tomcat) and migrate them to [Azure App Service](https://azure.microsoft.com/services/app-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure App Service. The Azure Migrate: App Containerization tool currently supports:
migrate Tutorial App Containerization Java Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-app-containerization-java-kubernetes.md
Last updated 01/04/2023
# Java web app containerization and migration to Azure Kubernetes Service
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ In this article, you'll learn how to containerize Java web applications (running on Apache Tomcat) and migrate them to [Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service/) using the Azure Migrate: App Containerization tool. The containerization process doesn't require access to your codebase and provides an easy way to containerize existing applications. The tool works by using the running state of the applications on a server to determine the application components and helps you package them in a container image. The containerized application can then be deployed on Azure Kubernetes Service (AKS). The Azure Migrate: App Containerization tool currently supports -
migrate Tutorial Discover Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-import.md
# Tutorial: Build a business case or assess servers using an imported CSV file
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ As part of your migration journey to Azure, you discover your on-premises inventory and workloads. This tutorial shows you how to build a business case or assess on-premises machines with the Azure Migrate: Discovery and Assessment tool, using an imported comma-separated values (CSV) file.
migrate Tutorial Discover Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-spring-boot.md
# Tutorial: Discover Spring Boot applications running in your datacenter (preview)
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes how to discover Spring Boot applications running on servers in your datacenter, using Azure Migrate: Discovery and assessment tool. The discovery process is completely agentless; no agents are installed on the target servers. In this tutorial, you learn how to:
migrate Tutorial Migrate Aws Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md
# Discover, assess, and migrate Amazon Web Services (AWS) VMs to Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This tutorial shows you how to discover, assess, and migrate Amazon Web Services (AWS) virtual machines (VMs) to Azure VMs, using Azure Migrate: Server Assessment and Migration and modernization tools.
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/connect-nodejs.md
ms.devlang: javascript
# Quickstart: Use Node.js to connect and query data in Azure Database for MySQL - Flexible Server
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] In this quickstart, you connect to Azure Database for MySQL flexible server by using Node.js. You then use SQL statements to query, insert, update, and delete data in the database from Mac, Linux, and Windows platforms.
mysql Concepts Migrate Mydumper Myloader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/concepts-migrate-mydumper-myloader.md
Last updated 05/03/2023
# Migrate large databases to Azure Database for MySQL using mydumper/myloader
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [!INCLUDE[applies-to-mysql-single-flexible-server](../includes/applies-to-mysql-single-flexible-server.md)] Azure Database for MySQL is a managed service that you use to run, manage, and scale highly available MySQL databases in the cloud. To migrate MySQL databases larger than 1 TB to Azure Database for MySQL, consider using community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html), which provide the following benefits:
mysql Connect Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/connect-nodejs.md
Last updated 05/03/2023
# Quickstart: Use Node.js to connect and query data in Azure Database for MySQL
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [!INCLUDE[applies-to-mysql-single-server](../includes/applies-to-mysql-single-server.md)] [!INCLUDE[azure-database-for-mysql-single-server-deprecation](../includes/azure-database-for-mysql-single-server-deprecation.md)]
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
Last updated 09/29/2022
Hello! We have news to share - **Azure Database for MySQL - Single Server is on the retirement path** and Azure Database for MySQL - Single Server is scheduled for retirement by **September 16, 2024**.
-As part of this retirement, we will no longer support creating new Single Server instances from the Azure portal beginning **January 16, 2023**. If you still need to create Single Server instances to meet business continuity needs, you can leverage [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md). Additionally, you can still use your Terraform template to create single server instances. You will still be able to create read replicas for your existing single server instance from the **Replication blade** and this will continue to be supported till the sunset date of **September 16, 2024**.
+As part of this retirement, we will no longer support creating new Single Server instances from the Azure portal beginning **January 16, 2023**, and from the Azure CLI beginning **March 19, 2024**. If you still need to create Single Server instances to meet business continuity needs, raise an Azure support ticket. You will still be able to create read replicas and perform restores (PITR and geo-restore) for your existing single server instance, and this will continue to be supported until the sunset date of **September 16, 2024**.
After years of evolving the Azure Database for MySQL - Single Server service, it can no longer handle all the new features, functions, and security needs. We recommend upgrading to Azure Database for MySQL - Flexible Server.
To upgrade to Azure Database for MySQL Flexible Server, it's important to know w
**Q. After the Single Server retirement announcement, what if I still need to create a new single server to meet my business needs?**
-**A.** As part of this retirement, we will no longer support creating new Single Server instances from the Azure portal beginning **January 16, 2023**. If you still need to create Single Server instances to meet business continuity needs, you can leverage [Azure CLI](./quickstart-create-mysql-server-database-using-azure-cli.md). Additionally, you can still use your Terraform template to create single server instances.
+**A.** As part of this retirement, we will no longer support creating new Single Server instances from the Azure portal beginning **January 16, 2023**. Additionally, starting **March 19, 2024**, you will no longer be able to create new Azure Database for MySQL Single Server instances using the Azure CLI. If you still need to create Single Server instances to meet business continuity needs, raise an Azure support ticket.
**Q. After the Single Server retirement announcement, what if I still need to create a new read replica for my single server instance?**
network-watcher Connection Monitor Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
Last updated 01/25/2023 #CustomerIntent: I need to monitor communication between a virtual machine scale set and a virtual machine. If the communication fails, I need to know why, so that I can resolve the problem.+ # Tutorial: Monitor network communication with a virtual machine scale set using the Azure portal
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Successful communication between a virtual machine scale set and another endpoint, such as virtual machine (VM), can be critical for your organization. Sometimes, the introduction of configuration changes can break communication. In this tutorial, you learn how to:
network-watcher Network Watcher Analyze Nsg Flow Logs Graylog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-analyze-nsg-flow-logs-graylog.md
# Manage and analyze network security group flow logs in Azure using Network Watcher and Graylog
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [Network security group flow logs](network-watcher-nsg-flow-logging-overview.md) provide information that you can use to understand ingress and egress IP traffic for Azure network interfaces. Flow logs show outbound and inbound flows on a per network security group rule basis, the network interface the flow applies to, 5-tuple information (Source/Destination IP, Source/Destination Port, Protocol) about the flow, and if the traffic was allowed or denied. You can have many network security groups in your network with flow logging enabled. Several network security groups with flow logging enabled can make it cumbersome to parse and gain insights from your logs. This article provides a solution to centrally manage these network security group flow logs using Graylog, an open source log management and analysis tool, and Logstash, an open source server-side data processing pipeline.
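
To make the per-rule, 5-tuple structure concrete, here is a minimal parsing sketch that isn't part of the article. It assumes a version 1 flow log blob has already been downloaded locally as `PT1H.json` and follows the documented comma-separated tuple layout (time, source IP, destination IP, source port, destination port, protocol, direction, decision):

```python
import json

# Parse a locally downloaded NSG flow log blob and print one line per flow tuple.
with open("PT1H.json") as f:
    data = json.load(f)

for record in data["records"]:
    for rule in record["properties"]["flows"]:
        for flow in rule["flows"]:
            for tuple_str in flow["flowTuples"]:
                ts, src_ip, dst_ip, src_port, dst_port, proto, direction, decision = (
                    tuple_str.split(",")[:8]
                )
                print(rule["rule"], src_ip, dst_ip, dst_port, proto, direction, decision)
```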
network-watcher Network Watcher Diagnose On Premises Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-diagnose-on-premises-connectivity.md
Title: Diagnose on-premises connectivity via VPN gateway
+ Title: Diagnose on-premises VPN connectivity with Azure
-description: This article describes how to diagnose on-premises connectivity via VPN gateway with Azure Network Watcher resource troubleshooting.
-
+description: Learn how to diagnose on-premises VPN connectivity with Azure using Azure Network Watcher VPN troubleshoot tool.
+ Previously updated : 01/20/2021-- Last updated : 02/09/2024+
+#CustomerIntent: As an Azure administrator, I want to learn how to use VPN troubleshoot so I can troubleshoot my VPN virtual network gateways and their connections whenever resources in a virtual network can't communicate with on-premises resources over a VPN connection.
-# Diagnose on-premises connectivity via VPN gateways
+# Diagnose on-premises VPN connectivity with Azure
+
+In this article, you learn how to use Azure Network Watcher VPN troubleshoot capability to diagnose and troubleshoot your VPN gateway and its connection to your on-premises VPN device. For a list of validated VPN devices and their configuration guides, see [VPN devices](../vpn-gateway/vpn-gateway-about-vpn-devices.md?toc=/azure/network-watcher/toc.json#devicetable).
+
+VPN troubleshoot allows you to quickly diagnose issues with your gateway and connections. It checks for common issues and returns a list of diagnostic logs that can be used to further troubleshoot the issue. The logs are stored in a storage account that you specify.
+
+## Prerequisites
-Azure VPN Gateway enables you to create hybrid solution that address the need for a secure connection between your on-premises network and your Azure virtual network. As your requirements are unique, so is the choice of on-premises VPN device. Azure currently supports [several VPN devices](../vpn-gateway/vpn-gateway-about-vpn-devices.md#devicetable) that are constantly validated in partnership with the device vendors. Review the device-specific configuration settings before configuring your on-premises VPN device. Similarly, Azure VPN Gateway is configured with a set of [supported IPsec parameters](../vpn-gateway/vpn-gateway-about-vpn-devices.md#ipsec) that are used for establishing connections. Currently, there's no way for you to specify or select a specific combination of IPsec parameters from the Azure VPN Gateway. For establishing a successful connection between on-premises and Azure, the on-premises VPN device settings must be in accordance with the IPsec parameters prescribed by Azure VPN Gateway. If the settings are incorrect, there's a loss of connectivity and until now troubleshooting these issues wasn't trivial and usually took hours to identify and fix the issue.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-With the Azure Network Watcher troubleshoot feature, you're able to diagnose any issues with your Gateway and Connections and within minutes have enough information to make an informed decision to rectify the issue.
+- A VPN device in your on-premises network represented by a local network gateway in Azure. For more information about local network gateways, see [Create a local network gateway](../vpn-gateway/tutorial-site-to-site-portal.md#LocalNetworkGateway). For a list of validated VPN devices, see [Validated VPN devices](../vpn-gateway/vpn-gateway-about-vpn-devices.md?toc=/azure/network-watcher/toc.json#devicetable).
+- A VPN virtual network gateway in Azure with a site-to-site connection. For more information about virtual network gateways, see [Create a VPN gateway](../vpn-gateway/tutorial-site-to-site-portal.md?toc=/azure/network-watcher/toc.json#VNetGateway) and [Default IPsec/IKE parameters](../vpn-gateway/vpn-gateway-about-vpn-devices.md?toc=/azure/network-watcher/toc.json#ipsec)
-## Scenario
+## Troubleshoot using Network Watcher VPN troubleshoot
-You want to configure a site-to-site connection between Azure and on-premises using FortiGate as the on-premises VPN Gateway. To achieve this scenario, you would require the following setup:
+Use the VPN troubleshoot capability of Network Watcher to diagnose and troubleshoot your VPN gateway and its connection to your on-premises network.
-1. Virtual Network Gateway - The VPN Gateway on Azure
-1. Local Network Gateway - The [on-premises (FortiGate) VPN Gateway](../vpn-gateway/tutorial-site-to-site-portal.md#LocalNetworkGateway) representation in Azure cloud
-1. Site-to-site connection (route based) - [Connection between the VPN Gateway and the on-premises router](../vpn-gateway/tutorial-site-to-site-portal.md#CreateConnection)
-1. [Configuring FortiGate](https://github.com/Azure/Azure-vpn-config-samples/blob/master/Fortinet/Current/Site-to-Site_VPN_using_FortiGate.md)
+1. In the search box at the top of the portal, enter ***network watcher***. Select **Network Watcher** in the search results.
-Detailed step by step guidance for configuring a Site-to-Site configuration can be found by visiting: [Create a VNet with a Site-to-Site connection using the Azure portal](../vpn-gateway/tutorial-site-to-site-portal.md).
+ :::image type="content" source="./media/network-watcher-diagnose-on-premises-connectivity/portal-search.png" alt-text="Screenshot shows how to search for Network Watcher in the Azure portal." lightbox="./media/packet-capture-vm-portal/portal-search.png":::
-One of the critical configuration steps is configuring the IPsec communication parameters, any misconfiguration leads to loss of connectivity between the on-premises network and Azure. Currently, Azure VPN Gateways are configured to support the following IPsec parameters for Phase 1. As you can see in the table below, the encryption algorithms supported by Azure VPN Gateway are AES256, AES128, and 3DES.
+1. Under **Network diagnostic tools**, select **VPN troubleshoot**.
-### IKE phase 1 setup
-1. On the **VPN troubleshoot** page, select **Select storage account** to choose or create a Standard storage account for saving the diagnostic files.
-| **Property** | **PolicyBased** | **RouteBased and Standard or High-Performance VPN gateway** |
-| | | |
-| IKE Version |IKEv1 |IKEv2 |
-| Diffie-Hellman Group |Group 2 (1024 bit) |Group 2 (1024 bit) |
-| Authentication Method |Pre-Shared Key |Pre-Shared Key |
-| Encryption Algorithms |AES256 AES128 3DES |AES256 3DES |
-| Hashing Algorithm |SHA1(SHA128) |SHA1(SHA128), SHA2(SHA256) |
-| Phase 1 Security Association (SA) Lifetime (Time) |28,800 seconds |28,800 seconds |
+1. Select the virtual network gateway and connection that you want to troubleshoot.
-As a user, you would be required to configure your FortiGate; a sample configuration can be found on [GitHub](https://github.com/Azure/Azure-vpn-config-samples/blob/master/Fortinet/Current/fortigate_show%20full-configuration.txt). Unknowingly, you configured your FortiGate to use SHA-512 as the hashing algorithm. As this algorithm isn't a supported algorithm for policy-based connections, your VPN connection doesn't work.
+1. Select **Start troubleshooting**.
-These issues are hard to troubleshoot and root causes are often non-intuitive. In this case, you can open a support ticket to get help on resolving the issue. But with Azure Network Watcher troubleshoot API, you can identify these issues on your own.
+1. Once the check is completed, the troubleshooting status of the gateway and connection is displayed. The **Unhealthy** status indicates that there's an issue with the resource.
-## Troubleshooting using Azure Network Watcher
+1. Go to the **vpn** container in the storage account that you previously specified and download the zip file that was generated during the VPN troubleshoot check session. Network Watcher creates a zip folder that contains the following diagnostic log files:
-To diagnose your connection, connect to Azure PowerShell and initiate the `Start-AzNetworkWatcherResourceTroubleshooting` cmdlet. You can find the details on using this cmdlet at [Troubleshoot Virtual Network Gateway and connections - PowerShell](vpn-troubleshoot-powershell.md). This cmdlet may take up to a few minutes to complete.
+ :::image type="content" source="./media/network-watcher-diagnose-on-premises-connectivity/vpn-troubleshoot-logs.png" alt-text="Screenshot shows log files created after running VPN troubleshoot check on a virtual network gateway.":::
-Once the cmdlet completes, you can navigate to the storage location specified in the cmdlet to get detailed information about the issue and logs. Azure Network Watcher creates a zip folder that contains the following log files:
+ > [!NOTE]
+ > - In some cases, only a subset of the log files is generated.
+ > - For newer gateway versions, the IKEErrors.txt, Scrubbed-wfpdiag.txt, and wfpdiag.txt.sum files have been replaced by an IkeLogs.txt file that contains all IKE activity, including any errors.
-![1][1]
+A common misconfiguration error is using an incorrect shared key. In this case, the IKEErrors.txt file shows the following error message:
-Open the file called IKEErrors.txt and it displays the following error, indicating an issue with on-premises IKE setting misconfiguration.
+```
+Error: Authentication failed. Check shared key.
+```
+
+Another common error is a misconfiguration of the IPsec parameters. In this case, you find the following error message in the IKEErrors.txt file:
``` Error: On-premises device rejected Quick Mode settings. Check values.
- based on log : Peer sent NO_PROPOSAL_CHOSEN notify
+ based on log : Peer sent NO_PROPOSAL_CHOSEN notify
```
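If you prefer to run the same check from a script instead of the portal, you can start a VPN troubleshoot session with the Azure CLI. The following is a minimal sketch only; the resource group, gateway, storage account, and container URL are placeholder values that you need to replace with your own:

```azurecli
# Start a VPN troubleshoot session on a virtual network gateway (placeholder names).
# The diagnostic results and the zip file of logs are written to the storage path you specify.
az network watcher troubleshooting start \
    --resource-group MyResourceGroup \
    --resource MyVpnGateway \
    --resource-type vnetGateway \
    --storage-account MyStorageAccount \
    --storage-path https://mystorageaccount.blob.core.windows.net/vpn
```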
-You can get detailed information about the error from the Scrubbed-wfpdiag.txt file. In this case, it mentions an `ERROR_IPSEC_IKE_POLICY_MATCH` error that led to the connection not working properly.
-
-Another common misconfiguration is specifying incorrect shared keys. If in the preceding example you had specified different shared keys, the IKEErrors.txt shows the following error: `Error: Authentication failed. Check shared key`.
-
-The Azure Network Watcher troubleshoot feature enables you to diagnose and troubleshoot your VPN Gateway and Connection with the ease of a simple PowerShell cmdlet. Currently, we support diagnosing the following conditions and are working towards adding more conditions.
-
-### Gateway
-
-| Fault Type | Reason | Log|
-||||
-| NoFault | No error is detected |Yes|
-| GatewayNotFound | Cannot find Gateway or Gateway is not provisioned. |No|
-| PlannedMaintenance | Gateway instance is under maintenance. |No|
-| UserDrivenUpdate | A user update is in progress. This could be a resize operation. | No |
-| VipUnResponsive | Cannot reach the primary instance of the Gateway. This happens when the health probe fails. | No |
-| PlatformInActive | There is an issue with the platform. | No|
-| ServiceNotRunning | The underlying service is not running. | No|
-| NoConnectionsFoundForGateway | No Connections exists on the gateway. This is only a warning.| No|
-| ConnectionsNotConnected | None of the Connections is connected. This is only a warning.| Yes|
-| GatewayCPUUsageExceeded | The current Gateway usage CPU usage is > 95%. | Yes |
-
-### Connection
-
-| Fault Type | Reason | Log|
-||||
-| NoFault | No error is detected. |Yes|
-| GatewayNotFound | Cannot find Gateway or Gateway is not provisioned. |No|
-| PlannedMaintenance | Gateway instance is under maintenance. |No|
-| UserDrivenUpdate | A user update is in progress. This could be a resize operation. | No |
-| VipUnResponsive | Cannot reach the primary instance of the Gateway. It happens when the health probe fails. | No |
-| ConnectionEntityNotFound | Connection configuration is missing. | No |
-| ConnectionIsMarkedDisconnected | The Connection is marked "disconnected." |No|
-| ConnectionNotConfiguredOnGateway | The underlying service does not have the Connection configured. | Yes |
-| ConnectionMarkedStandby | The underlying service is marked as standby.| Yes|
-| Authentication | Preshared Key mismatch. | Yes|
-| PeerReachability | The peer gateway is not reachable. | Yes|
-| IkePolicyMismatch | The peer gateway has IKE policies that are not supported by Azure. | Yes|
-| WfpParse Error | An error occurred parsing the WFP log. |Yes|
-
-## Next steps
-
-Learn to check VPN Gateway connectivity with PowerShell and Azure Automation by visiting [Monitor VPN gateways with Azure Network Watcher troubleshooting](network-watcher-monitor-with-azure-automation.md)
-
-[1]: ./media/network-watcher-diagnose-on-premises-connectivity/figure1.png
+For a detailed list of fault types that Network Watcher can diagnose and their logs, see [Gateway faults](vpn-troubleshoot-overview.md#gateway) and [Connection faults](vpn-troubleshoot-overview.md#connection).
+
+## Next step
+
+Learn how to monitor VPN gateways using Azure Automation:
+
+> [!div class="nextstepaction"]
+> [Monitor VPN gateways using VPN troubleshoot and Azure automation](network-watcher-monitor-with-azure-automation.md)
network-watcher Network Watcher Nsg Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-grafana.md
# Manage and analyze network security group flow logs using Network Watcher and Grafana
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [Network Security Group (NSG) flow logs](network-watcher-nsg-flow-logging-overview.md) provide information that can be used to understand ingress and egress IP traffic on network interfaces. These flow logs show outbound and inbound flows on a per NSG rule basis, the NIC the flow applies to, 5-tuple information about the flow (Source/Destination IP, Source/Destination Port, Protocol), and if the traffic was allowed or denied. You can have many NSGs in your network with flow logging enabled. This amount of logging data makes it cumbersome to parse and gain insights from your logs. This article provides a solution to centrally manage these NSG flow logs using Grafana, an open source graphing tool, ElasticSearch, a distributed search and analytics engine, and Logstash, which is an open source server-side data processing pipeline.
network-watcher Network Watcher Visualize Nsg Flow Logs Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-open-source-tools.md
# Visualize Azure Network Watcher NSG flow logs using open source tools
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Network Security Group flow logs provide information that can be used to understand ingress and egress IP traffic on Network Security Groups. These flow logs show outbound and inbound flows on a per rule basis, the NIC the flow applies to, 5-tuple information about the flow (Source/Destination IP, Source/Destination Port, Protocol), and if the traffic was allowed or denied. These flow logs can be difficult to manually parse and gain insights from. However, there are several open source tools that can help visualize this data. This article provides a solution to visualize these logs using the Elastic Stack, which allows you to quickly index and visualize your flow logs on a Kibana dashboard.
open-datasets How To Create Azure Machine Learning Dataset From Open Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md
Last updated 08/05/2020
-#Customer intent: As an experienced Python developer, I want to use Azure Open Datasets in my ML workflows for improved model accuracy.
- # Create Azure Machine Learning datasets from Azure Open Datasets
-In this article, you learn how to bring curated enrichment data into your local or remote machine learning experiments with [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) datasets and [Azure Open Datasets](./index.yml).
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+
+In this article, you learn how to bring curated enrichment data into your local or remote machine learning experiments with [Azure Machine Learning](../machine-learning/overview-what-is-azure-machine-learning.md) datasets and [Azure Open Datasets](./index.yml).
By creating an [Azure Machine Learning dataset](../machine-learning/v1/how-to-create-register-datasets.md), you create a reference to the data source location, along with a copy of its metadata. Because datasets are lazily evaluated, and the data remains in its existing location, you * Incur no extra storage cost.
-* Don't risk unintentionally changing your original data sources.
+* Don't risk unintentionally changing your original data sources.
* Improve ML workflow performance speeds. To understand where datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](../machine-learning/v1/concept-data.md#data-workflow) article.
To create Azure Machine Learning datasets via Azure Open Datasets classes in the
You can retrieve certain `opendatasets` classes as either a `TabularDataset` or `FileDataset`, which allows you to manipulate and/or download the files directly. Other classes can get a dataset **only** by using the `get_tabular_dataset()` or `get_file_dataset()` functions from the `Dataset` class in the Python SDK.
-The following code shows that the MNIST `opendatasets` class can return either a `TabularDataset` or `FileDataset`.
+The following code shows that the MNIST `opendatasets` class can return either a `TabularDataset` or `FileDataset`.
```python
diabetes_tabular = Diabetes.get_tabular_dataset()
Register an Azure Machine Learning dataset with your workspace, so you can share it with others and reuse it across experiments in your workspace. When you register an Azure Machine Learning dataset created from Open Datasets, no data is immediately downloaded, but the data will be accessed later when requested (during training, for example) from a central storage location.
-To register your datasets with a workspace, use the [`register()`](/python/api/azureml-core/azureml.data.abstract_dataset.abstractdataset#register-workspace--name--description-none--tags-none--create-new-version-false-) method.
+To register your datasets with a workspace, use the [`register()`](/python/api/azureml-core/azureml.data.abstract_dataset.abstractdataset#register-workspace--name--description-none--tags-none--create-new-version-false-) method.
```Python titanic_ds = titanic_ds.register(workspace=workspace,
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
Last updated 10/10/2023
# Control egress traffic for your Azure Red Hat OpenShift (ARO) cluster
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article provides the necessary details that allow you to secure outbound traffic from your Azure Red Hat OpenShift cluster (ARO). With the release of the [Egress Lockdown Feature](./concepts-egress-lockdown.md), all of the required connections for an ARO cluster are proxied through the service. There are additional destinations that you may want to allow to use features such as Operator Hub or Red Hat telemetry. > [!IMPORTANT]
operator-insights How To Install Mcc Edr Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-mcc-edr-agent.md
The VM used for the MCC EDR agent should be set up following best practice for s
- Access to the directory where the logs are stored *(/var/log/az-mcc-edr-uploader/)* - Access to the certificate and private key for the service principal that you create during this procedure
-## Acquire the agent RPM
+## Download the RPM for the agent
-A link to download the MCC EDR agent RPM is provided as part of the Azure Operator Insights onboarding process. See [How do I get access to Azure Operator Insights?](overview.md#how-do-i-get-access-to-azure-operator-insights) for details.
+Download the RPM for the MCC EDR agent using the details you received as part of the [Azure Operator Insights onboarding process](overview.md#how-do-i-get-access-to-azure-operator-insights) or from [https://go.microsoft.com/fwlink/?linkid=2254537](https://go.microsoft.com/fwlink/?linkid=2254537).
## Set up authentication
This process assumes that you're connecting to Azure over ExpressRoute and are u
<Storage private IP>   <ingestion URL> <Key Vault private IP>  <Key Vault URL> ````
-1. Additionally to this, the public IP of the the URL *login.microsoftonline.com* must be added to */etc/hosts*. You can use any of the public addresses resolved by DNS clients.
+1. Add the public IP address of the URL *login.microsoftonline.com* to */etc/hosts*. You can use any of the public addresses resolved by DNS clients.
``` <Public IP>   login.microsoftonline.com
operator-insights How To Install Sftp Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-sftp-agent.md
The VM used for the SFTP agent should be set up following best practice for secu
- Access to the certificate and private key for the service principal that you create during this procedure - Access to the directory for secrets that you create on the VM during this procedure.
-## Acquire the agent RPM
+## Download the RPM for the agent
-A link to download the SFTP agent RPM is provided as part of the Azure Operator Insights onboarding process. See [How do I get access to Azure Operator Insights?](overview.md#how-do-i-get-access-to-azure-operator-insights) for details.
+Download the RPM for the SFTP agent using the details you received as part of the [Azure Operator Insights onboarding process](overview.md#how-do-i-get-access-to-azure-operator-insights) or from [https://go.microsoft.com/fwlink/?linkid=2254734](https://go.microsoft.com/fwlink/?linkid=2254734).
## Set up authentication to Azure
This process assumes that you're connecting to Azure over ExpressRoute and are u
<Storage private IP>   <ingestion URL> <Key Vault private IP>  <Key Vault URL> ````
-1. Additionally to this, the public IP of the URL *login.microsoftonline.com* must be added to */etc/hosts*. You can use any of the public addresses resolved by DNS clients.
+1. Add the public IP address of the URL *login.microsoftonline.com* to */etc/hosts*. You can use any of the public addresses resolved by DNS clients.
``` <Public IP>   login.microsoftonline.com ````+ ## Install and configure agent software Repeat these steps for each VM onto which you want to install the agent:
operator-nexus Howto Configure Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-configure-cluster.md
Previously updated : 03/03/2023 Last updated : 02/08/2024
az networkcloud cluster show --resource-group "$CLUSTER_RG" \
The Cluster deployment is complete when detailedStatus is set to `Running` and detailedStatusMessage shows message `Cluster is up and running`.
+View the management version of the cluster:
+
+```azurecli
+az k8s-extension list --cluster-name <cluster> --resource-group "$MANAGED_CLUSTER_RG" --cluster-type connectedClusters --query "[?name=='nc-platform-extension'].{name:name, extensionType:extensionType, releaseNamespace:scope.cluster.releaseNamespace,provisioningState:provisioningState,version:version}" -o table --subscription "$SUBSCRIPTION_ID"
+```
+ ## Cluster deployment Logging Cluster create Logs can be viewed in the following locations:
operator-nexus Howto Install Cli Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-install-cli-extensions.md
Previously updated : 10/02/2023 Last updated : 02/08/2024 #
Example output:
Name Version -- - monitor-control-service 0.2.0
-connectedmachine 0.6.0
-connectedk8s 1.4.2
+connectedmachine 0.7.0
+connectedk8s 1.6.5
k8s-extension 1.4.3 networkcloud 1.1.0 k8s-configuration 1.7.0
-managednetworkfabric 3.2.0
+managednetworkfabric 4.2.0
customlocation 0.1.3 ssh 2.0.2 ```
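If your installed versions are older than those listed above, you can typically bring the extensions up to date in place. The following is a hedged sketch; the extension names match the list above:

```azurecli
# Upgrade the Azure CLI extensions to their latest published versions.
az extension add --upgrade --name connectedmachine
az extension add --upgrade --name connectedk8s
az extension add --upgrade --name managednetworkfabric
```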
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
# Tutorial: Process Aqua satellite data using NASA-provided tools
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ > [!NOTE] > NASA has deprecated support of the DRL software used to process Aqua satellite imagery. Please see: [DRL Current Status](https://directreadout.sci.gsfc.nasa.gov/home.html). Steps 2, 3, and 4 of this tutorial are no longer relevant but presented for informational purposes only.
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
- Title: Overview of high availability
-description: Learn about the concepts of high availability with Azure Database for PostgreSQL - Flexible Server.
----- Previously updated : 7/19/2023--
-# High availability concepts in Azure Database for PostgreSQL - Flexible Server
--
-Azure Database for PostgreSQL flexible server offers high availability configurations with automatic failover capabilities. The high availability solution is designed to ensure that committed data is never lost because of failures and that the database won't be a single point of failure in your architecture. When high availability is configured, Azure Database for PostgreSQL - Flexible Server automatically provisions and manages a standby. Write-ahead logs (WAL) are streamed to the replica in synchronous mode using PostgreSQL streaming replication. There are two high availability architectural models:
-
-* **Zone-redundant HA**: This option provides a complete isolation and redundancy of infrastructure across multiple availability zones within a region. It provides the highest level of availability, but it requires you to configure application redundancy across availability zones. Zone-redundant HA is preferred when you want protection from availability zone failures. However, one should account for added latency for cross-AZ synchronous writes. This latency is more pronounced for applications with short duration transactions. Zone-redundant HA is available in a [subset of Azure regions](./overview.md#azure-regions) where the region supports multiple [availability zones](../../availability-zones/az-overview.md). Uptime [SLA of 99.99%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this configuration.
-* **Same-zone HA**: This option provides for infrastructure redundancy with lower network latency because the primary and standby servers will be in the same availability zone. It provides high availability without the need to configure application redundancy across zones. Same-zone HA is preferred when you want to achieve the highest level of availability within a single availability zone. This option lowers the latency impact but makes your application vulnerable to zone failures. Same-zone HA is available in all [Azure regions](./overview.md#azure-regions) where you can deploy Azure Database for PostgreSQL flexible server. Uptime [SLA of 99.95%](https://azure.microsoft.com/support/legal/sla/postgresql) offered in this configuration.
-
-High availability configuration enables automatic failover capability with zero data loss (that is, RPO=0) during both planned and unplanned events. For example, a user-initiated scale compute operation is a planned failover event, while an unplanned event refers to failures such as underlying hardware and software faults, network failures, and availability zone failures.
-
->[!NOTE]
-> Both these HA deployment models architecturally behave the same. Various discussions in the following sections are applicable to both unless called out otherwise.
-
-## High availability architecture
-
-As mentioned earlier, Azure Database for PostgreSQL flexible server supports two high availability deployment models: zone-redundant HA and same-zone HA. In both deployment models, when the application commits a transaction, the transaction logs (write-ahead logs a.k.a WAL) are written to the data/log disk and also replicated in *synchronous* mode to the standby server. Once the logs are persisted on the standby, the transaction is considered committed and an acknowledgment is sent to the application. The standby server is always in recovery mode applying the transaction logs. However, the primary server doesn't wait for standby to apply these log records. It is possible that under heavy transaction workload, the replica server may fall behind but typically catches up to the primary with workload throughput fluctuations.
-
-### Zone-redundant high availability
-
-This high availability deployment enables Azure Database for PostgreSQL flexible server to be highly available across availability zones. You can choose the region, availability zones for the primary and standby servers. The standby replica server is provisioned in the chosen availability zone in the same region with similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs a.k.a WAL) are stored on locally redundant storage (LRS) within each availability zone, which automatically stores **three** data copies. This provides physical isolation of the entire stack between primary and standby servers.
-
->[!NOTE]
-> Not all regions support availability zone to deploy zone-redundant high availability. See this [Azure regions](./overview.md#azure-regions) list.
-
-Automatic backups are performed periodically from the primary database server, while the transaction logs are continuously archived to the backup storage from the standby replica. Backup data is stored on zone-redundant storage.
-
-### Same-zone high availability
-
-This model of high availability deployment enables Azure Database for PostgreSQL flexible server to be highly available within the same availability zone. This is supported in all regions, including regions that don't support availability zones. You can choose the region and the availability zone to deploy your primary database server. A standby server is **automatically** provisioned and managed in the **same** availability zone in the same region with similar compute, storage, and network configuration as the primary server. Data files and transaction log files (write-ahead logs a.k.a WAL) are stored on locally redundant storage, which automatically stores as **three** synchronous data copies each for primary and standby. This provides physical isolation of the entire stack between primary and standby servers within the same availability zone.
-
-Automatic backups are performed periodically from the primary database server, while the transaction logs are continuously archived to the backup storage from the standby replica. If the region supports availability zones, then backup data is stored on zone-redundant storage (ZRS). In regions that don't support availability zones, backup data is stored on locally redundant storage (LRS).
--
-## Components and workflow
-
-### Transaction completion
-
-Application transaction-triggered writes and commits are first logged to the WAL on the primary server. The WAL is then streamed to the standby server using the Postgres streaming protocol. Once the logs are persisted on the standby server storage, the primary server is acknowledged of write completion, and only then is the write confirmed to the application. This additional round-trip adds more latency to your application. The percentage of impact depends on the application. This acknowledgment process does not wait for the logs to be applied at the standby server. The standby server is permanently in recovery mode until it is promoted.
-
-### Health check
-
-Azure Database for PostgreSQL flexible server has health monitoring in place that periodically checks the health of the primary and standby servers. If the monitoring detects that the primary server isn't reachable after multiple pings, it decides whether to initiate an automatic failover. The algorithm is based on multiple data points to avoid any false positive situation.
-
-### Failover modes
-
-There are two failover modes.
-
-1. With [**planned failovers**](#failover-processplanned-downtimes) (example: During maintenance window) where the failover is triggered with a known state in which the primary connections are drained, a clean shutdown is performed before the replication is severed. You can also use this to bring the primary server back to your preferred AZ.
-
- 2. With [**unplanned failover**](#failover-processunplanned-downtimes) (example: Primary server node crash), the primary is immediately fenced and hence any in-flight transactions are lost and to be retried by the application.
-
-In both the failover modes, once the replication is severed, the standby server runs the recovery before being promoted as a primary and opened for read/write. With automatic DNS entries updated with the new primary server endpoint, applications can connect to the server using the same server endpoint. A new standby server is established in the background, and that doesn't block your application connectivity.
-
-### Downtime
-
-In all cases, you must observe any downtime from your application/client side. Your application will be able to reconnect after a failover as soon as the DNS is updated. We take care of a few more aspects including LSN comparisons between primary and standby before fencing the writes. But with unplanned failovers, the time taken for the standby can be longer than 2 minutes in some cases due to the volume of logs to recover before opening for read/write.
-
-## Monitoring for high availability
-
-The health of the primary and standby servers is continuously monitored, and appropriate actions are taken to remediate issues, including triggering a failover to the standby server. The following is the list of high availability statuses that are reported on the overview page:
-
-| **Status** | **Description** |
-| - | |
-| <b> Initializing | In the process of creating a new standby server. |
-| <b> Replicating Data | After the standby is created, it is catching up with the primary. |
-| <b> Healthy | Replication is in steady state and healthy. |
-| <b> Failing Over | The database server is in the process of failing over to the standby. |
-| <b> Removing Standby | In the process of deleting standby server. |
-| <b> Not Enabled | Zone redundant high availability is not enabled. |
-
->[!NOTE]
-> You can enable high availability during server creation or at a later time as well. If you're enabling or disabling high availability during the post-create stage, it's recommended to perform the operation when the primary server activity is low.
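Besides the portal overview page, the same state can be queried from the Azure CLI. This is a sketch only; the server and resource group names are placeholders, and it assumes the `highAvailability` properties that `az postgres flexible-server show` returns:

```azurecli
# Show the high availability mode and state of a flexible server (placeholder names).
az postgres flexible-server show \
    --resource-group MyResourceGroup \
    --name mydemoserver \
    --query "{mode:highAvailability.mode, state:highAvailability.state}" \
    --output table
```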
-
-## Steady-state operations
-
-PostgreSQL client applications are connected to the primary server using the DB server name. Application reads are served directly from the primary server, while commits and writes are confirmed to the application only after the log data is persisted on both the primary server and the standby replica. Due to this additional round-trip, applications can expect elevated latency for writes and commits. You can monitor the health of the high availability on the portal.
--
-1. Clients connect to the Azure Database for PostgreSQL flexible server instance and perform write operations.
-2. Changes are replicated to the standby site.
-3. Primary receives acknowledgment.
-4. Writes/commits are acknowledged.
-
-## Failover process - planned downtimes
-
-Planned downtime events include Azure scheduled periodic software updates and minor version upgrades. When configured in high availability, these operations are first applied to the standby replica while the applications continue to access the primary server. Once the standby replica is updated, primary server connections are drained, and a failover is triggered which activates the standby replica to be the primary with the same database server name. Client applications will have to reconnect with the same database server name to the new primary server and can resume their operations. A new standby server will be established in the same zone as the old primary.
-
-For other user initiated operations such as scale-compute or scale-storage, the changes are applied at the standby first, followed by the primary. Currently, the service is not failed over to the standby and hence while the scale operation is carried out on the primary server, applications will encounter a short downtime.
-
-### Reducing planned downtime with managed maintenance window
-
-With Azure Database for PostgreSQL flexible server, you can optionally schedule Azure initiated maintenance activities by choosing a 60-minute window in a day of your preference where the activities on the databases are expected to be low. Azure maintenance tasks such as patching or minor version upgrades would happen during that maintenance window. If you do not choose a custom window, a system allocated 1-hr window between 11pm-7am local time is chosen for your server.
-
-For Azure Database for PostgreSQL flexible server instances configured with high availability, these maintenance activities are performed on the standby replica first and the service is failed over to the standby to which applications can reconnect.
-
-## Failover process - unplanned downtimes
--- Unplanned outages include software bugs or infrastructure component failures that impact the availability of the database. If the primary server becomes unavailable, it is detected by the monitoring system and initiates a failover process. The process includes a few seconds of wait time to make sure it is not a false positive. The replication to the standby replica is severed and the standby replica is activated to be the primary database server. That includes the standby to recover any residual WAL files. Once it is fully recovered, DNS for the same end point is updated with the standby server's IP address. Clients can then retry connecting to the database server using the same connection string and resume their operations. -
-> [!NOTE]
-> Azure Database for PostgreSQL flexible server instances configured with zone-redundant high availability provide a recovery point objective (RPO) of **Zero** (no data loss). The recovery time objective (RTO) is expected to be **less than 120s** in typical cases. However, depending on the activity in the primary database server at the time of the failover, the failover may take longer.
--
-After the failover, while a new standby server is being provisioned (which usually takes 5-10 minutes), applications can still connect to the primary server and proceed with their read/write operations. Once the standby server is established, it will start recovering the logs that were generated after the failover.
--
-1. Primary database server is down and the clients lose database connectivity.
-2. Standby server is activated to become the new primary server. The client connects to the new primary server using the same connection string. Having the client application in the same zone as the primary database server reduces latency and improves performance.
-3. Standby server is established in the same zone as the old primary server and the streaming replication is initiated.
-4. Once the steady-state replication is established, the client application commits and writes are acknowledged after the data is persisted on both sites.
-
-## On-demand failover
-
-Azure Database for PostgreSQL flexible server provides two methods for you to perform on-demand failover to the standby server. These are useful if you want to test the failover time and downtime impact for your applications and if you want to fail over to the preferred availability zone.
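Both methods can also be triggered from the Azure CLI through the restart command. The following is a minimal sketch with placeholder server and resource group names:

```azurecli
# Trigger a planned (reduced-downtime) failover to the standby server.
az postgres flexible-server restart \
    --resource-group MyResourceGroup \
    --name mydemoserver \
    --failover Planned

# Trigger a forced failover to simulate an unplanned outage.
az postgres flexible-server restart \
    --resource-group MyResourceGroup \
    --name mydemoserver \
    --failover Forced
```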
-
-### Forced failover
-
-You can use this feature to simulate an unplanned outage scenario while running your production workload and observe your application downtime. Alternatively, in rare case where your primary server becomes unresponsive for whatever reason, you may use this feature.
-
-This feature brings the primary server down and initiates the failover workflow in which the standby promote operation is performed. Once the standby completes the recovery process till the last committed data, it is promoted to be the primary server. DNS records are updated and your application can connect to the promoted primary server. Your application can continue to write to the primary while a new standby server is established in the background and that doesn't impact the uptime.
-
-The following are the steps during forced-failover:
-
- | **Step** | **Description** | **App downtime expected?** |
- | - | | -- |
- | 1 | Primary server is stopped shortly after the failover request is received. | Yes |
- | 2 | Application encounters downtime as the primary server is down. | Yes |
- | 3 | Internal monitoring system detects the failure and initiates a failover to the standby server. | Yes |
- | 4 | Standby server enters recovery mode before being fully promoted as an independent server. | Yes |
- | 5 | The failover process waits for the standby recovery to complete. | Yes |
- | 6 | Once the server is up, DNS record is updated with the same hostname, but using the standby's IP address. | Yes |
- | 7 | Application can reconnect to the new primary server and resume the operation. | No |
- | 8 | A standby server in the preferred zone is established. | No |
- | 9 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No |
- | 10 | A steady-state between the primary and the standby server is established. | No |
- | 11 | Forced failover process is complete. | No |
-
-Application downtime is expected to start after step #1 and persists until step #6 is completed. The rest of the steps happen in the background without impacting the application writes and commits.
-
->[!Important]
->The end-to-end failover process includes (a) failing over to the standby server after the primary failure and (b) establishing a new standby server in a steady-state. As your application incurs downtime only until the failover to the standby is complete, **please measure the downtime from your application/client perspective** instead of the overall end-to-end failover process.
-
-### Planned failover
-
-You can use this feature for failing over to the standby server with reduced downtime. For example, after an unplanned failover, your primary could be on a different availability zone than the application, and you want to bring the primary server back to the previous zone to colocate with your application.
-
-When executing this feature, the standby server is first prepared to make sure it is caught up with recent transactions allowing the application to continue to perform read/writes. The standby is then promoted and the connections to the primary are severed. Your application can continue to write to the primary while a new standby server is established in the background. The following are the steps involved with planned failover.
-
-| **Step** | **Description** | **App downtime expected?** |
- | - | | -- |
- | 1 | Wait for the standby server to have caught-up with primary. | No |
- | 2 | Internal monitoring system initiates the failover workflow. | No |
- | 3 | Application writes are blocked when the standby server is close to primary log sequence number (LSN). | Yes |
- | 4 | Standby server is promoted to be an independent server. | Yes |
- | 5 | DNS record is updated with the new standby server's IP address. | Yes |
- | 6 | Application to reconnect and resume its read/write with new primary | No |
- | 7 | A new standby server in another zone is established. | No |
- | 8 | Standby server starts to recover logs (from Azure BLOB) that it missed during its establishment. | No |
- | 9 | A steady-state between the primary and the standby server is established. | No |
- | 10 | Planned failover process is complete. | No |
-
-Application downtime starts at step #3 and can resume operation post step #5. The rest of the steps happen in the background without impacting application writes and commits.
-
-### Considerations while performing on-demand failovers
-
-* The overall end-to-end operation time may appear longer than the actual downtime experienced by the application. **Please observe the downtime from the application perspective**.
-* Please do not perform immediate, back-to-back failovers. Wait for at least 15-20 minutes between failovers, which will allow the new standby server to be fully established.
-* For the planned failover with reduced downtime, it's recommended to perform it during a low-activity period.
-
-See [this guide](how-to-manage-high-availability-portal.md) for managing high availability.
--
-## Point-in-time restore of HA servers
-
-For Azure Database for PostgreSQL flexible server instances that are configured with high availability, log data is replicated in real time to the standby server. Any user errors on the primary server - such as an accidental drop of a table or incorrect data updates - are replicated to the standby replica as well. So, you cannot use the standby to recover from such logical errors. To recover from such errors, you have to perform point-in-time restore from the backup. Using Azure Database for PostgreSQL flexible server's point-in-time restore capability, you can restore to the time before the error occurred. For databases configured with high availability, a new database server will be restored as a single zone Azure Database for PostgreSQL flexible server with a new user-provided server name. You can use the restored server for a few use cases:
-
-1. You can use the restored server for production usage and can optionally enable zone-redundant high availability.
- 2. If you just want to restore an object, you can then export the object from the restored database server and import it to your production database server.
- 3. If you want to clone your database server for testing and development purposes, or you want to restore for any other purposes, you can perform point-in-time restore.
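For illustration, a point-in-time restore can be started from the Azure CLI as shown in the following sketch; the server names and the restore time are placeholder values:

```azurecli
# Restore a new (non-HA) server from the source server's backups to a point in time.
# You can enable high availability on the restored server after the restore completes.
az postgres flexible-server restore \
    --resource-group MyResourceGroup \
    --name mydemoserver-restored \
    --source-server mydemoserver \
    --restore-time "2024-02-09T02:30:00+00:00"
```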
-
-## High availability - features
-
-* The standby replica is deployed with the same VM configuration as the primary server, including vCores, storage, network settings (VNET, Firewall), etc.
-
-* You can add high availability for an existing database server (see the sketch after this list).
-
-* You can remove standby replica by disabling high availability.
-
-* For zone-redundant HA, you can choose your availability zones for your primary and standby database servers.
-
-* Operations such as stop, start, and restart are performed on both primary and standby database servers at the same time.
-
-* Automatic backups are performed from the primary database server and stored in a zone redundant backup storage.
-
-* Clients always connect to the end host name of the primary database server.
-
-* Any changes to the server parameters are applied to the standby replica as well.
-
-* Ability to restart the server to pick up any static server parameter changes.
-
-* Periodic maintenance activities such as minor version upgrades happen at the standby first and the service is failed over to reduce downtime.
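As a sketch of how the add and remove operations in the list above map to the Azure CLI (the server and resource group names are placeholders):

```azurecli
# Enable zone-redundant high availability on an existing server.
az postgres flexible-server update \
    --resource-group MyResourceGroup \
    --name mydemoserver \
    --high-availability ZoneRedundant

# Disable high availability, which removes the standby replica.
az postgres flexible-server update \
    --resource-group MyResourceGroup \
    --name mydemoserver \
    --high-availability Disabled
```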
-
-## High availability - limitations
-
-* High availability is not supported with burstable compute tier.
-* High availability is supported only in regions where multiple zones are available.
-* Due to synchronous replication to the standby server, especially with zone-redundant HA, applications can experience elevated write and commit latency.
-
-* Standby replica cannot be used for read queries.
-
-* Depending on the workload and activity on the primary server, the failover process might take longer than 120 seconds due to recovery involved at the standby replica before it can be promoted.
-* The standby server typically recovers WAL files at the rate of 40 MB/s. If your workload exceeds this limit, you may encounter extended time for the recovery to complete either during the failover or after establishing a new standby.
-
-* Restarting the primary database server also restarts standby replica.
-
-* Configuring additional standbys is not supported.
-
-* Customer-initiated management tasks can't be scheduled during the managed maintenance window.
-
-* Planned events such as scale compute and scale storage happen on the standby first and then on the primary server. Currently the server doesn't fail over for these planned operations.
-
-* If logical decoding or logical replication is configured with a HA configured Azure Database for PostgreSQL flexible server instance, in the event of a failover to the standby server, the logical replication slots are not copied over to the standby server.
-
-## Availability for non-HA servers
-
-For Azure Database for PostgreSQL flexible server instances configured **without** high availability, the service still provides built-in availability, storage redundancy and resiliency to help to recover from any planned or unplanned downtime events. Uptime [SLA of 99.9%](https://azure.microsoft.com/support/legal/sla/postgresql) is offered in this non-HA configuration.
-
-During planned or unplanned failover events, if the server goes down, the service maintains high availability of the servers using the following automated procedure:
-
-1. A new compute Linux VM is provisioned.
-2. The storage with data files is mapped to the new Virtual Machine
-3. PostgreSQL database engine is brought online on the new Virtual Machine.
-
-Picture below shows transition for VM and storage failure.
--
-### Planned downtime
-
-Here are some planned maintenance scenarios:
-
-| **Scenario** | **Description**|
-| | -- |
-| <b>Compute scale up/down | When the user performs compute scale up/down operation, a new database server is provisioned using the scaled compute configuration. In the old database server, active checkpoints are allowed to complete, client connections are drained, any uncommitted transactions are canceled, and then it is shut down. The storage is then detached from the old database server and attached to the new database server. It will be up and running to accept any connections.|
-| <b>Scaling Up Storage | Scaling up the storage is currently an offline operation which involves a short downtime.|
-| <b>New Software Deployment (Azure) | New features rollout or bug fixes automatically happen as part of service's planned maintenance. For more information, see the [documentation](./concepts-maintenance.md), and also check your [portal](https://aka.ms/servicehealthpm).|
-| <b>Minor version upgrades | Azure Database for PostgreSQL automatically patches database servers to the minor version determined by Azure. It happens as part of service's planned maintenance. This would incur a short downtime in terms of seconds, and the database server is automatically restarted with the new minor version. For more information, see the [documentation](./concepts-maintenance.md), and also check your [portal](https://aka.ms/servicehealthpm).|
--
-### Unplanned downtime
-
-Unplanned downtime can occur as a result of unforeseen failures, including underlying hardware fault, networking issues, and software bugs. If the database server goes down unexpectedly, a new database server is automatically provisioned in seconds. The remote storage is automatically attached to the new database server. PostgreSQL engine performs the recovery operation using WAL and database files, and opens up the database server to allow clients to connect. Uncommitted transactions are lost, and they have to be retried by the application. While an unplanned downtime can't be avoided, Azure Database for PostgreSQL flexible server mitigates the downtime by automatically performing recovery operations at both database server and storage layers without requiring human intervention.
-
-Here are some failure scenarios and how Azure Database for PostgreSQL flexible server automatically recovers:
-
-| **Scenario** | **Automatic recovery** |
-| - | - |
-| <B>Database server failure | If the database server is down because of some underlying hardware fault, active connections are dropped, and any inflight transactions are aborted. A new database server is automatically deployed, and the remote data storage is attached to the new database server. After the database recovery is complete, clients can connect to the new database server using the same endpoint. <br /> <br /> The recovery time (RTO) is dependent on various factors including the activity at the time of fault such as large transaction and the amount of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. |
-| <B>Storage failure | Applications do not see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in 3 copies, the copy of the data is served by the surviving storage. Block corruptions are automatically corrected. If a copy of data is lost, a new copy of the data is automatically created. |
-
-Here are some failure scenarios that require user action to recover:
-
-| **Scenario** | **Recovery plan** |
-| - ||
-| <b> Availability zone failure | If the region supports multiple availability zones, then the backups are automatically stored in zone-redundant backup storage. In the event of a zone failure, you can restore from the backup to another availability zone. This provides zone-level resiliency. However, this incurs time to restore and recovery. There could be some data loss as not all WAL records may have been copied to the backup storage. <br> <br> If you prefer to have a short downtime and high uptime, we recommend you to configure your server with zone-redundant high availability. |
-| <b> Logical/user errors | Recovery from user errors, such as accidentally dropped tables or incorrectly updated data, involves performing a [point-in-time recovery](./concepts-backup-restore.md) (PITR), by restoring and recovering the data until the time just before the error had occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore those tables into your database. |
-
-## Frequently asked questions
-
-### HA configuration questions
-
-* **Where can I see the SLAs offered with Azure Database for PostgreSQL flexible server?** <br>
- [Azure Database for PostgreSQL flexible server SLAs](https://azure.microsoft.com/support/legal/sla/postgresql).
-
-* **Do I need to have HA to protect my server from unplanned outages?** <br>
- No. Azure Database for PostgreSQL flexible server offers local redundant storage with 3 copies of data, zone-redundant backup (in regions where it is supported), and also built-in server resiliency to automatically restart a crashed server and even relocate server to another physical node. Zone redundant HA will provide higher uptime by performing automatic failover to another running (standby) server in another zone and thus provides zone-resilient high availability with zero data loss.
-
-* **Can I choose the availability zones for my primary and standby servers?** <br>
- If you choose same zone HA, then you can only choose the primary server. If you choose zone redundant HA, then you can choose both primary and standby AZs.
-
-* **Is zone redundant HA available in all regions?** <br>
- Zone-redundant HA is available in regions that support multiple AZs in the region. For the latest region support, please see [this documentation](overview.md#azure-regions). We are continuously adding more regions and enabling multiple AZs. Same-zone HA is available in all supported regions.
-
-* **Can I deploy both zone redundant HA and same zone HA at the same time?** <br>
- No. You can deploy only one of those options.
-
-* **Can I directly convert same-zone HA to zone-redundant HA and vice-versa?** <br>
- No. You first have to disable HA, wait for it to complete, and then choose the other HA deployment model.
-
-* **What mode of replication is between primary and standby servers?** <br>
- Synchronous mode of replication is established between the primary and the standby server. Application writes and commits are acknowledged only after the Write Ahead Log (WAL) data is persisted on the standby site. This enables zero data loss in the event of a failover.
-
-* **Synchronous mode incurs latency. What kind of performance impact can I expect for my application?** <br>
- Configuring HA induces some latency for writes and commits. There's no impact to read queries. The performance impact varies depending on your workload. As a general guideline, the impact on writes and commits can be around 20-30%.
-
-* **Does zone-redundant HA provide protection from planned and unplanned outages?** <br>
- Yes. The main purpose of HA is to offer higher uptime to mitigate from any outages. In the event of an unplanned outage - including a fault in database, VM, physical node, data center, or at the AZ-level, the monitoring system automatically fails over the server to the standby. Similarly, during planned outages including minor version updates or infrastructure patching that happen during scheduled maintenance window, the updates are applied at the standby first and the service is failed over while the old primary goes through the update process. This reduces the overall downtime.
-
-* **Can I enable or disable HA at any point of time?** <br>
-
- Yes. You can enable or disable zone-redundant HA at any time except when the server is in certain states like stopped, restarting, or already in the process of failing over.
-
-* **Can I choose the AZ for the standby?** <br>
- No. Currently you cannot choose the AZ for the standby. We plan to add that capability in future.
-
-* **Can I configure HA between private (VNET) and public access?** <br>
- No. You can either configure HA within a VNET (spanned across AZs within a region) or public access.
-
-* **Can I configure HA across regions?** <br>
- No. HA is configured within a region, but across availability zones. However, you can enable Geo-read-replica (s) in asynchronous mode to achieve Geo-resiliency.
-* **Can I use logical replication with HA configured servers?** <br>
- You can configure logical replication with HA. However, after a failover, the logical slot details are not copied over to the standby. Hence, there is currently limited support for this configuration. If you must use logical replication, you will need to re-create it after every failover.
-
-### Replication and failover related questions
-
-* **How does Azure Database for PostgreSQL flexible server provide high availability in the event of a fault - like AZ fault?** <br>
- When you enable your server with zone-redundant HA, a physical standby replica with the same compute and storage configuration as the primary is deployed automatically in a different availability zone than the primary. PostgreSQL streaming replication is established between the primary and standby servers.
-
-* **What is the typical failover process during an outage?** <br>
- When the monitoring system detects a fault, it initiates a failover workflow that makes sure the standby has applied all residual WAL files and fully caught up before opening it for read/write. DNS is then updated with the IP address of the standby so that clients can reconnect to the server using the same endpoint (host name). A new standby is instantiated to keep the configuration in a highly available mode.
-
-* **What is the typical failover time and expected data loss during an outage?** <br>
- In a typical case, the failover time (the downtime experienced from the application's perspective) is between 60 and 120 seconds. It can be longer when the outage occurs during long-running transactions, index creation, or heavy write activity, because the standby may take longer to complete the recovery process.
-
- Since the replication happens in synchronous mode, no data loss is expected.
-
-* **Do you offer SLA for the failover time?** <br>
- For the failover time, we provide guidelines on how long the operation typically takes. The official SLA is provided only for the overall uptime.
-
-* **Does the application automatically connect to the server after the failover?** <br>
- No. Applications should have a retry mechanism to reconnect to the same endpoint (host name), as in the sketch below.
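- As one illustration, this hedged Python/psycopg2 sketch retries the connection against the same host name after a failover; the DSN values are placeholders.
- ```python
- import time
- import psycopg2
-
- def connect_with_retry(dsn, attempts=10, delay_seconds=5):
-     """Reconnect to the same endpoint; after failover, DNS points at the new primary."""
-     for attempt in range(1, attempts + 1):
-         try:
-             return psycopg2.connect(dsn)
-         except psycopg2.OperationalError:
-             if attempt == attempts:
-                 raise
-             time.sleep(delay_seconds)
-
- # Hypothetical DSN; the host name stays the same before and after failover.
- conn = connect_with_retry(
-     "host=myserver.postgres.database.azure.com dbname=postgres "
-     "user=myadmin password=<password> sslmode=require")
- ```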
-
-* **How do I test the failover?** <br>
- You can use the **Forced failover** or **Planned failover** feature to test the failover. See the **On-demand failover** section in this document for details.
-
-* **How do I check the replication status?** <br>
- In the Azure portal, the overview page of the server shows the zone-redundant high availability status and the server status. You can also check the status and the AZs for the primary and standby from the High Availability blade of the server.
-
- From psql, you can run `select * from pg_stat_replication;`, which shows the streaming status among other details. A scripted equivalent is sketched below.
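- For example, a small Python/psycopg2 script (the connection string is a placeholder) can run the same query and report the streaming state:
- ```python
- import psycopg2
-
- # Hypothetical connection string; run this against the primary server.
- conn = psycopg2.connect(
-     "host=myserver.postgres.database.azure.com dbname=postgres "
-     "user=myadmin password=<password> sslmode=require")
- cur = conn.cursor()
- cur.execute("SELECT application_name, state, sync_state FROM pg_stat_replication;")
- for application_name, state, sync_state in cur.fetchall():
-     # For zone-redundant HA, expect state='streaming' and sync_state='sync'.
-     print(application_name, state, sync_state)
- ```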
-
-* **Do you support read queries on the standby replica?** <br>
- No. We do not support read queries on the standby replica.
-
-* **When I do point-in-time recovery (PITR), will it automatically configure the restored server in HA?** <br>
- No. PITR server is restored as a standalone server. If you want to enable HA, you can do that after the restore is complete.
--
-## Next steps
--- Learn about [business continuity](./concepts-business-continuity.md)-- Learn how to [manage high availability](./how-to-manage-high-availability-portal.md)-- Learn about [backup and recovery](./concepts-backup-restore.md)--
postgresql Concepts Major Version Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-major-version-upgrade.md
Here are some of the important considerations with in-place major version upgrad
If in-place major version upgrade pre-check operations fail, then the upgrade aborts with a detailed error message for all the below limitations. -- In-place major version upgrade currently doesn't support read replicas, so if you have a read replica enabled server, you need to delete the replica before performing the upgrade on the primary server. After the upgrade, you can recreate the replica.
+- In-place major version upgrade currently doesn't support read replicas, so if you have a read replica enabled server, you need to delete the replica before performing the upgrade on the primary server. After the upgrade, you can recreate the replica.
+
+- Azure Database for PostgreSQL - Flexible Server requires the ability to send and receive traffic on destination ports 5432 and 6432 within the VNet where the Flexible Server is deployed, as well as to Azure Storage for log archival. If you configure network security groups (NSG) to restrict traffic to or from your Flexible Server within its deployed subnet, make sure to allow traffic to destination ports 5432 and 6432 within the subnet, and to Azure Storage by using the service tag **Azure Storage** as a destination. If network rules aren't set up properly, HA isn't enabled automatically after a major version upgrade, and you must enable it manually. In that case, modify your NSG rules to allow traffic for the destination ports and storage as described above, and then enable the high availability feature on the server. A scripted sketch of such rules follows this list.
- In-place major version upgrade doesn't support certain extensions and there are some limitations to upgrading certain extensions. The extensions **Timescaledb**, **pgaudit**, **dblink**, **orafce**, **pg_partman**, and **postgres_fdw** are unsupported for all PostgreSQL versions.
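+The following is a minimal sketch, not an official procedure, of scripting the NSG rules described in the networking limitation above with the `azure-mgmt-network` Python SDK. The subscription ID, resource names, and subnet prefix are hypothetical placeholders.
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+
+# Hypothetical subscription, resource group, and NSG names.
+client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+# Allow traffic on destination ports 5432 and 6432 within the delegated subnet.
+client.security_rules.begin_create_or_update(
+    "my-rg", "my-flexible-server-nsg", "allow-postgres-within-subnet",
+    {
+        "protocol": "Tcp",
+        "direction": "Outbound",
+        "access": "Allow",
+        "priority": 100,
+        "source_address_prefix": "10.0.1.0/24",   # placeholder: subnet of the flexible server
+        "source_port_range": "*",
+        "destination_address_prefix": "10.0.1.0/24",
+        "destination_port_ranges": ["5432", "6432"],
+    },
+).result()
+
+# Allow outbound traffic to Azure Storage (service tag) for log archival.
+client.security_rules.begin_create_or_update(
+    "my-rg", "my-flexible-server-nsg", "allow-storage-outbound",
+    {
+        "protocol": "Tcp",
+        "direction": "Outbound",
+        "access": "Allow",
+        "priority": 110,
+        "source_address_prefix": "10.0.1.0/24",
+        "source_port_range": "*",
+        "destination_address_prefix": "Storage",
+        "destination_port_range": "*",
+    },
+).result()
+```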
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure offerings are grouped into three categories that reflect their _regional_
| [Azure SQL Managed Instance](/azure/azure-sql/database/business-continuity-high-availability-disaster-recover-hadr-overview?view=azuresql&preserve-view=true) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Event Hubs](../event-hubs/event-hubs-geo-dr.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure Load Balancer](../load-balancer/load-balancer-standard-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
+| [Azure Load Balancer](reliability-load-balancer.md#availability-zone-support) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
| [Azure Service Bus](../service-bus-messaging/service-bus-geo-dr.md#availability-zones) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) | | [Azure Service Fabric](../service-fabric/service-fabric-cross-availability-zones.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) | | [Azure Storage account](migrate-storage.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
For a more detailed overview of reliability principles in Azure, see [Reliabilit
|Azure Event Hubs| [Availability Zones](../event-hubs/event-hubs-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| [Geo-disaster recovery](../event-hubs/event-hubs-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure ExpressRoute| [Designing for high availability with ExpressRoute](../expressroute/designing-for-high-availability-with-expressroute.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|[Designing for disaster recovery with ExpressRoute private peering](../expressroute/designing-for-disaster-recovery-with-expressroute-privatepeering.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| |Azure Key Vault|[Azure Key Vault failover within a region](../key-vault/general/disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#failover-within-a-region)| [Azure Key Vault](../key-vault/general/disaster-recovery-guidance.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#failover-across-regions) |
-|Azure Load Balancer|[Load Balancer and Availability Zones](../load-balancer/load-balancer-standard-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Create a cross-region Azure Load Balancer](../load-balancer/tutorial-cross-region-portal.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
+|Azure Load Balancer|[Reliability in Load Balancer](./reliability-load-balancer.md)| [Reliability in Load Balancer](./reliability-load-balancer.md)|
|Azure Public IP|[Azure Public IP - Availability zones](../virtual-network/ip-services/public-ip-addresses.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zone)| [Azure Public IP: Cross-region overview](../load-balancer/cross-region-overview.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Service Bus|[Azure Service Bus - Availability zones](../service-bus-messaging/service-bus-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json#availability-zones)| [Azure Service Bus Geo-disaster recovery](../service-bus-messaging/service-bus-geo-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) | |Azure Service Fabric| [Deploy an Azure Service Fabric cluster across Availability Zones](../service-fabric/service-fabric-cross-availability-zones.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [Disaster recovery in Azure Service Fabric](../service-fabric/service-fabric-disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
reliability Reliability Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-load-balancer.md
+
+ Title: Reliability in Azure Load Balancer
+description: Find out about reliability in Azure Load Balancer
+++++ Last updated : 02/05/2024++
+# Reliability in Load Balancer
+
+This article contains [specific reliability recommendations](#reliability-recommendations) for [Load Balancer](/azure/load-balancer/load-balancer-overview), as well as detailed information on Load Balancer regional resiliency with [availability zones](#availability-zone-support) and [cross-region disaster recovery and business continuity](#cross-region-disaster-recovery-and-business-continuity).
++
+For an architectural overview of reliability in Azure, see [Azure reliability](/azure/architecture/framework/resiliency/overview).
++
+## Reliability recommendations
+
+
+### Reliability recommendations summary
+
+| Category | Priority |Recommendation |
+||--||
+|[**Availability**](#availability) |:::image type="icon" source="media/icon-recommendation-high.svg":::|[Ensure that Standard Load Balancer is zone-redundant](#-ensure-that-standard-load-balancer-is-zone-redundant) |
+| |:::image type="icon" source="media/icon-recommendation-high.svg"::: |[Ensure that the backend pool contains at least two instances](#-ensure-that-the-backend-pool-contains-at-least-two-instances) |
+|[**System Efficiency**](#system-efficiency) |:::image type="icon" source="media/icon-recommendation-medium.svg":::|[Use NAT Gateway instead of outbound rules for production workloads](#-use-nat-gateway-instead-of-outbound-rules-for-production-workloads) |
+| |:::image type="icon" source="media/icon-recommendation-high.svg":::| [Use Standard Load Balancer SKU](#-use-standard-load-balancer-sku) |
++
+### Availability
++
+#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Ensure that Standard Load Balancer is zone-redundant**
+
+In a region that supports availability zones, Standard Load Balancer should be deployed with zone-redundancy. A zone-redundant Load Balancer allows traffic to be served by a single frontend IP address that can survive zone failure. The frontend IP may be used to reach all (non-impacted) backend pool members regardless of zone. If an availability zone fails, the data path can survive as long as the remaining zones in the region remain healthy. For more information, see [Zone-redundant load balancer](#zone-redundant-load-balancer).
++
+# [Azure Resource Graph](#tab/graph)
++
+-
++
+#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Ensure that the backend pool contains at least two instances**
+
+Deploy Load Balancer with at least two instances in the backend. A single instance could result in a single point of failure. In order to build for scale, you might want to pair load balancer with [Virtual Machine Scale Sets](../virtual-machine-scale-sets/overview.md).
++
+# [Azure Resource Graph](#tab/graph)
++
+-
+
+### System Efficiency
++
+#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **Use NAT Gateway instead of outbound rules for production workloads**
+
+Outbound rules allocate fixed amounts of SNAT ports to each virtual machine instance in a backend pool. This method of allocation can lead to SNAT port exhaustion, especially if uneven traffic patterns result in a specific virtual machine sending a higher volume of outgoing connections. For production workloads, it's recommended that you couple Standard Load Balancer or any subnet deployment with [Azure NAT Gateway](/azure/nat-gateway/nat-overview). NAT Gateway dynamically allocates SNAT ports across all virtual machine instances in a subnet, which reduces the risk of SNAT port exhaustion. A minimal sketch follows.
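+The Python sketch below (using `azure-mgmt-network`, with hypothetical resource names) shows one way to create a NAT gateway and associate it with a backend subnet so outbound flows use it instead of outbound rules; treat it as an illustration under those assumptions rather than a prescribed procedure.
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+from azure.mgmt.network.models import SubResource
+
+# Hypothetical subscription, resource group, VNet, and subnet names.
+client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+nat_gw = client.nat_gateways.begin_create_or_update(
+    "my-rg", "my-nat-gateway",
+    {
+        "location": "westus2",
+        "sku": {"name": "Standard"},
+        "public_ip_addresses": [{"id": "<resource-id-of-a-standard-public-ip>"}],
+    },
+).result()
+
+# Associate the NAT gateway with the backend subnet; SNAT ports are then
+# allocated dynamically across all VM instances in the subnet.
+subnet = client.subnets.get("my-rg", "my-vnet", "backend-subnet")
+subnet.nat_gateway = SubResource(id=nat_gw.id)
+client.subnets.begin_create_or_update("my-rg", "my-vnet", "backend-subnet", subnet).result()
+```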
++
+# [Azure Resource Graph](#tab/graph)
++
+-
+++
+#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **Use Standard Load Balancer SKU**
+
+Standard SKU Load Balancer supports availability zones and zone resiliency, while the Basic SKU doesn't. When a zone goes down, your zone-redundant Standard Load Balancer will not be impacted and your deployments are able to withstand zone failures within a region. In addition, Standard Load Balancer supports cross region load balancing to ensure that your application isn't impacted by region failures.
+
+>[!NOTE]
+> Basic load balancers don't have a Service Level Agreement (SLA).
+
+# [Azure Resource Graph](#tab/graph)
++
+-
+
+## Availability zone support
++
+Azure Load Balancer supports availability zone scenarios. You can use Standard Load Balancer to increase availability throughout your scenario by aligning resources with, and distributing them across, availability zones. Review this document to understand these concepts and fundamental scenario design guidance.
+
+Although it's recommended that you deploy Load Balancer with zone-redundancy, a Load Balancer can be **zone-redundant, zonal, or non-zonal**. The load balancer's availability zone selection is synonymous with its frontend IP's zone selection. For public load balancers, if the public IP in the load balancer's frontend is zone-redundant, then the load balancer is also zone-redundant. If the public IP in the load balancer's frontend is zonal, then the load balancer is also designated to the same zone. To configure the zone-related properties for your load balancer, select the appropriate type of frontend.
++
+> [!NOTE]
+> It isn't required to have a load balancer for each zone. Instead, a single load balancer with multiple frontends (zonal or zone-redundant) associated with their respective backend pools serves the purpose.
+
+### Prerequisites
+
+- To use availability zones with Load Balancer, you need to create your load balancer in a region that supports availability zones. To see which regions support availability zones, see the [list of supported regions](availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+
+- Use the Standard SKU for the load balancer and public IP for availability zone support.
+
+- Basic SKU type isn't supported.
+
+- To create your resource, you need to have the Network Contributor role or higher.
++
+### Limitations
+
+- Zones can't be changed, updated, or created for the resource after creation.
+- Resources can't be updated from zonal to zone-redundant or vice versa after creation.
+
+### Zone redundant load balancer
++
+In a region with availability zones, a Standard Load Balancer can be zone-redundant with traffic served by a single IP address. A single frontend IP address survives zone failure. The frontend IP may be used to reach all (non-impacted) backend pool members no matter the zone. Up to one availability zone can fail and the data path survives as long as the remaining zones in the region remain healthy.
+
+The frontend's IP address is served simultaneously by multiple independent infrastructure deployments in multiple availability zones. Any retries or reestablishment will succeed in other zones not affected by the zone failure.
++
+>[!NOTE]
+>VMs 1, 2, and 3 can belong to the same subnet and don't necessarily have to be in separate zones as the diagram suggests.
+
+Members in the backend pool of a load balancer are normally associated with a single zone such as with zonal virtual machines. A common design for production workloads would be to have multiple zonal resources. For example, placing virtual machines from zone 1, 2, and 3 in the backend of a load balancer with a zone-redundant frontend meets this design principle.
++++
+### Zonal load balancer
++
+You can choose to have a frontend guaranteed to a single zone, which is known as a *zonal* frontend. With this scenario, a single zone in a region serves all inbound or outbound flow. Your frontend shares fate with the health of the zone. The data path is unaffected by failures in zones other than the one where it was guaranteed. You can use zonal frontends to expose an IP address per availability zone.
+
+Additionally, the use of zonal frontends directly for load-balanced endpoints within each zone is supported. You can use this configuration to expose per zone load-balanced endpoints to individually monitor each zone. For public endpoints, you can integrate them with a DNS load-balancing product like [Traffic Manager](../traffic-manager/traffic-manager-overview.md) and use a single DNS name.
++++
+For a public load balancer frontend, you add a **zones** parameter to the public IP. This public IP is referenced by the frontend IP configuration used by the respective rule.
+
+For an internal load balancer frontend, add a **zones** parameter to the internal load balancer frontend IP configuration. A zonal frontend guarantees an IP address in a subnet to a specific zone.
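+As a hedged illustration of the **zones** parameter, the Python sketch below (using `azure-mgmt-network`, with placeholder names) creates a Standard public IP that can back a zone-redundant frontend; pass a single zone instead, for example `["1"]`, for a zonal frontend.
+```python
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.network import NetworkManagementClient
+
+# Hypothetical subscription, resource group, and IP names.
+client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
+
+public_ip = client.public_ip_addresses.begin_create_or_update(
+    "my-rg", "my-frontend-ip",
+    {
+        "location": "westus2",
+        "sku": {"name": "Standard"},
+        "public_ip_allocation_method": "Static",
+        # Listing all zones makes the frontend zone-redundant;
+        # a single entry such as ["1"] pins it to one zone (zonal).
+        "zones": ["1", "2", "3"],
+    },
+).result()
+print(public_ip.ip_address)
+```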
+
+### Non-zonal load balancer
+
+Load Balancers can also be created in a non-zonal configuration by using a "no-zone" frontend. In these scenarios, a public load balancer would use a public IP or public IP prefix, and an internal load balancer would use a private IP address. This option doesn't give a guarantee of redundancy.
+
+>[!NOTE]
+>All public IP addresses that are upgraded from Basic SKU to Standard SKU will be of type "no-zone". Learn how to [Upgrade a public IP address in the Azure portal](../virtual-network/ip-services/public-ip-upgrade-portal.md).
++
+### SLA improvements
+
+Because availability zones are physically separate and provide distinct power source, network, and cooling, SLAs (Service-level agreements) can increase. For more information, see the [Service Level Agreements (SLA) for Online Services](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services?lang=1).
+
+#### Create a resource with availability zone enabled
+
+To learn how to load balance VMs within a zone or over multiple zones using a Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs](/azure/load-balancer/quickstart-load-balancer-standard-public-portal).
++
+>[!NOTE]
+> - Zones can't be changed, updated, or created for the resource after creation.
+> - Resources can't be updated from zonal to zone-redundant or vice versa after creation.
++
+### Fault tolerance
+
+Virtual machines can fail over to another server in a cluster, with the VM's operating system restarting on the new server. You should refer to the failover process for disaster recovery, gather virtual machines in recovery planning, and run disaster recovery drills to ensure your fault tolerance solution is successful.
+
+For more information, see the [site recovery processes](../site-recovery/site-recovery-failover.md#before-you-start).
++
+### Zone down experience
+
+Zone-redundancy doesn't imply hitless data plane or control plane. Zone-redundant flows can use any zone and your flows will use all healthy zones in a region. In a zone failure, traffic flows using healthy zones aren't affected.
+
+Traffic flows using a zone at the time of zone failure may be affected but applications can recover. Traffic continues in the healthy zones within the region upon retransmission when Azure has converged around the zone failure.
+
+Review Azure cloud design patterns to improve the resiliency of your application to failure scenarios.
+
+#### Multiple frontends
+
+Using multiple frontends allows you to load balance traffic on more than one port and/or IP address. When designing your architecture, ensure you account for how zone redundancy interacts with multiple frontends. If your goal is to always have every frontend resilient to failure, then all IP addresses assigned as frontends must be zone-redundant. If a set of frontends is intended to be associated with a single zone, then every IP address for that set must be associated with that specific zone. A load balancer isn't required in each zone. Instead, each zonal frontend, or set of zonal frontends, could be associated with virtual machines in the backend pool that are part of that specific availability zone.
++
+### Safe deployment techniques
+
+Review [Azure cloud design patterns](/azure/architecture/patterns/) to improve the resiliency of your application to failure scenarios.
++
+### Migrate to availability zone support
+
+When a region is augmented to have availability zones, any existing IPs, such as the IPs used for load balancer frontends, remain non-zonal. To ensure your architecture can take advantage of the new zones, it's recommended that you create a new frontend IP. Once created, you can replace the existing non-zonal frontend with a new zone-redundant frontend. To learn how to migrate a load balancer to availability zone support, see [Migrate Load Balancer to availability zone support](./migrate-load-balancer.md).
++
+## Cross-region disaster recovery and business continuity
++
+Azure Standard Load Balancer supports cross-region load balancing, enabling geo-redundant high availability scenarios such as:
++
+* Incoming traffic originating from multiple regions.
+* [Instant global failover](#regional-redundancy) to the next optimal regional deployment.
+* Load distribution across regions to the closest Azure region with [ultra-low latency](#ultra-low-latency).
+* Ability to [scale up/down](#ability-to-scale-updown-behind-a-single-endpoint) behind a single endpoint.
+* Static anycast global IP address
+* [Client IP preservation](#client-ip-preservation)
+* [Build on existing load balancer](#build-cross-region-solution-on-existing-azure-load-balancer) solution with no learning curve
+
+The frontend IP configuration of your cross-region load balancer is static and advertised across [most Azure regions](#participating-regions).
++
+> [!NOTE]
+> The backend port of your load balancing rule on cross-region load balancer should match the frontend port of the load balancing rule/inbound nat rule on regional standard load balancer.
+
+### Disaster recovery in multi-region geography
++
+#### Regional redundancy
+
+Configure regional redundancy by seamlessly linking a cross-region load balancer to your existing regional load balancers.
+
+If one region fails, the traffic is routed to the next closest healthy regional load balancer.
+
+The health probe of the cross-region load balancer gathers information about availability of each regional load balancer every 5 seconds. If one regional load balancer drops its availability to 0, cross-region load balancer detects the failure. The regional load balancer is then taken out of rotation.
+++
+#### Ultra-low latency
+
+The geo-proximity load-balancing algorithm is based on the geographic location of your users and your regional deployments.
+
+Traffic started from a client hits the closest participating region and travels through the Microsoft global network backbone to arrive at the closest regional deployment.
+
+For example, you have a cross-region load balancer with standard load balancers in Azure regions:
+
+* West US
+* North Europe
+
+If a flow is started from Seattle, traffic enters West US. This region is the closest participating region to Seattle. The traffic is routed to the closest regional load balancer, which is in West US.
+
+Azure cross-region load balancer uses a geo-proximity load-balancing algorithm for the routing decision.
+
+The configured load distribution mode of the regional load balancers is used for making the final routing decision when multiple regional load balancers are used for geo-proximity.
+
+For more information, see [Configure the distribution mode for Azure Load Balancer](../load-balancer/load-balancer-distribution-mode.md).
+
+Egress traffic follows the routing preference set on the regional load balancers.
+
+### Ability to scale up/down behind a single endpoint
+
+When you expose the global endpoint of a cross-region load balancer to customers, you can add or remove regional deployments behind the global endpoint without interruption.
+
+#### Static anycast global IP address
+
+Cross-region load balancer comes with a static public IP, which ensures the IP address remains the same. To learn more about static IPs, see [IP address assignment](../virtual-network/ip-services/public-ip-addresses.md#ip-address-assignment).
+
+#### Client IP Preservation
+
+Cross-region load balancer is a Layer-4 pass-through network load balancer. This pass-through preserves the original IP of the packet. The original IP is available to the code running on the virtual machine. This preservation allows you to apply logic that is specific to an IP address.
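+As a small, hypothetical illustration of that preservation, the Python snippet below, run on a backend VM, reports the peer address of each TCP connection; because the load balancer is pass-through, the address is the original client IP rather than a load balancer address.
+```python
+import socket
+
+# Minimal TCP listener on a hypothetical application port.
+listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+listener.bind(("0.0.0.0", 8080))
+listener.listen()
+
+while True:
+    conn, (client_ip, client_port) = listener.accept()
+    # client_ip is the original source address, preserved by the pass-through load balancer.
+    conn.sendall(f"You connected from {client_ip}:{client_port}\n".encode())
+    conn.close()
+```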
+
+#### Floating IP
+
+Floating IP can be configured at both the global IP level and regional IP level. For more information, visit [Multiple frontends for Azure Load Balancer](../load-balancer/load-balancer-multivip-overview.md)
+
+It is important to note that floating IP configured on the Azure cross-region Load Balancer operates independently of floating IP configurations on backend regional load balancers. If floating IP is enabled on the cross-region load balancer, the appropriate loopback interface needs to be added to the backend VMs.
+
+#### Health Probes
+
+Azure cross-region Load Balancer uses the health of the backend regional load balancers when deciding where to distribute traffic. Health checks by the cross-region load balancer are performed automatically every 5 seconds, provided that a user has set up health probes on their regional load balancer.
+
+## Build cross region solution on existing Azure Load Balancer
+
+The backend pool of cross-region load balancer contains one or more regional load balancers.
+
+Add your existing load balancer deployments to a cross-region load balancer for a highly available, cross-region deployment.
+
+**Home region** is where the cross-region load balancer or Public IP Address of Global tier is deployed.
+This region doesn't affect how the traffic is routed. If a home region goes down, traffic flow is unaffected.
+
+### Home regions
+* Central US
+* East Asia
+* East US 2
+* North Europe
+* Southeast Asia
+* UK South
+* US Gov Virginia
+* West Europe
+* West US
+
+> [!NOTE]
+> You can only deploy your cross-region load balancer or Public IP in Global tier in one of the listed Home regions.
+
+A **participating region** is where the Global public IP of the load balancer is being advertised.
+
+Traffic started by the user travels to the closest participating region through the Microsoft core network.
+
+Cross-region load balancer routes the traffic to the appropriate regional load balancer.
++
+### Participating regions
+* Australia East
+* Australia Southeast
+* Central India
+* Central US
+* East Asia
+* East US
+* East US 2
+* Japan East
+* North Central US
+* North Europe
+* South Central US
+* Southeast Asia
+* UK South
+* US DoD Central
+* US DoD East
+* US Gov Arizona
+* US Gov Texas
+* US Gov Virginia
+* West Central US
+* West Europe
+* West US
+* West US 2
+
+> [!NOTE]
+> The backend regional load balancers can be deployed in any publicly available Azure region and aren't limited to just the participating regions.
+
+## Limitations
+
+* Cross-region frontend IP configurations are public only. An internal frontend is currently not supported.
+
+* Private or internal load balancer can't be added to the backend pool of a cross-region load balancer
+
+* NAT64 translation isn't supported at this time. The frontend and backend IPs must be of the same type (v4 or v6).
+
+* UDP traffic isn't supported on Cross-region Load Balancer for IPv6.
+
+* UDP traffic on port 3 isn't supported on Cross-Region Load Balancer
+
+* Outbound rules aren't supported on Cross-region Load Balancer. For outbound connections, utilize [outbound rules](../load-balancer/outbound-rules.md) on the regional load balancer or [NAT gateway](../nat-gateway/nat-overview.md).
+
+## Pricing and SLA
+Cross-region load balancer shares the [SLA](https://azure.microsoft.com/support/legal/sla/load-balancer/v1_0/) of standard load balancer.
+
+## Next steps
+- [Reliability in Azure](/azure/reliability/availability-zones-overview)
+- See [Tutorial: Create a cross-region load balancer using the Azure portal](../load-balancer/tutorial-cross-region-portal.md) to create a cross-region load balancer.
+- Learn more about [cross-region load balancer](https://www.youtube.com/watch?v=3awUwUIv950).
+- Learn more about [Azure Load Balancer](../load-balancer/load-balancer-overview.md).
resource-mover Support Matrix Move Region Azure Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/support-matrix-move-region-azure-vm.md
Title: Support matrix for moving Azure VMs to another region with Azure Resource Mover
-description: Review support for moving Azure VMs between regions with Azure Resource Mover.
+description: Review support for moving Azure VMs between regions with Azure Resource Mover.
# Support for moving Azure VMs between Azure regions
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article summarizes support and prerequisites when you move virtual machines and related network resources across Azure regions using Resource Mover. ## Windows VM support
Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5,
### Supported Ubuntu kernel versions
-**Release** | **Kernel version**
- |
-14.04 LTS | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure
+**Release** | **Kernel version**
+ |
+14.04 LTS | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure
16.04 LTS | 4.4.0-21-generic to 4.4.0-171-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic to 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-74-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1066-azure 18.04 LTS | 4.15.0-20-generic to 4.15.0-74-generic </br> 4.18.0-13-generic to 4.18.0-25-generic </br> 5.0.0-15-generic to 5.0.0-37-generic </br> 5.3.0-19-generic to 5.3.0-24-generic </br> 4.15.0-1009-azure to 4.15.0-1037-azure </br> 4.18.0-1006-azure to 4.18.0-1025-azure </br> 5.0.0-1012-azure to 5.0.0-1028-azure </br> 5.3.0-1007-azure to 5.3.0-1009-azure
-### Supported Debian kernel versions
+### Supported Debian kernel versions
-**Release** | **Kernel version**
- |
-Debian 7 | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64
-Debian 8 | 3.16.0-4-amd64 to 3.16.0-10-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64
+**Release** | **Kernel version**
+ |
+Debian 7 | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64
+Debian 8 | 3.16.0-4-amd64 to 3.16.0-10-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.11-amd64
Debian 8 | 3.16.0-4-amd64 to 3.16.0-10-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.9-amd64
-### Supported SUSE Linux Enterprise Server 12 kernel versions
+### Supported SUSE Linux Enterprise Server 12 kernel versions
-**Release** | **Kernel version**
- |
-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4) | All [stock SUSE 12 SP1,SP2,SP3,SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.34-azure
+**Release** | **Kernel version**
+ |
+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4) | All [stock SUSE 12 SP1,SP2,SP3,SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br></br> 4.4.138-4.7-azure to 4.4.180-4.31-azure,</br>4.12.14-6.3-azure to 4.12.14-6.34-azure
### Supported SUSE Linux Enterprise Server 15 kernel versions
Premium P20 or P30 or P40 or P50 disk | 16 KB or greater |20 MB/s | 1684 GB per
**Setting** | **Support** | **Details** | |
-NIC | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
-Internal load balancer | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
-Public load balancer | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
+NIC | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
+Internal load balancer | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
+Public load balancer | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
Public IP address | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.<br/><br/> The public IP address is region-specific, and won't be retained in the target region after the move. Keep this in mind when you modify networking settings (including load balancing rules) in the target location.
-Network security group | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
+Network security group | Supported | Specify an existing resource in the target region, or create a new resource during the Prepare process.
Reserved (static) IP address | Supported | You can't currently configure this. The value defaults to the source value. <br/><br/> If the NIC on the source VM has a static IP address, and the target subnet has the same IP address available, it's assigned to the target VM.<br/><br/> If the target subnet doesn't have the same IP address available, the initiate move for the VM will fail. Dynamic IP address | Supported | You can't currently configure this. The value defaults to the source value.<br/><br/> If the NIC on the source has dynamic IP addressing, the NIC on the target VM is also dynamic by default. IP configurations | Supported | You can't currently configure this. The value defaults to the source value.
Azure VMs that you want to move need outbound access.
If you're using a URL-based firewall proxy to control outbound connectivity, allow access to these URLs:
-**Name** | **Azure public cloud** | **Details**
- | |
-Storage | `*.blob.core.windows.net` | Allows data to be written from the VM to the cache storage account in the source region.
-Microsoft Entra ID | `login.microsoftonline.com` | Provides authorization and authentication to Site Recovery service URLs.
-Replication | `*.hypervrecoverymanager.windowsazure.com` | Allows the VM to communicate with the Site Recovery service.
-Service Bus | `*.servicebus.windows.net` | Allows the VM to write Site Recovery monitoring and diagnostics data.
+**Name** | **Azure public cloud** | **Details**
+ | |
+Storage | `*.blob.core.windows.net` | Allows data to be written from the VM to the cache storage account in the source region.
+Microsoft Entra ID | `login.microsoftonline.com` | Provides authorization and authentication to Site Recovery service URLs.
+Replication | `*.hypervrecoverymanager.windowsazure.com` | Allows the VM to communicate with the Site Recovery service.
+Service Bus | `*.servicebus.windows.net` | Allows the VM to write Site Recovery monitoring and diagnostics data.
## NSG rules If you're using network security group (NSG) rules to control outbound connectivity, create these [service tag](../virtual-network/service-tags-overview.md) rules. Each rule should allow outbound access on HTTPS (443).
If you're using a network security group (NSG) rules to control outbound connect
- **EventHub* - *AzureKeyVault* - *GuestAndHybridManagement*-- We recommend you test rules in a non-production environment. [Review some examples](../site-recovery/azure-to-azure-about-networking.md#outbound-connectivity-using-service-tags).
+- We recommend you test rules in a non-production environment. [Review some examples](../site-recovery/azure-to-azure-about-networking.md#outbound-connectivity-using-service-tags).
## Next steps
role-based-access-control Elevate Access Global Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/elevate-access-global-admin.md
Previously updated : 03/21/2023 Last updated : 02/09/2024
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/rbac-and-directory-admin-roles.md
ms.assetid: 174f1706-b959-4230-9a75-bf651227ebf6 Previously updated : 01/26/2024 Last updated : 02/09/2024
At a high level, Azure roles control permissions to manage Azure resources, whil
| Manage access to Azure resources | Manage access to Microsoft Entra resources | | Supports custom roles | Supports custom roles | | Scope can be specified at multiple levels (management group, subscription, resource group, resource) | [Scope](../active-directory/roles/custom-overview.md#scope) can be specified at the tenant level (organization-wide), administrative unit, or on an individual object (for example, a specific application) |
-| Role information can be accessed in Azure portal, Azure CLI, Azure PowerShell, Azure Resource Manager templates, REST API | Role information can be accessed in the Azure admin portal, Microsoft 365 admin center, Microsoft Graph, AzureAD PowerShell |
+| Role information can be accessed in Azure portal, Azure CLI, Azure PowerShell, Azure Resource Manager templates, REST API | Role information can be accessed in the Azure portal, Microsoft Entra admin center, Microsoft 365 admin center, Microsoft Graph, Microsoft Graph PowerShell |
<a name='do-azure-roles-and-azure-ad-roles-overlap'></a>
By default, Azure roles and Microsoft Entra roles don't span Azure and Microsoft
Several Microsoft Entra roles span Microsoft Entra ID and Microsoft 365, such as the Global Administrator and User Administrator roles. For example, if you're a member of the Global Administrator role, you have global administrator capabilities in Microsoft Entra ID and Microsoft 365, such as making changes to Microsoft Exchange and Microsoft SharePoint. However, by default, the Global Administrator doesn't have access to Azure resources. ## Classic subscription administrator roles
sap Cal S4h https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/cal-s4h.md
The online library is continuously updated with Appliances for demo, proof of co
| Appliance Template | Date | Description | Creation Link | | | - | -- | - |
+| [**SAP S/4HANA 2023**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/5904c878-82f5-435d-8991-e1c29334765a) | December 14 2023 |This Appliance Template contains a pre-configured and activated SAP S/4HANA Fiori UI in client 100, with prerequisite components activated as per SAP note 3336782 – Composite SAP note: Rapid Activation for SAP Fiori in SAP S/4HANA 2023. It also includes a remote desktop for easy frontend access. | [Create Appliance](https://cal.sap.com/registration?sguid=5904c878-82f5-435d-8991-e1c29334765a&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP S/4HANA 2022 FPS02, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/983008db-db92-4d4d-ac79-7e2afa95a2e0)| July 16 2023 |This appliance contains SAP S/4HANA 2022 (FPS02) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=983008db-db92-4d4d-ac79-7e2afa95a2e0&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | [**SAP S/4HANA 2022 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/3722f683-42af-4059-90db-4e6a52dc9f54) | April 20 2023 |This appliance contains SAP S/4HANA 2022 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=3722f683-42af-4059-90db-4e6a52dc9f54&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | | [**SAP S/4HANA 2021 FPS01, Fully-Activated Appliance**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/a954cc12-da16-4caa-897e-cf84bc74cf15)| April 26 2022 |This appliance contains SAP S/4HANA 2021 (FPS01) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, Migration Cockpit, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. |[Create Appliance](https://cal.sap.com/registration?sguid=a954cc12-da16-4caa-897e-cf84bc74cf15&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) | | [**SAP S/4HANA 2022, Fully-Activated Appliance**]( https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f4e6b3ba-ba8f-485f-813f-be27ed5c8311)| December 15 2022 |This appliance contains SAP S/4HANA 2022 (SP00) with pre-activated SAP Best Practices for SAP S/4HANA core functions, and further scenarios for Service, Master Data Governance (MDG), Portfolio Mgmt. (PPM), Human Capital Management (HCM), Analytics, and more. User access happens via SAP Fiori, SAP GUI, SAP HANA Studio, Windows remote desktop, or the backend operating system for full administrative access. | [Create Appliance](https://cal.sap.com/registration?sguid=f4e6b3ba-ba8f-485f-813f-be27ed5c8311&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
-| [**SAP ABAP Platform 1909, Developer Edition**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/f2cf3077-256b-4bbf-8091-f970d5792bbf)| November 29 2023|The SAP ABAP Platform on SAP HANA gives you access to your own copy of SAP ABAP Platform 1909 Developer Edition on SAP HANA. Note that this solution is preconfigured with many additional elements, including: SAP ABAP RESTful Application Programming Model, SAP Fiori launchpad, SAP gCTS, SAP ABAP Test Cockpit, and preconfigured frontend / backend connections, etc. It also includes all the standard ABAP AS infrastructure: Transaction Management, database operations / persistence, Change and Transport System, SAP Gateway, interoperability with ABAP Development Toolkit and SAP WebIDE, and much more. |[Create Appliance](https://cal.sap.com/registration?sguid=f2cf3077-256b-4bbf-8091-f970d5792bbf&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
| [**SAP Focused Run 4.0 FP02, unconfigured**](https://cal.sap.com/catalog?provider=208b780d-282b-40ca-9590-5dd5ad1e52e8#/applianceTemplates/130453cf-8bea-41dc-a692-7d6052e10e2d) | December 07 2023 | SAP Focused Run is designed specifically for businesses that need high-volume system and application monitoring, alerting, and analytics. It's a powerful solution for service providers, who want to host all their customers in one central, scalable, safe, and automated environment. It also addresses customers with advanced needs regarding system management, user monitoring, integration monitoring, and configuration and security analytics. | [Create Appliance](https://cal.sap.com/registration?sguid=130453cf-8bea-41dc-a692-7d6052e10e2d&provider=208b780d-282b-40ca-9590-5dd5ad1e52e8) |
search Search Api Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-migration.md
If any of these situations apply to you, change your code to maintain existing f
This version has breaking changes and behavioral differences for semantic ranking and vector search support.
-+ [Semantic ranking](semantic-search-overview.md) no longer uses `queryLanguage`. It also requires a `semanticConfiguration` definition. If you're migrating from 2020-06-30-preview, a semantic configuration replaces `searchFields`. See [Migrate from preview version](semantic-how-to-query-request.md#migrate-from-preview-versions) for steps.
++ [Semantic ranking](semantic-search-overview.md) no longer uses `queryLanguage`. It also requires a `semanticConfiguration` definition. If you're migrating from 2020-06-30-preview, a semantic configuration replaces `searchFields`. See [Migrate from preview version](semantic-how-to-configure.md#migrate-from-preview-versions) for steps. + [Vector search](vector-search-overview.md) support was introduced in [Create or Update Index (2023-07-01-preview)](/rest/api/searchservice/preview-api/create-or-update-index). If you're migrating from that version, there are new options and several breaking changes. New options include vector filter mode, vector profiles, and an exhaustive K-nearest neighbors algorithm and query-time exhaustive k-NN flag. Breaking changes include renaming and restructuring the vector configuration in the index, and vector query syntax.
search Semantic Answers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-answers.md
- ignite-2023 Previously updated : 10/04/2023 Last updated : 02/08/2024 # Return a semantic answer in Azure AI Search
All prerequisites that apply to [semantic queries](semantic-how-to-query-request
+ Query strings entered by the user must be recognizable as a question (what, where, when, how).
-+ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ...", then it's unlikely an answer is returned.
++ Search documents in the index must contain text having the characteristics of an answer, and that text must exist in one of the fields listed in the [semantic configuration](semantic-how-to-configure.md). For example, given a query "what is a hash table", if none of the fields in the semantic configuration contain passages that include "A hash table is ...", then it's unlikely an answer is returned. > [!NOTE] > Starting in 2021-04-30-Preview, in [Create or Update Index (Preview)](/rest/api/searchservice/preview-api/create-or-update-index) requests, a `"semanticConfiguration"` is required for specifying input fields for semantic ranking.
To return a semantic answer, the query must have the semantic `"queryType"`, `"q
+ `"queryLanguage"` must be one of the values from the [supported languages list (REST API)](/rest/api/searchservice/preview-api/search-documents#queryLanguage).
-+ A `"semanticConfiguration"` determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration) for details.
++ A `"semanticConfiguration"` determines which string fields provide tokens to the extraction model. The same fields that produce captions also produce answers. See [Create a semantic configuration](semantic-how-to-configure.md) for details. + For `"answers"`, parameter construction is `"answers": "extractive"`, where the default number of answers returned is one. You can increase the number of answers by adding a `count` as shown in the above example, up to a maximum of 10. Whether you need more than one answer depends on the user experience of your app, and how you want to render results.
Within @search.answers:
+ **"score"** is a confidence score that reflects the strength of the answer. If there are multiple answers in the response, this score is used to determine the order. Top answers and top captions can be derived from different search documents, where the top answer originates from one document, and the top caption from another, but in general the same documents appear in the top positions within each array.
-Answers are followed by the **"value"** array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. See [Configure semantic ranking](semantic-how-to-query-request.md) for details.
+Answers are followed by the **"value"** array, which always includes scores, captions, and any fields that are retrievable by default. If you specified the select parameter, the "value" array is limited to the fields that you specified. See [Configure semantic ranking](semantic-how-to-configure.md) for details.
## Tips for producing high-quality answers
For best results, return semantic answers on a document corpus having the follow
+ [Semantic ranking overview](semantic-search-overview.md) + [Configure BM25 ranking](index-ranking-similarity.md)
-+ [Configure semantic ranking](semantic-how-to-query-request.md)
++ [Configure semantic ranking](semantic-how-to-configure.md)
search Semantic How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-configure.md
+
+ Title: Configure semantic ranker
+
+description: Add a semantic configuration to a search index.
++++++
+ - ignite-2023
+ Last updated : 02/08/2024++
+# Configure semantic ranking and return captions in search results
+
+In this article, learn how to invoke a semantic ranking over a result set, promoting the most semantically relevant results to the top of the stack. You can also get semantic captions, with highlights over the most relevant terms and phrases, and [semantic answers](semantic-answers.md).
+
+## Prerequisites
+++ A search service on Basic, Standard tier (S1, S2, S3), or Storage Optimized tier (L1, L2), subject to [region availability](https://azure.microsoft.com/global-infrastructure/services/?products=search).+++ Semantic ranker [enabled on your search service](semantic-how-to-enable-disable.md).+++ An existing search index with rich text content. Semantic ranking applies to text (nonvector) fields and works best on content that is informational or descriptive.+
+## Choose a client
+
+Choose a search client that supports semantic ranking. Here are some options:
+++ [Azure portal](https://portal.azure.com), using the index designer to add a semantic configuration.++ [Postman app](https://www.postman.com/downloads/) using [REST APIs](/rest/api/searchservice/)++ [Azure SDK for .NET](https://www.nuget.org/packages/Azure.Search.Documents)++ [Azure SDK for Python](https://pypi.org/project/azure-search-documents)++ [Azure SDK for Java](https://central.sonatype.com/artifact/com.azure/azure-search-documents)++ [Azure SDK for JavaScript](https://www.npmjs.com/package/@azure/search-documents)+
+## Add a semantic configuration
+
+A *semantic configuration* is a section in your index that establishes field inputs for semantic ranking. You can add or update a semantic configuration at any time, no rebuild necessary. If you create multiple configurations, you can specify a default. At query time, specify a semantic configuration on a [query request](semantic-how-to-query-request.md), or leave it blank to use the default.
+
+A semantic configuration has a name and the following properties:
+
+| Property | Characteristics |
+|-|--|
+| Title field | A short string, ideally under 25 words. This field could be the title of a document, the name of a product, or a unique identifier. If you don't have a suitable field, leave it blank. |
+| Content fields | Longer chunks of text in natural language form, subject to [maximum token input limits](semantic-search-overview.md#how-inputs-are-collected-and-summarized) on the machine learning models. Common examples include the body of a document, description of a product, or other free-form text. |
+| Keyword fields | A list of keywords, such as the tags on a document, or a descriptive term, such as the category of an item. |
+
+You can only specify one title field, but you can have as many content and keyword fields as you like. For content and keyword fields, list the fields in priority order because lower priority fields might get truncated.
+
+Across all semantic configuration properties, the fields you assign must be:
+++ Attributed as `searchable` and `retrievable`++ Strings of type `Edm.String`, `Collection(Edm.String)`, string subfields of `Collection(Edm.ComplexType)`+
+### [**Azure portal**](#tab/portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has [semantic ranking enabled](semantic-how-to-enable-disable.md).
+
+1. From **Indexes** on the left-navigation pane, open an index.
+
+1. Select **Semantic Configurations** and then select **Add Semantic Configuration**.
+
+ The **New Semantic Configuration** page opens with options for selecting a title field, content fields, and keyword fields. Only searchable and retrievable string fields are eligible. Make sure to list content fields and keyword fields in priority order.
+
+ :::image type="content" source="./media/semantic-search-overview/create-semantic-config.png" alt-text="Screenshot that shows how to create a semantic configuration in the Azure portal." lightbox="./media/semantic-search-overview/create-semantic-config.png" border="true":::
+
+ Select **OK** to save the changes.
+
+### [**REST API**](#tab/rest)
+
+1. Formulate a [Create or Update Index](/rest/api/searchservice/indexes/create-or-update) request.
+
+1. Add a semantic configuration to the index definition, perhaps after `scoringProfiles` or `suggesters`. Specifying a default is optional but useful if you have more than one configuration.
+
+ ```json
+ "semantic": {
+ "defaultConfiguration": "my-semantic-config-default",
+ "configurations": [
+ {
+ "name": "my-semantic-config-default",
+ "prioritizedFields": {
+ "titleField": {
+ "fieldName": "HotelName"
+ },
+ "prioritizedContentFields": [
+ {
+ "fieldName": "Description"
+ }
+ ],
+ "prioritizedKeywordsFields": [
+ {
+ "fieldName": "Tags"
+ }
+ ]
+ }
+ },
+ {
+ "name": "my-semantic-config-desc-only",
+ "prioritizedFields": {
+ "prioritizedContentFields": [
+ {
+ "fieldName": "Description"
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ```
+
+### [**.NET SDK**](#tab/sdk)
+
+Use the [SemanticConfiguration class](/dotnet/api/azure.search.documents.indexes.models.semanticconfiguration?view=azure-dotnet&branch=main&preserve-view=true) in the Azure SDK for .NET.
+
+The following example is from the [semantic ranking sample](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample08_SemanticSearch.md) authored by the Azure SDK team.
+
+```c#
+string indexName = "hotel";
+SearchIndex searchIndex = new(indexName)
+{
+ Fields =
+ {
+ new SimpleField("HotelId", SearchFieldDataType.String) { IsKey = true, IsFilterable = true, IsSortable = true, IsFacetable = true },
+ new SearchableField("HotelName") { IsFilterable = true, IsSortable = true },
+ new SearchableField("Description") { IsFilterable = true },
+ new SearchableField("Category") { IsFilterable = true, IsSortable = true, IsFacetable = true },
+ },
+ SemanticSearch = new()
+ {
+ Configurations =
+ {
+ new SemanticConfiguration("my-semantic-config", new()
+ {
+ TitleField = new SemanticField("HotelName"),
+ ContentFields =
+ {
+ new SemanticField("Description")
+ },
+ KeywordsFields =
+ {
+ new SemanticField("Category")
+ }
+ })
+ }
+ }
+};
+```
+++
+## Migrate from preview versions
+
+If your semantic ranking code is using preview APIs, this section explains how to migrate to stable versions. You can check the change logs for verification of general availability:
+
++ [2023-11-01 (REST)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-11-01&preserve-view=true)
++ [Azure SDK for .NET (11.5) change log](https://github.com/Azure/azure-sdk-for-net/blob/Azure.Search.Documents_11.5.1/sdk/search/Azure.Search.Documents/CHANGELOG.md#1150-2023-11-10)
++ [Azure SDK for Python (11.4) change log](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/CHANGELOG.md#1140-2023-10-13)
++ [Azure SDK for Java (11.6) change log](https://github.com/Azure/azure-sdk-for-jav#1160-2023-11-13)
++ [Azure SDK for JavaScript (12.0) change log](https://github.com/Azure/azure-sdk-for-js/blob/%40azure/search-documents_12.0.0/sdk/search/search-documents/CHANGELOG.md#1200-2023-11-13)
+
+**Behavior changes:**
+
++ As of July 14, 2023, semantic ranker is language agnostic. It can rerank results composed of multilingual content, with no bias towards a specific language. In preview versions, semantic ranking would deprioritize results differing from the language specified by the field analyzer.
+
++ In 2021-04-30-Preview and all later versions, for the REST API and all SDK packages targeting the same version: `semanticConfiguration` (in an index definition) defines which search fields are used in semantic ranking. Previously in the 2020-06-30-Preview REST API, `searchFields` (in a query request) was used for field specification and prioritization. This approach only worked in 2020-06-30-Preview and is obsolete in all other versions.
+
+### Step 1: Remove queryLanguage
+
+The semantic ranking engine is now language agnostic. If `queryLanguage` is specified in your query logic, it's no longer used for semantic ranking, but still applies to [spell correction](speller-how-to-add.md).
+
+Keep `queryLanguage` if you're using speller, and if the language value is [supported by speller](speller-how-to-add.md#supported-languages). Spell check has limited availability across languages.
+
+Otherwise, delete `queryLanguage`.
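For the keep case, a request body might look like the following sketch (assuming a preview API version that supports speller, the hotels-sample-index, and a semantic configuration named my-semantic-config):

```json
{
  "search": "historic hotel with good food",
  "queryType": "semantic",
  "semanticConfiguration": "my-semantic-config",
  "speller": "lexicon",
  "queryLanguage": "en-us"
}
```

If you later remove speller, remove `queryLanguage` along with it.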
+
+### Step 2: Replace `searchFields` with `semanticConfiguration`
+
+If your code calls the 2020-06-30-Preview REST API or beta SDK packages targeting that REST API version, you might be using `searchFields` in a query request to specify semantic fields and priorities. In initial beta versions, `searchFields` had a dual purpose, constraining the initial query to the fields listed in `searchFields`, and also setting field priority if semantic ranking was used. In later versions, `searchFields` retains its original purpose, but is no longer used for semantic ranking.
+
+Keep `searchFields` in query requests if you're using it to limit full text search to the list of named fields.
+
+Add a `semanticConfiguration` to an index schema to specify field prioritization, following the [instructions in this article](#add-a-semantic-configuration).
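As a sketch of the end state (field and configuration names are illustrative), the query keeps `searchFields` only to scope full text search, while field prioritization lives in the index's semantic configuration:

```json
{
  "search": "historic hotel with good food",
  "searchFields": "HotelName, Description",
  "queryType": "semantic",
  "semanticConfiguration": "my-semantic-config"
}
```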
+
+## Next steps
+
+Test your semantic configuration by running a semantic query.
+
+> [!div class="nextstepaction"]
+> [Create a semantic query](semantic-how-to-query-request.md)
search Semantic How To Enable Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-enable-disable.md
- ignite-2023 Previously updated : 12/12/2023 Last updated : 02/08/2024 # Enable or disable semantic ranker Semantic ranker is a premium feature that's billed by usage. By default, semantic ranker is disabled on all services.
+## Check availability
+
+Check the [Products Available by Region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page on the Azure web site to see if your region is listed.
+ ## Enable semantic ranking Follow these steps to enable [semantic ranker](semantic-search-overview.md) at the service level. Once enabled, it's available to all indexes. You can't turn it on or off for specific indexes.
Follow these steps to enable [semantic ranker](semantic-search-overview.md) at t
1. Open the [Azure portal](https://portal.azure.com).
-1. Navigate to your search service. The service must be a billable tier.
-
-1. Determine whether the service region supports semantic ranking:
-
- 1. Find your service region in the overview page in the Azure portal.
-
- 1. Check the [Products Available by Region](https://azure.microsoft.com/global-infrastructure/services/?products=search) page on the Azure web site to see if your region is listed.
+1. Navigate to your search service. On the **Overview** page, make sure the service is a billable tier, Basic or higher.
1. On the left-nav pane, select **Semantic ranking**. 1. Select either the **Free plan** or the **Standard plan**. You can switch between the free plan and the standard plan at any time.
+ :::image type="content" source="media/semantic-search-overview/semantic-search-billing.png" alt-text="Screenshot of enabling semantic ranking in the Azure portal." border="true":::
The free plan is capped at 1,000 queries per month. After the first 1,000 queries, the next semantic query you issue returns an error message letting you know you've exhausted your quota. When this happens, upgrade to the standard plan to continue using semantic ranking.
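You can also set the plan programmatically. The following is a minimal sketch that assumes the Management REST API (api-version 2023-11-01) and placeholder names; it's sent as a PUT to `https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Search/searchServices/{service-name}?api-version=2023-11-01`:

```json
{
  "location": "eastus",
  "sku": {
    "name": "basic"
  },
  "properties": {
    "semanticSearch": "standard"
  }
}
```

Valid values for `semanticSearch` are `disabled`, `free`, and `standard`.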
To re-enable semantic ranking, rerun the above request, setting "semanticSearch"
## Next steps
-[Configure semantic ranking](semantic-how-to-query-request.md) so that you can test out semantic ranking on your content.
+[Configure semantic ranking](semantic-how-to-configure.md) so that you can test out semantic ranking on your content.
search Semantic How To Query Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-how-to-query-request.md
Title: Configure semantic ranker
+ Title: Query with semantic ranking
description: Set a semantic query type to attach the deep learning models of semantic ranking.
- ignite-2023 Previously updated : 12/12/2023 Last updated : 02/08/2024
-# Configure semantic ranking and return captions in search results
+# Create a semantic query in Azure AI Search
In this article, learn how to invoke a semantic ranking over a result set, promoting the most semantically relevant results to the top of the stack. You can also get semantic captions, with highlights over the most relevant terms and phrases, and [semantic answers](semantic-answers.md).
-To use semantic ranker:
-
-+ Add a semantic configuration to an index
-+ Add parameters to a query request
- ## Prerequisites
-+ A search service on Basic, Standard tier (S1, S2, S3), or Storage Optimized tier (L1, L2), subject to [region availability](https://azure.microsoft.com/global-infrastructure/services/?products=search).
-
-+ Semantic ranker [enabled on your search service](semantic-how-to-enable-disable.md).
++ A search service, Basic tier or higher, with [semantic ranking](semantic-how-to-enable-disable.md).
-+ An existing search index with rich text content. Semantic ranking applies to text (non-vector) fields and works best on content that is informational or descriptive.
++ An existing search index with a [semantic configuration](semantic-how-to-configure.md) and rich text content.

+ Review [semantic ranking](semantic-search-overview.md) if you need an introduction to the feature.

> [!NOTE]
> Captions and answers are extracted verbatim from text in the search document. The semantic subsystem uses machine reading comprehension to recognize content having the characteristics of a caption or answer, but doesn't compose new sentences or phrases. For this reason, content that includes explanations or definitions works best for semantic ranking. If you want chat-style interaction with generated responses, see [Retrieval Augmented Generation (RAG)](retrieval-augmented-generation-overview.md).
-## 1 - Choose a client
+## Choose a client
Choose a search client that supports semantic ranking. Here are some options:
-+ [Azure portal (Search explorer)](search-explorer.md), recommended for initial exploration.
-
-+ [Postman app](https://www.postman.com/downloads/) using [REST APIs](/rest/api/searchservice/). See this [Quickstart](search-get-started-rest.md) for help with setting up REST calls.
-
-+ [Azure.Search.Documents](https://www.nuget.org/packages/Azure.Search.Documents) in the Azure SDK for .NET.
-
-+ [Azure.Search.Documents](https://pypi.org/project/azure-search-documents) in the Azure SDK for Python.
-
-+ [azure-search-documents](https://central.sonatype.com/artifact/com.azure/azure-search-documents) in the Azure SDK for Java.
-
-+ [@azure/search-documents](https://www.npmjs.com/package/@azure/search-documents) in the Azure SDK for JavaScript.
-
-## 2 - Create a semantic configuration
-
-A *semantic configuration* is a section in your index that establishes field inputs for semantic ranking. You can add or update a semantic configuration at any time, no rebuild necessary. If you create multiple configurations, you can specify a default. At query time, specify a semantic configuration on a [query request](#4set-up-the-query), or leave it blank to use the default.
-
-A semantic configuration has a name and the following properties:
-
-| Property | Characteristics |
-|-|--|
-| Title field | A short string, ideally under 25 words. This field could be the title of a document, name of a product, or a unique identifier. If you don't have suitable field, leave it blank. |
-| Content fields | Longer chunks of text in natural language form, subject to [maximum token input limits](semantic-search-overview.md#how-inputs-are-collected-and-summarized) on the machine learning models. Common examples include the body of a document, description of a product, or other free-form text. |
-| Keyword fields | A list of keywords, such as the tags on a document, or a descriptive term, such as the category of an item. |
-
-You can only specify one title field, but you can have as many content and keyword fields as you like. For content and keyword fields, list the fields in priority order because lower priority fields might get truncated.
-
-Across all semantic configuration properties, the fields you assign must be:
-
-+ Attributed as `searchable` and `retrievable`
-+ Strings of type `Edm.String`, `Collection(Edm.String)`, string subfields of `Collection(Edm.ComplexType)`
-
-### [**Azure portal**](#tab/portal)
-
-1. Sign in to the [Azure portal](https://portal.azure.com) and navigate to a search service that has [semantic ranking enabled](semantic-how-to-enable-disable.md).
-
-1. Open an index.
++ [Azure portal](https://portal.azure.com), using the index designer to add a semantic configuration.
++ [Postman app](https://www.postman.com/downloads/) using [REST APIs](/rest/api/searchservice/)
++ [Azure SDK for .NET](https://www.nuget.org/packages/Azure.Search.Documents)
++ [Azure SDK for Python](https://pypi.org/project/azure-search-documents)
++ [Azure SDK for Java](https://central.sonatype.com/artifact/com.azure/azure-search-documents)
++ [Azure SDK for JavaScript](https://www.npmjs.com/package/@azure/search-documents)
-1. Select **Semantic Configurations** and then select **Add Semantic Configuration**.
-
- The **New Semantic Configuration** page opens with options for selecting a title field, content fields, and keyword fields. Make sure to list content fields and keyword fields in priority order.
-
- :::image type="content" source="./media/semantic-search-overview/create-semantic-config.png" alt-text="Screenshot that shows how to create a semantic configuration in the Azure portal." border="true":::
-
- Select **OK** to save the changes.
-
-### [**REST API**](#tab/rest)
-
-1. Formulate a [Create or Update Index](/rest/api/searchservice/indexes/create-or-update) request.
-
-1. Add a semantic configuration to the index definition, perhaps after `scoringProfiles` or `suggesters`. Specifying a default is optional but useful if you have more than one configuration.
-
- ```json
- "semantic": {
- "defaultConfiguration": "my-semantic-config-default",
- "configurations": [
- {
- "name": "my-semantic-config-default",
- "prioritizedFields": {
- "titleField": {
- "fieldName": "HotelName"
- },
- "prioritizedContentFields": [
- {
- "fieldName": "Description"
- }
- ],
- "prioritizedKeywordsFields": [
- {
- "fieldName": "Tags"
- }
- ]
- }
- },
- {
- "name": "my-semantic-config-desc-only",
- "prioritizedFields": {
- "prioritizedContentFields": [
- {
- "fieldName": "Description"
- }
- ]
- }
- }
- ]
- }
- ```
-
-### [**.NET SDK**](#tab/sdk)
-
-Use the [SemanticConfiguration class](/dotnet/api/azure.search.documents.indexes.models.semanticconfiguration?view=azure-dotnet-preview&preserve-view=true) in the Azure SDK for .NET.
-
-```c#
-var definition = new SearchIndex(indexName, searchFields);
-
-SemanticSettings semanticSettings = new SemanticSettings();
-semanticSettings.Configurations.Add(new SemanticConfiguration
- (
- "my-semantic-config",
- new PrioritizedFields()
- {
- TitleField = new SemanticField { FieldName = "HotelName" },
- ContentFields = {
- new SemanticField { FieldName = "Description" },
- new SemanticField { FieldName = "Description_fr" }
- },
- KeywordFields = {
- new SemanticField { FieldName = "Tags" },
- new SemanticField { FieldName = "Category" }
- }
- }
- )
-);
-
-definition.SemanticSettings = semanticSettings;
-
-adminClient.CreateOrUpdateIndex(definition);
-```
---
-> [!TIP]
-> To see an example of creating a semantic configuration and using it to issue a semantic query, check out the [semantic ranking Postman sample](https://github.com/Azure-Samples/azure-search-postman-samples/tree/main/semantic-search).
-
-## 3 - Avoid features that bypass relevance scoring
+## Avoid features that bypass relevance scoring
Several query capabilities in Azure AI Search bypass relevance scoring or are otherwise incompatible with semantic ranking. If your query logic includes the following features, you can't semantically rank your results:
-+ A query with `search=*` or an empty search string, such as pure filter-only query, won't work because there is nothing to measure semantic relevance against. The query must provide terms or phrases that can be assessed during processing.
++ A query with `search=*` or an empty search string, such as a pure filter-only query, won't work because there's nothing to measure semantic relevance against. The query must provide terms or phrases that can be assessed during processing.

+ A query composed in the [full Lucene syntax](query-lucene-syntax.md) (`queryType=full`) is incompatible with semantic ranking (`queryType=semantic`). The semantic model doesn't support the full Lucene syntax.

+ Sorting (orderBy clauses) on specific fields overrides search scores and a semantic score. Given that the semantic score is supposed to provide the ranking, adding an orderby clause results in an HTTP 400 error if you apply semantic ranking over ordered results.
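For example, a request like the following sketch (illustrative field names) fails with an HTTP 400 error because the `orderby` clause conflicts with semantic ranking:

```json
{
  "search": "historic hotel with good food",
  "queryType": "semantic",
  "semanticConfiguration": "my-semantic-config",
  "orderby": "Rating desc"
}
```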
-## 4 - Set up the query
+## Set up the query
In this step, add parameters to the query request. To be successful, your query should be full text search (using the `search` parameter to pass in a string), and the index should contain text fields with rich semantic content and a semantic configuration. ### [**Azure portal**](#tab/portal-query)
-[Search explorer](search-explorer.md) has been updated to include options for semantic ranking.
+[Search explorer](search-explorer.md) includes options for semantic ranking.
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Open your search index and select **Search explorer**.
+1. Open a search index and select **Search explorer**.
-1. There are two ways to specify the query, JSON or options. Using JSON, you can paste definitions into the query editor:
+1. Select **Query options**. If you already defined a semantic configuration, it's selected by default. If you don't have one, [create a semantic configuration](semantic-how-to-configure.md) for your index.
- :::image type="content" source="./media/semantic-search-overview/semantic-portal-json-query.png" alt-text="Screenshot showing JSON query syntax in the Azure portal." border="true":::
+ :::image type="content" source="./media/semantic-search-overview/search-explorer-semantic-query-options-v2.png" alt-text="Screenshot showing query options in Search explorer." border="true":::
-1. Using options, specify that you want to use semantic ranking and to create a configuration. If you don't see these options, make sure semantic ranking is enabled and also refresh your browser.
+1. Enter a query, such as "historic hotel with good food", and select **Search**.
- :::image type="content" source="./media/semantic-search-overview/search-explorer-semantic-query-options-v2.png" alt-text="Screenshot showing query options in Search explorer." border="true":::
+1. Alternatively, select **JSON view** and paste definitions into the query editor:
+
+ :::image type="content" source="./media/semantic-search-overview/semantic-portal-json-query.png" alt-text="Screenshot showing JSON query syntax in the Azure portal." border="true":::
### [**REST API**](#tab/rest-query)
The following example in this section uses the [hotels-sample-index](search-get-
1. Set "search" to a full text search query based on the [simple syntax](query-simple-syntax.md). Semantic ranking is an extension of full text search, so while this parameter isn't required, you won't get an expected outcome if it's null.
-1. Set "semanticConfiguration" to a [predefined semantic configuration](#2create-a-semantic-configuration) that's embedded in your index.
+1. Set "semanticConfiguration" to a [predefined semantic configuration](semantic-how-to-configure.md) that's embedded in your index.
1. Set "answers" to specify whether [semantic answers](semantic-answers.md) are included in the result. Currently, the only valid value for this parameter is `extractive`. Answers can be configured to return a maximum of 10. The default is one. This example shows a count of three answers: `extractive|count-3`.
The following example in this section uses the [hotels-sample-index](search-get-
### [**.NET SDK**](#tab/dotnet-query)
-Azure SDKs are on independent release cycles and implement search features on their own timeline. Check the change log for each package to verify general availability for semantic ranking.
+Use `QueryType` or `SemanticQuery` to invoke semantic ranking on a query. The [following example](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/search/Azure.Search.Documents/samples/Sample08_SemanticSearch.md) is from the Azure SDK team.
-| Azure SDK | Package |
-|--||
-| .NET | [Azure.Search.Documents package](https://www.nuget.org/packages/Azure.Search.Documents) |
-| Java | [azure-search-documents](https://central.sonatype.com/artifact/com.azure/azure-search-documents) |
-| JavaScript | [azure/search-documents](https://www.npmjs.com/package/@azure/search-documents)|
-| Python | [azure-search-document](https://pypi.org/project/azure-search-documents) |
+```csharp
+SearchResults<Hotel> response = await searchClient.SearchAsync<Hotel>(
+ "Is there any hotel located on the main commercial artery of the city in the heart of New York?",
+ new SearchOptions
+ {
+ SemanticSearch = new()
+ {
+ SemanticConfigurationName = "my-semantic-config",
+ QueryCaption = new(QueryCaptionType.Extractive),
+ QueryAnswer = new(QueryAnswerType.Extractive)
+ },
+ QueryLanguage = QueryLanguage.EnUs,
+ QueryType = SearchQueryType.Semantic
+ });
+
+int count = 0;
+Console.WriteLine($"Semantic Search Results:");
+
+Console.WriteLine($"\nQuery Answer:");
+foreach (QueryAnswerResult result in response.SemanticSearch.Answers)
+{
+ Console.WriteLine($"Answer Highlights: {result.Highlights}");
+ Console.WriteLine($"Answer Text: {result.Text}");
+}
+
+await foreach (SearchResult<Hotel> result in response.GetResultsAsync())
+{
+ count++;
+ Hotel doc = result.Document;
+ Console.WriteLine($"{doc.HotelId}: {doc.HotelName}");
+
+ if (result.SemanticSearch.Captions != null)
+ {
+ var caption = result.SemanticSearch.Captions.FirstOrDefault();
+ if (caption.Highlights != null && caption.Highlights != "")
+ {
+ Console.WriteLine($"Caption Highlights: {caption.Highlights}");
+ }
+ else
+ {
+ Console.WriteLine($"Caption Text: {caption.Text}");
+ }
+ }
+}
+Console.WriteLine($"Total number of search results:{count}");
+```
-## 5 - Evaluate the response
+## Evaluate the response
Only the top 50 matches from the initial results can be semantically ranked. As with all queries, a response is composed of all fields marked as retrievable, or just those fields listed in the select parameter. A response includes the original relevance score, and might also include a count, or batched results, depending on how you formulated the request.
In semantic ranking, the response has more elements: a new semantically ranked r
In a client app, you can structure the search page to include a caption as the description of the match, rather than the entire contents of a specific field. This approach is useful when individual fields are too dense for the search results page.
-The response for the above example query returns the following match as the top pick. Captions are returned because the "captions" property is set, with plain text and highlighted versions. Answers are omitted from the example because one couldn't be determined for this particular query and corpus.
+The response for the above example query returns the following match as the top pick. Captions are returned because the "captions" property is set, with plain text and highlighted versions. Answers are omitted from the example because one couldn't be determined for this particular query and corpus.
```json "@odata.count": 35,
The response for the above example query returns the following match as the top
] ```
-## Migrate from preview versions
-
-If your semantic ranking code is using preview APIs, this section explains how to migrate to stable versions. Generally available versions include:
-
-+ [2023-11-01 (REST)](/rest/api/searchservice/)
-+ [Azure.Search.Documents (Azure SDK for .NET)](https://www.nuget.org/packages/Azure.Search.Documents/)
-
-**Behavior changes:**
-
-+ As of July 14, 2023, semantic ranker is language agnostic. It can rerank results composed of multilingual content, with no bias towards a specific language. In preview versions, semantic ranking would deprioritize results differing from the language specified by the field analyzer.
-
-+ In 2021-04-30-Preview and all later versions, `semanticConfiguration` (in an index definition) defines which search fields are used in semantic ranking. In the 2020-06-30-Preview REST API, `searchFields` (in a query request) was used for field specification and prioritization. This approach only worked in 2020-06-30-Preview and is obsolete in all other versions.
-
-### Step 1: Remove queryLanguage
-
-The semantic ranking engine is now language agnostic. If `queryLanguage` is specified in your query logic, it's no longer used for semantic ranking, but still applies to [spell correction](speller-how-to-add.md).
-
-+ Use [Search POST](/rest/api/searchservice/documents/search-post) and remove `queryLanguage` for semantic ranking purposes.
-
-### Step 2: Add semanticConfiguration
-
-If your code calls the 2020-06-30-Preview REST API or beta SDK packages targeting that REST API version, you might be using `searchFields` in a query request to specify semantic fields and priorities. This code must now be updated to use `semanticConfiguration` instead.
-
-+ [Create or Update Index](/rest/api/searchservice/indexes/create-or-update) to add `semanticConfiguration`.
- ## Next steps
-Recall that semantic ranking and responses are built over an initial result set. Any logic that improves the quality of the initial results carry forward to semantic ranking. As a next step, review the features that contribute to initial results, including analyzers that affect how strings are tokenized, scoring profiles that can tune results, and the default relevance algorithm.
+Semantic ranking can be used in hybrid queries that combine keyword search and vector search into a single request and a unified response.
-+ [Analyzers for text processing](search-analyzers.md)
-+ [Configure BM25 relevance scoring](index-similarity-and-scoring.md)
-+ [Relevance scoring in hybrid search using Reciprocal Rank Fusion (RRF)](hybrid-search-ranking.md)
-+ [Add scoring profiles](index-add-scoring-profiles.md)
-+ [Semantic ranking overview](semantic-search-overview.md)
+> [!div class="nextstepaction"]
+> [Hybrid query with semantic ranking](hybrid-search-how-to-query.md#semantic-hybrid-search)
search Semantic Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/semantic-search-overview.md
- ignite-2023 Previously updated : 12/12/2023 Last updated : 02/08/2024 # Semantic ranking in Azure AI Search
-In Azure AI Search, *semantic ranking* measurably improves search relevance by using language understanding to rerank search results. This article is a high-level introduction to the semantic ranker. The [embedded video](#semantic-capabilities-and-limitations) describes the technology, and the section at the end covers availability and pricing.
+In Azure AI Search, *semantic ranking* measurably improves search relevance by using language understanding to rerank search results. This article is a high-level introduction. The section at the end covers [availability and pricing](#availability-and-pricing).
Semantic ranker is a premium feature, billed by usage. We recommend this article for background, but if you'd rather get started, follow these steps: > [!div class="checklist"]
-> * [Check regional availability](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=search).
-> * [Enable semantic ranking](semantic-how-to-enable-disable.md) on your search service.
-> * Create or modify queries to [return semantic captions and highlights](semantic-how-to-query-request.md).
-> * Add a few more query properties to also [return semantic answers](semantic-answers.md).
+> * [Check regional availability](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=search)
+> * [Sign in to Azure portal](https://portal.azure.com) to verify your search service is Basic or higher
+> * [Enable semantic ranking and choose a pricing plan](semantic-how-to-enable-disable.md)
+> * [Set up a semantic configuration in a search index](semantic-how-to-configure.md)
+> * [Set up queries to return semantic captions and highlights](semantic-how-to-query-request.md)
+> * [Optionally, return semantic answers](semantic-answers.md)
> [!NOTE]
-> Looking for vector support and similarity search? See [Vector search in Azure AI Search](vector-search-overview.md) for details.
+> Semantic ranking doesn't use generative AI or vectors. If you're looking for vector support and similarity search, see [Vector search in Azure AI Search](vector-search-overview.md) for details.
## What is semantic ranking?
Semantic ranker is a collection of query-related capabilities that improve the q
* Second, it extracts and returns captions and answers in the response, which you can render on a search page to improve the user's search experience.
-Here are the capabilities of the semantic ranker.
+Here are the capabilities of the semantic reranker.
| Feature | Description | ||-|
Here are the capabilities of the semantic ranker.
## How semantic ranker works
-Semantic ranking looks for context and relatedness among terms, elevating matches that make more sense for the query.
+Semantic ranking feeds a query and results to language understanding models hosted by Microsoft and scans for better matches.
The following illustration explains the concept. Consider the term "capital". It has different meanings depending on whether the context is finance, law, geography, or grammar. Through language understanding, the semantic ranker can detect context and promote results that fit query intent.
In semantic ranking, the query subsystem passes search results as an input to su
1. Semantic ranking starts with a [BM25-ranked result](index-ranking-similarity.md) from a text query or an [RRF-ranked result](hybrid-search-ranking.md) from a hybrid query. Only text fields are used in the reranking exercise, and only the top 50 results progress to semantic ranking, even if results include more than 50. Typically, fields used in semantic ranking are informational and descriptive.
-1. For each document in the search result, the summarization model accepts up to 2,000 tokens, where a token is approximately 10 characters. Inputs are assembled from the "title", "keyword", and "content" fields listed in the [semantic configuration](semantic-how-to-query-request.md#2create-a-semantic-configuration).
+1. For each document in the search result, the summarization model accepts up to 2,000 tokens, where a token is approximately 10 characters. Inputs are assembled from the "title", "keyword", and "content" fields listed in the [semantic configuration](semantic-how-to-configure.md).
1. Excessively long strings are trimmed to ensure the overall length meets the input requirements of the summarization step. This trimming exercise is why it's important to add fields to your semantic configuration in priority order. If you have very large documents with text-heavy fields, anything after the maximum limit is ignored.
sentinel Connect Log Forwarder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-log-forwarder.md
# Deploy a log forwarder to ingest Syslog and CEF logs to Microsoft Sentinel
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ To ingest Syslog and CEF logs into Microsoft Sentinel, particularly from devices and appliances onto which you can't install the Log Analytics agent directly, you'll need to designate and configure a Linux machine that will collect the logs from your devices and forward them to your Microsoft Sentinel workspace. This machine can be a physical or virtual machine in your on-premises environment, an Azure VM, or a VM in another cloud. This machine has two components that take part in this process:
sentinel Connect Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-syslog.md
# Collect data from Linux-based sources using Syslog
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ [!INCLUDE [reference-to-feature-availability](includes/reference-to-feature-availability.md)] **Syslog** is an event logging protocol that is common to Linux. You can use the Syslog daemon built into Linux devices and appliances to collect local events of the types you specify, and have it send those events to Microsoft Sentinel using the **Log Analytics agent for Linux** (formerly known as the OMS agent).
sentinel Troubleshooting Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/troubleshooting-cef-syslog.md
# Troubleshoot your CEF or Syslog data connector
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes common methods for verifying and troubleshooting a CEF or Syslog data connector for Microsoft Sentinel. For example, if your logs are not appearing in Microsoft Sentinel, either in the Syslog or the Common Security Log tables, your data source may be failing to connect or there may be another reason your data is not being ingested.
service-fabric How To Managed Cluster Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-application-gateway.md
The following section describes the steps that should be taken to use Azure Appl
Note the `Role definition name` and `Role definition ID` property values for use in a later step
- B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) adds a role assignment to the application gateway with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of template with PrincipalId and a role definition ID determined from the first step.
+ B. The [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-AppGateway) adds a role assignment to the application gateway with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of template with PrincipalId and a role definition ID determined from the first step.
```json
The following section describes the steps that should be taken to use Azure Appl
-ResourceGroupName <resourceGroupName> ```
-4. Use a [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-1-NT-DDoSNwProtection) that assigns roles and adds application gateway configuration as part of the service fabric managed cluster creation. Update the template with `principalId`, `appGatewayName`, and `appGatewayBackendPoolId` obtained above.
+4. Use a [sample ARM deployment template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-AppGateway) that assigns roles and adds application gateway configuration as part of the service fabric managed cluster creation. Update the template with `principalId`, `appGatewayName`, and `appGatewayBackendPoolId` obtained above.
5. You can also modify your existing ARM template and add new property `appGatewayBackendPoolId` under Microsoft.ServiceFabric/managedClusters resource that takes the resource ID of the application gateway. #### ARM template:
site-recovery Azure Stack Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-stack-site-recovery.md
# Replicate Azure Stack VMs to Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article shows you how to set up disaster recovery Azure Stack VMs to Azure, using the [Azure Site Recovery service](site-recovery-overview.md). Site Recovery contributes to your business continuity and disaster recovery (BCDR) strategy. The service ensures that your VM workloads remain available when expected and unexpected outages occur.
In this article we replicated Azure Stack VMs to Azure. With replication in plac
## Next steps
-After failing back, you can reprotect the VM and start replicating it to Azure again To do this, repeat the steps in this article.
+After failing back, you can reprotect the VM and start replicating it to Azure again. To do this, repeat the steps in this article.
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
# Support matrix for Azure VM disaster recovery between Azure regions
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article summarizes support and prerequisites for disaster recovery of Azure VMs from one Azure region to another, using the [Azure Site Recovery](site-recovery-overview.md) service. ## Deployment method support
site-recovery Azure To Azure Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-troubleshoot-errors.md
# Troubleshoot Azure-to-Azure VM replication errors
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes how to troubleshoot common errors in Azure Site Recovery during replication and recovery of [Azure virtual machines](azure-to-azure-tutorial-enable-replication.md) (VM) from one region to another. For more information about supported configurations, see the [support matrix for replicating Azure VMs](azure-to-azure-support-matrix.md). ## Azure resource quota issues (error code 150097)
site-recovery Azure Vm Disaster Recovery With Accelerated Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-vm-disaster-recovery-with-accelerated-networking.md
# Accelerated Networking with Azure virtual machine disaster recovery
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the datapath, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types. The following picture shows communication between two VMs with and without accelerated networking: :::image type="content" source="./media/azure-vm-disaster-recovery-with-accelerated-networking/accelerated-networking-benefit.png" alt-text="Screenshot of difference between accelerated and non-accelerated networking." lightbox="./media/azure-vm-disaster-recovery-with-accelerated-networking/accelerated-networking-benefit.png":::
site-recovery How To Enable Replication Proximity Placement Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-enable-replication-proximity-placement-groups.md
Last updated 08/01/2023
# Replicate virtual machines running in a proximity placement group to another region
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes how to replicate, fail over, and fail back Azure virtual machines (VMs) running in a proximity placement group to a secondary region. [Proximity placement groups](../virtual-machines/windows/proximity-placement-groups-portal.md) are a logical grouping capability in Azure Virtual Machines. You can use them to decrease the inter-VM network latency associated with your applications.
site-recovery Site Recovery Failover To Azure Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-failover-to-azure-troubleshoot.md
# Troubleshoot errors when failing over VMware VM or physical machine to Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ You may receive one of the following errors while doing failover of a virtual machine to Azure. To troubleshoot, use the described steps for each error condition. ## Failover failed with Error ID 28031
site-recovery Site Recovery Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new-archive.md
Last updated 12/27/2023
# Archive for What's new in Site Recovery
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article contains information on older features and updates in the Azure Site Recovery service. The primary [What's new in Azure Site Recovery](./site-recovery-whats-new.md) article contains the latest updates. ## Updates (November 2021)
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
# What's new in Site Recovery
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ The [Azure Site Recovery](site-recovery-overview.md) service is updated and improved on an ongoing basis. To help you stay up-to-date, this article provides you with information about the latest releases, new features, and new content. This page is updated regularly. You can follow and subscribe to Site Recovery update notifications in the [Azure updates](https://azure.microsoft.com/updates/?product=site-recovery) channel.
site-recovery Vmware Azure Disaster Recovery Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-disaster-recovery-powershell.md
# Set up disaster recovery of VMware VMs to Azure with PowerShell
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ In this article, you see how to replicate and failover VMware virtual machines to Azure using Azure PowerShell. You learn how to:
site-recovery Vmware Azure Install Linux Master Target https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-linux-master-target.md
Last updated 08/01/2023
# Install a Linux master target server for failback+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ After you fail over your virtual machines to Azure, you can fail back the virtual machines to the on-premises site. To fail back, you need to reprotect the virtual machine from Azure to the on-premises site. For this process, you need an on-premises master target server to receive the traffic. If your protected virtual machine is a Windows virtual machine, then you need a Windows master target. For a Linux virtual machine, you need a Linux master target. Read the following steps to learn how to create and install a Linux master target.
site-recovery Vmware Azure Install Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-install-mobility-service.md
# Prepare source machine for push installation of mobility agent
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ When you set up disaster recovery for VMware VMs and physical servers using [Azure Site Recovery](site-recovery-overview.md), you install the [Site Recovery Mobility service](vmware-physical-mobility-service-overview.md) on each on-premises VMware VM and physical server. The Mobility service captures data writes on the machine, and forwards them to the Site Recovery process server. ## Install on Windows machine
site-recovery Vmware Azure Mobility Install Configuration Mgr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-mobility-install-configuration-mgr.md
Last updated 05/02/2022
# Automate Mobility Service installation
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article describes how to automate installation and updates for the Mobility Service agent in [Azure Site Recovery](site-recovery-overview.md). When you deploy Site Recovery for disaster recovery of on-premises VMware VMs and physical servers to Azure, you install the Mobility Service agent on each machine you want to replicate. The Mobility Service captures data writes on the machine, and forwards them to the Site Recovery process server for replication. You can deploy the Mobility Service in a few ways:
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
# Support matrix for disaster recovery of VMware VMs and physical servers to Azure
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article summarizes supported components and settings for disaster recovery of VMware VMs and physical servers to Azure using [Azure Site Recovery](site-recovery-overview.md). >[!NOTE]
site-recovery Vmware Physical Manage Mobility Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-manage-mobility-service.md
Last updated 05/02/2023
# Manage the Mobility agent
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ You set up mobility agent on your server when you use Azure Site Recovery for disaster recovery of VMware VMs and physical servers to Azure. Mobility agent coordinates communications between your protected machine, configuration server/scale-out process server and manages data replication. This article summarizes common tasks for managing mobility agent after it's deployed. >[!TIP]
site-recovery Vmware Physical Mobility Service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-mobility-service-overview.md
# About the Mobility service for VMware VMs and physical servers
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ When you set up disaster recovery for VMware virtual machines (VM) and physical servers using [Azure Site Recovery](site-recovery-overview.md), you install the Site Recovery Mobility service on each on-premises VMware VM and physical server. The Mobility service captures data, writes on the machine, and forwards them to the Site Recovery process server. The Mobility service is installed by the Mobility service agent software that you can deploy using the following methods: - [Push installation](#push-installation): When protection is enabled via the Azure portal, Site Recovery installs the Mobility service on the server.
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-deploy-with-custom-container-image.md
Last updated 4/28/2022
# Deploy an application with a custom container image
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ > [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
The following matrix shows what features are supported in each application type.
| Spring Cloud Eureka & Config Server | ✔️ | ❌ | | | API portal for VMware Tanzu | ✔️ | ✔️ | Enterprise plan only. | | Spring Cloud Gateway for VMware Tanzu | ✔️ | ✔️ | Enterprise plan only. |
-| Application Configuration Service for VMware Tanzu | ✔️ | ❌ | Enterprise plan only.
+| Application Configuration Service for VMware Tanzu | ✔️ | ❌ | Enterprise plan only.
| Application Live View for VMware Tanzu | ✔️ | ❌ | Enterprise plan only. | | VMware Tanzu Service Registry | ✔️ | ❌ | Enterprise plan only. | | VNET | ✔️ | ✔️ | Add registry to [allowlist in NSG or Azure Firewall](#avoid-not-being-able-to-connect-to-the-container-registry-in-a-vnet). |
storage Blobfuse2 How To Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blobfuse2-how-to-deploy.md
# How to mount an Azure Blob Storage container on Linux with BlobFuse2
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article shows you how to install and configure BlobFuse2, mount an Azure blob container, and access data in the container. The basic steps are: > [Install BlobFuse2](#how-to-install-blobfuse2)
storage Network File System Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-support-how-to.md
# Mount Blob Storage by using the Network File System (NFS) 3.0 protocol
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article provides guidance on how to mount a container in Azure Blob Storage from a Linux-based Azure virtual machine (VM) or a Linux system that runs on-premises by using the Network File System (NFS) 3.0 protocol. To learn more about NFS 3.0 protocol support in Blob Storage, see [Network File System (NFS) 3.0 protocol support for Azure Blob Storage](network-file-system-protocol-support.md). ## Step 1: Create an Azure virtual network
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
# How to mount Azure Blob Storage as a file system with BlobFuse v1
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ > [!IMPORTANT] > [BlobFuse2](blobfuse2-what-is.md) is the latest version of BlobFuse and has many significant improvements over the version discussed in this article, BlobFuse v1. To learn about the improvements made in BlobFuse2, see [the list of BlobFuse2 enhancements](blobfuse2-what-is.md#blobfuse2-enhancements-from-blobfuse-v1).
storage Multiple Identity Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/multiple-identity-scenarios.md
public class ExampleService {
} ```
+#### [JavaScript](#tab/javascript)
+
+1. Inside of your project, use [npm](https://docs.npmjs.com/) to add a reference to the `@azure/identity` package. This library contains all of the necessary entities to implement `DefaultAzureCredential`. Install any other [Azure SDK libraries](https://www.npmjs.com/search?q=%40azure) which are relevant to your app.
+
+ ```bash
+ npm install --save @azure/identity @azure/storage-blob @azure/keyvault-keys
+ ```
+
+2. At the top of your `index.js` file, add the following `import` statements to import the necessary client classes for the services your app will connect to:
+
+ ```javascript
+ import { DefaultAzureCredential } from "@azure/identity";
+ import { BlobServiceClient } from "@azure/storage-blob";
+ import { KeyClient } from "@azure/keyvault-keys";
+ ```
+
+3. Within the `index.js` file, create client objects for the Azure services your app will connect to. The following examples connect to Blob Storage and Key Vault using the corresponding SDK classes.
+
+ ```javascript
+ // Azure resource names
+ const storageAccount = process.env.AZURE_STORAGE_ACCOUNT_NAME;
+ const keyVaultName = process.env.AZURE_KEYVAULT_NAME;
+
+ // Create client for Blob Storage using managed identity
+ const blobServiceClient = new BlobServiceClient(
+ `https://${storageAccount}.blob.core.windows.net`,
+ new DefaultAzureCredential()
+ );
+
+ // Create client for Key Vault using managed identity
+ const keyClient = new KeyClient(`https://${keyVaultName}.vault.azure.net`, new DefaultAzureCredential());
+
+ // Create a new key in Key Vault
+ const result = await keyClient.createKey(keyVaultName, "RSA");
+ ```
+
-When this application code runs locally, `DefaultAzureCredential` will search down a credential chain for the first available credentials. If the `Managed_Identity_Client_ID` is null locally, it will automatically use the credentials from your local Azure CLI or Visual Studio sign-in. You can read more about this process in the [Azure Identity library overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential).
+When this application code runs locally, `DefaultAzureCredential` will search a credential chain for the first available credentials. If the `Managed_Identity_Client_ID` is null locally, it will automatically use the credentials from your local Azure CLI or Visual Studio sign-in. You can read more about this process in the [Azure Identity library overview](/dotnet/api/overview/azure/Identity-readme#defaultazurecredential).
When the application is deployed to Azure, `DefaultAzureCredential` will automatically retrieve the `Managed_Identity_Client_ID` variable from the app service environment. That value becomes available when a managed identity is associated with your app.
To configure this setup in your code, make sure your application registers separ
```csharp // Get the first user-assigned managed identity ID to connect to shared storage
-var clientIDstorage = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID_Storage");
+string clientIdStorage = Environment.GetEnvironmentVariable("Managed_Identity_Client_ID_Storage");
// First blob storage client that using a managed identity BlobServiceClient blobServiceClient = new BlobServiceClient(
public class ExampleService {
} ```
+#### [JavaScript](#tab/javascript)
+
+1. Inside of your project, use [npm](https://docs.npmjs.com/) to add a reference to the `@azure/identity` package. This library contains all of the necessary entities to implement `DefaultAzureCredential`. Install any other [Azure SDK libraries](https://www.npmjs.com/search?q=%40azure) which are relevant to your app.
+
+ ```bash
+ npm install --save @azure/identity @azure/storage-blob @azure/cosmos mssql
+ ```
+
+2. At the top of your `index.js` file, add the following `import` statements to import the necessary client classes for the services your app will connect to:
+
+ ```javascript
+ import { DefaultAzureCredential } from "@azure/identity";
+ import { BlobServiceClient } from "@azure/storage-blob";
+ import { CosmosClient } from "@azure/cosmos";
+ import sql from "mssql";
+ ```
+
+3. Within the `index.js` file, create client objects for the Azure services your app will connect to. The following examples connect to Blob Storage, Cosmos DB, and Azure SQL using the corresponding SDK classes.
+
+ ```javascript
+ // Get the first user-assigned managed identity ID to connect to shared storage
+ const clientIdStorage = process.env.MANAGED_IDENTITY_CLIENT_ID_STORAGE;
+
+ // Storage account names
+ const storageAccountName1 = process.env.AZURE_STORAGE_ACCOUNT_NAME_1;
+ const storageAccountName2 = process.env.AZURE_STORAGE_ACCOUNT_NAME_2;
+
+ // First blob storage client that using a managed identity
+ const blobServiceClient = new BlobServiceClient(
+ `https://${storageAccountName1}.blob.core.windows.net`,
+ new DefaultAzureCredential({
+ managedIdentityClientId: clientIdStorage
+ })
+ );
+
+ // Second blob storage client that using a managed identity
+ const blobServiceClient2 = new BlobServiceClient(
+ `https://${storageAccountName2}.blob.core.windows.net`,
+ new DefaultAzureCredential({
+ managedIdentityClientId: clientIdStorage
+ })
+ );
+
+ // Get the second user-assigned managed identity ID to connect to shared databases
+ const clientIdDatabases = process.env.MANAGED_IDENTITY_CLIENT_ID_DATABASES;
+
+ // Cosmos DB Account endpoint
+ const cosmosDbAccountEndpoint = process.env.COSMOS_ENDPOINT;
+
+ // Create an Azure Cosmos DB client
+ const client = new CosmosClient({
+ endpoint: cosmosDbAccountEndpoint,
+ aadCredentials: new DefaultAzureCredential({
+ managedIdentityClientId: clientIdDatabases
+ })
+ });
+
+ // Open a connection to Azure SQL using a managed identity with mssql package
+ // mssql reads the environment variables to get the managed identity
+ const server = process.env.AZURE_SQL_SERVER;
+ const database = process.env.AZURE_SQL_DATABASE;
+ const port = parseInt(process.env.AZURE_SQL_PORT);
+ const type = process.env.AZURE_SQL_AUTHENTICATIONTYPE;
+
+ const config = {
+ server,
+ port,
+ database,
+ authentication: {
+ type // <- Passwordless connection
+ },
+ options: {
+ encrypt: true
+ }
+ };
+
+ await sql.connect(config);
+ ```
+ You can also associate both a user-assigned and a system-assigned managed identity with a resource at the same time. This is useful when all of the apps need access to the same shared services, but one app also depends on an additional service of its own. Because a system-assigned identity is deleted when its app is deleted, it can also help keep your environment clean.
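As a minimal sketch of assigning both identity types to an App Service app with the Azure CLI (the app, resource group, and identity names are placeholders, and the exact parameters can vary by resource type and CLI version):

```azurecli-interactive
# Assign the system-assigned identity plus an existing user-assigned identity to a web app
az webapp identity assign \
  --name myWebApp \
  --resource-group myResourceGroup \
  --identities [system] /subscriptions/<subscription-id>/resourcegroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/mySharedIdentity
```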
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
You can configure storage accounts to allow access only from specific subnets. T
You can enable a [service endpoint](../../virtual-network/virtual-network-service-endpoints-overview.md) for Azure Storage within the virtual network. The service endpoint routes traffic from the virtual network through an optimal path to the Azure Storage service. The identities of the subnet and the virtual network are also transmitted with each request. Administrators can then configure network rules for the storage account that allow requests to be received from specific subnets in a virtual network. Clients granted access via these network rules must continue to meet the authorization requirements of the storage account to access the data.
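As a brief Azure CLI sketch of this setup (the resource group, virtual network, subnet, and storage account names are placeholders):

```azurecli-interactive
# Enable the Azure Storage service endpoint on the subnet
az network vnet subnet update \
  --resource-group myResourceGroup \
  --vnet-name myVnet \
  --name mySubnet \
  --service-endpoints Microsoft.Storage

# Add a virtual network rule that allows requests from that subnet
az storage account network-rule add \
  --resource-group myResourceGroup \
  --account-name mystorageaccount \
  --vnet-name myVnet \
  --subnet mySubnet
```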
-Each storage account supports up to 200 virtual network rules. You can combine these rules with [IP network rules](#grant-access-from-an-internet-ip-range).
+Each storage account supports up to 400 virtual network rules. You can combine these rules with [IP network rules](#grant-access-from-an-internet-ip-range).
> [!IMPORTANT] > When referencing a service endpoint in a client application, it's recommended that you avoid taking a dependency on a cached IP address. The storage account IP address is subject to change, and relying on a cached IP address may result in unexpected behavior.
If you want to enable access to your storage account from a virtual network or s
## Grant access from an internet IP range
-You can use IP network rules to allow access from specific public internet IP address ranges by creating IP network rules. Each storage account supports up to 200 rules. These rules grant access to specific internet-based services and on-premises networks and block general internet traffic.
+You can use IP network rules to allow access from specific public internet IP address ranges. Each storage account supports up to 400 rules. These rules grant access to specific internet-based services and on-premises networks and block general internet traffic.
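As a short sketch (the account name and address range are illustrative), an IP network rule can be added with the Azure CLI:

```azurecli-interactive
# Allow a public IP address range (CIDR notation) to reach the storage account
az storage account network-rule add \
  --resource-group myResourceGroup \
  --account-name mystorageaccount \
  --ip-address 203.0.113.0/24
```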
### Restrictions for IP network rules
storage Files Remove Smb1 Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-remove-smb1-linux.md
# Remove SMB 1 on Linux+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
Many organizations and internet service providers (ISPs) block the port that SMB uses to communicate, port 445. This practice originates from security guidance about legacy and deprecated versions of the SMB protocol. Although SMB 3.x is an internet-safe protocol, older versions of SMB, especially SMB 1, aren't. SMB 1, also known as CIFS (Common Internet File System), is included with many Linux distributions. SMB 1 is an outdated, inefficient, and insecure protocol. The good news is that Azure Files doesn't support SMB 1. Also, starting with Linux kernel version 4.18, Linux makes it possible to disable SMB 1. We always [strongly recommend](https://aka.ms/stopusingsmb1) disabling SMB 1 on your Linux clients before using SMB file shares in production.
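As a rough sketch of what disabling SMB 1 can look like on a kernel 4.18+ client (the modprobe file name is illustrative, and your distribution's exact mechanism may differ):

```bash
# Check whether the cifs module exposes the legacy-dialect switch (kernel 4.18 and later)
modinfo -p cifs | grep disable_legacy_dialects

# Persistently disable SMB 1 (legacy dialects) for the cifs kernel module
echo "options cifs disable_legacy_dialects=Y" | sudo tee /etc/modprobe.d/disable-smb1.conf
```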
See these links for more information about Azure Files:
- [Planning for an Azure Files deployment](storage-files-planning.md) - [Use Azure Files with Linux](storage-how-to-use-files-linux.md) - [Troubleshoot SMB issues on Linux](/troubleshoot/azure/azure-storage/files-troubleshoot-linux-smb?toc=/azure/storage/files/toc.json)-- [Troubleshoot NFS issues on Linux](/troubleshoot/azure/azure-storage/files-troubleshoot-linux-nfs?toc=/azure/storage/files/toc.json)
+- [Troubleshoot NFS issues on Linux](/troubleshoot/azure/azure-storage/files-troubleshoot-linux-nfs?toc=/azure/storage/files/toc.json)
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
Metadata caching is an enhancement for SMB Azure premium file shares aimed to re
#### Snapshot support for NFS Azure premium file shares is generally available
-Customers using NFS Azure file shares can now take point-in-time snapshots of file shares. This enables users to roll back their entire filesystem to a previous point in time, or restore specific files that were accidentally deleted or corrupted. Customers using this feature can perform share-level snapshot management operations via the Azure portal, REST API, Azure PowerShell, and Azure CLI. This feature is now available in all Azure public cloud regions except West US 2. [Learn more](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots).
+Customers using NFS Azure file shares can now take point-in-time snapshots of file shares. This enables users to roll back their entire filesystem to a previous point in time, or restore specific files that were accidentally deleted or corrupted. Customers using this feature can perform share-level snapshot management operations via the Azure portal, REST API, Azure PowerShell, and Azure CLI. This feature is now available in all Azure public cloud regions. [Learn more](storage-files-how-to-mount-nfs-shares.md#nfs-file-share-snapshots).
#### Sync upload performance improvements for Azure File Sync
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
Azure Backup isn't currently supported for NFS file shares.
AzCopy isn't currently supported for NFS file shares. To copy data from an NFS Azure file share or share snapshot, use file system copy tools such as rsync or fpsync.
-NFS Azure file share snapshots are available in all Azure public cloud regions except West US 2.
+NFS Azure file share snapshots are available in all Azure public cloud regions.
### Create a snapshot
storage Storage How To Use Files Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-linux.md
# Mount SMB Azure file share on Linux+
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
[Azure Files](storage-files-introduction.md) is Microsoft's easy-to-use cloud file system. Azure file shares can be mounted in Linux distributions using the [SMB kernel client](https://wiki.samba.org/index.php/LinuxCIFS). The recommended way to mount an Azure file share on Linux is using SMB 3.1.1. By default, Azure Files requires encryption in transit, which is supported by SMB 3.0+. Azure Files also supports SMB 2.1, which doesn't support encryption in transit, but for security reasons you can't mount Azure file shares with SMB 2.1 from another Azure region or on-premises. Unless your application specifically requires SMB 2.1, use SMB 3.1.1.
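As a minimal sketch of such a mount (the storage account, share, and mount point names are placeholders; production mounts typically use a credentials file and additional options):

```bash
sudo mkdir -p /mnt/myshare

# Mount the share with SMB 3.1.1 so encryption in transit is used
sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/myshare \
  -o vers=3.1.1,username=mystorageaccount,password=$STORAGE_ACCOUNT_KEY,dir_mode=0777,file_mode=0777,serverino
```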
update-manager Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md
Last updated 12/19/2023 -+ # Support matrix for Azure Update Manager
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ This article details the Windows and Linux operating systems supported and system requirements for machines or servers managed by Azure Update Manager. The article includes the supported regions and specific versions of the Windows Server and Linux operating systems running on Azure virtual machines (VMs) or machines managed by Azure Arc-enabled servers. ## Update sources supported
Use one of the following options to perform the settings change at scale:
> Run the following PowerShell script on the server to disable first-party updates: > > ```powershell
-> $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
-> $ServiceManager.Services
+> $ServiceManager = (New-Object -com "Microsoft.Update.ServiceManager")
+> $ServiceManager.Services
> $ServiceID = "7971f918-a847-4430-9279-4a52d1efe18d" > $ServiceManager.RemoveService($ServiceId) > ```
Korea | Korea Central
Norway | Norway East Sweden | Sweden Central Switzerland | Switzerland North
-UAE | UAE North
+UAE | UAE North
United Kingdom | UK South </br> UK West
-United States | Central US </br> East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br> West US 3
+United States | Central US </br> East US </br> East US 2</br> North Central US </br> South Central US </br> West Central US </br> West US </br> West US 2 </br> West US 3
The following table lists the operating systems supported on [Azure Arc-enabled
| Windows Server 2012 R2 and higher (including Server Core) | | Windows Server 2008 R2 SP1 with PowerShell enabled and .NET Framework 4.0+ | | Ubuntu 16.04, 18.04, 20.04, and 22.04 LTS |
- | CentOS Linux 7 and 8 (x64) |
+ | CentOS Linux 7 and 8 (x64) |
| SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) |
- | Red Hat Enterprise Linux (RHEL) 7, 8, 9 (x64) |
+ | Red Hat Enterprise Linux (RHEL) 7, 8, 9 (x64) |
| Amazon Linux 2 (x64) | | Oracle 7.x, 8.x| | Debian 10 and 11|
- | Rocky Linux 8|
+ | Rocky Linux 8|
update-manager Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md
Azure Update Manager allows you to create and manage pre and post events on sche
### Alerting (preview) Azure Update Manager allows you to enable alerts to address events as captured in updates data. [Learn more](manage-alerts.md).
-### Azure Stack HCI patching (preview)
+### Azure Stack HCI patching
Azure Update Manager allows you to patch Azure Stack HCI clusters. [Learn more](/azure-stack/hci/update/azure-update-manager-23h2?toc=/azure/update-manager/toc.json&bc=/azure/update-manager/breadcrumb/toc.json)
virtual-machine-scale-sets Virtual Machine Scale Sets Scaling Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-scaling-profile.md
Once you have created the virtual machine scale set, you can manually attach vir
By default, the Azure CLI will create a scale set with a scaling profile. Omit the scaling profile parameters to create a virtual machine scale set with no scaling profile. ```azurecli-interactive
+az group create \
+  --name myResourceGroup \
+  --location westus3
az vmss create \ --name myVmss \ --resource-group myResourceGroup \
az vmss create \
### [Azure PowerShell](#tab/powershell) ```azurepowershell-interactive
+New-AzResourceGroup `
+  -Name myResourceGroup `
+  -Location westus3
$vmssConfig = New-AzVmssConfig -Location 'westus3' -PlatformFaultDomainCount 3
virtual-machines Capacity Reservation Associate Virtual Machine Scale Set Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set-flex.md
This content applies to the flexible orchestration mode. For uniform orchestrati
> [!IMPORTANT]
-> Capacity Reservations with virtual machine set using flexible orchestration is currently in public preview for FD>1. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> Capacity Reservations with virtual machine scale sets using flexible orchestration are now generally available when the fault domain count equals 1.
+>
+> Capacity Reservations with virtual machine scale sets using flexible orchestration remain in public preview when the fault domain count is greater than 1. This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
> During the preview, always attach reserved capacity during creation of new scale sets using flexible orchestration mode. There are known issues attaching capacity reservations to existing scale sets using flexible orchestration. Microsoft will update this page as more options become enabled during preview. ## Associate a new virtual machine scale set to a Capacity Reservation group
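A hedged sketch of attaching a capacity reservation group when creating a Flexible-orchestration scale set (the names, image alias, and resource ID are placeholders; confirm the current `az vmss create` parameters for your CLI version):

```azurecli-interactive
# Create a Flexible-orchestration scale set that uses an existing capacity reservation group
az vmss create \
  --name myVmssFlex \
  --resource-group myResourceGroup \
  --orchestration-mode Flexible \
  --image Ubuntu2204 \
  --capacity-reservation-group /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/capacityReservationGroups/myCapacityReservationGroup
```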
vpn-gateway Vpn Gateway Validate Throughput To Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-validate-throughput-to-vnet.md
# How to validate VPN throughput to a virtual network
+> [!CAUTION]
+> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
+ A VPN gateway connection enables you to establish secure, cross-premises connectivity between your Virtual Network within Azure and your on-premises IT infrastructure. This article shows how to validate network throughput from the on-premises resources to an Azure virtual machine (VM).
The following diagram shows the logical connectivity of an on-premises network t
1. Determine your Azure VPN gateway throughput limits. For help, see the "Gateway SKUs" section of [About VPN Gateway](vpn-gateway-about-vpngateways.md#gwsku). 1. Determine the [Azure VM throughput guidance](../virtual-machines/sizes.md) for your VM size. 1. Determine your Internet Service Provider (ISP) bandwidth.
-1. Calculate your expected throughput by taking the least bandwidth of either the VM, VPN Gateway, or ISP; which is measured in Megabits-per-second (/) divided by eight (8). This calculation gives you Megabytes-per-second.
+1. Calculate your expected throughput by taking the smallest of the VM, VPN gateway, and ISP bandwidths, measured in megabits per second (Mbps), and dividing it by eight (8). The result is your expected throughput in megabytes per second (MBps); see the quick worked example after this list.
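As a quick worked example with hypothetical numbers, if the VM supports 1,000 Mbps, the gateway SKU 650 Mbps, and the ISP 500 Mbps, the ISP is the bottleneck:

```bash
# 500 Mbps (the smallest of VM, gateway, and ISP) divided by 8 bits per byte
echo "scale=1; 500 / 8" | bc   # 62.5 MBps expected throughput
```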
If your calculated throughput does not meet your application's baseline throughput requirements, you must increase the bandwidth of the resource that you identified as the bottleneck. To resize an Azure VPN Gateway, see [Changing a gateway SKU](vpn-gateway-about-vpn-gateway-settings.md#gwsku). To resize a virtual machine, see [Resize a VM](../virtual-machines/resize-vm.md). If you are not experiencing the expected Internet bandwidth, you may also contact your ISP.
Make install is fast
> [!Note] > Make sure there are no intermediate hops (e.g. Virtual Appliance) during the throughput testing in between the VM and Gateway.
-> If there are poor results (in terms of overall throughput) coming from the iPERF/NTTTCP tests above, please refer to [this article](../virtual-network/virtual-network-tcpip-performance-tuning.md) to understand the key factors behind the possible root causes of the problem:
+> If there are poor results (in terms of overall throughput) coming from the iPERF/NTTTCP tests above, please refer to [this article](../virtual-network/virtual-network-tcpip-performance-tuning.md) to understand the key factors behind the possible root causes of the problem:
In particular, analysis of packet capture traces (Wireshark/Network Monitor) collected in parallel from the client and server during those tests helps in assessing poor performance. These traces can show packet loss, high latency, MTU size, fragmentation, TCP zero window, out-of-order fragments, and so on.
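As a rough sketch of the kind of iPerf test referenced above (the article's exact tool versions and flags may differ), a baseline measurement could look like this:

```bash
# On the Azure VM: start an iperf3 server
iperf3 -s

# On the on-premises client: run a 30-second test with 8 parallel streams toward the VM
iperf3 -c <azure-vm-ip-address> -t 30 -P 8
```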