Updates from: 11/04/2023 02:25:25
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-mailjet.md
As with the OTP technical profiles, add the following technical profiles to the
<DisplayName>RestfulProvider</DisplayName> <TechnicalProfiles> <TechnicalProfile Id="sendOtp">
- <DisplayName>Use email API to send the code the the user</DisplayName>
+ <DisplayName>Use email API to send the code to the user</DisplayName>
<Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <Metadata> <Item Key="ServiceUrl">https://api.mailjet.com/v3.1/send</Item>
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-sendgrid.md
As with the OTP technical profiles, add the following technical profiles to the
<DisplayName>RestfulProvider</DisplayName> <TechnicalProfiles> <TechnicalProfile Id="SendOtp">
- <DisplayName>Use SendGrid's email API to send the code the the user</DisplayName>
+ <DisplayName>Use SendGrid's email API to send the code to the user</DisplayName>
<Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <Metadata> <Item Key="ServiceUrl">https://api.sendgrid.com/v3/mail/send</Item>
active-directory-b2c Custom Policies Series Validate User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-validate-user-input.md
Use the following steps to validate password re-enter in your custom policy:
```xml <DisplayClaim ClaimTypeReferenceId="reenterPassword" Required="true"/> ```
-1. In your your `ContosoCustomPolicy.XML` file, locate the `UserInformationCollector` self-asserted technical profile, add *reenterPassword* claim as an output claim by using the following code:
+1. In your `ContosoCustomPolicy.XML` file, locate the `UserInformationCollector` self-asserted technical profile, add *reenterPassword* claim as an output claim by using the following code:
```xml <OutputClaim ClaimTypeReferenceId="reenterPassword"/>
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
With Azure Active Directory B2C (Azure AD B2C) and solutions from software-vendo
The following architecture diagram illustrates the verification and proofing flow.
- ![Diagram of of the identity proofing flow, from registration to access approval.](./media/partner-gallery/third-party-identity-proofing.png)
+ ![Diagram of the identity proofing flow, from registration to access approval.](./media/partner-gallery/third-party-identity-proofing.png)
1. User begins registration with a device. 2. User enters information.
active-directory-b2c Jwt Issuer Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/jwt-issuer-technical-profile.md
The **InputClaims**, **OutputClaims**, and **PersistClaims** elements are empty
| rolling_refresh_token_lifetime_secs | No | Refresh token sliding window lifetime. After this time period elapses the user is forced to reauthenticate, irrespective of the validity period of the most recent refresh token acquired by the application. If you don't want to enforce a sliding window lifetime, set the value of allow_infinite_rolling_refresh_token to `true`. The default is 7,776,000 seconds (90 days). The minimum (inclusive) is 86,400 seconds (24 hours). The maximum (inclusive) is 31,536,000 seconds (365 days). |
| allow_infinite_rolling_refresh_token | No | If set to `true`, the refresh token sliding window lifetime never expires. |
| IssuanceClaimPattern | No | Controls the Issuer (iss) claim. One of the values:<ul><li>AuthorityAndTenantGuid - The iss claim includes your domain name, such as `login.microsoftonline` or `tenant-name.b2clogin.com`, and your tenant identifier https:\//login.microsoftonline.com/00000000-0000-0000-0000-000000000000/v2.0/</li><li>AuthorityWithTfp - The iss claim includes your domain name, such as `login.microsoftonline` or `tenant-name.b2clogin.com`, your tenant identifier and your relying party policy name. https:\//login.microsoftonline.com/tfp/00000000-0000-0000-0000-000000000000/b2c_1a_tp_sign-up-or-sign-in/v2.0/</li></ul> Default value: AuthorityAndTenantGuid |
-| AuthenticationContextReferenceClaimPattern | No | Controls the `acr` claim value.<ul><li>None - Azure AD B2C doesn't issue the acr claim</li><li>PolicyId - the `acr` claim contains the policy name</li></ul>The options for setting this value are TFP (trust framework policy) and ACR (authentication context reference). It is recommended setting this value to TFP, to set the value, ensure the `<Item>` with the `Key="AuthenticationContextReferenceClaimPattern"` exists and the value is `None`. In your relying party policy, add `<OutputClaims>` item, add this element `<OutputClaim ClaimTypeReferenceId="trustFrameworkPolicy" Required="true" DefaultValue="{policy}" />`. Also make sure your policy contains the claim type `<ClaimType Id="trustFrameworkPolicy"> <DisplayName>trustFrameworkPolicy</DisplayName> <DataType>string</DataType> </ClaimType>` |
+| AuthenticationContextReferenceClaimPattern | No | Controls the `acr` claim value.<ul><li>None - Azure AD B2C doesn't issue the acr claim</li><li>PolicyId - the `acr` claim contains the policy name</li></ul>The options for setting this value are TFP (trust framework policy) and ACR (authentication context reference). It is recommended setting this value to TFP, to set the value, ensure the `<Item>` with the `Key="AuthenticationContextReferenceClaimPattern"` exists and the value is `None`. In your relying party policy, add `<OutputClaims>` item, add this element `<OutputClaim ClaimTypeReferenceId="trustFrameworkPolicy" Required="true" DefaultValue="{policy}" PartnerClaimType="tfp"/>`. Also make sure your policy contains the claim type `<ClaimType Id="trustFrameworkPolicy"> <DisplayName>trustFrameworkPolicy</DisplayName> <DataType>string</DataType> </ClaimType>` |
|RefreshTokenUserJourneyId| No | The identifier of a user journey that should be executed during the [refresh an access token](authorization-code-flow.md#4-refresh-the-token) POST request to the `/token` endpoint. | ## Cryptographic keys
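The refresh-token lifetime bounds quoted in the table excerpt above are internally consistent; a quick shell check:

```shell
# Refresh-token sliding-window bounds from the table, derived in seconds
echo $((24 * 3600))        # minimum: 86400 (24 hours)
echo $((90 * 24 * 3600))   # default: 7776000 (90 days)
echo $((365 * 24 * 3600))  # maximum: 31536000 (365 days)
```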
active-directory-b2c Restful Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/restful-technical-profile.md
The following example `TechnicalProfile` sends a verification email by using a t
```xml <TechnicalProfile Id="SendGrid">
- <DisplayName>Use SendGrid's email API to send the code the the user</DisplayName>
+ <DisplayName>Use SendGrid's email API to send the code to the user</DisplayName>
<Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" /> <Metadata> <Item Key="ServiceUrl">https://api.sendgrid.com/v3/mail/send</Item>
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md
For federated identities, depending on the identity provider, the **issuerAssign
## Password profile property
-For a local identity, the **passwordProfile** attribute is required, and contains the user's password. The `forceChangePasswordNextSignIn` attribute indicates whether a user must reset the password at the next sign-in. To handle a forced password reset, us the the instructions in [set up forced password reset flow](force-password-reset.md).
+For a local identity, the **passwordProfile** attribute is required, and contains the user's password. The `forceChangePasswordNextSignIn` attribute indicates whether a user must reset the password at the next sign-in. To handle a forced password reset, use the instructions in [set up forced password reset flow](force-password-reset.md).
For a federated (social) identity, the **passwordProfile** attribute is not required.
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
Learn more about [Subscription - SQLDWReservedCapacity (Consider Azure Synapse A
We analyzed your Azure Blob and Data Lake storage usage over last 30 days and calculated a Reserved Instance purchase that would maximize your savings. With reserved instances, you can prepurchase hourly usage and save over your current on-demand costs. Blob storage reserved instances applies only to data stored on Azure Blob (GPv2) and Azure Data Lake Storage (Gen 2). Reserved Instance is a billing benefit and automatically applies to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
-Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instances to save on Blob v2 and and Data Lake storage Gen2 costs)](https://aka.ms/rirecommendations).
+Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instances to save on Blob v2 and Data Lake storage Gen2 costs)](https://aka.ms/rirecommendations).
### Consider Azure Dedicated Host reserved instances to save over your on-demand costs
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/role-based-access-control.md
A user that is responsible for building and modifying LUIS application, as a col
### Cognitive Services LUIS Owner > [!NOTE]
-> * If you are assigned as an *Owner* and *LUIS Owner* you will be be shown as *LUIS Owner* in LUIS portal.
+> * If you are assigned as an *Owner* and *LUIS Owner* you will be shown as *LUIS Owner* in LUIS portal.
These users are the gatekeepers for LUIS applications in a production environment. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments.
ai-services Spatial Analysis Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/computer-vision/spatial-analysis-logging.md
Spatial Analysis includes a set of features to monitor the health of the system
To enable a visualization of AI Insight events in a video frame, you need to use the `.debug` version of a [Spatial Analysis operation](spatial-analysis-operations.md) on a desktop machine or Azure VM. The visualization is not possible on Azure Stack Edge devices. There are four debug operations available.
-If your device is a local desktop machine or Azure GPU VM (with remote desktop enabled), then then you can switch to `.debug` version of any operation and visualize the output.
+If your device is a local desktop machine or Azure GPU VM (with remote desktop enabled), then you can switch to `.debug` version of any operation and visualize the output.
1. Open the desktop either locally or by using a remote desktop client on the host computer running Spatial Analysis. 2. In the terminal run `xhost +`
ai-services Limits And Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/custom-vision-service/limits-and-quotas.md
The number of training images per project and tags per project are expected to i
|Max number of tags per image (classification)|100|100|
> [!NOTE]
-> Images smaller than than 256 pixels will be accepted but upscaled.
+> Images smaller than 256 pixels will be accepted but upscaled.
> Image aspect ratio should not be larger than 25:1.
ai-services Build A Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/build-a-custom-model.md
The Document Intelligence Studio provides and orchestrates all the API calls req
1. On the next step in the workflow, choose or create a Document Intelligence resource before you select continue. > [!IMPORTANT]
- > Custom neural models models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](../concept-custom-neural.md#supported-regions).
+ > Custom neural models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](../concept-custom-neural.md#supported-regions).
:::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot of Select the Document Intelligence resource.":::
ai-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/concepts/role-based-access-control.md
A user that is responsible for building and modifying an application, as a colla
### Cognitive Services Language Owner > [!NOTE]
-> If you are assigned as an *Owner* and *Language Owner* you will be be shown as *Cognitive Services Language Owner* in Language studio portal.
+> If you are assigned as an *Owner* and *Language Owner* you will be shown as *Cognitive Services Language Owner* in Language studio portal.
These users are the gatekeepers for the Language applications in production environments. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments
ai-services Evaluation Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/custom-text-classification/concepts/evaluation-metrics.md
After you trained your model, you will see some guidance and recommendation on h
> [!Important] > Confusion matrix is not available for multi-label classification projects. A Confusion matrix is an N x N matrix used for model performance evaluation, where N is the number of classes.
-The matrix compares the the expected labels with the ones predicted by the model.
+The matrix compares the expected labels with the ones predicted by the model.
This gives a holistic view of how well the model is performing and what kinds of errors it is making. You can use the Confusion matrix to identify classes that are too close to each other and often get mistaken (ambiguity). In this case consider merging these classes together. If that isn't possible, consider labeling more documents with both classes to help the model differentiate between them.
ai-services Content Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/content-filter.md
The default content filtering configuration is set to filter at the medium sever
| High | If approved<sup>\*</sup>| If approved<sup>\*</sup> | Content detected at severity levels low and medium isn't filtered. Only content at severity level high is filtered. Requires approval<sup>\*</sup>.|
| No filters | If approved<sup>\*</sup>| If approved<sup>\*</sup>| No content is filtered regardless of severity level detected. Requires approval<sup>\*</sup>.|
-<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning content filters off. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu)
+<sup>\*</sup> Only customers who have been approved for modified content filtering have full content filtering control, including configuring content filters at severity level high only or turning content filters off. Apply for modified content filters via this form: [Azure OpenAI Limited Access Review: Modified Content Filtering (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu)
Content filtering configurations are created within a Resource in Azure AI Studio, and can be associated with Deployments. [Learn more about configurability here](../how-to/content-filters.md).
As part of your application design, consider the following best practices to del
## Next steps
- Learn more about the [underlying models that power Azure OpenAI](../concepts/models.md).
-- Apply for modified content filters via [this form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu).
+- Apply for modified content filters via [this form](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xUMlBQNkZMR0lFRldORTdVQzQ0TEI5Q1ExOSQlQCN0PWcu).
- Azure OpenAI content filtering is powered by [Azure AI Content Safety](https://azure.microsoft.com/products/cognitive-services/ai-content-safety). - Learn more about understanding and mitigating risks associated with your application: [Overview of Responsible AI practices for Azure OpenAI models](/legal/cognitive-services/openai/overview?context=/azure/ai-services/openai/context/context). - Learn more about how data is processed in connection with content filtering and abuse monitoring: [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context#preventing-abuse-and-harmful-content-generation).++
ai-services Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md
Previously updated : 9/12/2023-- Last updated : 11/02/2023++ recommendations: false keywords:
foreach (float item in returnValue.Value.Data[0].Embedding)
### Verify inputs don't exceed the maximum length
-The maximum length of input text for our embedding models is 2048 tokens (equivalent to around 2-3 pages of text). You should verify that your inputs don't exceed this limit before making a request.
-
-### Choose the best model for your task
-
-For the search models, you can obtain embeddings in two ways. The `<search_model>-doc` model is used for longer pieces of text (to be searched over) and the `<search_model>-query` model is used for shorter pieces of text, typically queries or class labels in zero shot classification. You can read more about all of the Embeddings models in our [Models](../concepts/models.md) guide.
-
-### Replace newlines with a single space
-
-Unless you're embedding code, we suggest replacing newlines (\n) in your input with a single space, as we have observed inferior results when newlines are present.
+The maximum length of input text for our latest embedding models is 8192 tokens. You should verify that your inputs don't exceed this limit before making a request.
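As a crude pre-check sketch (this assumes the rough ~4-characters-per-token heuristic for English text, which is an approximation only; use a real tokenizer for accurate counts):

```shell
# Rough heuristic only: estimate tokens as characters / 4 before calling the API
TEXT="some input text to embed"
APPROX_TOKENS=$(( ${#TEXT} / 4 ))
if [ "$APPROX_TOKENS" -gt 8192 ]; then
  echo "input likely exceeds the 8192-token limit"
else
  echo "approximately ${APPROX_TOKENS} tokens; within the limit"
fi
```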
## Limitations & risks
ai-services Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md
Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
ai-services Export Knowledge Base https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Tutorials/export-knowledge-base.md
You may want to create a copy of your knowledge base for several reasons:
## Import a knowledge base 1. Select **Create a knowledge base** from the top menu of the qnamaker.ai portal and then create an _empty_ knowledge base by not adding any URLs or files. Set the name of your choice for the new knowledge base and then click **Create your KB**.
-1. In this new knowledge base, open the **Settings** tab and and under _Import knowledge base_ select one of the following options: **QnAs**, **Synonyms**, or **Knowledge Base Replica**.
+1. In this new knowledge base, open the **Settings** tab and under _Import knowledge base_ select one of the following options: **QnAs**, **Synonyms**, or **Knowledge Base Replica**.
1. **QnAs**: This option imports all QnA pairs. **The QnA pairs created in the new knowledge base shall have the same QnA ID as present in the exported file**. You can refer [SampleQnAs.xlsx](https://aka.ms/qnamaker-sampleqnas), [SampleQnAs.tsv](https://aka.ms/qnamaker-sampleqnastsv) to import QnAs. 2. **Synonyms**: This option can be used to import synonyms to the knowledge base. You can refer [SampleSynonyms.xlsx](https://aka.ms/qnamaker-samplesynonyms), [SampleSynonyms.tsv](https://aka.ms/qnamaker-samplesynonymstsv) to import synonyms.
ai-services Logging Audio Transcription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/logging-audio-transcription.md
Here's a sample output of [Endpoints_ListLogs](https://eastus.dev.cognitive.micr
}, "createdDateTime": "2023-03-13T16:37:15Z", "links": {
- "contentUrl": "<Link to to download log file>"
+ "contentUrl": "<Link to download log file>"
} } ]
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md
Title: Migrate to Azure Kubernetes Service (AKS)
-description: This article shows you how to to Azure Kubernetes Service (AKS).
+description: This article shows you how to migrate to Azure Kubernetes Service (AKS).
Last updated 05/30/2023
aks App Routing Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-configuration.md
+
+ Title: Customize the application routing add-on for Azure Kubernetes Service (AKS)
+description: Understand what advanced configuration options are supported with the application routing add-on for Azure Kubernetes Service.
+++ Last updated : 11/03/2023++
+# Advanced Ingress configurations with the application routing add-on
+
+An Ingress is an API object that defines rules that allow external access to services in an Azure Kubernetes Service (AKS) cluster. When you create an Ingress object that uses the application routing add-on nginx Ingress classes, the add-on creates, configures, and manages one or more Ingress controllers in your AKS cluster.
+
+This article shows you how to set up an advanced Ingress configuration to encrypt the traffic and use Azure DNS to manage DNS zones.
+
+## Application routing add-on with nginx features
+
+The application routing add-on with nginx delivers the following:
+
+* Easy configuration of managed nginx Ingress controllers based on [Kubernetes nginx Ingress controller][kubernetes-nginx-ingress].
+* Integration with an external DNS such as [Azure DNS][azure-dns-overview] for public and private zone management.
+* SSL termination with certificates stored in a key vault, such as [Azure Key Vault][azure-key-vault-overview].
+
+## Prerequisites
+
+- An AKS cluster with the [application routing add-on][app-routing-add-on-basic-configuration].
+- Azure Key Vault if you want to configure SSL termination and store certificates in the vault hosted in Azure.
+- Azure DNS if you want to configure public and private zone management and host them in Azure.
+
+## Connect to your AKS cluster
+
+To connect to the Kubernetes cluster from your local computer, you use `kubectl`, the Kubernetes command-line client. You can install it locally using the [az aks install-cli][az-aks-install-cli] command. If you use the Azure Cloud Shell, `kubectl` is already installed.
+
+Configure kubectl to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+```bash
+az aks get-credentials -g <ResourceGroupName> -n <ClusterName>
+```
+
+## Terminate HTTPS traffic
+
+To enable support for HTTPS traffic, you need the following prerequisites:
+
+* **azure-keyvault-secrets-provider**: The [Secret Store CSI provider][secret-store-csi-provider] for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.
+
+ > [!IMPORTANT]
+ > To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature][csi-secrets-store-autorotation] of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you define. The default rotation poll interval is two minutes.
+
+* An SSL certificate. If you don't have one, you can [create a certificate][create-and-export-a-self-signed-ssl-certificate].
+
+### Enable key vault secrets provider
+
+To enable the Azure Key Vault secrets provider on your cluster, use the [`az aks enable-addons`][az-aks-enable-addons] command, specifying `azure-keyvault-secrets-provider` with the `--addons` argument and including the `--enable-secret-rotation` argument.
+
+```azurecli-interactive
+az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider --enable-secret-rotation
+```
+
+### Create an Azure Key Vault to store the certificate
+
+> [!NOTE]
+> If you already have an Azure Key Vault, you can skip this step.
+
+Create an Azure Key Vault using the [`az keyvault create`][az-keyvault-create] command.
+
+```azurecli-interactive
+az keyvault create -g <ResourceGroupName> -l <Location> -n <KeyVaultName>
+```
+
+### Create and export a self-signed SSL certificate
+
+1. Create a self-signed SSL certificate to use with the Ingress using the `openssl req` command. Make sure you replace *`<Hostname>`* with the DNS name you're using.
+
+ ```bash
+ openssl req -new -x509 -nodes -out aks-ingress-tls.crt -keyout aks-ingress-tls.key -subj "/CN=<Hostname>" -addext "subjectAltName=DNS:<Hostname>"
+ ```
+
+2. Export the SSL certificate and skip the password prompt using the `openssl pkcs12 -export` command.
+
+ ```bash
+ openssl pkcs12 -export -in aks-ingress-tls.crt -inkey aks-ingress-tls.key -out aks-ingress-tls.pfx
+ ```
+
+### Import certificate into Azure Key Vault
+
+Import the SSL certificate into Azure Key Vault using the [`az keyvault certificate import`][az-keyvault-certificate-import] command. If your certificate is password protected, you can pass the password through the `--password` flag.
+
+```azurecli-interactive
+az keyvault certificate import --vault-name <KeyVaultName> -n <KeyVaultCertificateName> -f aks-ingress-tls.pfx [--password <certificate password if specified>]
+```
+
+### Retrieve the add-on's managed identity object ID
+
+You use the managed identity in the next steps to grant permissions to manage the Azure DNS zone and retrieve secrets and certificates from the Azure Key Vault.
+
+Get the add-on's managed identity object ID using the [`az aks show`][az-aks-show] command and setting the output to a variable named `MANAGEDIDENTITY_OBJECTID`.
+
+```bash
+# Provide values for your environment
+RGNAME=<ResourceGroupName>
+CLUSTERNAME=<ClusterName>
+MANAGEDIDENTITY_OBJECTID=$(az aks show -g ${RGNAME} -n ${CLUSTERNAME} --query ingressProfile.webAppRouting.identity.objectId -o tsv)
+```
+
+### Grant the add-on permissions to retrieve certificates from Azure Key Vault
+
+The application routing add-on creates a user-assigned managed identity in the cluster resource group. You need to grant permissions to the managed identity so it can retrieve SSL certificates from the Azure Key Vault.
+
+Azure Key Vault offers [two authorization systems][authorization-systems]: **Azure role-based access control (Azure RBAC)**, which operates on the management plane, and the **access policy model**, which operates on both the management plane and the data plane. To find out which system your key vault is using, you can query the `enableRbacAuthorization` property.
+
+```azurecli-interactive
+az keyvault show --name <KeyVaultName> --query properties.enableRbacAuthorization
+```
+
+If Azure RBAC authorization is enabled for your key vault, you should configure permissions using Azure RBAC. Add the `Key Vault Secrets User` role assignment to the key vault by running the following commands.
+
+```azurecli-interactive
+KEYVAULTID=$(az keyvault show --name <KeyVaultName> --query "id" --output tsv)
+az role assignment create --role "Key Vault Secrets User" --assignee $MANAGEDIDENTITY_OBJECTID --scope $KEYVAULTID
+```
+
+If Azure RBAC authorization isn't enabled for your key vault, you should configure permissions using the access policy model. Grant `GET` permissions for the application routing add-on to retrieve certificates from Azure Key Vault using the [`az keyvault set-policy`][az-keyvault-set-policy] command.
+
+```azurecli-interactive
+az keyvault set-policy --name <KeyVaultName> --object-id $MANAGEDIDENTITY_OBJECTID --secret-permissions get --certificate-permissions get
+```
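As a non-official sketch, the choice between the two permission models above can be written as one conditional; the `USES_RBAC` value below is a placeholder standing in for the result of the `enableRbacAuthorization` query:

```shell
# Placeholder for the result of:
#   az keyvault show --name <KeyVaultName> --query properties.enableRbacAuthorization -o tsv
USES_RBAC="true"

if [ "$USES_RBAC" = "true" ]; then
  # Azure RBAC: assign the Key Vault Secrets User role to the add-on identity
  echo "use: az role assignment create --role 'Key Vault Secrets User' ..."
else
  # Access policy model: grant GET permissions on secrets and certificates
  echo "use: az keyvault set-policy --secret-permissions get --certificate-permissions get ..."
fi
```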
+
+## Configure the add-on to use Azure DNS to manage DNS zones
+
+To enable support for DNS zones, you need the following prerequisites:
+
+* The app routing add-on can be configured to automatically create records on one or more Azure public and private DNS zones for hosts defined on Ingress resources. All global Azure DNS zones need to be in the same resource group, and all private Azure DNS zones need to be in the same resource group. If you don't have an Azure DNS zone, you can [create one][create-an-azure-dns-zone].
+
+ > [!NOTE]
+ > If you plan to use Azure DNS, you need to update the add-on to include the `--dns-zone-resource-ids` argument. You can pass a comma separated list of multiple public or private Azure DNS zone resource IDs.
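As an illustration of the comma-separated format the note mentions (the resource IDs below are placeholders, not real resources), multiple zone IDs can be joined like this:

```shell
# Hypothetical DNS zone resource IDs, for illustration only
PUBLIC_ZONE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-dns/providers/Microsoft.Network/dnszones/contoso.com"
PRIVATE_ZONE_ID="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-dns/providers/Microsoft.Network/privateDnsZones/internal.contoso.com"

# Join into the single comma-separated value expected by --dns-zone-resource-ids
ZONE_IDS="${PUBLIC_ZONE_ID},${PRIVATE_ZONE_ID}"
echo "$ZONE_IDS"
```

You would then pass `--dns-zone-resource-ids=$ZONE_IDS` to the `az aks addon update` command.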
+
+### Create a global Azure DNS zone
+
+1. Create an Azure DNS zone using the [`az network dns zone create`][az-network-dns-zone-create] command.
+
+ ```azurecli-interactive
+ az network dns zone create -g <ResourceGroupName> -n <ZoneName>
+ ```
+
+1. Retrieve the resource ID for the DNS zone using the [`az network dns zone show`][az-network-dns-zone-show] command and set the output to a variable named *ZONEID*.
+
+ ```azurecli-interactive
+ ZONEID=$(az network dns zone show -g <ResourceGroupName> -n <ZoneName> --query "id" --output tsv)
+ ```
+
+1. Grant **DNS Zone Contributor** permissions on the DNS zone using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+ az role assignment create --role "DNS Zone Contributor" --assignee $MANAGEDIDENTITY_OBJECTID --scope $ZONEID
+ ```
+
+1. Update the add-on to enable the integration with Azure DNS and install the **external-dns** controller using the [`az aks addon update`][az-aks-addon-update] command.
+
+ ```azurecli-interactive
+ az aks addon update -g <ResourceGroupName> -n <ClusterName> --addon web_application_routing --dns-zone-resource-ids=$ZONEID
+ ```
+
+## Create the Ingress
+
+The application routing add-on creates an Ingress class on the cluster named *webapprouting.kubernetes.azure.com*. When you create an Ingress object with this class, it activates the add-on.
+
+1. Get the certificate URI to use in the Ingress from Azure Key Vault using the [`az keyvault certificate show`][az-keyvault-certificate-show] command.
+
+ ```azurecli-interactive
+ az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
+ ```
+
+2. Copy the following YAML manifest into a new file named **ingress.yaml** and save the file to your local computer.
+
+ > [!NOTE]
+ > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault.
+    > The *`secretName`* key in the `tls` section defines the name of the secret that contains the certificate for this Ingress resource. This certificate is presented in the browser when a client browses to the URL defined in the `<Hostname>` key. Make sure that the value of `secretName` is `keyvault-` followed by the Ingress resource name (from `metadata.name`). In the example YAML, `secretName` needs to be `keyvault-aks-helloworld`.
+
+    ```yaml
+ apiVersion: networking.k8s.io/v1
+ kind: Ingress
+ metadata:
+ annotations:
+ kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
+ name: aks-helloworld
+ namespace: hello-web-app-routing
+ spec:
+ ingressClassName: webapprouting.kubernetes.azure.com
+ rules:
+ - host: <Hostname>
+ http:
+ paths:
+ - backend:
+ service:
+ name: aks-helloworld
+ port:
+ number: 80
+ path: /
+ pathType: Prefix
+ tls:
+ - hosts:
+ - <Hostname>
+ secretName: keyvault-<your ingress name>
+ ```
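    The `secretName` rule from the note can be checked with a quick derivation; with the example manifest's `metadata.name` of `aks-helloworld`, the expected secret name is `keyvault-aks-helloworld`:

    ```shell
    # Derive the expected TLS secret name: "keyvault-" plus the Ingress name
    # (metadata.name in the example manifest)
    INGRESS_NAME="aks-helloworld"
    SECRET_NAME="keyvault-${INGRESS_NAME}"
    echo "${SECRET_NAME}"
    ```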
+
+3. Create the Ingress resource on the cluster using the [`kubectl apply`][kubectl-apply] command.
+
+ ```bash
+ kubectl apply -f ingress.yaml -n hello-web-app-routing
+ ```
+
+ The following example output shows the created resource:
+
+ ```output
+    ingress.networking.k8s.io/aks-helloworld created
+ ```
+
+## Verify the managed Ingress was created
+
+You can verify that the managed Ingress was created using the [`kubectl get ingress`][kubectl-get] command.
+
+```bash
+kubectl get ingress -n hello-web-app-routing
+```
+
+The following example output shows the created managed Ingress:
+
+```output
+NAME CLASS HOSTS ADDRESS PORTS AGE
+aks-helloworld webapprouting.kubernetes.azure.com myapp.contoso.com 20.51.92.19 80, 443 4m
+```
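Once the ADDRESS column is populated, you can optionally test the endpoint before public DNS records propagate by pinning the host name to the ingress IP with curl's `--resolve` option. The host name and IP below are the hypothetical values from the example output; the sketch prints the command rather than executing it:

```shell
# Hypothetical values taken from the example output above
HOST="myapp.contoso.com"
INGRESS_IP="20.51.92.19"

# --resolve maps HOST:443 to the ingress IP, bypassing DNS lookup
CURL_CMD="curl --resolve ${HOST}:443:${INGRESS_IP} https://${HOST}"
echo "${CURL_CMD}"
```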
+
+## Next steps
+
+Learn about monitoring the Ingress-nginx controller metrics included with the application routing add-on [with Prometheus in Grafana][prometheus-in-grafana] (preview) as part of analyzing the performance and usage of your application.
+
+<!-- LINKS - external -->
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+
+<!-- LINKS - internal -->
+[app-routing-add-on-basic-configuration]: app-routing.md
+[secret-store-csi-provider]: csi-secrets-store-driver.md
+[csi-secrets-store-autorotation]: csi-secrets-store-configuration-options.md#enable-and-disable-auto-rotation
+[az-keyvault-set-policy]: /cli/azure/keyvault#az-keyvault-set-policy
+[azure-key-vault-overview]: ../key-vault/general/overview.md
+[az-aks-addon-update]: /cli/azure/aks/addon#az-aks-addon-update
+[az-network-dns-zone-show]: /cli/azure/network/dns/zone#az-network-dns-zone-show
+[az-role-assignment-create]: /cli/azure/role/assignment#az-role-assignment-create
+[az-network-dns-zone-create]: /cli/azure/network/dns/zone#az-network-dns-zone-create
+[az-keyvault-certificate-import]: /cli/azure/keyvault/certificate#az-keyvault-certificate-import
+[az-keyvault-create]: /cli/azure/keyvault#az-keyvault-create
+[authorization-systems]: ../key-vault/general/rbac-access-policy.md
+[az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli
+[az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
+[create-and-export-a-self-signed-ssl-certificate]: #create-and-export-a-self-signed-ssl-certificate
+[create-an-azure-dns-zone]: #create-a-global-azure-dns-zone
+[azure-dns-overview]: ../dns/dns-overview.md
+[az-keyvault-certificate-show]: /cli/azure/keyvault/certificate#az-keyvault-certificate-show
+[az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[prometheus-in-grafana]: app-routing-nginx-prometheus.md
aks App Routing Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-migration.md
Previously updated : 08/18/2023 Last updated : 11/03/2023 # Migrate from HTTP application routing to the application routing add-on
-In this article, you'll learn how to migrate your Azure Kubernetes Service (AKS) cluster from HTTP application routing feature to the [application routing add-on](./app-routing.md). The HTTP application routing add-on has been retired and won't work on any cluster Kubernetes version currently in support, so we recommend migrating as soon as possible to maintain a supported configuration.
+In this article, you learn how to migrate your Azure Kubernetes Service (AKS) cluster from the HTTP application routing feature to the [application routing add-on](./app-routing.md). The HTTP application routing add-on has been retired and doesn't work on any cluster Kubernetes version currently in support. We recommend migrating as soon as possible to maintain a supported configuration.
## Prerequisites
Azure CLI version `2.49.0` or later. If you haven't yet, follow the instructions
> [!NOTE] > These steps detail migrating from an unsupported configuration. As such, AKS cannot offer support for issues that arise during the migration process.
-## Update your cluster's add-ons, ingresses, and IP usage
+## Update your cluster's add-ons, Ingresses, and IP usage
1. Enable the application routing add-on.
Azure CLI version `2.49.0` or later. If you haven't yet, follow the instructions
az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons web_application_routing ```
-2. Update your ingresses, setting `ingressClassName` to `webapprouting.kubernetes.azure.com`. Remove the `kubernetes.io/ingress.class` annotation. You'll also need to update the host to one that you own, as the application routing add-on doesn't have a managed cluster DNS zone. If you don't have a DNS zone, follow instructions to [create][app-routing-dns-create] and [configure][app-routing-dns-configure] one.
+2. Update your Ingresses, setting `ingressClassName` to `webapprouting.kubernetes.azure.com`. Remove the `kubernetes.io/ingress.class` annotation. You also need to update the host to one that you own, as the application routing add-on doesn't have a managed cluster DNS zone. If you don't have a DNS zone, follow instructions to [create][app-routing-dns-create] and [configure][app-routing-dns-configure] one.
Initially, your ingress configuration will look something like this:
Azure CLI version `2.49.0` or later. If you haven't yet, follow the instructions
number: 80 ```
- After you've properly updated, the same configuration will look like the following:
+ After you've properly updated your Ingress, the same configuration looks like the following:
```yaml apiVersion: networking.k8s.io/v1
Azure CLI version `2.49.0` or later. If you haven't yet, follow the instructions
number: 80 ```
-3. Update the ingress controller's IP (such as in DNS records) with the new IP address. You can find the new IP by using `kubectl get`. For example:
+3. Update the Ingress controller's IP (such as in DNS records) with the new IP address. You can find the new IP by using `kubectl get`. For example:
```bash kubectl get svc nginx --namespace app-routing-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Azure CLI version `2.49.0` or later. If you haven't yet, follow the instructions
## Remove and delete all HTTP application routing resources
-1. After the HTTP application routing add-on is disabled, some related Kubernetes resources may remain in your cluster. These resources include *configmaps* and *secrets* that are created in the *kube-system* namespace. To maintain a clean cluster, you may want to remove these resources. Look for *addon-http-application-routing* resources using the following [`kubectl get`][kubectl-get] commands:
+1. After the HTTP application routing add-on is disabled, some related Kubernetes resources might remain in your cluster. These resources include *configmaps* and *secrets* that are created in the *kube-system* namespace. To maintain a clean cluster, you can remove these resources. Look for *addon-http-application-routing* resources using the following [`kubectl get`][kubectl-get] commands:
```bash kubectl get deployments --namespace kube-system
Azure CLI version `2.49.0` or later. If you haven't yet, follow the instructions
## Next steps
-After migrating to the application routing add-on, learn how to [monitor ingress controller metrics with Prometheus and Grafana](./app-routing-nginx-prometheus.md).
+After migrating to the application routing add-on, learn how to [monitor Ingress controller metrics with Prometheus and Grafana](./app-routing-nginx-prometheus.md).
<!-- INTERNAL LINKS --> [install-azure-cli]: /cli/azure/install-azure-cli
-[ingress-https]: ./ingress-tls.md
-[app-routing-dns-create]: ./app-routing.md?tabs=without-osm#create-an-azure-dns-zone
-[app-routing-dns-configure]: ./app-routing.md?tabs=without-osm#configure-the-add-on-to-use-azure-dns-to-manage-dns-zones
+[app-routing-dns-create]: ./app-routing-configuration.md#create-a-global-azure-dns-zone
+[app-routing-dns-configure]: ./app-routing-configuration.md#configure-the-add-on-to-use-azure-dns-to-manage-dns-zones
<!-- EXTERNAL LINKS -->
-[dns-pricing]: https://azure.microsoft.com/pricing/details/dns/
[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get [kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
aks App Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md
Title: Azure Kubernetes Service (AKS) managed nginx ingress with the application routing add-on (preview)
+ Title: Azure Kubernetes Service (AKS) managed nginx Ingress with the application routing add-on
description: Use the application routing add-on to securely access applications deployed on Azure Kubernetes Service (AKS). Previously updated : 08/07/2023 Last updated : 11/03/2023
-# Managed nginx ingress with the application routing add-on (preview)
+# Managed nginx Ingress with the application routing add-on
-The application routing add-on configures an [ingress controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/) in your Azure Kubernetes Service (AKS) cluster with SSL termination through certificates stored in Azure Key Vault. When you deploy ingresses, the add-on creates publicly accessible DNS names for endpoints on an Azure DNS zone.
+One way to route Hypertext Transfer Protocol (HTTP) and secure (HTTPS) traffic to applications running on an Azure Kubernetes Service (AKS) cluster is to use the [Kubernetes Ingress object][kubernetes-ingress-object-overview]. When you create an Ingress object that uses the application routing add-on nginx Ingress classes, the add-on creates, configures, and manages one or more Ingress controllers in your AKS cluster.
+This article shows you how to deploy and configure a basic Ingress controller in your AKS cluster.
-## Application routing add-on with nginx overview
+## Application routing add-on with nginx features
-The application routing add-on deploys the following components:
+The application routing add-on with nginx provides the following features:
-- **[nginx ingress controller][nginx]**: This ingress controller is exposed to the internet.-- **[external-dns controller][external-dns]**: This controller watches for Kubernetes ingress resources and creates DNS `A` records in the cluster-specific DNS zone. This is only deployed when you pass in the `--dns-zone-resource-id` argument.
+* Easy configuration of managed nginx Ingress controllers based on [Kubernetes nginx Ingress controller][kubernetes-nginx-ingress].
+* Integration with [Azure DNS][azure-dns-overview] for public and private zone management.
+* SSL termination with certificates stored in Azure Key Vault.
+
+For additional configuration information related to SSL encryption and DNS integration, review the [application routing add-on configuration][custom-ingress-configurations].
+
+With the retirement of [Open Service Mesh][open-service-mesh-docs] (OSM) by the Cloud Native Computing Foundation (CNCF), using the application routing add-on is the default method for all AKS clusters.
## Prerequisites - An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). - Azure CLI version 2.47.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].-- An Azure Key Vault to store certificates.-- The `aks-preview` Azure CLI extension version 0.5.137 or later installed. If you need to install or update, see [Install or update the `aks-preview` extension](#install-or-update-the-aks-preview-azure-cli-extension).-- Optionally, a DNS solution, such as [Azure DNS](../dns/dns-getstarted-portal.md).-
-### Install or update the `aks-preview` Azure CLI extension
--- Install the `aks-preview` Azure CLI extension using the [`az extension add`][az-extension-add] command.-
- ```azurecli-interactive
- az extension add --name aks-preview
- ```
--- If you need to update the extension version, you can do this using the [`az extension update`][az-extension-update] command.-
- ```azurecli-interactive
- az extension update --name aks-preview
- ```
-
-### Create and export a self-signed SSL certificate
-
-> [!NOTE]
-> If you already have an SSL certificate, you can skip this step.
-
-1. Create a self-signed SSL certificate to use with the ingress using the `openssl req` command. Make sure you replace *`<Hostname>`* with the DNS name you're using.
-
- ```bash
- openssl req -new -x509 -nodes -out aks-ingress-tls.crt -keyout aks-ingress-tls.key -subj "/CN=<Hostname>" -addext "subjectAltName=DNS:<Hostname>"
- ```
-
-2. Export the SSL certificate and skip the password prompt using the `openssl pkcs12 -export` command.
-
- ```bash
- openssl pkcs12 -export -in aks-ingress-tls.crt -inkey aks-ingress-tls.key -out aks-ingress-tls.pfx
- ```
-
-### Create an Azure Key Vault to store the certificate
-
-> [!NOTE]
-> If you already have an Azure Key Vault, you can skip this step.
-- Create an Azure Key Vault using the [`az keyvault create`][az-keyvault-create] command.
+## Limitations
- ```azurecli-interactive
- az keyvault create -g <ResourceGroupName> -l <Location> -n <KeyVaultName>
- ```
-
-### Import certificate into Azure Key Vault
--- Import the SSL certificate into Azure Key Vault using the [`az keyvault certificate import`][az-keyvault-certificate-import] command. If your certificate is password protected, you can pass the password through the `--password` flag.-
- ```azurecli-interactive
- az keyvault certificate import --vault-name <KeyVaultName> -n <KeyVaultCertificateName> -f aks-ingress-tls.pfx [--password <certificate password if specified>]
- ```
-
-### Create an Azure DNS zone
-
-> [!NOTE]
-> If you want the add-on to automatically manage creating host names via Azure DNS, you need to [create an Azure DNS zone](../dns/dns-getstarted-cli.md) if you don't have one already.
--- Create an Azure DNS zone using the [`az network dns zone create`][az-network-dns-zone-create] command.-
- ```azurecli-interactive
- az network dns zone create -g <ResourceGroupName> -n <ZoneName>
- ```
+- The application routing add-on supports up to five Azure DNS zones.
+- All public Azure DNS zones integrated with the add-on have to be in the same resource group.
+- All private Azure DNS zones integrated with the add-on have to be in the same resource group.
+- Editing any resources in the `app-routing-system` namespace, including the Ingress-nginx ConfigMap, isn't supported.
+- Snippet annotations on the Ingress resources through `nginx.ingress.kubernetes.io/configuration-snippet` aren't supported.
## Enable application routing using Azure CLI
-# [Without Open Service Mesh (OSM)](#tab/without-osm)
-
-The following extra add-on is required:
-
-- **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.
+# [Default](#tab/default)
-> [!IMPORTANT]
-> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](./csi-secrets-store-configuration-options.md#enable-and-disable-auto-rotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When the autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is two minutes.
+### Enable on a new cluster
-### Enable application routing on a new cluster
+To enable application routing on a new cluster, use the [`az aks create`][az-aks-create] command, specifying `web_application_routing` with the `--enable-addons` argument.
-- Enable application routing on a new AKS cluster using the [`az aks create`][az-aks-create] command and the `--enable-addons` parameter with the following add-ons:-
- ```azurecli-interactive
- az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,web_application_routing --generate-ssh-keys --enable-secret-rotation
- ```
+```azurecli-interactive
+az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons web_application_routing --generate-ssh-keys
+```
-### Enable application routing on an existing cluster
+### Enable on an existing cluster
-- Enable application routing on an existing cluster using the [`az aks enable-addons`][az-aks-enable-addons] command and the `--addons` parameter with the following add-ons:
+To enable application routing on an existing cluster, use the [`az aks enable-addons`][az-aks-enable-addons] command, specifying `web_application_routing` with the `--addons` argument.
- ```azurecli-interactive
- az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,web_application_routing --enable-secret-rotation
- ```
+```azurecli-interactive
+az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons web_application_routing
+```
-# [With Open Service Mesh (OSM)](#tab/with-osm)
+# [Open Service Mesh (OSM)](#tab/with-osm)
-The following extra add-ons are required:
+>[!NOTE]
+>Open Service Mesh (OSM) has been retired by the CNCF. Creating Ingresses using the application routing add-on with OSM integration is not recommended and will be retired.
-- **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.-- **open-service-mesh**: If you require encrypted intra cluster traffic (recommended) between the nginx ingress and your services, the Open Service Mesh add-on is required which provides mutual TLS (mTLS).
+The following add-ons are required to support this configuration:
-> [!IMPORTANT]
-> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](./csi-secrets-store-configuration-options.md#enable-and-disable-auto-rotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When the autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is two minutes.
+* **open-service-mesh**: If you require encrypted intra-cluster traffic (recommended) between the nginx Ingress and your services, the Open Service Mesh add-on, which provides mutual TLS (mTLS), is required.
-### Enable application routing on a new cluster
+### Enable on a new cluster
-- Enable application routing on a new AKS cluster using the [`az aks create`][az-aks-create] command and the `--enable-addons` parameter with the following add-ons:
+Enable application routing on a new AKS cluster using the [`az aks create`][az-aks-create] command and the `--enable-addons` parameter with the following add-ons:
- ```azurecli-interactive
- az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing --generate-ssh-keys --enable-secret-rotation
- ```
+```azurecli-interactive
+az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons open-service-mesh,web_application_routing --generate-ssh-keys
+```
-### Enable application routing on an existing cluster
+### Enable on an existing cluster
-- Enable application routing on an existing cluster using the [`az aks enable-addons`][az-aks-enable-addons] command and the `--addons` parameter with the following add-ons:
+Enable application routing on an existing cluster using the [`az aks enable-addons`][az-aks-enable-addons] command and the `--addons` parameter with the following add-ons:
- ```azurecli-interactive
- az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,open-service-mesh,web_application_routing --enable-secret-rotation
- ```
+```azurecli-interactive
+az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons open-service-mesh,web_application_routing
+```
> [!NOTE] > To use the add-on with Open Service Mesh, you should install the `osm` command-line tool. This command-line tool contains everything needed to configure and manage Open Service Mesh. The latest binaries are available on the [OSM GitHub releases page][osm-release].
-# [With service annotations (retired)](#tab/service-annotations)
+# [Service annotations (retired)](#tab/service-annotations)
> [!WARNING]
-> Configuring ingresses by adding annotations on the Service object is retired. Please consider [configuring via an Ingress object](?tabs=without-osm).
-
-The following extra add-on is required:
--- **azure-keyvault-secrets-provider**: The Secret Store CSI provider for Azure Key Vault is required to retrieve the certificates from Azure Key Vault.-
-> [!IMPORTANT]
-> To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature](./csi-secrets-store-configuration-options.md#enable-and-disable-auto-rotation) of the Secret Store CSI driver with the `--enable-secret-rotation` argument. When the autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you can define. The default rotation poll interval is two minutes.
-
-### Enable application routing on a new cluster
+> Configuring Ingresses by adding annotations on the Service object is retired. Please consider [configuring using an Ingress object](?tabs=default).
-- Enable application routing on a new AKS cluster using the [`az aks create`][az-aks-create] command and the `--enable-addons` parameter with the following add-ons:
+### Enable on a new cluster
- ```azurecli-interactive
- az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons azure-keyvault-secrets-provider,web_application_routing --generate-ssh-keys --enable-secret-rotation
- ```
-
-### Enable application routing on an existing cluster
--- Enable application routing on an existing cluster using the [`az aks enable-addons`][az-aks-enable-addons] command and the `--addons` parameter with the following add-ons:-
- ```azurecli-interactive
- az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons azure-keyvault-secrets-provider,web_application_routing --enable-secret-rotation
- ```
---
-## Retrieve the add-on's managed identity object ID
-
-You use the managed identity in the next steps to grant permissions to manage the Azure DNS zone and retrieve secrets and certificates from the Azure Key Vault.
--- Get the add-on's managed identity object ID using the [`az aks show`][az-aks-show] command and setting the output to a variable named *MANAGEDIDENTITY_OBJECTID*.-
- ```azurecli-interactive
- # Provide values for your environment
- RGNAME=<ResourceGroupName>
- CLUSTERNAME=<ClusterName>
- MANAGEDIDENTITY_OBJECTID=$(az aks show -g ${RGNAME} -n ${CLUSTERNAME} --query ingressProfile.webAppRouting.identity.objectId -o tsv)
- ```
-
-## Configure the add-on to use Azure DNS to manage DNS zones
-
-> [!NOTE]
-> If you plan to use Azure DNS, you need to update the add-on to pass in the `--dns-zone-resource-id`.
-
-1. Retrieve the resource ID for the DNS zone using the [`az network dns zone show`][az-network-dns-zone-show] command and setting the output to a variable named *ZONEID*.
-
- ```azurecli-interactive
- ZONEID=$(az network dns zone show -g <ResourceGroupName> -n <ZoneName> --query "id" --output tsv)
- ```
-
-2. Grant **DNS Zone Contributor** permissions on the DNS zone using the [`az role assignment create`][az-role-assignment-create] command.
-
- ```azurecli-interactive
- az role assignment create --role "DNS Zone Contributor" --assignee $MANAGEDIDENTITY_OBJECTID --scope $ZONEID
- ```
-
-3. Update the add-on to enable the integration with Azure DNS and install the **external-dns** controller using the [`az aks addon update`][az-aks-addon-update] command.
-
- ```azurecli-interactive
- az aks addon update -g <ResourceGroupName> -n <ClusterName> --addon web_application_routing --dns-zone-resource-id=$ZONEID
- ```
-
-## Grant the add-on permissions to retrieve certificates from Azure Key Vault
-
-The application routing add-on creates a user-created managed identity in the cluster resource group. You need to grant permissions to the managed identity so it can retrieve SSL certificates from the Azure Key Vault.
-
-Azure Key Vault offers [two authorization systems](../key-vault/general/rbac-access-policy.md): **Azure role-based access control (Azure RBAC)**, which operates on the management plane, and the **access policy model**, which operates on both the management plane and the data plane. To find out which system your key vault is using, you can query the `enableRbacAuthorization` property.
+Enable application routing on a new AKS cluster using the [`az aks create`][az-aks-create] command and the `--enable-addons` parameter with the following add-ons:
```azurecli-interactive
-az keyvault show --name <KeyVaultName> --query properties.enableRbacAuthorization
+az aks create -g <ResourceGroupName> -n <ClusterName> -l <Location> --enable-addons web_application_routing --generate-ssh-keys
```
-If Azure RBAC authorization is enabled for your key vault, you should configure permissions using Azure RBAC. Add the `Key Vault Secrets User` role assignment to the key vault.
-
-```azurecli-interactive
-KEYVAULTID=$(az keyvault show --name <KeyVaultName> --query "id" --output tsv)
-az role assignment create --role "Key Vault Secrets User" --assignee $MANAGEDIDENTITY_OBJECTID --scope $KEYVAULTID
-```
+### Enable on an existing cluster
-If Azure RBAC authorization is not enabled for your key vault, you should configure permissions using the access policy model. Grant `GET` permissions for the application routing add-on to retrieve certificates from Azure Key Vault using the [`az keyvault set-policy`][az-keyvault-set-policy] command.
+Enable application routing on an existing cluster using the [`az aks enable-addons`][az-aks-enable-addons] command and the `--addons` parameter with the following add-ons:
```azurecli-interactive
-az keyvault set-policy --name <KeyVaultName> --object-id $MANAGEDIDENTITY_OBJECTID --secret-permissions get --certificate-permissions get
+az aks enable-addons -g <ResourceGroupName> -n <ClusterName> --addons web_application_routing
``` ++ ## Connect to your AKS cluster To connect to the Kubernetes cluster from your local computer, you use [kubectl][kubectl], the Kubernetes command-line client. You can install it locally using the [`az aks install-cli`][az-aks-install-cli] command. If you use the Azure Cloud Shell, `kubectl` is already installed. -- Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command.
+Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command.
- ```azurecli-interactive
- az aks get-credentials -g <ResourceGroupName> -n <ClusterName>
- ```
+```azurecli-interactive
+az aks get-credentials -g <ResourceGroupName> -n <ClusterName>
+```
## Deploy an application
-Application routing uses annotations on Kubernetes ingress objects to create the appropriate resources, create records on Azure DNS, and retrieve the SSL certificates from Azure Key Vault.
-
-# [Without Open Service Mesh (OSM)](#tab/without-osm)
+The application routing add-on uses annotations on Kubernetes Ingress objects to create the appropriate resources.
-### Create the application namespace
+# [Application routing add-on](#tab/deploy-app-default)
-- Create a namespace called `hello-web-app-routing` to run the example pods using the `kubectl create namespace` command.
+1. Create the application namespace called `hello-web-app-routing` to run the example pods using the `kubectl create namespace` command.
```bash kubectl create namespace hello-web-app-routing ```
-### Create the deployment
--- Copy the following YAML into a new file named **deployment.yaml** and save the file to your local computer.
+2. Create the deployment by copying the following YAML manifest into a new file named **deployment.yaml** and saving the file to your local computer.
```yaml apiVersion: apps/v1
Application routing uses annotations on Kubernetes ingress objects to create the
value: "Welcome to Azure Kubernetes Service (AKS)" ```
-### Create the service
--- Copy the following YAML into a new file named **service.yaml** and save the file to your local computer.
+3. Create the service by copying the following YAML manifest into a new file named **service.yaml** and saving the file to your local computer.
```yaml apiVersion: v1
Application routing uses annotations on Kubernetes ingress objects to create the
app: aks-helloworld ```
-### Create the ingress
+### Create the Ingress
-The application routing add-on creates an ingress class on the cluster called *webapprouting.kubernetes.azure.com*. When you create an ingress object with this class, it activates the add-on.
+The application routing add-on creates an Ingress class on the cluster named *webapprouting.kubernetes.azure.com*. When you create an Ingress object with this class, it activates the add-on.
-1. Get the certificate URI to use in the ingress from Azure Key Vault using the [`az keyvault certificate show`][az-keyvault-certificate-show] command.
-
- ```azurecli-interactive
- az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
- ```
-
-2. Copy the following YAML into a new file named **ingress.yaml** and save the file to your local computer.
-
- > [!NOTE]
- > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault.
- > The *`secretName`* key in the `tls` section defines the name of the secret that contains the certificate for this Ingress resource. This certificate will be presented in the browser when a client browses to the URL defined in the `<Hostname>` key. Make sure that the value of `secretName` is equal to `keyvault-` followed by the value of the Ingress resource name (from `metadata.name`). In the example YAML, secretName will need to be equal to `keyvault-aks-helloworld`.
+1. Copy the following YAML manifest into a new file named **ingress.yaml** and save the file to your local computer.
```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata:
- annotations:
- kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
name: aks-helloworld namespace: hello-web-app-routing spec:
The application routing add-on creates an ingress class on the cluster called *w
number: 80 path: / pathType: Prefix
- tls:
- - hosts:
- - <Hostname>
- secretName: keyvault-<Ingress resource name>
```
-### Create the resources on the cluster
--- Create the resources on the cluster using the [`kubectl apply`][kubectl-apply] command.
+2. Create the cluster resources using the [`kubectl apply`][kubectl-apply] command.
```bash kubectl apply -f deployment.yaml -n hello-web-app-routing
- kubectl apply -f service.yaml -n hello-web-app-routing
- kubectl apply -f ingress.yaml -n hello-web-app-routing
```
- The following example output shows the created resources:
+ The following example output shows the created resource:
```output deployment.apps/aks-helloworld created
+ ```
+
+ ```bash
+ kubectl apply -f service.yaml -n hello-web-app-routing
+ ```
+
+ The following example output shows the created resource:
+
+ ```output
service/aks-helloworld created
- ingress.networking.k8s.io/aks-helloworld created
```
-# [With Open Service Mesh (OSM)](#tab/with-osm)
+ ```bash
+ kubectl apply -f ingress.yaml -n hello-web-app-routing
+ ```
+
+ The following example output shows the created resource:
+
+ ```output
+ ingress.networking.k8s.io/aks-helloworld created
+ ```
-### Create the application namespace
+# [Open Service Mesh (retired)](#tab/deploy-app-osm)
-1. Create a namespace called `hello-web-app-routing` to run the example pods using the `kubectl create namespace` command.
+1. Create a namespace called `hello-web-app-routing` to run the example pods using the `kubectl create namespace` command.
```bash kubectl create namespace hello-web-app-routing
osm namespace add hello-web-app-routing ```
-### Create the deployment
+3. Create the deployment by copying the following YAML manifest into a new file named **deployment.yaml** and saving the file to your local computer.
-- Copy the following YAML into a new file named **deployment.yaml** and save the file to your local computer.-
- ```yaml
+ ```yml
apiVersion: apps/v1 kind: Deployment metadata:
value: "Welcome to Azure Kubernetes Service (AKS)" ```
-### Create the service
--- Copy the following YAML into a new file named **service.yaml** and save the file to your local computer.
+4. Create the service by copying the following YAML manifest into a new file named **service.yaml** and saving the file to your local computer.
- ```yaml
+ ```yml
apiVersion: v1 kind: Service metadata:
app: aks-helloworld ```
-### Create the ingress
-
-The application routing add-on creates an ingress class on the cluster called *webapprouting.kubernetes.azure.com*. When you create an ingress object with this class, it activates the add-on. The `kubernetes.azure.com/use-osm-mtls: "true"` annotation on the ingress object creates an Open Service Mesh (OSM) [IngressBackend](https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#ingressbackend-api) to configure a backend service to accept ingress traffic from trusted sources.
+### Create the Ingress
-OSM issues a certificate that Nginx uses as the client certificate to proxy HTTPS connections to TLS backends. The client certificate and CA certificate are stored in a Kubernetes secret that Nginx uses to authenticate service mesh back ends. For more information, see [Open Service Mesh: Ingress with Kubernetes Nginx Ingress Controller](https://release-v1-2.docs.openservicemesh.io/docs/demos/ingress_k8s_nginx/).
+The application routing add-on creates an Ingress class on the cluster called *webapprouting.kubernetes.azure.com*. When you create an Ingress object with this class, it activates the add-on. The `kubernetes.azure.com/use-osm-mtls: "true"` annotation on the Ingress object creates an Open Service Mesh (OSM) [IngressBackend][ingress-backend] to configure a backend service to accept Ingress traffic from trusted sources.
-1. Get the certificate URI to use in the ingress from Azure Key Vault using the [`az keyvault certificate show`][az-keyvault-certificate-show] command.
+1. Copy the following YAML manifest into a new file named **ingress.yaml** and save the file to your local computer.
- ```azurecli-interactive
- az keyvault certificate show --vault-name <KeyVaultName> -n <KeyVaultCertificateName> --query "id" --output tsv
- ```
-
-2. Copy the following YAML into a new file named **ingress.yaml** and save the file to your local computer.
-
- > [!NOTE]
- > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault.
- > The *`secretName`* key in the `tls` section defines the name of the secret that contains the certificate for this Ingress resource. This certificate will be presented in the browser when a client browses to the URL defined in the `<Hostname>` key. Make sure that the value of `secretName` is equal to `keyvault-` followed by the value of the Ingress resource name (from `metadata.name`). In the example YAML, secretName will need to be equal to `keyvault-aks-helloworld`.
-
- ```yaml
+ ```yml
apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations:
- kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
kubernetes.azure.com/use-osm-mtls: "true" nginx.ingress.kubernetes.io/backend-protocol: HTTPS nginx.ingress.kubernetes.io/configuration-snippet: |2-- proxy_ssl_name "default.hello-web-app-routing.cluster.local"; nginx.ingress.kubernetes.io/proxy-ssl-secret: kube-system/osm-ingress-client-cert nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
number: 80 path: / pathType: Prefix
- tls:
- - hosts:
- - <Hostname>
- secretName: keyvault-<Ingress resource name>
```
-### Create the resources on the cluster
--- Create the resources on the cluster using the [`kubectl apply`][kubectl-apply] command.
+1. Create the cluster resources using the [`kubectl apply`][kubectl-apply] command.
```bash kubectl apply -f deployment.yaml -n hello-web-app-routing
- kubectl apply -f service.yaml -n hello-web-app-routing
- kubectl apply -f ingress.yaml -n hello-web-app-routing
```
- The following example output shows the created resources:
+ The following example output shows the created resource:
```output deployment.apps/aks-helloworld created
+ ```
+
+ ```bash
+ kubectl apply -f service.yaml -n hello-web-app-routing
+ ```
+
+ The following example output shows the created resource:
+
+ ```output
service/aks-helloworld created
+ ```
+
+ ```bash
+ kubectl apply -f ingress.yaml -n hello-web-app-routing
+ ```
+
+ The following example output shows the created resource:
+
+ ```output
ingress.networking.k8s.io/aks-helloworld created ```
-# [With service annotations (retired)](#tab/service-annotations)
+# [Service annotations (retired)](#tab/deploy-app-service-annotations)
> [!WARNING]
-> Configuring ingresses by adding annotations on the Service object is retired. Please consider [configuring via an Ingress object](?tabs=without-osm).
+> Configuring Ingresses by adding annotations on the Service object is retired. Please consider [configuring using an Ingress object](?tabs=default).
-### Create the application namespace
+### Create application namespace
-- Create a namespace called `hello-web-app-routing` to run the example pods using the `kubectl create namespace` command.
+1. Create a namespace called `hello-web-app-routing` to run the example pods using the `kubectl create namespace` command.
```bash kubectl create namespace hello-web-app-routing ```
-### Create the deployment
+2. Add the application namespace to the OSM control plane using the `osm namespace add` command.
-- Copy the following YAML into a new file named **deployment.yaml** and save the file to your local computer.
+ ```bash
+ osm namespace add hello-web-app-routing
+ ```
- ```yaml
+3. Create the deployment by copying the following YAML manifest into a new file named **deployment.yaml** and saving the file to your local computer.
+
+ ```yml
apiVersion: apps/v1 kind: Deployment metadata:
value: "Welcome to Azure Kubernetes Service (AKS)" ```
-### Create the service with the annotations (retired)
--- Copy the following YAML into a new file named **service.yaml** and save the file to your local computer.-
- > [!NOTE]
- > Update *`<Hostname>`* with your DNS host name and *`<KeyVaultCertificateUri>`* with the ID returned from Azure Key Vault. This certificate will be presented in the browser.
+4. Create the service by copying the following YAML manifest into a new file named **service.yaml** and saving the file to your local computer.
- ```yaml
+ ```yml
apiVersion: v1 kind: Service metadata: name: aks-helloworld namespace: hello-web-app-routing
- annotations:
- kubernetes.azure.com/ingress-host: <Hostname>
- kubernetes.azure.com/tls-cert-keyvault-uri: <KeyVaultCertificateUri>
spec: type: ClusterIP ports:
app: aks-helloworld ```
-### Create the resources on the cluster
--- Create the resources on the cluster using the [`kubectl apply`][kubectl-apply] command.
+5. Create the cluster resources using the [`kubectl apply`][kubectl-apply] command.
```bash kubectl apply -f deployment.yaml -n hello-web-app-routing
- kubectl apply -f service.yaml -n hello-web-app-routing
```
- The following example output shows the created resources:
+ The following example output shows the created resource:
```output deployment.apps/aks-helloworld created
- service/aks-helloworld created
``` --
-## Verify the managed ingress was created
--- Verify the managed ingress was created using the `kubectl get ingress` command.-
- ```bash
- kubectl get ingress -n hello-web-app-routing
+ ```bash
+ kubectl apply -f service.yaml -n hello-web-app-routing
```
- The following example output shows the created managed ingress:
+ The following example output shows the created resource:
```output
- NAME CLASS HOSTS ADDRESS PORTS AGE
- aks-helloworld webapprouting.kubernetes.azure.com myapp.contoso.com 20.51.92.19 80, 443 4m
+ service/aks-helloworld created
```
-## Access the endpoint over a DNS hostname
++
+## Verify the managed Ingress was created
-If you haven't configured Azure DNS integration, you need to configure your own DNS provider with an `A` record pointing to the ingress IP address and the host name you configured for the ingress, for example *myapp.contoso.com*.
+You can verify the managed Ingress was created using the `kubectl get ingress` command.
+
+```bash
+kubectl get ingress -n hello-web-app-routing
+```
+
+The following example output shows the created managed Ingress:
+
+```output
+NAME CLASS HOSTS ADDRESS PORTS AGE
+aks-helloworld webapprouting.kubernetes.azure.com myapp.contoso.com 20.51.92.19 80, 443 4m
+```
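Once the ADDRESS column is populated, you can spot-check the endpoint before DNS propagates. The following is a sketch that reuses the host name and IP from the example output above; substitute your own values.

```bash
# Map the example host name to the Ingress IP locally, then request the site over HTTPS.
# -k skips certificate verification, which is useful while testing with a self-signed certificate.
curl -k --resolve myapp.contoso.com:443:20.51.92.19 https://myapp.contoso.com
```

The `--resolve HOST:PORT:ADDRESS` option lets you test the Ingress before creating the DNS record.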
## Remove the application routing add-on
-1. Remove the associated namespace using the `kubectl delete namespace` command.
+To remove the associated namespace, use the `kubectl delete namespace` command.
- ```bash
- kubectl delete namespace hello-web-app-routing
- ```
+```bash
+kubectl delete namespace hello-web-app-routing
+```
-2. Remove the application routing add-on from your cluster using the [`az aks disable-addons`][az-aks-disable-addons] command.
+To remove the application routing add-on from your cluster, use the [`az aks disable-addons`][az-aks-disable-addons] command.
- ```azurecli-interactive
- az aks disable-addons --addons web_application_routing --name myAKSCluster --resource-group myResourceGroup
- ```
+```azurecli-interactive
+az aks disable-addons --addons web_application_routing --name myAKSCluster --resource-group myResourceGroup
+```
When the application routing add-on is disabled, some Kubernetes resources might remain in the cluster. These resources include *configMaps* and *secrets* and are created in the *app-routing-system* namespace. You can remove these resources if you want.
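If you want to remove those leftovers, a minimal sketch follows, assuming the resources live in the *app-routing-system* namespace as noted above; review the listed resources before deleting anything.

```bash
# Inspect what the add-on left behind
kubectl get configmaps,secrets -n app-routing-system

# Delete the namespace along with everything in it
kubectl delete namespace app-routing-system
```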
+## Next steps
+
+* [Configure custom ingress configurations][custom-ingress-configurations] shows how to create Ingresses with a private load balancer, configure SSL certificate integration with Azure Key Vault, and manage DNS with Azure DNS.
+
+* Learn about monitoring the ingress-nginx controller metrics included with the application routing add-on [with Prometheus in Grafana][prometheus-in-grafana] (preview) as part of analyzing the performance and usage of your application.
+ <!-- LINKS - internal -->
-[az-aks-create]: /cli/azure/aks#az-aks-create
-[az-aks-show]: /cli/azure/aks#az-aks-show
+[azure-dns-overview]: ../dns/dns-overview.md
[az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons [az-aks-disable-addons]: /cli/azure/aks#az-aks-disable-addons [az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials
-[az-extension-add]: /cli/azure/extension#az-extension-add
-[az-extension-update]: /cli/azure/extension#az-extension-update
[install-azure-cli]: /cli/azure/install-azure-cli
-[az-keyvault-create]: /cli/azure/keyvault#az_keyvault_create
-[az-keyvault-certificate-import]: /cli/azure/keyvault/certificate#az_keyvault_certificate_import
-[az-keyvault-certificate-show]: /cli/azure/keyvault/certificate#az_keyvault_certificate_show
-[az-network-dns-zone-create]: /cli/azure/network/dns/zone#az_network_dns_zone_create
-[az-network-dns-zone-show]: /cli/azure/network/dns/zone#az_network_dns_zone_show
-[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
-[az-aks-addon-update]: /cli/azure/aks/addon#az_aks_addon_update
-[az-keyvault-set-policy]: /cli/azure/keyvault#az_keyvault_set_policy
+[custom-ingress-configurations]: app-routing-configuration.md
+[az-aks-create]: /cli/azure/aks#az-aks-create
+[prometheus-in-grafana]: app-routing-nginx-prometheus.md
<!-- LINKS - external -->
-[osm-release]: https://github.com/openservicemesh/osm/releases/
-[nginx]: https://kubernetes.github.io/ingress-nginx/
-[external-dns]: https://github.com/kubernetes-incubator/external-dns
+[kubernetes-ingress-object-overview]: https://kubernetes.io/docs/concepts/services-networking/ingress/
+[osm-release]: https://github.com/openservicemesh/osm
+[open-service-mesh-docs]: https://release-v1-2.docs.openservicemesh.io/
+[kubernetes-nginx-ingress]: https://kubernetes.github.io/ingress-nginx/
[kubectl]: https://kubernetes.io/docs/reference/kubectl/ [kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[ingress-backend]: https://release-v1-2.docs.openservicemesh.io/docs/guides/traffic_management/ingress/#ingressbackend-api
aks Open Ai Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-ai-quickstart.md
The [AKS Store application][aks-store-demo] manifest includes the following Kube
- **Rabbit MQ**: Message queue for an order queue. > [!NOTE]
-> We don't recommend running stateful containers, such as MongoDB and Rabbit MQ, without persistent storage for production. We use them here here for simplicity, but we recommend using managed services, such as Azure CosmosDB or Azure Service Bus.
+> We don't recommend running stateful containers, such as MongoDB and Rabbit MQ, without persistent storage for production. We use them here for simplicity, but we recommend using managed services, such as Azure CosmosDB or Azure Service Bus.
1. Review the [YAML manifest](https://github.com/Azure-Samples/aks-store-demo/blob/main/aks-store-all-in-one.yaml) for the application. 2. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
aks Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md
Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
analysis-services Move Between Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/analysis-services/move-between-regions.md
Before moving a server to a different region, it's recommended you create a deta
> Azure regions use different IP address ranges. If you have firewall exceptions configured for the region your server and/or storage account is in, it may be necessary to configure a different IP address range. To learn more, see [Frequently asked questions about Analysis Services network connectivity](analysis-services-network-faq.yml). > [!NOTE]
-> This article describes restoring a database backup to a target server from a storage container in the source server's region. In some cases, restoring backups from a different region can have poor performance, especially for large databases. For the best performance during database restore, migrate or create a a new storage container in the target server region. Copy the .abf backup files from the source region storage container to the target region storage container prior to restoring the database to the target server. While out of scope for this article, in some cases, particularly with very large databases, scripting out a database from your source server, recreating, and then processing on the target server to load database data may be more cost effective than using backup/restore.
+> This article describes restoring a database backup to a target server from a storage container in the source server's region. In some cases, restoring backups from a different region can have poor performance, especially for large databases. For the best performance during database restore, migrate or create a new storage container in the target server region. Copy the .abf backup files from the source region storage container to the target region storage container prior to restoring the database to the target server. While out of scope for this article, in some cases, particularly with very large databases, scripting out a database from your source server, recreating, and then processing on the target server to load database data may be more cost effective than using backup/restore.
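Copying the .abf files between regions can be done with any storage tool; the following is a hedged sketch using AzCopy, where the account names, container name, and SAS tokens are placeholders you must replace.

```bash
# Copy only the .abf backup files from the source region container to the target region container.
# <source-account>, <target-account>, <container>, and <SAS> are placeholders.
azcopy copy "https://<source-account>.blob.core.windows.net/<container>?<SAS>" \
            "https://<target-account>.blob.core.windows.net/<container>?<SAS>" \
            --recursive --include-pattern "*.abf"
```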
> [!NOTE] > If using an On-premises data gateway to connect to data sources, you must also move the gateway resource to the target server region. To learn more, see [Install and configure an on-premises data gateway](analysis-services-gateway-install.md).
api-management Api Management Howto Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-cache.md
APIs and operations in API Management can be configured with response caching. Response caching can significantly reduce latency for API callers and backend load for API providers. > [!IMPORTANT]
-> Built-in cache is volatile and is shared by all units in the same region in the same API Management service. Regardless of the cache type being used (internal or external), if the cache-related operations fail to connect to the cache due to the volatility of the cache or any other reason, the API call that uses the cache related operation doesn't raise an error, and the cache operation completes successfully. In the case of a read operation, a null value is returned to the calling policy expression. Your policy code should be designed to ensure that that there's a "fallback" mechanism to retrieve data not found in the cache.
+> Built-in cache is volatile and is shared by all units in the same region in the same API Management service. Regardless of the cache type being used (internal or external), if the cache-related operations fail to connect to the cache due to the volatility of the cache or any other reason, the API call that uses the cache related operation doesn't raise an error, and the cache operation completes successfully. In the case of a read operation, a null value is returned to the calling policy expression. Your policy code should be designed to ensure that there's a "fallback" mechanism to retrieve data not found in the cache.
For more detailed information about caching, see [API Management caching policies](api-management-caching-policies.md) and [Custom caching in Azure API Management](api-management-sample-cache-by-key.md). ![cache policies](media/api-management-howto-cache/cache-policies.png)
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
GET https://management.azure.com/subscriptions/<subscription-id>/resourceGroups/
### IP addresses for outbound traffic
-API Management uses a public IP address for a connection outside the VNet or a peered VNet and a private IP address for a connection in the VNet or a peered VNet.
+API Management uses a public IP address for a connection outside the VNet or a peered VNet, and it uses a private IP address for a connection in the VNet or a peered VNet.
* When API management is deployed in an external or internal virtual network and API management connects to private (intranet-facing) backends, internal IP addresses (dynamic IP, or DIP addresses) from the subnet are used for the runtime API traffic. When a request is sent from API Management to a private backend, a private IP address will be visible as the origin of the request.
api-management Api Management Howto Oauth2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-oauth2.md
Optionally:
1. [Republish](api-management-howto-developer-portal-customize.md#publish) the developer portal. > [!IMPORTANT]
- > When making OAuth 2.0-related changes, be sure to to republish the developer portal after every modification as relevant changes (for example, scope change) otherwise cannot propagate into the portal and subsequently be used in trying out the APIs.
+ > When making OAuth 2.0-related changes, be sure to republish the developer portal after every modification; otherwise, relevant changes (for example, a scope change) can't propagate into the portal and be used when trying out the APIs.
## Configure an API to use OAuth 2.0 user authorization
api-management Configure Graphql Resolver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/configure-graphql-resolver.md
The `context` variable that is passed through the request and response pipeline
### context.GraphQL.parent
-The `context.ParentResult` is set to the parent object for the current resolver execution. Consider the following partial schema:
+The `context.GraphQL.parent` is set to the parent object for the current resolver execution. Consider the following partial schema:
``` graphql type Comment {
api-management Migrate Stv1 To Stv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/migrate-stv1-to-stv2.md
Run the following Azure CLI commands, setting variables where indicated with the
```azurecli
+#!/bin/bash
# Verify currently selected subscription az account show
api-management Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md
Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
app-service Configure Authentication Api Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-api-version.md
If your existing configuration contains a Microsoft Account provider and doesn't
1. Add a new URI that matches the one you just copied, except instead have it end in `/.auth/login/aad/callback`. This will allow the registration to be used by the App Service Authentication / Authorization configuration. 1. Navigate to the App Service Authentication / Authorization configuration for your app. 1. Collect the configuration for the Microsoft Account provider.
-1. Configure the Microsoft Entra provider using the "Advanced" management mode, supplying the client ID and client secret values you collected in the previous step. For the Issuer URL, use Use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with your **Directory (tenant) ID**.
+1. Configure the Microsoft Entra provider using the "Advanced" management mode, supplying the client ID and client secret values you collected in the previous step. For the Issuer URL, use `<authentication-endpoint>/<tenant-id>/v2.0`, and replace *\<authentication-endpoint>* with the [authentication endpoint for your cloud environment](../active-directory/develop/authentication-national-cloud.md#azure-ad-authentication-endpoints) (e.g., "https://login.microsoftonline.com" for global Azure), also replacing *\<tenant-id>* with your **Directory (tenant) ID**.
1. Once you've saved the configuration, test the login flow by navigating in your browser to the `/.auth/login/aad` endpoint on your site and complete the sign-in flow. 1. At this point, you've successfully copied the configuration over, but the existing Microsoft Account provider configuration remains. Before you remove it, make sure that all parts of your app reference the Microsoft Entra provider through login links, etc. Verify that all parts of your app work as expected. 1. Once you've validated that things work against the Microsoft Entra provider, you may remove the Microsoft Account provider configuration.
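The Issuer URL format described above is simple string assembly; the following sketch shows it with the global Azure endpoint and a placeholder tenant ID.

```bash
# Assemble the v2.0 Issuer URL from the cloud's authentication endpoint and the tenant ID.
AUTH_ENDPOINT="https://login.microsoftonline.com"    # global Azure cloud
TENANT_ID="00000000-0000-0000-0000-000000000000"     # placeholder Directory (tenant) ID
ISSUER_URL="${AUTH_ENDPOINT}/${TENANT_ID}/v2.0"
echo "$ISSUER_URL"
```

For a national cloud, swap `AUTH_ENDPOINT` for that cloud's authentication endpoint.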
app-service Configure Basic Auth Disable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-basic-auth-disable.md
To confirm that the logs are shipped to your selected service(s), try logging in
<pre> {
- "time": "2020-07-16T17:42:32.9322528Z",
- "ResourceId": "/SUBSCRIPTIONS/EF90E930-9D7F-4A60-8A99-748E0EEA69DE/RESOURCEGROUPS/FREEBERGDEMO/PROVIDERS/MICROSOFT.WEB/SITES/FREEBERG-WINDOWS",
+ "time": "2023-10-16T17:42:32.9322528Z",
+ "ResourceId": "/SUBSCRIPTIONS/EF90E930-9D7F-4A60-8A99-748E0EEA69DE/RESOURCEGROUPS/MYRESOURCEGROUP/PROVIDERS/MICROSOFT.WEB/SITES/MY-DEMO-APP",
"Category": "AppServiceAuditLogs", "OperationName": "Authorization", "Properties": {
- "User": "$freeberg-windows",
- "UserDisplayName": "$freeberg-windows",
+ "User": "$my-demo-app",
+ "UserDisplayName": "$my-demo-app",
"UserAddress": "24.19.191.170", "Protocol": "FTP" }
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
Previously updated : 8/24/2023 Last updated : 11/02/2023 zone_pivot_groups: app-service-containers-code
app-service Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md
Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
app-service Tutorial Dotnetcore Sqldb App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-dotnetcore-sqldb-app.md
In this step, you'll configure GitHub deployment using GitHub Actions. It's just
## 4. Generate database schema
-With the SQL Database protected by the virtual network, the easiest way to run Run [dotnet database migrations](/ef/core/managing-schemas/migrations/?tabs=dotnet-core-cli) is in an SSH session with the App Service container.
+With the SQL Database protected by the virtual network, the easiest way to run [dotnet database migrations](/ef/core/managing-schemas/migrations/?tabs=dotnet-core-cli) is in an SSH session with the App Service container.
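Inside that SSH session, the migration step might look like the following sketch, assuming the EF Core CLI tool isn't preinstalled in the container and the connection string is already configured as an app setting.

```bash
# Install the EF Core command-line tool (not preinstalled in the container)
dotnet tool install --global dotnet-ef

# Apply any pending migrations using the app's configured connection string
dotnet ef database update
```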
:::row::: :::column span="2":::
app-service Tutorial Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-send-email.md
Deploy an app with the language framework of your choice to App Service. To foll
1. In the search box, search for **response**, then select the **Response** action.
- ![Screenshot that shows the the search bar and Response action highlighted.](./media/tutorial-send-email/choose-response-action.png)
+ ![Screenshot that shows the search bar and Response action highlighted.](./media/tutorial-send-email/choose-response-action.png)
By default, the response action sends an HTTP 200. That's good enough for this tutorial. For more information, see the [HTTP request/response reference](../connectors/connectors-native-reqres.md).
application-gateway Application Gateway Autoscaling Zone Redundant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-autoscaling-zone-redundant.md
Previously updated : 03/01/2022 Last updated : 11/02/2023
Application Gateway and WAF can be configured to scale in two modes: -- **Autoscaling** - With autoscaling enabled, the Application Gateway and WAF v2 SKUs scale out or in based on application traffic requirements. This mode offers better elasticity to your application and eliminates the need to guess the application gateway size or instance count. This mode also allows you to save cost by not requiring the gateway to run at peak-provisioned capacity for expected maximum traffic load. You must specify a minimum and optionally maximum instance count. Minimum capacity ensures that Application Gateway and WAF v2 don't fall below the minimum instance count specified, even without traffic. Each instance is roughly equivalent to 10 more reserved Capacity Units. Zero signifies no reserved capacity and is purely autoscaling in nature. You can also optionally specify a maximum instance count, which ensures that the Application Gateway doesn't scale beyond the specified number of instances. You'll only be billed for the amount of traffic served by the Gateway. The instance counts can range from 0 to 125. The default value for maximum instance count is 10 if not specified.-- **Manual** - You can also choose Manual mode where the gateway won't autoscale. In this mode, if there's more traffic than what Application Gateway or WAF can handle, it could result in traffic loss. With manual mode, specifying instance count is mandatory. Instance count can vary from 1 to 125 instances.
+- **Autoscaling** - With autoscaling enabled, the Application Gateway and WAF v2 SKUs scale out or in based on application traffic requirements. This mode offers better elasticity to your application and eliminates the need to guess the application gateway size or instance count. This mode also allows you to save cost by not requiring the gateway to run at peak-provisioned capacity for expected maximum traffic load. You must specify a minimum and optionally maximum instance count. Minimum capacity ensures that Application Gateway and WAF v2 don't fall below the minimum instance count specified, even without traffic. Each instance is roughly equivalent to 10 more reserved Capacity Units. Zero signifies no reserved capacity and is purely autoscaling in nature. You can also optionally specify a maximum instance count, which ensures that the Application Gateway doesn't scale beyond the specified number of instances. You are only billed for the amount of traffic served by the Gateway. The instance counts can range from 0 to 125. The default value for maximum instance count is 10 if not specified.
+- **Manual** - You can also choose Manual mode where the gateway doesn't autoscale. In this mode, if there's more traffic than what Application Gateway or WAF can handle, it could result in traffic loss. With manual mode, specifying instance count is mandatory. Instance count can vary from 1 to 125 instances.
## Autoscaling and High Availability
-Azure Application Gateways are always deployed in a highly available fashion. The service is made out of multiple instances that are created as configured (if autoscaling is disabled) or required by the application load (if autoscaling is enabled). Note that from the user's perspective you don't necessarily have visibility into the individual instances, but just into the Application Gateway service as a whole. If a certain instance has a problem and stops being functional, Azure Application Gateway will transparently create a new instance.
+Azure Application Gateways are always deployed in a highly available fashion. The service is made up of multiple instances that are created as configured if autoscaling is disabled, or required by the application load if autoscaling is enabled. From the user's perspective, you don't necessarily have visibility into the individual instances, but just into the Application Gateway service as a whole. If a certain instance has a problem and stops being functional, Azure Application Gateway transparently creates a new instance.
-Even if you configure autoscaling with zero minimum instances the service will still be highly available, which is always included with the fixed price.
+Even if you configure autoscaling with zero minimum instances, the service is still highly available, which is always included with the fixed price.
-However, creating a new instance can take some time (around six or seven minutes). If you don't want to have this downtime, you can configure a minimum instance count of two, ideally with Availability Zone support. This way you'll have at least two instances in your Azure Application Gateway under normal circumstances. So if one of them had a problem the other will try to handle the traffic while a new instance is being created. An Azure Application Gateway instance can support around 10 Capacity Units, so depending on how much traffic you typically have you might want to configure your minimum instance autoscaling setting to a value higher than two.
+However, creating a new instance can take around six or seven minutes. If you don't want this downtime, you can configure a minimum instance count of two, ideally with Availability Zone support. This way you have at least two instances in your Azure Application Gateway under normal circumstances, so if one of them has a problem, the other tries to handle the traffic while a new instance is being created. An Azure Application Gateway instance can support around 10 Capacity Units. Depending on how much traffic you typically have, you might want to configure your minimum instance autoscaling setting to a value higher than two.
-For scale-in events, Application Gateway will drain existing connections for 5 minutes on the instance that is subject for removal. After 5 minutes, existing connections will be closed and the instance removed. Any new connections during or after the 5 minute scale-in time will be established to other existing instances on the same gateway.
+For scale-in events, Application Gateway drains existing connections for 5 minutes on the instance that is subject to removal. After 5 minutes, existing connections are closed and the instance is removed. Any new connections during or after the 5-minute scale-in time are established to other existing instances on the same gateway.
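The autoscaling mode described above is set through the gateway's autoscale configuration. As a rough sketch with Azure PowerShell (resource names here are placeholders, not values from this article):

```azurepowershell
# Hypothetical resource names; adjust for your environment.
$gw = Get-AzApplicationGateway -Name "<AppGwName>" -ResourceGroupName "<ResourceGroupName>"

# Autoscaling mode: keep at least 2 instances warm and cap scale-out at 10.
$gw = Set-AzApplicationGatewayAutoscaleConfiguration -ApplicationGateway $gw -MinCapacity 2 -MaxCapacity 10

# Commit the change to the gateway.
$gw = Set-AzApplicationGateway -ApplicationGateway $gw
```

A minimum of 2 avoids the six-to-seven-minute wait for a new instance discussed above; omit `-MaxCapacity` to use the default maximum of 10.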
## Next steps
+- Learn how to [Schedule autoscaling for Application Gateway](application-gateway-externally-managed-scheduled-autoscaling.md)
- Learn more about [Application Gateway v2](overview-v2.md) - [Create an autoscaling, zone redundant application gateway with a reserved virtual IP address using Azure PowerShell](tutorial-autoscale-ps.md)
application-gateway Application Gateway Externally Managed Scheduled Autoscaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-externally-managed-scheduled-autoscaling.md
++
+ Title: Externally managed scheduled autoscaling for Application Gateway v2
+description: This article introduces scheduled autoscaling for the Azure Application Gateway Standard_v2 and WAF_v2 SKUs.
++++ Last updated : 10/30/2023+++
+# Schedule autoscaling for Application Gateway v2
+
+## Overview
+
+If you have predictable daily traffic patterns and a reliable estimate of the required capacity for Application Gateway, you might want to preschedule the minimum capacity to better align with traffic demands.
+
+While autoscaling is commonly used, it's important to note that Application Gateway doesn't currently support prescheduled capacity adjustments natively.
+
+The goal is to use Azure Automation to create a schedule for running runbooks that adjust the minimum autoscaling capacity of Application Gateway to meet traffic demands.
+
+## Set up scheduled autoscaling
+
+To implement scheduled autoscaling:
+1. Create an Azure Automation account resource in the same tenant as the Application Gateway.
+2. Note the system assigned managed identity of the Azure Automation account.
+3. Create PowerShell runbooks for increasing and decreasing min autoscaling capacity for the Application Gateway resource.
+4. Create the schedules during which the runbooks need to be implemented.
+5. Associate the runbooks with their respective schedules.
+6. Associate the system assigned managed identity noted in step 2 with the Application Gateway resource.
+
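Step 6 amounts to an Azure role assignment on the gateway. A sketch under assumed names (the principal ID comes from the Automation account's system-assigned identity; a narrower built-in role may suffice instead of Contributor):

```azurepowershell
# Hypothetical names; grant the Automation account's system-assigned
# managed identity rights on the Application Gateway resource (step 6).
$appGw = Get-AzApplicationGateway -Name "<AppGwName>" -ResourceGroupName "<ResourceGroupName>"
New-AzRoleAssignment -ObjectId "<AutomationAccountPrincipalId>" `
    -RoleDefinitionName "Contributor" -Scope $appGw.Id
```

Without this assignment, the runbooks' `Connect-AzAccount -Identity` context can't modify the gateway.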
+## Configure automation
+
+Suppose the requirement is to increase the min count to 4 during business hours and to decrease it to 2 during nonbusiness hours.
+
+Two runbooks are created:
+- IncreaseMin - Sets the min count of the autoscaling configuration to 4
+- DecreaseMin - Sets the min count of the autoscaling configuration to 2
+
+Use the following PowerShell runbook to adjust capacity:
+
+```azurepowershell
+# Get the context of the managed identity
+$context = (Connect-AzAccount -Identity).Context
+# Import the Az module
+Import-Module Az
+# Adjust the min count of your Application Gateway
+$gw = Get-AzApplicationGateway -Name "<AppGwName>" -ResourceGroupName "<ResourceGroupName>"
+$gw = Set-AzApplicationGatewayAutoscaleConfiguration -ApplicationGateway $gw -MinCapacity <NumberOfRequiredInstances>
+$gw = Set-AzApplicationGateway -ApplicationGateway $gw
+```
+
+Next, create the following two schedules:
+
+- WeekdayMorning - Run the IncreaseMin runbook from Mon-Fri at 5:00AM PST
+- WeekdayEvening - Run the DecreaseMin runbook from Mon-Fri at 9:00PM PST
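The schedules and their runbook links can also be created with Az.Automation cmdlets. A sketch under assumed names:

```azurepowershell
# Hypothetical names; create the weekday morning schedule and attach its runbook.
$rg = "<ResourceGroupName>"
$aa = "<AutomationAccountName>"

New-AzAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $aa `
    -Name "WeekdayMorning" -StartTime (Get-Date "05:00").AddDays(1) `
    -DaysOfWeek Monday, Tuesday, Wednesday, Thursday, Friday -WeekInterval 1 `
    -TimeZone "America/Los_Angeles"

Register-AzAutomationScheduledRunbook -ResourceGroupName $rg -AutomationAccountName $aa `
    -RunbookName "IncreaseMin" -ScheduleName "WeekdayMorning"

# Repeat with a 9:00 PM start time for WeekdayEvening and the DecreaseMin runbook.
```

Note that `-StartTime` must be in the future, hence the `.AddDays(1)`; the schedule then recurs weekly on the listed days.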
+
+## FAQs
+
+- What is the SLA for timely job executions?
+
+ Azure Automation has an SLA of 99.9% for a timely start of jobs.
+
+- What happens if jobs are interrupted during execution?
+
+ - If the job has already sent the request to Application Gateway before being interrupted, the request goes through.
+ - If the job is interrupted before sending the request to Application Gateway, then one of the scenarios described in the next section applies.
+
+- What happens if job tasks don't occur?
+
+ | Absent job | Impact |
+ | | |
 |IncreaseMin | Falls back on native autoscaling. The next run of DecreaseMin should be a no-op because the count doesn't need to be adjusted. |
 |DecreaseMin | Additional cost to the customer for the (unintended) capacity that is provisioned for those hours. The next run of IncreaseMin should be a no-op because the count doesn't need to be adjusted. |
+
+> [!NOTE]
+> Send email to agschedule-autoscale@microsoft.com if you have questions or need help to set up managed and scheduled autoscale for your deployments.
+
+## Next steps
+
+* Learn more about [Scaling Application Gateway v2 and WAF v2](application-gateway-autoscaling-zone-redundant.md)
+* Learn more about [Monitoring Azure Automation runbooks with metric alerts](../automation/automation-alert-metric.md)
+* Learn more about [Azure Automation](../automation/overview.md)
application-gateway Http Response Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/http-response-codes.md
An HTTP 404 response can be returned if a request is sent to an application gate
#### 408 - Request Timeout
-An HTTP 408 response can be observed when client requests to the frontend listener of application gateway don't respond back within 60 seconds. This error can be observed due to traffic congestion between on-premises networks and Azure, when virtual appliance inspects the traffic traffic, or the client itself becomes overwhelmed.
+An HTTP 408 response can be observed when client requests to the frontend listener of the application gateway aren't completed within 60 seconds. This error can occur due to traffic congestion between on-premises networks and Azure, when a virtual appliance inspects the traffic, or when the client itself becomes overwhelmed.
#### 413 - Request Entity Too Large
attestation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md
Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
Use one of the following mechanisms to access the enabled feature:
* From your Automation account, select **Start/Stop VM** under **Related Resources**. On the Start/Stop VM page, select **Manage the solution** under **Manage Start/Stop VM Solutions**.
-* Navigate to the Log Analytics workspace linked to your Automation account. After after selecting the workspace, choose **Solutions** from the left pane. On the Solutions page, select **Start-Stop-VM[workspace]** from the list.
+* Navigate to the Log Analytics workspace linked to your Automation account. After selecting the workspace, choose **Solutions** from the left pane. On the Solutions page, select **Start-Stop-VM[workspace]** from the list.
Selecting the feature displays the **Start-Stop-VM[workspace]** page. Here you can review important details, such as the information in the **StartStopVM** tile. As in your Log Analytics workspace, this tile displays a count and a graphical representation of the runbook jobs for the feature that have started and have finished successfully.
automation Guidance Migration Log Analytics Monitoring Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md
Previously updated : 09/14/2023 Last updated : 11/03/2023
Follow these steps to migrate using scripts.
#### Migration guidance
-1. Install the script to run to conduct migrations.
+1. Install the script and run it to conduct migrations.
1. Ensure that the new workspace resource ID is different to the one with which it's associated to in the Change Tracking and Inventory using the LA version. 1. Migrate settings for the following data types: - Windows Services
To obtain the Log Analytics Workspace resource ID, follow these steps:
### [Using PowerShell script](#tab/limit-policy) 1. For File Content changes-based settings, you have to migrate manually from LA version to AMA version of Change Tracking & Inventory. Follow the guidance listed in [Track file contents](manage-change-tracking.md#track-file-contents).
+1. Migration via the portal isn't currently supported for any VM with more than 100 file/registry settings.
1. Alerts that you configure using the Log Analytics Workspace must be [manually configured](configure-alerts.md).
automation Dsc Linux Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/dsc-linux-powershell.md
Register the Azure Linux VM as a Desired State Configuration (DSC) node for the
These commands obtain the Automation account's primary access key and URL and concatenate them into the registration command. Ensure you remove any carriage returns from the output. This command is used in a later step.
-1. Connect to your Azure Linux VM. If you used a password, you can use the syntax below. If you used a public-private key pair, see [SSH on Linux](./../virtual-machines/linux/mac-create-ssh-keys.md) for detailed steps. The other commands retrieve information about what packages can be installed, including what updates to currently installed packages packages are available, and installs Python.
+1. Connect to your Azure Linux VM. If you used a password, you can use the syntax below. If you used a public-private key pair, see [SSH on Linux](./../virtual-machines/linux/mac-create-ssh-keys.md) for detailed steps. The other commands retrieve information about what packages can be installed, including what updates to currently installed packages are available, and installs Python.
```cmd ssh user@IP
The output should look similar as shown below:
:::image type="content" source="media/dsc-linux-powershell/get-azautomationdscnodereport-output.png" alt-text="Output from Get-AzAutomationDscNodeReport command.":::
-The first report may not be available immediately and may take up to 30 minutes after you enable a node. For more information about report data, see see [Using a DSC report server](/powershell/dsc/pull-server/reportserver).
+The first report may not be available immediately and may take up to 30 minutes after you enable a node. For more information about report data, see [Using a DSC report server](/powershell/dsc/pull-server/reportserver).
## Clean up resources
automation Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md
Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
automation Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md
description: Significant updates to Azure Automation updated each month.
Previously updated : 10/03/2023 Last updated : 10/27/2023
This page is updated monthly, so revisit it regularly. If you're looking for ite
## October 2023
+### General Availability: Automation extension for Visual Studio Code
+
+ Azure Automation now provides an advanced editing experience for PowerShell and Python scripts along with [runbook management operations](how-to/runbook-authoring-extension-for-vscode.md). For more information, see [Key features and limitations](automation-runbook-authoring.md).
++ ### General Availability: Change Tracking using Azure Monitoring Agent Azure Automation announces General Availability of Change Tracking using Azure Monitoring Agent. [Learn more](change-tracking/guidance-migration-log-analytics-monitoring-agent.md). + ### Retirement of Run As accounts **Type: Retirement**
azure-app-configuration Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md
Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
azure-arc Automated Integration Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/automated-integration-testing.md
There are two files that need to be generated to localize the launcher to run in
* `patch.json`: fill out from `patch.json.tmpl` > [!TIP]
-> The `.test.env` is a single set of of environment variables that drives the launcher's behavior. Generating it with care for a given environment will ensure reproducibility of the launcher's behavior.
+> The `.test.env` is a single set of environment variables that drives the launcher's behavior. Generating it with care for a given environment will ensure reproducibility of the launcher's behavior.
### Config 1: `.test.env`
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md
Azure Arc-enabled data services provide you the option to connect to Azure in tw
- Directly connected - Indirectly connected
-The connectivity mode provides you the flexibility to choose how much data is sent to Azure and how users interact with the Arc Data Controller. Depending on the connectivity mode that is chosen, some functionality of Azure Arc-enabled data services may or may not be available.
+The connectivity mode provides you the flexibility to choose how much data is sent to Azure and how users interact with the Arc Data Controller. Depending on the connectivity mode that is chosen, some functionality of Azure Arc-enabled data services might or might not be available.
-Importantly, if the Azure Arc-enabled data services are directly connected to Azure, then users can use [Azure Resource Manager APIs](/rest/api/resources/), the Azure CLI, and the Azure portal to operate the Azure Arc data services. The experience in directly connected mode is much like how you would use any other Azure service with provisioning/de-provisioning, scaling, configuring, and so on all in the Azure portal. If the Azure Arc-enabled data services are indirectly connected to Azure, then the Azure portal is a read-only view. You can see the inventory of SQL managed instances and PostgreSQL servers that you have deployed and the details about them, but you cannot take action on them in the Azure portal. In the indirectly connected mode, all actions must be taken locally using Azure Data Studio, the appropriate CLI, or Kubernetes native tools like kubectl.
+Importantly, if the Azure Arc-enabled data services are directly connected to Azure, then users can use [Azure Resource Manager APIs](/rest/api/resources/), the Azure CLI, and the Azure portal to operate the Azure Arc data services. The experience in directly connected mode is much like how you would use any other Azure service with provisioning/de-provisioning, scaling, configuring, and so on, all in the Azure portal. If the Azure Arc-enabled data services are indirectly connected to Azure, then the Azure portal is a read-only view. You can see the inventory of SQL managed instances and PostgreSQL servers that you have deployed and the details about them, but you can't take action on them in the Azure portal. In the indirectly connected mode, all actions must be taken locally using Azure Data Studio, the appropriate CLI, or Kubernetes native tools like kubectl.
-Additionally, Microsoft Entra ID and Azure Role-Based Access Control can be used in the directly connected mode only because there is a dependency on a continuous and direct connection to Azure to provide this functionality.
+Additionally, Microsoft Entra ID and Azure Role-Based Access Control can be used in the directly connected mode only because there's a dependency on a continuous and direct connection to Azure to provide this functionality.
Some Azure-attached services are only available when they can be directly reached such as Container Insights, and backup to blob storage. ||**Indirectly connected**|**Directly connected**|**Never connected**| |||||
-|**Description**|Indirectly connected mode offers most of the management services locally in your environment with no direct connection to Azure. A minimal amount of data must be sent to Azure for inventory and billing purposes _only_. It is exported to a file and uploaded to Azure at least once per month. No direct or continuous connection to Azure is required. Some features and services which require a connection to Azure will not be available.|Directly connected mode offers all of the available services when a direct connection can be established with Azure. Connections are always initiated _from_ your environment to Azure and use standard ports and protocols such as HTTPS/443.|No data can be sent to or from Azure in any way.|
+|**Description**|Indirectly connected mode offers most of the management services locally in your environment with no direct connection to Azure. A minimal amount of data must be sent to Azure for inventory and billing purposes _only_. It's exported to a file and uploaded to Azure at least once per month. No direct or continuous connection to Azure is required. Some features and services that require a connection to Azure won't be available.|Directly connected mode offers all of the available services when a direct connection can be established with Azure. Connections are always initiated _from_ your environment to Azure and use standard ports and protocols such as HTTPS/443.|No data can be sent to or from Azure in any way.|
|**Current availability**| Available |Available|Not currently supported.|
-|**Typical use cases**|On-premises data centers that donΓÇÖt allow connectivity in or out of the data region of the data center due to business or regulatory compliance policies or out of concerns of external attacks or data exfiltration. Typical examples: Financial institutions, health care, government. <br/><br/>Edge site locations where the edge site doesnΓÇÖt typically have connectivity to the Internet. Typical examples: oil/gas or military field applications. <br/><br/>Edge site locations that have intermittent connectivity with long periods of outages. Typical examples: stadiums, cruise ships. | Organizations who are using public clouds. Typical examples: Azure, AWS or Google Cloud.<br/><br/>Edge site locations where Internet connectivity is typically present and allowed. Typical examples: retail stores, manufacturing.<br/><br/>Corporate data centers with more permissive policies for connectivity to/from their data region of the datacenter to the Internet. Typical examples: Non-regulated businesses, small/medium sized businesses|Truly "air-gapped" environments where no data under any circumstances can come or go from the data environment. Typical examples: top secret government facilities.|
-|**How data is sent to Azure**|There are three options for how the billing and inventory data can be sent to Azure:<br><br> 1) Data is exported out of the data region by an automated process that has connectivity to both the secure data region and Azure.<br><br>2) Data is exported out of the data region by an automated process within the data region, automatically copied to a less secure region, and an automated process in the less secure region uploads the data to Azure.<br><br>3) Data is manually exported by a user within the secure region, manually brought out of the secure region, and manually uploaded to Azure. <br><br>The first two options are an automated continuous process that can be scheduled to run frequently so there is minimal delay in the transfer of data to Azure subject only to the available connectivity to Azure.|Data is automatically and continuously sent to Azure.|Data is never sent to Azure.|
+|**Typical use cases**|On-premises data centers that don't allow connectivity in or out of the data region of the data center due to business or regulatory compliance policies or out of concerns of external attacks or data exfiltration. Typical examples: Financial institutions, health care, government. <br/><br/>Edge site locations where the edge site doesn't typically have connectivity to the Internet. Typical examples: oil/gas or military field applications. <br/><br/>Edge site locations that have intermittent connectivity with long periods of outages. Typical examples: stadiums, cruise ships. | Organizations who are using public clouds. Typical examples: Azure, AWS or Google Cloud.<br/><br/>Edge site locations where Internet connectivity is typically present and allowed. Typical examples: retail stores, manufacturing.<br/><br/>Corporate data centers with more permissive policies for connectivity to/from their data region of the datacenter to the Internet. Typical examples: Nonregulated businesses, small/medium sized businesses|Truly "air-gapped" environments where no data under any circumstances can come or go from the data environment. Typical examples: top secret government facilities.|
+|**How data is sent to Azure**|There are three options for how the billing and inventory data can be sent to Azure:<br><br> 1) Data is exported out of the data region by an automated process that has connectivity to both the secure data region and Azure.<br><br>2) Data is exported out of the data region by an automated process within the data region, automatically copied to a less secure region, and an automated process in the less secure region uploads the data to Azure.<br><br>3) Data is manually exported by a user within the secure region, manually brought out of the secure region, and manually uploaded to Azure. <br><br>The first two options are an automated continuous process that can be scheduled to run frequently so there's minimal delay in the transfer of data to Azure subject only to the available connectivity to Azure.|Data is automatically and continuously sent to Azure.|Data is never sent to Azure.|
## Feature availability by connectivity mode
Some Azure-attached services are only available when they can be directly reache
|**Automatic upgrades and patching**|Supported<br/>The data controller must either have direct access to the Microsoft Container Registry (MCR) or the container images need to be pulled from MCR and pushed to a local, private container registry that the data controller has access to.|Supported| |**Automatic backup and restore**|Supported<br/>Automatic local backup and restore.|Supported<br/>In addition to automated local backup and restore, you can _optionally_ send backups to Azure blob storage for long-term, off-site retention.| |**Monitoring**|Supported<br/>Local monitoring using Grafana and Kibana dashboards.|Supported<br/>In addition to local monitoring dashboards, you can _optionally_ send monitoring data and logs to Azure Monitor for at-scale monitoring of multiple sites in one place. |
-|**Authentication**|Use local username/password for data controller and dashboard authentication. Use SQL and Postgres logins or Active Directory (AD is not currently supported) for connectivity to database instances. Use Kubernetes authentication providers for authentication to the Kubernetes API.|In addition to or instead of the authentication methods for the indirectly connected mode, you can _optionally_ use Microsoft Entra ID.|
+|**Authentication**|Use local username/password for data controller and dashboard authentication. Use SQL and Postgres logins or Active Directory (AD isn't currently supported) for connectivity to database instances. Use Kubernetes authentication providers for authentication to the Kubernetes API.|In addition to or instead of the authentication methods for the indirectly connected mode, you can _optionally_ use Microsoft Entra ID.|
|**Role-based access control (RBAC)**|Use Kubernetes RBAC on Kubernetes API. Use SQL and Postgres RBAC for database instances.|You can use Microsoft Entra ID and Azure RBAC.| ## Connectivity requirements **Some functionality requires a connection to Azure.**
-**All communication with Azure is always initiated from your environment.** This is true even for operations which are initiated by a user in the Azure portal. In that case, there is effectively a task, which is queued up in Azure. An agent in your environment initiates the communication with Azure to see what tasks are in the queue, runs the tasks, and reports back the status/completion/fail to Azure.
+**All communication with Azure is always initiated from your environment.** This is true even for operations that are initiated by a user in the Azure portal. In that case, there is effectively a task, which is queued up in Azure. An agent in your environment initiates the communication with Azure to see what tasks are in the queue, runs the tasks, and reports back the status/completion/fail to Azure.
|**Type of Data**|**Direction**|**Required/Optional**|**Additional Costs**|**Mode Required**|**Notes**| |||||||
-|**Container images**|Microsoft Container Registry -> Customer|Required|No|Indirect or direct|Container images are the method for distributing the software. In an environment which can connect to the Microsoft Container Registry (MCR) over the Internet, the container images can be pulled directly from MCR. In the event that the deployment environment doesnΓÇÖt have direct connectivity, you can pull the images from MCR and push them to a private container registry in the deployment environment. At creation time, you can configure the creation process to pull from the private container registry instead of MCR. This will also apply to automated updates.|
+|**Container images**|Microsoft Container Registry -> Customer|Required|No|Indirect or direct|Container images are the method for distributing the software. In an environment that can connect to the Microsoft Container Registry (MCR) over the Internet, the container images can be pulled directly from MCR. If the deployment environment doesn't have direct connectivity, you can pull the images from MCR and push them to a private container registry in the deployment environment. At creation time, you can configure the creation process to pull from the private container registry instead of MCR. This also applies to automated updates.|
|**Resource inventory**|Customer environment -> Azure|Required|No|Indirect or direct|An inventory of data controllers, database instances (PostgreSQL and SQL) is kept in Azure for billing purposes and also for purposes of creating an inventory of all data controllers and database instances in one place which is especially useful if you have more than one environment with Azure Arc data services. As instances are provisioned, deprovisioned, scaled out/in, scaled up/down the inventory is updated in Azure.| |**Billing telemetry data**|Customer environment -> Azure|Required|No|Indirect or direct|Utilization of database instances must be sent to Azure for billing purposes. |
-|**Monitoring data and logs**|Customer environment -> Azure|Optional|Maybe depending on data volume (see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/))|Indirect or direct|You may want to send the locally collected monitoring data and logs to Azure Monitor for aggregating data across multiple environments into one place and also to use Azure Monitor services like alerts, using the data in Azure Machine Learning, etc.|
-|**Azure Role-based Access Control (Azure RBAC)**|Customer environment -> Azure -> Customer Environment|Optional|No|Direct only|If you want to use Azure RBAC, then connectivity must be established with Azure at all times. If you donΓÇÖt want to use Azure RBAC then local Kubernetes RBAC can be used.|
-|**Microsoft Entra ID (Future)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you may already be paying for Microsoft Entra ID|Direct only|If you want to use Microsoft Entra ID for authentication, then connectivity must be established with Azure at all times. If you donΓÇÖt want to use Microsoft Entra ID for authentication, you can use Active Directory Federation Services (ADFS) over Active Directory. **Pending availability in directly connected mode**|
+|**Monitoring data and logs**|Customer environment -> Azure|Optional|Maybe depending on data volume (see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/))|Indirect or direct|You might want to send the locally collected monitoring data and logs to Azure Monitor for aggregating data across multiple environments into one place and also to use Azure Monitor services like alerts, using the data in Azure Machine Learning, etc.|
+|**Azure Role-based Access Control (Azure RBAC)**|Customer environment -> Azure -> Customer Environment|Optional|No|Direct only|If you want to use Azure RBAC, then connectivity must be established with Azure at all times. If you don't want to use Azure RBAC, then local Kubernetes RBAC can be used.|
+|**Microsoft Entra ID (Future)**|Customer environment -> Azure -> Customer environment|Optional|Maybe, but you might already be paying for Microsoft Entra ID|Direct only|If you want to use Microsoft Entra ID for authentication, then connectivity must be established with Azure at all times. If you don't want to use Microsoft Entra ID for authentication, you can use Active Directory Federation Services (ADFS) over Active Directory. **Pending availability in directly connected mode**|
|**Backup and restore**|Customer environment -> Customer environment|Required|No|Direct or indirect|The backup and restore service can be configured to point to local storage classes. |
-|**Azure backup - long term retention (Future)**| Customer environment -> Azure | Optional| Yes for Azure storage | Direct only |You may want to send backups that are taken locally to Azure Backup for long-term, off-site retention of backups and bring them back to the local environment for restore. |
-|**Provisioning and configuration changes from Azure portal**|Customer environment -> Azure -> Customer environment|Optional|No|Direct only|Provisioning and configuration changes can be done locally using Azure Data Studio or the appropriate CLI. In directly connected mode, you will also be able to provision and make configuration changes from the Azure portal.|
+|**Azure backup - long term retention (Future)**| Customer environment -> Azure | Optional| Yes for Azure storage | Direct only |You might want to send backups that are taken locally to Azure Backup for long-term, off-site retention of backups and bring them back to the local environment for restore. |
+|**Provisioning and configuration changes from Azure portal**|Customer environment -> Azure -> Customer environment|Optional|No|Direct only|Provisioning and configuration changes can be done locally using Azure Data Studio or the appropriate CLI. In directly connected mode, you can also provision and make configuration changes from the Azure portal.|
## Details on internet addresses, ports, encryption, and proxy server support
Some Azure-attached services are only available when they can be directly reache
## Additional network requirements
-In addition, resource bridge (preview) requires [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md#azure-arc-enabled-kubernetes-endpoints).
+In addition, resource bridge requires [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md#azure-arc-enabled-kubernetes-endpoints).
azure-arc Migrate Postgresql Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/migrate-postgresql-data.md
This document describes the steps to get your existing PostgreSQL database (one
## Considerations
-Azure Arc-enabled PostgreSQL server is the community version of PostgreSQL. So any tool that that works on PostgreSQL outside of Azure Arc should work with Azure Arc-enabled PostgreSQL server.
+Azure Arc-enabled PostgreSQL server is the community version of PostgreSQL. So any tool that works on PostgreSQL outside of Azure Arc should work with Azure Arc-enabled PostgreSQL server.
As such, with the set of tools you use today for Postgres, you should be able to:
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
As a preview feature, the technology presented in this article is subject to [Su
### Breaking change -- Kubernetes native deployment templates have been modified. Update update your .yml templates.
+- Kubernetes native deployment templates have been modified. Update your .yml templates.
- Updated templates for data controller, bootstrapper, & SQL Managed instance: [GitHub microsoft/azure-arc pr 574](https://github.com/microsoft/azure_arc/pull/574) - Updated templates for PostgreSQL server: [GitHub microsoft/azure-arc pr 574](https://github.com/microsoft/azure_arc/pull/574)
azure-arc Service Tiers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/service-tiers.md
# Azure Arc-enabled SQL Managed Instance service tiers
-As part of of the family of Azure SQL products, Azure Arc-enabled SQL Managed Instance is available in two [vCore](/azure/azure-sql/database/service-tiers-vcore) service tiers.
+As part of the family of Azure SQL products, Azure Arc-enabled SQL Managed Instance is available in two [vCore](/azure/azure-sql/database/service-tiers-vcore) service tiers.
- **General Purpose** is a budget-friendly tier designed for most workloads with common performance and availability features. - **Business Critical** tier is designed for performance-sensitive workloads with higher availability features.
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 10/20/2023 Last updated : 11/03/2023 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
For more information, see [Deploy an Azure API Management gateway on Azure Arc (
- **Supported distributions**: All Cloud Native Computing Foundation (CNCF) certified Kubernetes clusters. Not currently supported for ARM 64.
-The AzureML extension lets you deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters.
+The Azure Machine Learning extension lets you deploy and run Azure Machine Learning on Azure Arc-enabled Kubernetes clusters.
-For more information, see [Introduction to Kubernetes compute target in AzureML](../../machine-learning/how-to-attach-kubernetes-anywhere.md) and [Deploy AzureML extension on AKS or Arc Kubernetes cluster](../../machine-learning/how-to-deploy-kubernetes-extension.md).
+For more information, see [Introduction to Kubernetes compute target in Azure Machine Learning](../../machine-learning/how-to-attach-kubernetes-anywhere.md) and [Deploy Azure Machine Learning extension on AKS or Arc Kubernetes cluster](../../machine-learning/how-to-deploy-kubernetes-extension.md).
## Flux (GitOps)
For more information, see [Introduction to Kubernetes compute target in AzureML]
For more information, see [Tutorial: Deploy applications using GitOps with Flux v2](tutorial-use-gitops-flux2.md).
-The currently supported versions of the `microsoft.flux` extension are described below. The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension.
+The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension.
> [!IMPORTANT] > Eventually, a major version update (v2.x.x) for the `microsoft.flux` extension will be released. When this happens, clusters won't be auto-upgraded to this version, since [auto-upgrade is only supported for minor version releases](extensions.md#upgrade-extension-instance). If you're still using an older API version when the next major version is released, you'll need to update your manifests to the latest API versions, perform any necessary testing, then upgrade your extension manually. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).
+> [!NOTE]
+> When a new version of the `microsoft.flux` extension is released, it may take several days for the new version to become available in all regions.
+
+### 1.8.1 (November 2023)
+
+Flux version: [Release v2.1.2](https://github.com/fluxcd/flux2/releases/tag/v2.1.2)
+
+- source-controller: v1.1.2
+- kustomize-controller: v1.1.1
+- helm-controller: v0.36.2
+- notification-controller: v1.1.0
+- image-automation-controller: v0.36.1
+- image-reflector-controller: v0.30.0
+
+Changes made for this version:
+
+- Upgrades Flux to [v2.1.2](https://github.com/fluxcd/flux2/releases/tag/v2.1.2)
+- Updates to each `fluxConfiguration` status are now relayed back to Azure once every minute, provided there are any changes to report
+ ### 1.8.0 (October 2023) Flux version: [Release v2.1.1](https://github.com/fluxcd/flux2/releases/tag/v2.1.1)
Flux version: [Release v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0
Changes made for this version: - Upgrades Flux to [v2.0.1](https://github.com/fluxcd/flux2/releases/tag/v2.0.1)-- Promotes some APIs to v1. This change should not affect any existing Flux configurations that have already been deployed. Previous API versions will still be supported in all `microsoft.flux` v.1.x.x releases. However, we recommend that you update the API versions in your manifests as soon as possible. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).
+- Promotes some APIs to v1. This change shouldn't affect any existing Flux configurations that have already been deployed. Previous API versions will still be supported in all `microsoft.flux` v.1.x.x releases. However, we recommend that you update the API versions in your manifests as soon as possible. For more information about the new API versions (breaking changes) and how to update your manifests, see the [Flux v2 release notes](https://github.com/fluxcd/flux2/releases/tag/v2.0.0).
- Adds support for [Helm drift detection](tutorial-use-gitops-flux2.md#helm-drift-detection) and [OOM watch](tutorial-use-gitops-flux2.md#helm-oom-watch). ### 1.7.4 (June 2023)
Flux version: [Release v0.41.2](https://github.com/fluxcd/flux2/releases/tag/v0.
Changes made for this version: -- Adds support for [`wait`](https://fluxcd.io/flux/components/kustomize/kustomization/#wait) and [`postBuild`](https://fluxcd.io/flux/components/kustomize/kustomization/#post-build-variable-substitution) properties as optional parameters for kustomization. By default, `wait` will be set to `true` for all Flux configurations, and `postBuild` will be null. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L55))
+- Adds support for [`wait`](https://fluxcd.io/flux/components/kustomize/kustomization/#wait) and [`postBuild`](https://fluxcd.io/flux/components/kustomize/kustomization/#post-build-variable-substitution) properties as optional parameters for kustomization. By default, `wait` is set to `true` for all Flux configurations, and `postBuild` is null. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L55))
- Adds support for optional properties [`waitForReconciliation`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1299C14-L1299C35) and [`reconciliationWaitDuration`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/fluxconfiguration.json#L1304).
- By default, `waitForReconciliation` is set to false, so when creating a flux configuration, the `provisioningState` returns `Succeeded` once the configuration reaches the cluster and the ARM template or Azure CLI command successfully exits. However, the actual state of the objects being deployed as part of the configuration is tracked by `complianceState`, which can be viewed in the portal or by using Azure CLI. Setting `waitForReconciliation` to true and specifying a `reconciliationWaitDuration` means that the template or CLI deployment will wait for `complianceState` to reach a terminal state (success or failure) before exiting. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L72))
+ By default, `waitForReconciliation` is set to false, so when creating a flux configuration, the `provisioningState` returns `Succeeded` once the configuration reaches the cluster and the ARM template or Azure CLI command successfully exits. However, the actual state of the objects being deployed as part of the configuration is tracked by `complianceState`, which can be viewed in the portal or by using Azure CLI. Setting `waitForReconciliation` to true and specifying a `reconciliationWaitDuration` means that the template or CLI deployment waits for `complianceState` to reach a terminal state (success or failure) before exiting. ([Example](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/kubernetesconfiguration/resource-manager/Microsoft.KubernetesConfiguration/stable/2023-05-01/examples/CreateFluxConfiguration.json#L72))
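 Taken together, the optional `wait`/`postBuild` kustomization parameters and the `waitForReconciliation`/`reconciliationWaitDuration` properties described above can be sketched as an ARM `fluxConfigurations` properties fragment. Property names come from the linked 2023-05-01 API spec; the repository URL, namespace, kustomization name, and substitution values are illustrative placeholders only:

```json
{
  "properties": {
    "scope": "cluster",
    "namespace": "flux-demo",
    "sourceKind": "GitRepository",
    "gitRepository": {
      "url": "https://github.com/example/gitops-repo",
      "repositoryRef": { "branch": "main" }
    },
    "waitForReconciliation": true,
    "reconciliationWaitDuration": "PT10M",
    "kustomizations": {
      "apps": {
        "path": "./apps",
        "wait": true,
        "postBuild": {
          "substitute": { "cluster_env": "prod" }
        }
      }
    }
  }
}
```

 With this sketch, the deployment would not report success until `complianceState` reaches a terminal state or the ten-minute wait duration elapses.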
## Dapr extension for Azure Kubernetes Service (AKS) and Arc-enabled Kubernetes
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023 #
azure-arc Network Requirements Consolidated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md
Title: Azure Arc network requirements description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols. Previously updated : 11/01/2023 Last updated : 11/03/2023
Connectivity to Arc-enabled server endpoints is required for:
For more information, see [Connected Machine agent network requirements](servers/network-requirements.md).
-## Azure Arc resource bridge (preview)
+## Azure Arc resource bridge
-This section describes additional networking requirements specific to deploying Azure Arc resource bridge (preview) in your enterprise. These requirements also apply to Azure Arc-enabled VMware vSphere (preview) and Azure Arc-enabled System Center Virtual Machine Manager (preview).
+This section describes additional networking requirements specific to deploying Azure Arc resource bridge in your enterprise. These requirements also apply to Azure Arc-enabled VMware vSphere (preview) and Azure Arc-enabled System Center Virtual Machine Manager (preview).
[!INCLUDE [network-requirements](resource-bridge/includes/network-requirements.md)]
-For more information, see [Azure Arc resource bridge (preview) network requirements](resource-bridge/network-requirements.md).
+For more information, see [Azure Arc resource bridge network requirements](resource-bridge/network-requirements.md).
## Azure Arc-enabled System Center Virtual Machine Manager (preview)
Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) also requires:
For more information, see [Overview of Arc-enabled System Center Virtual Machine Manager (preview)](system-center-virtual-machine-manager/overview.md).
-## Azure Arc-enabled VMware vSphere (preview)
+## Azure Arc-enabled VMware vSphere
Azure Arc-enabled VMware vSphere also requires:
Azure Arc-enabled VMware vSphere also requires:
| | | | | | | vCenter Server | 443 | URL of the vCenter server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the vCenter server to communicate with the Appliance VM and the control plane.|
-For more information, see [Support matrix for Azure Arc-enabled VMware vSphere (preview)](vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md).
+For more information, see [Support matrix for Azure Arc-enabled VMware vSphere](vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md).
## Additional endpoints
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/overview.md
Title: Azure Arc overview description: Learn about what Azure Arc is and how it helps customers enable management and governance of their hybrid resources with other Azure services and features. Previously updated : 10/24/2023 Last updated : 11/03/2023
Currently, Azure Arc allows you to manage the following resource types hosted ou
* [Azure data services](dat): Run Azure data services on-premises, at the edge, and in public clouds using Kubernetes and the infrastructure of your choice. SQL Managed Instance and PostgreSQL (preview) services are currently available. * [SQL Server](/sql/sql-server/azure-arc/overview): Extend Azure services to SQL Server instances hosted outside of Azure.
-* Virtual machines (preview): Provision, resize, delete and manage virtual machines based on [VMware vSphere](./vmware-vsphere/overview.md) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) and enable VM self-service through role-based access.
+* Virtual machines: Provision, resize, delete and manage virtual machines based on [VMware vSphere](./vmware-vsphere/overview.md) or [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) and enable VM self-service through role-based access.
## Key features and benefits
azure-arc Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/deploy-cli.md
Title: Azure Arc resource bridge (preview) deployment command overview
-description: Learn about the Azure CLI commands that can be used to manage your Azure Arc resource bridge (preview) deployment.
Previously updated : 02/06/2023
+ Title: Azure Arc resource bridge deployment command overview
+description: Learn about the Azure CLI commands that can be used to manage your Azure Arc resource bridge deployment.
Last updated : 11/03/2023
-# Azure Arc resource bridge (preview) deployment command overview
+# Azure Arc resource bridge deployment command overview
[Azure CLI](/cli/azure/install-azure-cli) is required to deploy the Azure Arc resource bridge. When deploying Arc resource bridge with a corresponding partner product, the Azure CLI commands may be combined into an automation script, along with additional provider-specific commands. To learn about installing Arc resource bridge with a corresponding partner product, see:
- [Azure Stack HCI VM Management through Arc resource bridge](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites) - [AKS on HCI (AKS hybrid) - Arc resource bridge deployment](/azure/aks/hybrid/deploy-arc-resource-bridge-windows-server)
-This topic provides an overview of the [Azure CLI commands](/cli/azure/arcappliance) that are used to manage Arc resource bridge (preview) deployment, in the order in which they are typically used for deployment.
+This topic provides an overview of the [Azure CLI commands](/cli/azure/arcappliance) that are used to manage Arc resource bridge deployment, in the order in which they are typically used for deployment.
## `az arcappliance createconfig`
azure-arc Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/maintenance.md
Title: Azure Arc resource bridge (preview) maintenance operations
-description: Learn how to manage Azure Arc resource bridge (preview) so that it remains online and operational.
+ Title: Azure Arc resource bridge maintenance operations
+description: Learn how to manage Azure Arc resource bridge so that it remains online and operational.
Previously updated : 03/08/2023 Last updated : 11/03/2023
-# Azure Arc resource bridge (preview) maintenance operations
+# Azure Arc resource bridge maintenance operations
-To keep your Azure Arc resource bridge (preview) deployment online and operational, you may need to perform maintenance operations such as updating credentials or monitoring upgrades.
+To keep your Azure Arc resource bridge deployment online and operational, you might need to perform maintenance operations such as updating credentials or monitoring upgrades.
-To maintain the on-premises appliance VM, the [appliance configuration files generated during deployment](deploy-cli.md#az-arcappliance-createconfig) need to be saved in a secure location and made available on the management machine. The management machine used to perform maintenance operations must meet all of [the Arc resource bridge (preview) requirements](system-requirements.md).
+To maintain the on-premises appliance VM, the [appliance configuration files generated during deployment](deploy-cli.md#az-arcappliance-createconfig) need to be saved in a secure location and made available on the management machine. The management machine used to perform maintenance operations must meet all of [the Arc resource bridge requirements](system-requirements.md).
-The following sections describe some of the most common maintenance tasks for Arc resource bridge (preview).
+The following sections describe some of the most common maintenance tasks for Arc resource bridge.
## Update credentials in the appliance VM
If the credentials change, the credentials stored in the Arc resource bridge nee
## Troubleshoot Arc resource bridge
-If you experience problems with the appliance VM, the appliance configuration files may help with troubleshooting. You can include these files when you [open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
+If you experience problems with the appliance VM, the appliance configuration files can help with troubleshooting. You can include these files when you [open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
-You may also want to [collect logs](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware), which requires you to pass credentials to the on-premises control center:
+You might want to [collect logs](/cli/azure/arcappliance/logs#az-arcappliance-logs-vmware), which requires you to pass credentials to the on-premises control center:
- For VMWare vSphere, use the username and password provided to Arc resource bridge at deployment. - For Azure Stack HCI, use the cloud service IP and HCI login configuration file path. ## Delete Arc resource bridge
-You may need to delete Arc resource bridge due to deployment failures or when no longer needed. To do so, you'll need the appliance configuration files. The [delete command](deploy-cli.md#az-arcappliance-delete) is the recommended way to delete the bridge. This command deletes the on-premises appliance VM as well as the Azure resource and underlying components across the two environments.
+You might need to delete Arc resource bridge due to deployment failures or when no longer needed. To do so, you need the appliance configuration files. The [delete command](deploy-cli.md#az-arcappliance-delete) is the recommended way to delete the bridge. This command deletes the on-premises appliance VM along with the Azure resource and underlying components across the two environments.
## Next steps -- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details.-- Learn about [system requirements for Azure Arc resource bridge (preview)](system-requirements.md).
+- Learn about [upgrading Arc resource bridge](upgrade.md).
+- Review the [Azure Arc resource bridge overview](overview.md) to understand more about requirements and technical details.
+- Learn about [system requirements for Azure Arc resource bridge](system-requirements.md).
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
Title: Azure Arc resource bridge (preview) network requirements
-description: Learn about network requirements for Azure Arc resource bridge (preview) including URLs that must be allowlisted.
+ Title: Azure Arc resource bridge network requirements
+description: Learn about network requirements for Azure Arc resource bridge including URLs that must be allowlisted.
Previously updated : 08/24/2023 Last updated : 11/03/2023
-# Azure Arc resource bridge (preview) network requirements
+# Azure Arc resource bridge network requirements
-This article describes the networking requirements for deploying Azure Arc resource bridge (preview) in your enterprise.
+This article describes the networking requirements for deploying Azure Arc resource bridge in your enterprise.
## General network requirements
-Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol.
+Arc resource bridge communicates outbound securely to Azure Arc over TCP port 443. If the appliance needs to connect through a firewall or proxy server to communicate over the internet, it communicates outbound using the HTTPS protocol.
[!INCLUDE [network-requirement-principles](../includes/network-requirement-principles.md)]
Arc resource bridge communicates outbound securely to Azure Arc over TCP port 44
## Additional network requirements
-In addition, Arc resource bridge (preview) requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md?tabs=azure-cloud).
+In addition, Arc resource bridge requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md?tabs=azure-cloud).
> [!NOTE] > The URLs listed here are required for Arc resource bridge only. Other Arc products (such as Arc-enabled VMware vSphere) may have additional required URLs. For details, see [Azure Arc network requirements](../network-requirements-consolidated.md).
The default value for `noProxy` is `localhost,127.0.0.1,.svc,10.0.0.0/8,172.16.0
## Next steps -- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about requirements and technical details.-- Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).
+- Review the [Azure Arc resource bridge overview](overview.md) to understand more about requirements and technical details.
+- Learn about [security configuration and considerations for Azure Arc resource bridge](security-overview.md).
- View [troubleshooting tips for networking issues](troubleshoot-resource-bridge.md#networking-issues).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/overview.md
Title: Azure Arc resource bridge (preview) overview
-description: Learn how to use Azure Arc resource bridge (preview) to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager.
Previously updated : 10/31/2023
+ Title: Azure Arc resource bridge overview
+description: Learn how to use Azure Arc resource bridge to support VM self-servicing on Azure Stack HCI, VMware, and System Center Virtual Machine Manager.
Last updated : 11/3/2023
-# What is Azure Arc resource bridge (preview)?
+# What is Azure Arc resource bridge?
-Azure Arc resource bridge (preview) is a Microsoft managed product that is part of the core Azure Arc platform. It is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/index.yml) preview), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](../system-center-virtual-machine-manager/index.yml) preview).
+Azure Arc resource bridge is a Microsoft managed product that is part of the core Azure Arc platform. It is designed to host other Azure Arc services. In this release, the resource bridge supports VM self-servicing and management from Azure, for virtualized Windows and Linux virtual machines hosted in an on-premises environment on [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), VMware ([Arc-enabled VMware vSphere](../vmware-vsphere/index.yml)), and System Center Virtual Machine Manager (SCVMM) ([Arc-enabled SCVMM](../system-center-virtual-machine-manager/index.yml) preview).
Azure Arc resource bridge is a Kubernetes management cluster installed on the customer's on-premises infrastructure. The resource bridge is provided with credentials to the infrastructure control plane, which allow it to apply guest management services on the on-premises resources. Arc resource bridge enables projection of on-premises resources as ARM resources and management from ARM as "arc-enabled" Azure resources.
Arc resource bridge delivers the following benefits:
## Overview
-Azure Arc resource bridge (preview) hosts other components such as [custom locations](..\platform\conceptual-custom-locations.md), cluster extensions, and other Azure Arc agents in order to deliver the level of functionality with the private cloud infrastructures it supports. This complex system is composed of three layers:
+Azure Arc resource bridge hosts other components such as [custom locations](..\platform\conceptual-custom-locations.md), cluster extensions, and other Azure Arc agents in order to deliver the level of functionality with the private cloud infrastructures it supports. This complex system is composed of three layers:
* The base layer that represents the resource bridge and the Arc agents. * The platform layer that includes the custom location and cluster extension.
Azure Arc resource bridge (preview) hosts other components such as [custom locat
:::image type="content" source="media/overview/architecture-overview.png" alt-text="Azure Arc resource bridge architecture diagram." border="false" lightbox="media/overview/architecture-overview.png":::
-Azure Arc resource bridge (preview) can host other Azure services or solutions running on-premises. For this preview, there are two objects hosted on the Arc resource bridge (preview):
+Azure Arc resource bridge can host other Azure services or solutions running on-premises. For this preview, there are two objects hosted on the Arc resource bridge:
* Cluster extension: The Azure service deployed to run on-premises. For the preview release, it supports three
Azure Arc resource bridge (preview) can host other Azure services or solutions r
* Custom locations: A deployment target where you can create Azure resources. It maps to different resource for different Azure services. For example, for Arc-enabled VMware, the custom locations resource maps to an instance of vCenter, and for Azure Arc VM management on Azure Stack HCI, it maps to an HCI cluster instance.
-Custom locations and cluster extension are both Azure resources, which are linked to the Azure Arc resource bridge (preview) resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, and that routes that *create action* to the mapped vCenter, Azure Stack HCI cluster, or SCVMM.
+Custom locations and cluster extension are both Azure resources, which are linked to the Azure Arc resource bridge resource in Azure Resource Manager. When you create an on-premises VM from Azure, you can select the custom location, and that routes that *create action* to the mapped vCenter, Azure Stack HCI cluster, or SCVMM.
Some resources are unique to the infrastructure. For example, vCenter has a resource pool, network, and template resources. During VM creation, these resources need to be specified. With Azure Stack HCI, you just need to select the custom location, network, and template to create a VM. To summarize, the Azure resources are projections of the resources running in your on-premises private cloud. If the on-premises resource is not healthy, it can impact the health of the related resources that are projected in Azure. For example, if the resource bridge is deleted by accident, all the resources projected in Azure by the resource bridge are impacted. The on-premises VMs in your on-premises private cloud are not impacted, as they are running on vCenter, but you won't be able to start or stop the VMs from Azure. It is not recommended to directly manage or modify the resource bridge using any on-premises applications.
-## Benefits of Azure Arc resource bridge (preview)
+## Benefits of Azure Arc resource bridge
-Through Azure Arc resource bridge (preview), you can accomplish the following for each private cloud infrastructure from Azure:
+Through Azure Arc resource bridge, you can accomplish the following for each private cloud infrastructure from Azure:
### Azure Stack HCI
By registering resource pools, networks, and VM templates, you can represent a s
### System Center Virtual Machine Manager (SCVMM)
-You can connect an SCVMM management server to Azure by deploying Azure Arc resource bridgeΓÇ»(preview) in the VMM environment. Azure Arc resource bridge (preview) enables you to represent the SCVMM resources (clouds, VMs, templates etc.) in Azure and perform various operations on them:
+You can connect an SCVMM management server to Azure by deploying Azure Arc resource bridge in the VMM environment. Azure Arc resource bridge enables you to represent the SCVMM resources (clouds, VMs, templates, etc.) in Azure and perform various operations on them:
* Start, stop, and restart a virtual machine
* Control access and add Azure tags
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/security-overview.md
Title: Azure Arc resource bridge (preview) security overview
-description: Security information about Azure resource bridge (preview).
+ Title: Azure Arc resource bridge security overview
+description: Understand security configuration and considerations for Azure Arc resource bridge.
Previously updated : 03/23/2023
Last updated : 11/03/2023
-# Azure Arc resource bridge (preview) security overview
+# Azure Arc resource bridge security overview
-This article describes the security configuration and considerations you should evaluate before deploying Azure Arc resource bridge (preview) in your enterprise.
+This article describes the security configuration and considerations you should evaluate before deploying Azure Arc resource bridge in your enterprise.
## Using a managed identity
-By default, a Microsoft Entra system-assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) is created and assigned to the Azure Arc resource bridge (preview). Azure Arc resource bridge currently supports only a system-assigned identity. The `clusteridentityoperator` identity initiates the first outbound communication and fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure.
+By default, a Microsoft Entra system-assigned [managed identity](../../active-directory/managed-identities-azure-resources/overview.md) is created and assigned to the Azure Arc resource bridge. Azure Arc resource bridge currently supports only a system-assigned identity. The `clusteridentityoperator` identity initiates the first outbound communication and fetches the Managed Service Identity (MSI) certificate used by other agents for communication with Azure.
## Identity and access control
-Azure Arc resource bridge (preview) is represented as a resource in a resource group inside an Azure subscription. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md) page in the Azure portal, you can verify who has access to your Azure Arc resource bridge (preview).
+Azure Arc resource bridge is represented as a resource in a resource group inside an Azure subscription. Access to this resource is controlled by standard [Azure role-based access control](../../role-based-access-control/overview.md). From the [**Access Control (IAM)**](../../role-based-access-control/role-assignments-portal.md) page in the Azure portal, you can verify who has access to your Azure Arc resource bridge.
Users and applications who are granted the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) or Administrator role to the resource group can make changes to the resource bridge, including deploying or deleting cluster extensions.
The [activity log](../../azure-monitor/essentials/activity-log.md) is an Azure p
## Next steps
-- Understand [system requirements](system-requirements.md) and [network requirements](network-requirements.md) for Azure Arc resource bridge (preview).
-- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about features and benefits.
+- Understand [system requirements](system-requirements.md) and [network requirements](network-requirements.md) for Azure Arc resource bridge.
+- Review the [Azure Arc resource bridge overview](overview.md) to understand more about features and benefits.
- Learn more about [Azure Arc](../overview.md).
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/system-requirements.md
Title: Azure Arc resource bridge (preview) system requirements
-description: Learn about system requirements for Azure Arc resource bridge (preview).
+ Title: Azure Arc resource bridge system requirements
+description: Learn about system requirements for Azure Arc resource bridge.
Previously updated : 06/15/2023
Last updated : 11/03/2023
-# Azure Arc resource bridge (preview) system requirements
+# Azure Arc resource bridge system requirements
-This article describes the system requirements for deploying Azure Arc resource bridge (preview).
+This article describes the system requirements for deploying Azure Arc resource bridge.
Arc resource bridge is used with other partner products, such as [Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview), [Arc-enabled VMware vSphere](../vmware-vsphere/index.yml), and [Arc-enabled System Center Virtual Machine Manager (SCVMM)](../system-center-virtual-machine-manager/index.yml). These products may have additional requirements.
Appliance VM IP address requirements:
- Open communication with the management machine and management endpoint (such as vCenter for VMware or MOC cloud agent service endpoint for Azure Stack HCI).
- Internet connectivity to [required URLs](network-requirements.md#outbound-connectivity) enabled in proxy/firewall.
- Static IP assigned (strongly recommended)
- - If using DHCP, then the address must be reserved and outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (ex: due to an outage) impacts the resource bridge availability.
+
+ - If using DHCP, then the address must be reserved and outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (ex: due to an outage) impacts the resource bridge availability.
- Must be from within the IP address prefix.
- Internal and external DNS resolution.
Reserved appliance VM IP requirements:
- Static IP assigned (strongly recommended)
- - If using DHCP, then the address must be reserved and outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (ex: due to an outage) impacts the resource bridge availability.
+ - If using DHCP, then the address must be reserved and outside of the assignable DHCP range of IPs. No other machine on the network will use or receive this IP from DHCP. DHCP is generally not recommended because a change in IP address (ex: due to an outage) impacts the resource bridge availability.
- - Must be from within the IP address prefix.
+ - Must be from within the IP address prefix.
- - Internal and external DNS resolution.
+ - Internal and external DNS resolution.
- - If using a proxy, the proxy server has to be reachable from this IP and all IPs within the VM IP pool.
+ - If using a proxy, the proxy server has to be reachable from this IP and all IPs within the VM IP pool.
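As a sanity check before deployment, the prefix rule above can be verified programmatically. The following is a minimal sketch using Python's standard `ipaddress` module; the prefix and addresses shown are illustrative placeholders, not values from this article:

```python
import ipaddress

def ips_outside_prefix(prefix: str, ips: list[str]) -> list[str]:
    """Return the addresses that fall outside the given IP address prefix."""
    network = ipaddress.ip_network(prefix, strict=False)
    return [ip for ip in ips if ipaddress.ip_address(ip) not in network]

# Illustrative values: one appliance VM IP inside a /24 prefix, one outside it.
print(ips_outside_prefix("192.168.10.0/24", ["192.168.10.20", "192.168.11.5"]))
# prints ['192.168.11.5']
```

Any address the function flags would need to be corrected in the configuration files before running the deployment commands.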
## Control plane IP requirements
For instructions to deploy Arc resource bridge on AKS Hybrid, see [How to instal
## Next steps
-- Understand [network requirements for Azure Arc resource bridge (preview)](network-requirements.md).
+- Understand [network requirements for Azure Arc resource bridge](network-requirements.md).
-- Review the [Azure Arc resource bridge (preview) overview](overview.md) to understand more about features and benefits.
+- Review the [Azure Arc resource bridge overview](overview.md) to understand more about features and benefits.
-- Learn about [security configuration and considerations for Azure Arc resource bridge (preview)](security-overview.md).
+- Learn about [security configuration and considerations for Azure Arc resource bridge](security-overview.md).
azure-arc Troubleshoot Resource Bridge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/troubleshoot-resource-bridge.md
Title: Troubleshoot Azure Arc resource bridge (preview) issues
-description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge (preview) when trying to deploy or connect to the service.
Previously updated : 03/15/2023
+ Title: Troubleshoot Azure Arc resource bridge issues
+description: This article tells how to troubleshoot and resolve issues with the Azure Arc resource bridge when trying to deploy or connect to the service.
Last updated : 11/03/2023
-# Troubleshoot Azure Arc resource bridge (preview) issues
+# Troubleshoot Azure Arc resource bridge issues
-This article provides information on troubleshooting and resolving issues that may occur while attempting to deploy, use, or remove the Azure Arc resource bridge (preview). The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster. For general information, see [Azure Arc resource bridge (preview) overview](./overview.md).
+This article provides information on troubleshooting and resolving issues that could occur while attempting to deploy, use, or remove the Azure Arc resource bridge. The resource bridge is a packaged virtual machine, which hosts a *management* Kubernetes cluster. For general information, see [Azure Arc resource bridge overview](./overview.md).
## General issues
$HOME\.KVA\.ssh\logkey.pub
$HOME\.KVA\.ssh\logkey
```
-### Remote PowerShell is not supported
+### Remote PowerShell isn't supported
-If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote PowerShell, you may experience various problems. For instance, you might see an [authentication handshake failure error when trying to install the resource bridge on an Azure Stack HCI cluster](#authentication-handshake-failure) or another type of error.
+If you run `az arcappliance` CLI commands for Arc Resource Bridge via remote PowerShell, you might experience various problems. For instance, you might see an [authentication handshake failure error when trying to install the resource bridge on an Azure Stack HCI cluster](#authentication-handshake-failure) or another type of error.
-Using `az arcappliance` commands from remote PowerShell is not currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session.
+Using `az arcappliance` commands from remote PowerShell isn't currently supported. Instead, sign in to the node through Remote Desktop Protocol (RDP) or use a console session.
### Resource bridge cannot be updated
To resolve this issue, delete the appliance and update the appliance YAML file.
### Connection closed before server preface received
-When there are multiple attempts to deploy Arc resource bridge, expired credentials left on the management machine may cause future deployments to fail. The error will contain the message `Unavailable desc = connection closed before server preface received`. This error will surface in various `az arcappliance` commands including `validate`, `prepare` and `delete`.
+When there are multiple attempts to deploy Arc resource bridge, expired credentials left on the management machine might cause future deployments to fail. The error will contain the message `Unavailable desc = connection closed before server preface received`. This error will surface in various `az arcappliance` commands including `validate`, `prepare` and `delete`.
-To resolve this error, the .wssd\python and .wssd\kva folders in the user profile directory need to be manually deleted from the management machine. Depending on where the deployment errored, there may not be a kva folder to delete. You can delete these folders manually by navigating to the user profile directory (typically `C:\Users\<username>`), then deleting the `.wssd\python` and `.wssd\kva` folders. After they are deleted, retry the command that failed.
+To resolve this error, the .wssd\python and .wssd\kva folders in the user profile directory need to be manually deleted from the management machine. Depending on where the deployment errored, there might not be a kva folder to delete. You can delete these folders manually by navigating to the user profile directory (typically `C:\Users\<username>`), then deleting the `.wssd\python` and `.wssd\kva` folders. After they are deleted, retry the command that failed.
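The manual folder deletion above can be sketched in a short script. This is an illustrative cleanup, not an official tool; it assumes the folders live under the default user profile location and removes them only if they exist:

```python
import shutil
from pathlib import Path

# Illustrative cleanup of stale deployment folders in the user profile
# (typically C:\Users\<username> on a Windows management machine).
profile = Path.home()
for folder in (".wssd/python", ".wssd/kva"):
    target = profile / folder
    if target.exists():
        shutil.rmtree(target)
        print(f"removed {target}")
```

After the folders are removed, retry the `az arcappliance` command that failed.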
### Token refresh error
-When you run the Azure CLI commands, the following error may be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign in to Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again by using the `az login` command.
+When you run the Azure CLI commands, the following error might be returned: *The refresh token has expired or is invalid due to sign-in frequency checks by conditional access.* The error occurs because when you sign in to Azure, the token has a maximum lifetime. When that lifetime is exceeded, you need to sign in to Azure again by using the `az login` command.
### Default host resource pools are unavailable for deployment
When the appliance is deployed to a host resource pool, there is no high availab
### Resource bridge status "Offline" and `provisioningState` "Failed"
-When deploying Arc resource bridge, the bridge may appear to be successfully deployed, because no errors were encountered when running `az arcappliance deploy` or `az arcappliance create`. However, when viewing the bridge in Azure portal, you may see status shows as **Offline**, and `az arcappliance show` may show the `provisioningState` as **Failed**. This happens when required providers are not registered before the bridge is deployed.
+When deploying Arc resource bridge, the bridge might appear to be successfully deployed, because no errors were encountered when running `az arcappliance deploy` or `az arcappliance create`. However, when viewing the bridge in Azure portal, you might see status shows as **Offline**, and `az arcappliance show` might show the `provisioningState` as **Failed**. This happens when required providers aren't registered before the bridge is deployed.
To resolve this problem, delete the resource bridge, register the providers, then redeploy the resource bridge.
To resolve this problem, delete the resource bridge, register the providers, the
1. Redeploy the resource bridge.

> [!NOTE]
-> Partner products (such as Arc-enabled VMware vSphere) may have their own required providers to register. To see additional providers that must be registered, see the product's documentation.
+> Partner products (such as Arc-enabled VMware vSphere) might have their own required providers to register. To see additional providers that must be registered, see the product's documentation.
### Expired credentials in the appliance VM
-Arc resource bridge consists of an appliance VM that is deployed to the on-premises infrastructure. The appliance VM maintains a connection to the management endpoint of the on-premises infrastructure using locally stored credentials. If these credentials are not updated, the resource bridge is no longer able to communicate with the management endpoint. This may cause problems when trying to upgrade the resource bridge or manage VMs through Azure.
+Arc resource bridge consists of an appliance VM that is deployed to the on-premises infrastructure. The appliance VM maintains a connection to the management endpoint of the on-premises infrastructure using locally stored credentials. If these credentials aren't updated, the resource bridge is no longer able to communicate with the management endpoint. This can cause problems when trying to upgrade the resource bridge or manage VMs through Azure.
To fix this, the credentials in the appliance VM need to be updated. For more information, see [Update credentials in the appliance VM](maintenance.md#update-credentials-in-the-appliance-vm).

## Networking issues

### Back-off pulling image error
-When trying to deploy Arc resource bridge, you may see an error that contains `back-off pulling image \\\"url"\\\: FailFastPodCondition`. This error is caused when the appliance VM can't reach the URL specified in the error. To resolve this issue, make sure the appliance VM meets system requirements, including internet access connectivity to [required allowlist URLs](network-requirements.md).
+When trying to deploy Arc resource bridge, you might see an error that contains `back-off pulling image \\\"url"\\\: FailFastPodCondition`. This error is caused when the appliance VM can't reach the URL specified in the error. To resolve this issue, make sure the appliance VM meets system requirements, including internet access connectivity to [required allowlist URLs](network-requirements.md).
### Not able to connect to URL
-If you receive an error that contains `Not able to connect to https://example.url.com`, check with your network administrator to ensure your network allows all of the required firewall and proxy URLs to deploy Arc resource bridge. For more information, see [Azure Arc resource bridge (preview) network requirements](network-requirements.md).
+If you receive an error that contains `Not able to connect to https://example.url.com`, check with your network administrator to ensure your network allows all of the required firewall and proxy URLs to deploy Arc resource bridge. For more information, see [Azure Arc resource bridge network requirements](network-requirements.md).
### .local not supported
-When trying to set the configuration for Arc resource bridge, you may receive an error message similar to:
+When trying to set the configuration for Arc resource bridge, you might receive an error message similar to:
`"message": "Post \"https://esx.lab.local/52b-bcbc707ce02c/disk-0.vmdk\": dial tcp: lookup esx.lab.local: no such host"`
This occurs when a `.local` path is provided for a configuration setting, such a
### Azure Arc resource bridge is unreachable
-Azure Arc resource bridge (preview) runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if not reserved. Rebooting the Azure Arc resource bridge (preview) or VM can trigger an IP address change, resulting in failing services.
+Azure Arc resource bridge runs a Kubernetes cluster, and its control plane requires a static IP address. The IP address is specified in the `infra.yaml` file. If the IP address is assigned from a DHCP server, the address can change if it's not reserved. Rebooting the Azure Arc resource bridge or VM can trigger an IP address change, resulting in failing services.
-Intermittently, the resource bridge (preview) can lose the reserved IP configuration. This is due to the behavior described in [loss of VIPs when systemd-networkd is restarted](https://github.com/acassen/keepalived/issues/1385). When the IP address isn't assigned to the Azure Arc resource bridge (preview) VM, any call to the resource bridge API server will fail. As a result, you can't create any new resource through the resource bridge (preview), ranging from connecting to Azure Arc private cloud, create a custom location, create a VM, etc.
+Intermittently, the resource bridge can lose the reserved IP configuration. This is due to the behavior described in [loss of VIPs when systemd-networkd is restarted](https://github.com/acassen/keepalived/issues/1385). When the IP address isn't assigned to the Azure Arc resource bridge VM, any call to the resource bridge API server will fail. As a result, you can't create any new resources through the resource bridge, such as connecting to the Azure Arc private cloud, creating a custom location, or creating a VM.
Another possible cause is slow disk access. Azure Arc resource bridge uses etcd, which requires 10 ms latency or less, per [recommendation](https://docs.openshift.com/container-platform/4.6/scalability_and_performance/recommended-host-practices.html#recommended-etcd-practices_). If the underlying disk has low performance, it can impact the operations and cause failures.
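To get a rough sense of whether the underlying disk meets the etcd latency guidance, a small probe like the following can help. This is a simplified sketch, not etcd's own benchmark; for production measurements, use a dedicated tool such as `fio`:

```python
import os
import tempfile
import time

def fsync_latency_ms(samples: int = 20) -> float:
    """Average time to write and fsync a 4 KiB block, in milliseconds."""
    total = 0.0
    with tempfile.NamedTemporaryFile() as f:
        for _ in range(samples):
            start = time.perf_counter()
            f.write(b"\0" * 4096)
            f.flush()
            os.fsync(f.fileno())  # force the write through to the disk
            total += time.perf_counter() - start
    return (total / samples) * 1000.0

print(f"average fsync latency: {fsync_latency_ms():.2f} ms")
```

An average well above 10 ms suggests the storage backing the appliance VM is a likely contributor to the failures described above.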
-To resolve this issue, reboot the resource bridge (preview) VM, and it should recover its IP address. If the address is assigned from a DHCP server, reserve the IP address associated with the resource bridge (preview).
+To resolve this issue, reboot the resource bridge VM, and it should recover its IP address. If the address is assigned from a DHCP server, reserve the IP address associated with the resource bridge.
### SSL proxy configuration issues
For more information, see [SSL proxy configuration](network-requirements.md#ssl-
### KVA timeout error
-While trying to deploy Arc Resource Bridge, a "KVA timeout error" may appear. The "KVA timeout error" is a generic error that can be the result of a variety of network misconfigurations that involve the management machine, Appliance VM, or Control Plane IP not having communication with each other, to the internet, or required URLs. This communication failure is often due to issues with DNS resolution, proxy settings, network configuration, or internet access.
+While trying to deploy Arc Resource Bridge, a "KVA timeout error" might appear. The "KVA timeout error" is a generic error that can be the result of a variety of network misconfigurations that involve the management machine, Appliance VM, or Control Plane IP not having communication with each other, to the internet, or required URLs. This communication failure is often due to issues with DNS resolution, proxy settings, network configuration, or internet access.
For clarity, "management machine" refers to the machine where deployment CLI commands are being run. "Appliance VM" is the VM that hosts Arc resource bridge. "Control Plane IP" is the IP of the control plane for the Kubernetes management cluster in the Appliance VM.
For clarity, "management machine" refers to the machine where deployment CLI com
- Management machine is unable to communicate with Control Plane IP and Appliance VM IP.
- Appliance VM is unable to communicate with the management machine, vCenter endpoint (for VMware), or MOC cloud agent endpoint (for Azure Stack HCI).
-- Appliance VM does not have internet access.
+- Appliance VM doesn't have internet access.
- Appliance VM has internet access, but connectivity to one or more required URLs is being blocked, possibly due to a proxy or firewall.
- Appliance VM is unable to reach a DNS server that can resolve internal names, such as vCenter endpoint for vSphere or cloud agent endpoint for Azure Stack HCI. The DNS server must also be able to resolve external addresses, such as Azure service addresses and container registry names.
- Proxy server configuration on the management machine or Arc resource bridge configuration files is incorrect. This can impact both the management machine and the Appliance VM. When the `az arcappliance prepare` command is run, the management machine won't be able to connect and download OS images if the host proxy isn't correctly configured. Internet access on the Appliance VM might be broken by incorrect or missing proxy configuration, which impacts the VM’s ability to pull container images.

#### Troubleshoot KVA timeout error
-To resolve the error, one or more network misconfigurations may need to be addressed. Follow the steps below to address the most common reasons for this error.
+To resolve the error, one or more network misconfigurations might need to be addressed. Follow the steps below to address the most common reasons for this error.
-1. When there is a problem with deployment, the first step is to collect logs by Appliance VM IP (not by kubeconfig, as the kubeconfig may be empty if deploy command did not complete). Problems collecting logs are most likely due to the management machine being unable to reach the Appliance VM.
+1. When there is a problem with deployment, the first step is to collect logs by Appliance VM IP (not by kubeconfig, as the kubeconfig could be empty if the deploy command didn't complete). Problems collecting logs are most likely due to the management machine being unable to reach the Appliance VM.
Once logs are collected, extract the folder and open kva.log. Review the kva.log for more information on the failure to help pinpoint the cause of the KVA timeout error.

1. The management machine must be able to communicate with the Appliance VM IP and Control Plane IP. Ping the Control Plane IP and Appliance VM IP from the management machine and verify there is a response from both IPs.
- If a request times out, the management machine is not able to communicate with the IP(s). This could be caused by a closed port, network misconfiguration or a firewall block. Work with your network administrator to allow communication between the management machine to the Control Plane IP and Appliance VM IP.
+ If a request times out, the management machine can't communicate with the IP(s). This could be caused by a closed port, network misconfiguration or a firewall block. Work with your network administrator to allow communication between the management machine to the Control Plane IP and Appliance VM IP.
-1. Appliance VM IP and Control Plane IP must be able to communicate with the management machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This may require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) for Arc resource bridge.
+1. Appliance VM IP and Control Plane IP must be able to communicate with the management machine and vCenter endpoint (for VMware) or MOC cloud agent endpoint (for HCI). Work with your network administrator to ensure the network is configured to permit this. This might require adding a firewall rule to open port 443 from the Appliance VM IP and Control Plane IP to vCenter or port 65000 and 55000 for Azure Stack HCI MOC cloud agent. Review [network requirements for Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites#network-port-requirements) and [VMware](../vmware-vsphere/quick-start-connect-vcenter-to-arc-using-script.md) for Arc resource bridge.
1. Appliance VM IP and Control Plane IP need internet access to [these required URLs](#not-able-to-connect-to-url). Azure Stack HCI requires [additional URLs](/azure-stack/hci/manage/azure-arc-vm-management-prerequisites). Work with your network administrator to ensure that the IPs can access the required URLs.
To resolve the error, one or more network misconfigurations may need to be addre
Verify that the DNS server IP used to create the configuration files has internal and external address resolution. If not, [delete the appliance](/cli/azure/arcappliance/delete), recreate the Arc resource bridge configuration files with the correct DNS server settings, and then deploy Arc resource bridge using the new configuration files.
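When ICMP ping is blocked on the network, the reachability checks in the steps above can be approximated with a TCP probe. The following is a minimal sketch; the IP address and port are placeholders (6443 is the conventional Kubernetes API server port, an assumption here, not a value from this article):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values: substitute your Control Plane IP and the relevant port
# (for example, 443 to vCenter, or 65000/55000 to the Azure Stack HCI MOC agent).
print(tcp_reachable("192.0.2.10", 6443, timeout=2.0))
```

A `False` result points at the same closed-port, firewall, or routing issues described above and is worth raising with your network administrator.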
-## Move Arc resource bridge location
+## Move Arc resource bridge location
+
Resource move of Arc resource bridge isn't currently supported. You'll need to delete the Arc resource bridge, then re-deploy it to the desired location.

## Azure Arc-enabled VMs on Azure Stack HCI issues
For general help resolving issues related to Azure Arc-enabled VMs on Azure Stac
### Authentication handshake failure
-When running an `az arcappliance` command, you may see a connection error: `authentication handshake failed: x509: certificate signed by unknown authority`
+When running an `az arcappliance` command, you might see a connection error: `authentication handshake failed: x509: certificate signed by unknown authority`
-This is usually caused when trying to run commands from remote PowerShell, which is not supported by Azure Arc resource bridge.
+This is usually caused when trying to run commands from remote PowerShell, which isn't supported by Azure Arc resource bridge.
To install Azure Arc resource bridge on an Azure Stack HCI cluster, `az arcappliance` commands must be run locally on a node in the cluster. Sign in to the node through Remote Desktop Protocol (RDP) or use a console session to run these commands.
Error: Error in reading OVA file: failed to parse ovf: strconv.ParseInt: parsing
value out of range.
```
-This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. To resolve the error, install and use Azure CLI 64-bit.
+This error occurs when you run the Azure CLI commands in a 32-bit context, which is the default behavior. The vSphere SDK only supports running in a 64-bit context. The specific error returned from the vSphere SDK is `Unable to import ova of size 6GB using govc`. To resolve the error, install and use Azure CLI 64-bit.
### Error during host configuration
-When you deploy the resource bridge on VMware vCenter, if you have been using the same template to deploy and delete the appliance multiple times, you may encounter the following error:
+When you deploy the resource bridge on VMware vCenter, if you have been using the same template to deploy and delete the appliance multiple times, you might encounter the following error:
`Appliance cluster deployment failed with error: Error: An error occurred during host configuration`
When deploying the resource bridge on VMware vCenter, you specify the folder in
### Insufficient permissions
-When deploying the resource bridge on VMware vCenter, you may get an error saying that you have insufficient permission. To resolve this issue, make sure that the user account being used to deploy the resource bridge has all of the following privileges in VMware vCenter and then try again.
-
+When deploying the resource bridge on VMware vCenter, you might get an error saying that you have insufficient permission. To resolve this issue, make sure that the user account being used to deploy the resource bridge has all of the following privileges in VMware vCenter and then try again.
**Datastore** 
-- Allocate space 
+- Allocate space
-- Browse datastore 
+- Browse datastore
-- Low level file operations 
+- Low level file operations
**Folder** 
-- Create folder
+- Create folder
-**vSphere Tagging** 
+**vSphere Tagging**
- Assign or Unassign vSphere Tag 
**Network** 
-- Assign network 
+- Assign network
-**Resource** 
+**Resource**
-- Assign virtual machine to resource pool 
+- Assign virtual machine to resource pool
-- Migrate powered off virtual machine 
+- Migrate powered off virtual machine
-- Migrate powered on virtual machine 
+- Migrate powered on virtual machine
-**Sessions** 
+**Sessions**
-- Validate session 
+- Validate session
-**vApp** 
+**vApp**
-- Assign resource pool 
+- Assign resource pool
- Import 
-**Virtual machine** 
+**Virtual machine**
-- Change Configuration 
+- Change Configuration
- - Acquire disk lease 
+ - Acquire disk lease
- - Add existing disk 
+ - Add existing disk
- - Add new disk 
+ - Add new disk
- - Add or remove device 
+ - Add or remove device
- - Advanced configuration 
+ - Advanced configuration
- - Change CPU count 
+ - Change CPU count
- - Change Memory 
+ - Change Memory
- - Change Settings 
+ - Change Settings
- - Change resource 
+ - Change resource
- - Configure managedBy 
+ - Configure managedBy
- - Display connection settings 
+ - Display connection settings
- - Extend virtual disk 
+ - Extend virtual disk
- - Modify device settings 
+ - Modify device settings
- - Query Fault Tolerance compatibility 
+ - Query Fault Tolerance compatibility
- - Query unowned files 
+ - Query unowned files
- - Reload from path 
+ - Reload from path
- - Remove disk 
+ - Remove disk
- - Rename 
+ - Rename
- - Reset guest information 
+ - Reset guest information
- - Set annotation 
+ - Set annotation
- - Toggle disk change tracking 
+ - Toggle disk change tracking
- - Toggle fork parent 
+ - Toggle fork parent
- - Upgrade virtual machine compatibility 
+ - Upgrade virtual machine compatibility
-- Edit Inventory 
+- Edit Inventory
- - Create from existing 
+ - Create from existing
- - Create new 
+ - Create new
- - Register 
+ - Register
- - Remove 
+ - Remove
- - Unregister 
+ - Unregister
-- Guest operations 
+- Guest operations
- - Guest operation alias modification 
+ - Guest operation alias modification
- - Guest operation modifications 
+ - Guest operation modifications
- - Guest operation program execution 
+ - Guest operation program execution
- - Guest operation queries 
+ - Guest operation queries
-- Interaction 
+- Interaction
- - Connect devices 
+ - Connect devices
- - Console interaction 
+ - Console interaction
- - Guest operating system management by VIX API 
+ - Guest operating system management by VIX API
- - Install VMware Tools 
+ - Install VMware Tools
- - Power off 
+ - Power off
- - Power on 
+ - Power on
- - Reset 
+ - Reset
- - Suspend 
+ - Suspend
-- Provisioning 
+- Provisioning
- - Allow disk access 
+ - Allow disk access
- - Allow file access 
+ - Allow file access
- - Allow read-only disk access 
+ - Allow read-only disk access
- - Allow virtual machine download 
+ - Allow virtual machine download
- - Allow virtual machine files upload 
+ - Allow virtual machine files upload
- - Clone virtual machine 
+ - Clone virtual machine
- - Deploy template 
+ - Deploy template
- - Mark as template 
+ - Mark as template
- - Mark as virtual machine 
+ - Mark as virtual machine
-- Snapshot management 
+- Snapshot management
- - Create snapshot 
+ - Create snapshot
- - Remove snapshot 
+ - Remove snapshot
- - Revert to snapshot 
+ - Revert to snapshot
## Next steps
When deploying the resource bridge on VMware vCenter, you may get an error sayin
If you don't see your problem here or you can't resolve your issue, try one of the following channels for support:
- Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
- Connect with [@AzureSupport](https://twitter.com/azuresupport), the official Microsoft Azure account for improving customer experience. Azure Support connects the Azure community to answers, support, and experts.
- [Open an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md).
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
Title: Upgrade Arc resource bridge (preview)
-description: Learn how to upgrade Arc resource bridge (preview) using either cloud-managed upgrade or manual upgrade.
Previously updated : 10/20/2023
+ Title: Upgrade Arc resource bridge
+description: Learn how to upgrade Arc resource bridge using either cloud-managed upgrade or manual upgrade.
Last updated : 11/03/2023
-# Upgrade Arc resource bridge (preview)
+# Upgrade Arc resource bridge
-This article describes how Arc resource bridge (preview) is upgraded, and the two ways upgrade can be performed: cloud-managed upgrade or manual upgrade. Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades. For more information, see the [Private cloud providers](#private-cloud-providers) section.
+This article describes how Arc resource bridge is upgraded, and the two ways upgrade can be performed: cloud-managed upgrade or manual upgrade. Currently, some private cloud providers differ in how they handle Arc resource bridge upgrades. For more information, see the [Private cloud providers](#private-cloud-providers) section.
## Prerequisites
Or to upgrade a resource bridge on Azure Stack HCI, run: `az arcappliance upgrad
Currently, private cloud providers differ in how they perform Arc resource bridge upgrades. Review the following information to see how to upgrade your Arc resource bridge for a specific provider.
-For Arc-enabled VMware vSphere (preview), manual upgrade is available, and cloud-managed upgrade is supported for appliances on version 1.0.15 and higher. When Arc-enabled VMware vSphere announces General Availability, appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, then another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This will deploy a new Arc resource bridge using the latest version and reconnect pre-existing Azure resources.
+For Arc-enabled VMware vSphere, manual upgrade is available, but appliances on version 1.0.15 and higher will receive cloud-managed upgrade as the default experience. Appliances that are below version 1.0.15 must be manually upgraded. A manual upgrade only upgrades the appliance to the next version, not the latest version. If you have multiple versions to upgrade, then another option is to review the steps for [performing a recovery](/azure/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion), then delete the appliance VM and perform the recovery steps. This will deploy a new Arc resource bridge using the latest version and reconnect pre-existing Azure resources.
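A manual upgrade is performed with the `az arcappliance upgrade` command mentioned above; the sketch below assumes a VMware deployment, and the config file path is a placeholder for the appliance YAML produced by your original deployment:

```shell
# Make sure the arcappliance CLI extension is current, then run the upgrade (sketch).
# Each manual upgrade moves the appliance forward by one version only.
az extension add --upgrade --name arcappliance
az arcappliance upgrade vmware --config-file ./vmware-appliance.yaml
```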
-[Azure Arc VM management (preview) on Azure Stack HCI](/azure-stack/hci/manage/azure-arc-vm-management-overview) supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade or a support request for cloud-managed upgrade. For subsequent upgrades, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
+Azure Arc VM management (preview) on Azure Stack HCI supports upgrade of an Arc resource bridge on Azure Stack HCI, version 22H2 up until appliance version 1.0.14 and `az arcappliance` CLI extension version 0.2.33. These upgrades can be done through manual upgrade. For subsequent upgrades, you must transition to Azure Stack HCI, version 23H2 (preview). In version 23H2 (preview), the LCM tool manages upgrades across all components as a "validated recipe" package. For more information, visit the [Arc VM management FAQ page](/azure-stack/hci/manage/azure-arc-vms-faq).
For Arc-enabled System Center Virtual Machine Manager (SCVMM) (preview), the upgrade feature isn't yet available. Review the steps for [performing the recovery operation](/azure/azure-arc/system-center-virtual-machine-manager/disaster-recovery), then delete the appliance VM from SCVMM and perform the recovery steps. This deploys a new resource bridge and reconnects pre-existing Azure resources.-
+
## Version releases

The Arc resource bridge version is tied to the versions of underlying components used in the appliance image, such as the Kubernetes version. When there is a change in the appliance image, the Arc resource bridge version gets incremented. This generally happens when a new `az arcappliance` CLI extension version is released. A new extension is typically released on a monthly cadence at the end of the month. For detailed release info, see the [Arc resource bridge release notes](https://github.com/Azure/ArcResourceBridge/releases) on GitHub.
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
The Azure Connected Machine agent is designed to manage agent and system resourc
| MDE.Linux | Linux | 60% | | MicrosoftDnsAgent | Windows | 100% | | MicrosoftMonitoringAgent | Windows | 60% |
- | OmsAgentForLinux | Windows | 60%|
+ | OmsAgentForLinux | Linux | 60%|
During normal operations, defined as the Azure Connected Machine agent being connected to Azure and not actively modifying an extension or evaluating a policy, you can expect the agent to consume the following system resources:
azure-arc Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md
Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
azure-arc Troubleshoot Agent Onboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-agent-onboard.md
# Troubleshoot Azure Connected Machine agent connection issues
-This article provides information for troubleshooting issues that may occur configuring the Azure Connected Machine agent for Windows or Linux. Both the interactive and at-scale installation methods when configuring connection to the service are included. For general information, see [Azure Arc-enabled servers overview](./overview.md).
+This article provides information for troubleshooting issues that might occur while configuring the Azure Connected Machine agent for Windows or Linux. It covers both the interactive and at-scale installation methods for configuring the connection to the service. For general information, see [Azure Arc-enabled servers overview](./overview.md).
## Agent error codes
-Use the following table to identify and resolve issues when configuring the Azure Connected Machine agent. You'll need the `AZCM0000` ("0000" can be any four digit number) error code printed to the console or script output.
+Use the following table to identify and resolve issues when configuring the Azure Connected Machine agent, based on the `AZCM0000` ("0000" can be any four-digit number) error code printed to the console or script output.
| Error code | Probable cause | Suggested remediation | ||-|--|
Use the following table to identify and resolve issues when configuring the Azur
| AZCM0018 | The command was executed without administrative privileges | Retry the command in an elevated user context (administrator/root). | | AZCM0019 | The path to the configuration file is incorrect | Ensure the path to the configuration file is correct and try again. | | AZCM0023 | The value provided for a parameter (argument) is invalid | Review the error message for more specific information. Refer to the syntax of the command (`azcmagent <command> --help`) for valid values or expected format for the arguments. |
-| AZCM0026 | There is an error in network configuration or some critical services are temporarily unavailable | Check if the required endpoints are reachable (for example, hostnames are resolvable, endpoints aren't blocked). If the network is configured for Private Link Scope, a Private Link Scope resource ID must be provided for onboarding using the `--private-link-scope` parameter. |
-| AZCM0041 | The credentials supplied are invalid | For device logins, verify that the user account specified has access to the tenant and subscription where the server resource will be created<sup>[1](#footnote3)</sup>.<br> For service principal logins, check the client ID and secret for correctness, the expiration date of the secret<sup>[2](#footnote4)</sup>, and that the service principal is from the same tenant where the server resource will be created<sup>[1](#footnote3)</sup>.<br> <a name="footnote3"></a><sup>1</sup>See [How to find your Microsoft Entra tenant ID](/azure/active-directory-b2c/tenant-management-read-tenant-name).<br> <a name="footnote4"></a><sup>2</sup>In Azure portal, open Microsoft Entra ID and select the App registration blade. Select the application to be used and the Certificates and secrets within it. Check whether the expiration data has passed. If it has, create new credentials with sufficient roles and try again. See [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). |
-| AZCM0042 | Creation of the Azure Arc-enabled server resource failed | Review the error message in the output to identify the cause of the failure to create resource and the suggested remediation. For permission issues, see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions) for more information. |
-| AZCM0043 | Deletion of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has permissions to delete Azure Arc-enabled server/resources in the specified group ΓÇö see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions).<br> If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. |
+| AZCM0026 | There's an error in network configuration or some critical services are temporarily unavailable | Check if the required endpoints are reachable (for example, hostnames are resolvable, endpoints aren't blocked). If the network is configured for Private Link Scope, a Private Link Scope resource ID must be provided for onboarding using the `--private-link-scope` parameter. |
+| AZCM0041 | The credentials supplied are invalid | For device logins, verify that the user account specified has access to the tenant and subscription where the server resource will be created. For service principal logins, check the client ID and secret for correctness, the expiration date of the secret, and that the service principal is from the same tenant where the server resource will be created. |
+| AZCM0042 | Creation of the Azure Arc-enabled server resource failed | Review the error message in the output to identify the cause of the failure to create resource and the suggested remediation. For more information, see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). |
+| AZCM0043 | Deletion of the Azure Arc-enabled server resource failed | Verify that the user/service principal specified has permissions to delete Azure Arc-enabled server/resources in the specified group. For more information, see [Connected Machine agent prerequisites-required permissions](prerequisites.md#required-permissions). If the resource no longer exists in Azure, use the `--force-local-only` flag to proceed. |
| AZCM0044 | A resource with the same name already exists | Specify a different name for the `--resource-name` parameter or delete the existing Azure Arc-enabled server in Azure and try again. | | AZCM0062 | An error occurred while connecting the server | Review the error message in the output for more specific information. If the error occurred after the Azure resource was created, delete this resource before retrying. | | AZCM0063 | An error occurred while disconnecting the server | Review the error message in the output for more specific information. If this error persists, delete the resource in Azure, and then run `azcmagent disconnect --force-local-only` on the server. |
Use the following table to identify and resolve issues when configuring the Azur
| AZCM0105 | An error occurred while downloading the Microsoft Entra ID managed identity certificate | Delete the resource created in Azure and try again. | | AZCM0147-<br>AZCM0152 | An error occurred while installing Azcmagent on Windows | Review the error message in the output for more specific information. | | AZCM0127-<br>AZCM0146 | An error occurred while installing Azcmagent on Linux | Review the error message in the output for more specific information. |
+| AZCM0150 | Generic failure during installation | Submit a support ticket to get assistance. |
+| AZCM0153 | The system platform isn't supported | Review the [prerequisites](prerequisites.md) for supported platforms. |
+| AZCM0154 | The version of PowerShell installed on the system is too old | Upgrade to PowerShell 4 or later and try again. |
+| AZCM0155 | The user running the installation script doesn't have administrator permissions | Re-run the script as an administrator. |
+| AZCM0156 | Installation of the agent failed | Confirm that the machine isn't running on Azure. Detailed errors might be found in the installation log at `%TEMP%\installationlog.txt`. |
+| AZCM0157 | Unable to download repo metadata for the Microsoft Linux software repository | Check if a firewall is blocking access to `packages.microsoft.com` and try again. |
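For the connectivity-related codes above (for example, AZCM0026 and AZCM0157), the agent ships a built-in network check that can be run on the target machine; a minimal sketch, where the region value is an assumption to replace with the region you onboard into:

```shell
# Verify that the machine can reach the required Azure Arc endpoints (sketch).
# Replace the location with your onboarding region.
azcmagent check --location "eastus"
```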
## Agent verbose log
The following table lists some of the known errors and suggestions on how to tro
|Message |Error |Probable cause |Solution | |--||||
-|Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp 40.126.9.7:443: connect: network is unreachable.` |Cannot reach `login.windows.net` endpoint | Verify connectivity to the endpoint. |
-|Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp 40.126.9.7:443: connect: network is Forbidden`. |Proxy or firewall is blocking access to `login.windows.net` endpoint. | Verify connectivity to the endpoint and it is not blocked by a firewall or proxy server. |
+|Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp 40.126.9.7:443: connect: network is unreachable.` |Can't reach `login.windows.net` endpoint | Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Microsoft Entra ID. |
+|Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp 40.126.9.7:443: connect: network is Forbidden`. |Proxy or firewall is blocking access to `login.windows.net` endpoint. | Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Microsoft Entra ID.|
|Failed to acquire authorization token device flow |`Error occurred while sending request for Device Authorization Code: Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/devicecode?api-version=1.0: dial tcp lookup login.windows.net: no such host`. | Group Policy Object *Computer Configuration\ Administrative Templates\ System\ User Profiles\ Delete user profiles older than a specified number of days on system restart* is enabled. | Verify the GPO is enabled and targeting the affected machine. See footnote <sup>[1](#footnote1)</sup> for further details. |
-|Failed to acquire authorization token from SPN |`Failed to execute the refresh request. Error = 'Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/token?api-version=1.0: Forbidden'` |Proxy or firewall is blocking access to `login.windows.net` endpoint. |Verify connectivity to the endpoint and it is not blocked by a firewall or proxy server. |
+|Failed to acquire authorization token from SPN |`Failed to execute the refresh request. Error = 'Post https://login.windows.net/fb84ce97-b875-4d12-b031-ef5e7edf9c8e/oauth2/token?api-version=1.0: Forbidden'` |Proxy or firewall is blocking access to `login.windows.net` endpoint. |Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Microsoft Entra ID. |
|Failed to acquire authorization token from SPN |`Invalid client secret is provided` |Wrong or invalid service principal secret. |Verify the service principal secret. |
-| Failed to acquire authorization token from SPN |`Application with identifier 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' was not found in the directory 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant` |Incorrect service principal and/or Tenant ID. |Verify the service principal and/or the tenant ID.|
+| Failed to acquire authorization token from SPN |`Application with identifier 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' wasn't found in the directory 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant` |Incorrect service principal and/or Tenant ID. |Verify the service principal and/or the tenant ID.|
|Get ARM Resource Response |`The client 'username@domain.com' with object id 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' does not have authorization to perform action 'Microsoft.HybridCompute/machines/read' over scope '/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01' or the scope is invalid. If access was recently granted, please refresh your credentials."}}" Status Code=403` |Wrong credentials and/or permissions |Verify you or the service principal is a member of the **Azure Connected Machine Onboarding** role. |
-|Failed to AzcmagentConnect ARM resource |`The subscription is not registered to use namespace 'Microsoft.HybridCompute'` |Azure resource providers are not registered. |Register the [resource providers](prerequisites.md#azure-resource-providers). |
-|Failed to AzcmagentConnect ARM resource |`Get https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01?api-version=2019-03-18-preview: Forbidden` |Proxy server or firewall is blocking access to `management.azure.com` endpoint. |Verify connectivity to the endpoint and it is not blocked by a firewall or proxy server. |
+|Failed to AzcmagentConnect ARM resource |`The subscription isn't registered to use namespace 'Microsoft.HybridCompute'` |Azure resource providers aren't registered. |Register the [resource providers](prerequisites.md#azure-resource-providers). |
+|Failed to AzcmagentConnect ARM resource |`Get https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourcegroups/myResourceGroup/providers/Microsoft.HybridCompute/machines/MSJC01?api-version=2019-03-18-preview: Forbidden` |Proxy server or firewall is blocking access to `management.azure.com` endpoint. | Run [azcmagent check](azcmagent-check.md) to see if a firewall is blocking access to Azure Resource Manager. |
<a name="footnote1"></a><sup>1</sup>If this GPO is enabled and applies to machines with the Connected Machine agent, it deletes the user profile associated with the built-in account specified for the *himds* service. As a result, it also deletes the authentication certificate used to communicate with the service that is cached in the local certificate store for 30 days. Before the 30-day limit, an attempt is made to renew the certificate. To resolve this issue, follow the steps to [disconnect the agent](azcmagent-disconnect.md) and then re-register it with the service by running `azcmagent connect`.

## Next steps
-If you don't see your problem here or you can't resolve your issue, try one of the following channels for additional support:
+If you don't see your problem here or you can't resolve your issue, try one of the following channels for more support:
* Get answers from Azure experts through [Microsoft Q&A](/answers/topics/azure-arc.html).
azure-cache-for-redis Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md
Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
azure-functions Durable Functions Timers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-timers.md
When you create a timer that expires at 4:30 pm UTC, the underlying Durable Task
> [!NOTE] > * For JavaScript, Python, and PowerShell apps, Durable timers are limited to six days. To work around this limitation, you can use the timer APIs in a `while` loop to simulate a longer delay. Up-to-date .NET and Java apps support arbitrarily long timers.
-> * Depending on the version of the SDK and [storage provider](durable-functions-storage-providers.md) being used, long timers of 6 days or more may be internally implemented using a series of shorter timers (e.g., of 3 day durations) until the desired expiration time is reached. This can be observed in the underlying data store but won't impact the the orchestration behavior.
+> * Depending on the version of the SDK and [storage provider](durable-functions-storage-providers.md) being used, long timers of 6 days or more may be internally implemented using a series of shorter timers (e.g., of 3 day durations) until the desired expiration time is reached. This can be observed in the underlying data store but won't impact the orchestration behavior.
> * Don't use built-in date/time APIs for getting the current time. When calculating a future date for a timer to expire, always use the orchestrator function's current time API. For more information, see the [orchestrator function code constraints](durable-functions-code-constraints.md#dates-and-times) article. ## Usage for delay
azure-functions Functions Bindings Event Hubs Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-event-hubs-output.md
public static async Task Run(
string newEventBody = DoSomething(eventData); // Queue the message to be sent in the background by adding it to the collector.
- // If only the event is passed, an Event Hubs partition to be be assigned via
+ // If only the event is passed, an Event Hubs partition is assigned via
// round-robin for each batch. await outputEvents.AddAsync(new EventData(newEventBody));
azure-functions Functions Run Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-run-local.md
In the terminal window or from a command prompt, run the following command to cr
func init MyProjFolder --worker-runtime dotnet-isolated ```
-By default this command creates a project that runs in-process with the Functons host on the current [Long-Term Support (LTS) version of .NET Core]. You can use the `--target-framework` option to target a specific supported version of .NET, including .NET Framework. For for information, see the [`func init`](functions-core-tools-reference.md#func-init) reference.
+By default, this command creates a project that runs in-process with the Functions host on the current [Long-Term Support (LTS) version of .NET Core]. You can use the `--target-framework` option to target a specific supported version of .NET, including .NET Framework. For more information, see the [`func init`](functions-core-tools-reference.md#func-init) reference.
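As a sketch of the `--target-framework` option described above (the framework moniker below is one example of a supported value, not the only one):

```shell
# Create an isolated worker-model project targeting a specific .NET version (sketch).
func init MyProjFolder --worker-runtime dotnet-isolated --target-framework net8.0
```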
### [In-process](#tab/in-process)
azure-government Documentation Government How To Access Enterprise Agreement Billing Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-how-to-access-enterprise-agreement-billing-account.md
documentationcenter: ''
-+ na Previously updated : 05/08/2023 Last updated : 11/03/2023 # Access your EA billing account in the Azure Government portal
Last updated 05/08/2023
As an Azure Government Enterprise Agreement (EA) customer, you can now manage your EA billing account directly from [Azure Government portal](https://portal.azure.us/). This article helps you to get started with your billing account on the Azure Government portal. > [!NOTE]
-> The Azure Enterprise (EA) portal is getting retired. We recommend that both direct and indirect EA Azure Government customers use Cost Management + Billing in the Azure Government portal to manage their enrollment and billing instead of using the EA portal.
+> On November 15, 2023, the Azure Enterprise portal is retiring for EA enrollments in the Commercial cloud and is becoming read-only for EA enrollments in the Azure Government cloud.
+> Customers and Partners should use Cost Management + Billing in the Azure portal to manage their enrollments. For more information about enrollment management in the Azure portal, see [Get started with EA billing in the Azure portal](../cost-management-billing/manage/ea-direct-portal-get-started.md).
## Access the Azure Government portal
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
This tutorial uses the [Postman] application, but you can choose a different API
1. Follow the steps outlined in the [How to create data registry] article to upload the drawing package into your Azure storage account then register it in your Azure Maps account. > [!IMPORTANT]
- > Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is is how you reference the drawing package you uploaded into your Azure storage account from your source code and HTTP requests.
+ > Make sure to make a note of the unique identifier (`udid`) value; you will need it. The `udid` is how you reference the drawing package you uploaded into your Azure storage account from your source code and HTTP requests.
2. Now that the drawing package is uploaded, use `udid` for the uploaded package to convert the package into map data. For steps on how to convert a package, see [Convert a drawing package].
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
For more information on the GeoJSON package, see the [Geojson zip package requir
Follow the steps outlined in the [How to create data registry] article to upload the GeoJSON package into your Azure storage account then register it in your Azure Maps account. > [!IMPORTANT]
-> Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is is how you reference the GeoJSON package you uploaded into your Azure storage account from your source code and HTTP requests.
+> Make a note of the unique identifier (`udid`) value; you'll need it later. The `udid` is how you reference the GeoJSON package you uploaded into your Azure storage account from your source code and HTTP requests.
### Create a dataset
azure-maps Power Bi Visual Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md
Initially all bubbles have the same fill color. If a field is passed into the **
> > **Bubble size scaling retirement** >
-> The Power BI Visual bubble layer **Bubble size scaling** settings were deprecated starting in the September 2023 release of Power BI. You can no longer create reports using these settings, but existing reports will continue to work. It is recomended that you upgrade existing reports that use these settings to the new **range scaling** property. To upgrade to to the new **range scaling** property, select the desired option in the **Range scaling** drop-down list:
+> The Power BI Visual bubble layer **Bubble size scaling** settings were deprecated starting in the September 2023 release of Power BI. You can no longer create reports using these settings, but existing reports will continue to work. It is recommended that you upgrade existing reports that use these settings to the new **range scaling** property. To upgrade, select the desired option in the **Range scaling** drop-down list:
> > :::image type="content" source="./media/power-bi-visual/range-scaling-drop-down.png" alt-text="A screenshot of the range scaling drop-down"::: >
azure-maps Release Notes Drawing Tools Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-drawing-tools-module.md
+
+ Title: Release notes - Drawing Tools Module
+
+description: Release notes for the Azure Maps Drawing Tools Module.
+ Last updated : 10/25/2023
+# Drawing Tools Module release notes
+
+This document contains information about new features and other changes to the Azure Maps Drawing Tools Module.
+
+## [1.0.2]
+
+### Bug fixes (1.0.2)
+
+- Resolved various errors in the type declaration file.
+
+- Fixed typing incompatibility with MapControl v3.
+
+## Next steps
+
+Explore samples showcasing Azure Maps:
+
+> [!div class="nextstepaction"]
+> [Azure Maps Drawing Tools Samples]
+
+Stay up to date on Azure Maps:
+
+> [!div class="nextstepaction"]
+> [Azure Maps Blog]
+[1.0.2]: https://www.npmjs.com/package/azure-maps-drawing-tools/v/1.0.2
+[Azure Maps Drawing Tools Samples]: https://samples.azuremaps.com/?search=Drawing
+[Azure Maps Blog]: https://techcommunity.microsoft.com/t5/azure-maps-blog/bg-p/AzureMapsBlog
azure-maps Release Notes Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-indoor-module.md
This document contains information about new features and other changes to the Azure Maps Indoor Module.
+## [0.2.3]
+
+### Changes (0.2.3)
+
+- Improve rendering performance by reading facility-level data from the style metadata when available.
+ ## [0.2.2] ### Changes (0.2.2)
Stay up to date on Azure Maps:
> [Azure Maps Blog] [drawing package 2.0]: ./drawing-package-guide.md
+[0.2.3]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.3
[0.2.2]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.2 [0.2.1]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.1 [0.2.0]: https://www.npmjs.com/package/azure-maps-indoor/v/0.2.0
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
This document contains information about new features and other changes to the M
## v3 (latest)
+### [3.0.2] (November 1, 2023)
+
+#### Bug fixes (3.0.2)
+
+- Addressed several errors in the type declaration file and added a dependency for `@maplibre/maplibre-gl-style-spec`.
+
+#### Other changes (3.0.2)
+
+- Removed Authorization headers from style, thumbnail, sprite, and glyph requests to enhance CDN caching for static assets.
+
+- Updated the documentation for `map.clear()` and `layers.clear()`.
+ ### [3.0.1] (October 6, 2023) #### Bug fixes (3.0.1)
The preview is available on [npm][3.0.0-preview.10] and CDN.
#### Bug fixes (3.0.0-preview.9) -- Fixed an issue where accessibility-related duplicated DOM elements may result when `map.setServiceOptions` is called
+- Fixed an issue where accessibility-related duplicated DOM elements might result when `map.setServiceOptions` is called
#### Installation (3.0.0-preview.9)- The preview is available on [npm][3.0.0-preview.9] and CDN. - **NPM:** Refer to the instructions at [azure-maps-control@3.0.0-preview.9][3.0.0-preview.9]
The preview is available on [npm][3.0.0-preview.3] and CDN.
#### New features (3.0.0-preview.3) - **\[BREAKING\]** Migrated from [adal-angular] to [@azure/msal-browser] used for authentication with Microsoft Azure Active Directory ([Azure AD]).
- Changes that may be required:
+ Changes that might be required:
- `Platform / Reply URL` Type must be set to `Single-page application` on Azure AD App Registration portal. - Code change is required if a custom `authOptions.authContext` is used. - For more information, see [How to migrate a JavaScript app from ADAL.js to MSAL.js][migration guide].
This update is the first preview of the upcoming 3.0.0 release. The underlying [
## v2
+### [2.3.4] (November 1, 2023)
+
+#### Other changes (2.3.4)
+
+- Removed Authorization headers from style, thumbnail, sprite, and glyph requests to enhance CDN caching for static assets.
+
+- Updated the documentation for `map.clear()` and `layers.clear()`.
+ ### [2.3.3] (October 6, 2023) #### Bug fixes (2.3.3)
This update is the first preview of the upcoming 3.0.0 release. The underlying [
#### Bug fixes (2.3.2) -- Fixed an issue where accessibility-related duplicated DOM elements may result when `map.setServiceOptions` is called.
+- Fixed an issue where accessibility-related duplicated DOM elements might result when `map.setServiceOptions` is called.
- Fixed zoom control to take into account the `maxBounds` [CameraOptions].
This update is the first preview of the upcoming 3.0.0 release. The underlying [
#### Bug fixes (2.3.1) -- Fix `ImageSpriteManager` icon images may get removed during style change
+- Fixed an issue where `ImageSpriteManager` icon images might get removed during a style change
#### Other changes (2.3.1)
Stay up to date on Azure Maps:
> [!div class="nextstepaction"] > [Azure Maps Blog]
+[3.0.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.2
[3.0.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.1 [3.0.0]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0 [3.0.0-preview.10]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.10
Stay up to date on Azure Maps:
[3.0.0-preview.3]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.3 [3.0.0-preview.2]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.2 [3.0.0-preview.1]: https://www.npmjs.com/package/azure-maps-control/v/3.0.0-preview.1
+[2.3.4]: https://www.npmjs.com/package/azure-maps-control/v/2.3.4
[2.3.3]: https://www.npmjs.com/package/azure-maps-control/v/2.3.3 [2.3.2]: https://www.npmjs.com/package/azure-maps-control/v/2.3.2 [2.3.1]: https://www.npmjs.com/package/azure-maps-control/v/2.3.1
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md
Title: Supported built-in Azure Maps map styles
description: Learn about the built-in map styles that Azure Maps supports, such as road, blank_accessible, satellite, satellite_road_labels, road_shaded_relief, and night. Previously updated : 04/26/2020 Last updated : 11/01/2023
Azure Maps supports several different built-in map styles as described in this a
A **road** map is a standard map that displays roads. It also displays natural and artificial features, and the labels for those features.
-![road map style](./media/supported-map-styles/road.png)
**Applicable APIs:**
The **blank** and **blank_accessible** map styles provide a blank canvas for vis
The **satellite** style is a combination of satellite and aerial imagery.
-![satellite tile map style](./media/supported-map-styles/satellite.png)
**Applicable APIs:**
The **satellite** style is a combination of satellite and aerial imagery.
This map style is a hybrid of roads and labels overlaid on top of satellite and aerial imagery.
-![satellite_road_labels map style](./media/supported-map-styles/satellite-road-labels.png)
**Applicable APIs:**
This map style is a hybrid of roads and labels overlaid on top of satellite and
**grayscale dark** is a dark version of the road map style.
-![gray_scale map style](./media/supported-map-styles/grayscale-dark.png)
**Applicable APIs:**
This map style is a hybrid of roads and labels overlaid on top of satellite and
**grayscale light** is a light version of the road map style.
-![grayscale light map style](./media/supported-map-styles/grayscale-light.jpg)
**Applicable APIs:**
This map style is a hybrid of roads and labels overlaid on top of satellite and
**night** is a dark version of the road map style with colored roads and symbols.
-![night map style](./media/supported-map-styles/night.png)
**Applicable APIs:**
This map style is a hybrid of roads and labels overlaid on top of satellite and
**road shaded relief** is an Azure Maps main style completed with contours of the Earth.
-![shaded relief map style](./media/supported-map-styles/shaded-relief.png)
**Applicable APIs:**
This map style is a hybrid of roads and labels overlaid on top of satellite and
**high_contrast_dark** is a dark map style with a higher contrast than the other styles.
-![high contrast dark map style](./media/supported-map-styles/high-contrast-dark.png)
**Applicable APIs:**
This map style is a hybrid of roads and labels overlaid on top of satellite and
**high_contrast_light** is a light map style with a higher contrast than the other styles.
-![high contrast light map style](./media/supported-map-styles/high-contrast-light.jpg)
**Applicable APIs:**
The interactive Azure Maps map controls use vector tiles in the map styles to po
| `road` | Partial | Yes | The main colorful road map style in Azure Maps. Due to the number of different colors and possible overlapping color combinations, it's nearly impossible to make it 100% accessible. That said, this map style goes through regular accessibility testing and is improved as needed to make labels clearer to read. | | `road_shaded_relief` | Partial | Yes | Similar to the main road map style, but has an added tile layer in the background that adds shaded relief of mountains and land cover coloring when zoomed out. | | `satellite` | N/A | Yes | Purely satellite and aerial imagery, no labels, or road lines. The vector tiles are loaded behind the scenes to power the screen reader and to make for a smoother transition when switching to `satellite_with_roads`. |
-| `satellite_with_roads` | No | Yes | Satellite and aerial imagery, with labels and road lines overlaid. On a global scale, there's an unlimited number of color combinations that may occur between the overlaid data and the imagery. A focus on making labels readable in most common scenarios, however, in some places the color contrast with the background imagery may make labels difficult to read. |
+| `satellite_with_roads` | No | Yes | Satellite and aerial imagery, with labels and road lines overlaid. On a global scale, there's an unlimited number of color combinations that might occur between the overlaid data and the imagery. The focus is on making labels readable in most common scenarios; however, in some places the color contrast with the background imagery might make labels difficult to read. |
## Next steps
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
description: Tutorial on how to use Microsoft Azure Maps to create store locator web applications. Previously updated : 01/03/2022 Last updated : 11/01/2023
This section lists the Azure Maps features that are demonstrated in the Contoso
The following screenshot shows the general layout of the Contoso Coffee store locator application. To view and interact with the live sample, see the [Simple Store Locator] sample application on the **Azure Maps Code Samples** site. To maximize the usefulness of this store locator, we include a responsive layout that adjusts when a user's screen width is smaller than 700 pixels wide. A responsive layout makes it easy to use the store locator on a small screen, like on a mobile device. Here's a screenshot showing a sample of the small-screen layout:
The first time a user selects the My Location button, the browser displays a sec
When you zoom in close enough in an area that has coffee shop locations, the clusters separate into individual locations. Select one of the icons on the map or select an item in the side panel to see a pop-up window. The pop-up shows information for the selected location.
-![Screenshot of the finished store locator](./media/tutorial-create-store-locator/finished-simple-store-locator.png)
If you resize the browser window to fewer than 700 pixels wide or open the application on a mobile device, the layout changes to be better suited for smaller screens.
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
This tutorial uses the [Postman] application, but you can use a different API de
Follow the steps outlined in the [How to create data registry] article to upload the GeoJSON package into your Azure storage account then register it in your Azure Maps account. > [!IMPORTANT]
-> Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is is how you reference the GeoJSON package you uploaded into your Azure storage account from your source code and HTTP requests.
+> Make a note of the unique identifier (`udid`) value; you'll need it later. The `udid` is how you reference the GeoJSON package you uploaded into your Azure storage account from your source code and HTTP requests.
## Convert a drawing package
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
for loc in range(len(searchPolyResponse["results"])):
## Upload the reachable range and charging points
-It's helpful to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle on a map. Follow the steps outlined in the [How to create data registry] article to upload the boundary data and charging stations data as geojson objects to your [Azure storage account] then register them in your Azure Maps account. Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is is how you reference the geojson objects you uploaded into your Azure storage account from your source code.
+It's helpful to visualize the charging stations and the boundary for the maximum reachable range of the electric vehicle on a map. Follow the steps outlined in the [How to create data registry] article to upload the boundary data and charging station data as geojson objects to your [Azure storage account], then register them in your Azure Maps account. Make a note of the unique identifier (`udid`) value; you'll need it later. The `udid` is how you reference the geojson objects you uploaded into your Azure storage account from your source code.
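Before uploading, the boundary and charging-point results need to be serialized as GeoJSON files. A minimal Python sketch of that step; the variable names and the sample polygon are hypothetical stand-ins for the data the notebook extracts from its route-range response:

```python
import json

# Hypothetical stand-in for the reachable-range polygon returned by the
# route range call; the real notebook builds this from its API response.
boundary_feature = {
    "type": "Feature",
    "properties": {},
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-122.0, 47.6], [-122.1, 47.6], [-122.1, 47.7], [-122.0, 47.6]]],
    },
}

# Write a FeatureCollection so the file is valid GeoJSON ready for upload.
with open("reachableRangeBoundary.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": [boundary_feature]}, f)
```

The resulting file is what gets uploaded to the storage account and registered, yielding the `udid` used later.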
<! To upload the boundary and charging point data to Azure Maps Data service, run the following two cells:
routeData = {
## Visualize the route
-To help visualize the route, follow the steps outlined in the [How to create data registry] article to upload the route data as a geojson object to your [Azure storage account] then register it in your Azure Maps account. Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is is how you reference the geojson objects you uploaded into your Azure storage account from your source code. Then, call the rendering service, [Get Map Image API], to render the route on the map, and visualize it.
+To help visualize the route, follow the steps outlined in the [How to create data registry] article to upload the route data as a geojson object to your [Azure storage account], then register it in your Azure Maps account. Make a note of the unique identifier (`udid`) value; you'll need it later. The `udid` is how you reference the geojson objects you uploaded into your Azure storage account from your source code. Then, call the rendering service, [Get Map Image API], to render the route on the map and visualize it.
To get an image for the rendered route on the map, run the following script:
azure-maps Tutorial Geofence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-geofence.md
Create the geofence JSON file using the following geofence data. You'll upload t
Follow the steps outlined in the [How to create data registry] article to upload the geofence JSON file into your Azure storage account then register it in your Azure Maps account. > [!IMPORTANT]
-> Make sure to make a note of the unique identifier (`udid`) value, you will need it. The `udid` is is how you reference the geofence you uploaded into your Azure storage account from your source code and HTTP requests.
+> Make a note of the unique identifier (`udid`) value; you'll need it later. The `udid` is how you reference the geofence you uploaded into your Azure storage account from your source code and HTTP requests.
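Once registered, the geofence is referenced by its `udid` in Spatial Geofence requests. The following Python sketch only assembles such a request URL; the parameter set and API version are assumptions modeled on the Spatial Geofence Get API and should be checked against the tutorial's Logic Apps request:

```python
from urllib.parse import urlencode

def build_geofence_check_url(udid: str, device_id: str, lat: float, lon: float, key: str) -> str:
    """Assemble a (hypothetical) Spatial Geofence Get request URL."""
    query = urlencode({
        "api-version": "2022-08-01",   # assumed version; verify against the API reference
        "deviceId": device_id,
        "udid": udid,                  # the geofence JSON you registered above
        "lat": lat,
        "lon": lon,
        "searchBuffer": 5,             # meters around the geofence to evaluate
        "subscription-key": key,
    })
    return f"https://atlas.microsoft.com/spatial/geofence/json?{query}"

url = build_geofence_check_url("<your-udid>", "device_1", 47.638237, -122.1324831, "<key>")
```

Each position report from a device plugs its coordinates into a request like this, and the response indicates the distance to the geofence boundary.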
## Create workflows in Azure Logic Apps
azure-maps Webgl Custom Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md
You need to add the following script files.
This sample renders an animated 3D parrot on the map.
-![A screenshot showing an an animated 3D parrot on the map.](./media/how-to-webgl-custom-layer/3d-parrot.gif)
+![A screenshot showing an animated 3D parrot on the map.](./media/how-to-webgl-custom-layer/3d-parrot.gif)
For a fully functional sample with source code, see [Three custom WebGL layer] in the Azure Maps Samples.
azure-monitor Agent Windows Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows-troubleshoot.md
Both of these scripts might take a while to complete.
# This script uses parallel processing; modify the $parallelThrottleLimit parameter to either increase or decrease the number of parallel processes # PS> .\UpdateMMA.ps1 GetInventory # The above command will generate a csv file with the details of VMs and VMSS that require MMA upgrade.
-# The customer can modify the the csv by adding/removing rows if needed
+# The customer can modify the csv by adding/removing rows if needed
# Update the MMA by running the script again and passing the csv file as a parameter as shown below: # PS> .\UpdateMMA.ps1 Upgrade # If you don't want to check the inventory, then run the script with an additional -no-inventory-check
azure-monitor Log Analytics Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/log-analytics-agent.md
Title: Log Analytics agent overview
description: This article helps you understand how to collect data and monitor computers hosted in Azure, on-premises, or other cloud environments with Log Analytics. -+ Last updated 07/06/2023
azure-monitor Autoscale Multiprofile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-multiprofile.md
The example below shows how to create two recurring profiles. One profile for we
Use the following command to deploy the template: `az deployment group create --name VMSS1-Autoscale-607 --resource-group rg-vmss1 --template-file VMSS1-autoscale.json`
-where *VMSS1-autoscale.json* is the the file containing the JSON object below.
+where *VMSS1-autoscale.json* is the file containing the JSON object below.
``` JSON {
azure-monitor Container Insights Logging V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-logging-v2.md
For Python:
Customers must [enable ContainerLogV2](./container-insights-logging-v2.md#enable-the-containerlogv2-schema) for multi-line logging to work. ### How to enable
-Multi-line logging feature can be enabled by setting **enabled** flag to "true" under the `[log_collection_settings.enable_multiline_logs]` section in the the [config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml)
+The multi-line logging feature can be enabled by setting the **enabled** flag to "true" under the `[log_collection_settings.enable_multiline_logs]` section in the [config map](https://github.com/microsoft/Docker-Provider/blob/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml).
```yaml [log_collection_settings.enable_multiline_logs]
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
Activity log events are retained in Azure for *90 days* and then deleted. There'
## View the activity log You can access the activity log from most menus in the Azure portal. The menu that you open it from determines its initial filter. If you open it from the **Monitor** menu, the only filter is on the subscription. If you open it from a resource's menu, the filter is set to that resource. You can always change the filter to view all other entries. Select **Add Filter** to add more properties to the filter.-
-![Screenshot that shows the activity log.](./media/activity-log/view-activity-log.png)
+<!-- convertborder later -->
For a description of activity log categories, see [Azure activity log event schema](activity-log-schema.md#categories). ## Download the activity log Select **Download as CSV** to download the events in the current view.-
-![Screenshot that shows downloading the activity log.](media/activity-log/download-activity-log.png)
+<!-- convertborder later -->
### View change history For some events, you can view the change history, which shows what changes happened during that event time. Select an event from the activity log you want to look at more deeply. Select the **Change history** tab to view any changes on the resource up to 30 minutes before and after the time of the operation.
-![Screenshot that shows the Change history list for an event.](media/activity-log/change-history-event.png)
If any changes are associated with the event, you'll see a list of changes that you can select. Selecting a change opens the **Change history** page. This page displays the changes to the resource. In the following example, you can see that the VM changed sizes. The page displays the VM size before the change and after the change. To learn more about change history, see [Get resource changes](../../governance/resource-graph/how-to/get-resource-changes.md).
-![Screenshot that shows the Change history page showing differences.](media/activity-log/change-history-event-details.png)
### Other methods to retrieve activity log events
You can also access activity log events by using the following methods:
Select **Export Activity Logs** to send the activity log to a Log Analytics workspace.
- ![Screenshot that shows exporting activity logs.](media/activity-log/diagnostic-settings-export.png)
+ :::image type="content" source="media/activity-log/diagnostic-settings-export.png" lightbox="media/activity-log/diagnostic-settings-export.png" alt-text="Screenshot that shows exporting activity logs.":::
You can send the activity log from any single subscription to up to five workspaces.
azure-monitor Collect Custom Metrics Guestos Resource Manager Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-resource-manager-vm.md
To deploy the ARM template, we use Azure PowerShell.
1. On the **Monitor** page, select **Metrics**.
- ![Screenshot that shows the Metrics page.](media/collect-custom-metrics-guestos-resource-manager-vm/metrics.png)
+ :::image type="content" source="media/collect-custom-metrics-guestos-resource-manager-vm/metrics.png" lightbox="media/collect-custom-metrics-guestos-resource-manager-vm/metrics.png" alt-text="Screenshot that shows the Metrics page.":::
1. Change the aggregation period to **Last 30 minutes**.
azure-monitor Collect Custom Metrics Guestos Vm Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-classic.md
The process that's outlined in this article only works on classic virtual machin
## Create a classic virtual machine and storage account 1. Create a classic VM by using the Azure portal.
- ![Create Classic VM](./media/collect-custom-metrics-guestos-vm-classic/create-classic-vm.png)
+ :::image type="content" source="./media/collect-custom-metrics-guestos-vm-classic/create-classic-vm.png" lightbox="./media/collect-custom-metrics-guestos-vm-classic/create-classic-vm.png" alt-text="Create Classic VM":::
1. When you're creating this VM, choose the option to create a new classic storage account. We use this storage account in later steps. 1. In the Azure portal, go to the **Storage accounts** resource pane. Select **Keys**, and take note of the storage account name and storage account key. You need this information in later steps.
- ![Storage access keys](./media/collect-custom-metrics-guestos-vm-classic/storage-access-keys.png)
+ :::image type="content" source="./media/collect-custom-metrics-guestos-vm-classic/storage-access-keys.png" lightbox="./media/collect-custom-metrics-guestos-vm-classic/storage-access-keys.png" alt-text="Storage access keys":::
## Create a service principal
Give this app "Monitoring Metrics Publisher" permissions to the resource tha
1. On the **Monitor** pane on the left, select **Metrics**.
- ![Navigate metrics](./media/collect-custom-metrics-guestos-vm-classic/navigate-metrics.png)
+ :::image type="content" source="./media/collect-custom-metrics-guestos-vm-classic/navigate-metrics.png" lightbox="./media/collect-custom-metrics-guestos-vm-classic/navigate-metrics.png" alt-text="Navigate metrics":::
1. In the resources drop-down menu, select your classic VM. 1. In the namespaces drop-down menu, select **azure.vm.windows.guest**. 1. In the metrics drop-down menu, select **Memory\Committed Bytes in Use**.
- ![Plot metrics](./media/collect-custom-metrics-guestos-vm-classic/plot-metrics.png)
+ :::image type="content" source="./media/collect-custom-metrics-guestos-vm-classic/plot-metrics.png" lightbox="./media/collect-custom-metrics-guestos-vm-classic/plot-metrics.png" alt-text="Plot metrics":::
## Next steps
azure-monitor Collect Custom Metrics Guestos Vm Cloud Service Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/collect-custom-metrics-guestos-vm-cloud-service-classic.md
The process that's outlined in this article works only for performance counters
1. Create and deploy a classic cloud service. A sample classic Cloud Services application and deployment can be found at [Get started with Azure Cloud Services and ASP.NET](../../cloud-services/cloud-services-dotnet-get-started.md). 2. You can use an existing storage account or deploy a new storage account. It's best if the storage account is in the same region as the classic cloud service that you created. In the Azure portal, go to the **Storage accounts** resource pane, and then select **Keys**. Take note of the storage account name and the storage account key. You'll need this information in later steps.-
- ![Storage account keys](./media/collect-custom-metrics-guestos-vm-cloud-service-classic/storage-keys.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/collect-custom-metrics-guestos-vm-cloud-service-classic/storage-keys.png" lightbox="./media/collect-custom-metrics-guestos-vm-cloud-service-classic/storage-keys.png" alt-text="Storage account keys" border="false":::
## Create a service principal
Set-AzureServiceDiagnosticsExtension -ServiceName <classicCloudServiceName> -Sto
1. Go to the Azure portal.
- ![Screenshot shows the Azure portal with Monitor, then Metrics selected.](./media/collect-custom-metrics-guestos-vm-cloud-service-classic/navigate-metrics.png)
+ :::image type="content" source="./media/collect-custom-metrics-guestos-vm-cloud-service-classic/navigate-metrics.png" lightbox="./media/collect-custom-metrics-guestos-vm-cloud-service-classic/navigate-metrics.png" alt-text="Screenshot shows the Azure portal with Monitor, then Metrics selected.":::
2. On the left menu, select **Monitor.**
Set-AzureServiceDiagnosticsExtension -ServiceName <classicCloudServiceName> -Sto
6. In the metrics drop-down menu, select **Memory\Committed Bytes in Use**. You use the dimension filtering and splitting capabilities to view the total memory that's used by a specific role or role instance. -
- ![Screenshot shows Metrics data.](./media/collect-custom-metrics-guestos-vm-cloud-service-classic/metrics-graph.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/collect-custom-metrics-guestos-vm-cloud-service-classic/metrics-graph.png" lightbox="./media/collect-custom-metrics-guestos-vm-cloud-service-classic/metrics-graph.png" alt-text="Screenshot shows Metrics data." border="false":::
## Next steps
azure-monitor Data Collection Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-endpoint-overview.md
This table describes the components of a data collection endpoint, related regio
# [Azure portal](#tab/portal) 1. On the **Azure Monitor** menu in the Azure portal, select **Data Collection Endpoints** under the **Settings** section. Select **Create** to create a new DCR and assignment.-
- [![Screenshot that shows data collection endpoints.](media/data-collection-endpoint-overview/data-collection-endpoint-overview.png)](media/data-collection-endpoint-overview/data-collection-endpoint-overview.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/data-collection-endpoint-overview/data-collection-endpoint-overview.png" lightbox="media/data-collection-endpoint-overview/data-collection-endpoint-overview.png" alt-text="Screenshot that shows data collection endpoints." border="false":::
1. Select **Create** to create a new endpoint. Provide a **Rule name** and specify a **Subscription**, **Resource Group**, and **Region**. This information specifies where the DCE will be created.-
- [![Screenshot that shows data collection rule basics.](media/data-collection-endpoint-overview/data-collection-endpoint-basics.png)](media/data-collection-endpoint-overview/data-collection-endpoint-basics.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/data-collection-endpoint-overview/data-collection-endpoint-basics.png" lightbox="media/data-collection-endpoint-overview/data-collection-endpoint-basics.png" alt-text="Screenshot that shows data collection rule basics." border="false":::
1. Select **Review + create** to review the details of the DCE. Select **Create** to create it.
azure-monitor Data Platform Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-platform-metrics.md
There are multiple types of metrics supported by Azure Monitor Metrics:
- Custom metrics are collected from different sources that you configure, including applications and agents running on virtual machines. - Prometheus metrics are collected from Kubernetes clusters, including Azure Kubernetes Service (AKS), and use industry-standard tools for analysis and alerting, such as PromQL and Grafana.
-![Diagram that shows sources and uses of metrics.](media/data-platform-metrics/metrics-overview.png)
The differences between each of the metrics are summarized in the following table.
For a complete list of data sources that can send data to Azure Monitor Metrics,
Azure Monitor provides REST APIs that allow you to get data in and out of Azure Monitor Metrics. - **Custom metrics API** - [Custom metrics](./metrics-custom-overview.md) allow you to load your own metrics into the Azure Monitor Metrics database. Those metrics can then be used by the same analysis tools that process Azure Monitor platform metrics. - **Azure Monitor Metrics REST API** - Allows you to access Azure Monitor platform metrics definitions and values. For more information, see [Azure Monitor REST API](/rest/api/monitor/). For information on how to use the API, see the [Azure monitoring REST API walkthrough](./rest-api-walkthrough.md).-- **Azure Monitor Metrics Data plane REST API** - [Azure Monitor Metrics data plane API](/rest/api/monitor/metrics-data-plane/) is a high-volume API designed for customers with large volume metrics queries. It's similar to the existing standard Azure Monitor Metrics REST API, but provides the capability to retrieve metric data for up to 50 resource IDs in the same subscription and region in a single batch API call. This improves query throughput and reduces the risk of throttling.
+- **Azure Monitor Metrics Batch REST API** - [Azure Monitor Metrics Batch API](/rest/api/monitor/metrics-batch/) is a high-volume API designed for customers with large volume metrics queries. It's similar to the existing standard Azure Monitor Metrics REST API, but provides the capability to retrieve metric data for up to 50 resource IDs in the same subscription and region in a single batch API call. This improves query throughput and reduces the risk of throttling.
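The batching arithmetic is easy to sketch. The following is a minimal Python helper (hypothetical, not the actual API client) that splits a list of resource IDs into batches of at most 50, the documented per-call limit:

```python
def batch_resource_ids(resource_ids, batch_size=50):
    """Split resource IDs into batches of at most batch_size items.

    With a batch API, each batch needs one HTTP call instead of one
    call per resource, which cuts the request count by up to 50x.
    """
    return [resource_ids[i:i + batch_size]
            for i in range(0, len(resource_ids), batch_size)]

# 120 resource IDs collapse into 3 batch calls instead of 120 single calls.
batches = batch_resource_ids([f"/subscriptions/s/vm{i}" for i in range(120)])
```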
## Security
Secure connection is established between the agent and the Azure Monitor service
## Metrics Explorer Use [Metrics Explorer](metrics-charts.md) to interactively analyze the data in your metric database and chart the values of multiple metrics over time. You can pin the charts to a dashboard to view them with other visualizations. You can also retrieve metrics by using the [Azure monitoring REST API](./rest-api-walkthrough.md).-
-![Screenshot that shows an example graph in Metrics Explorer that displays server requests, server response time, and failed requests.](media/data-platform-metrics/metrics-explorer.png)
+<!-- convertborder later -->
For more information, see [Analyze metrics with Azure Monitor metrics explorer](./analyze-metrics.md).
azure-monitor Diagnostic Settings Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/diagnostic-settings-policy.md
For details on creating an initiative, see [Create and assign an initiative defi
- Set **Category** to **Monitoring** to group it with related built-in and custom policy definitions. - Instead of specifying the details for the Log Analytics workspace and the event hub for policy definitions included in the initiative, use a common initiative parameter. This parameter allows you to easily specify a common value for all policy definitions and change that value if necessary.-
-![Screenshot that shows settings for initiative definition.](media/diagnostic-settings-policy/initiative-definition.png)
+<!-- convertborder later -->
## Assignment Assign the initiative to an Azure management group, subscription, or resource group, depending on the scope of your resources to monitor. A [management group](../../governance/management-groups/overview.md) is useful for scoping policy, especially if your organization has multiple subscriptions.-
-![Screenshot of the settings for the Basics tab in the Assign initiative section of the Diagnostic settings to Log Analytics workspace in the Azure portal.](media/diagnostic-settings-policy/initiative-assignment.png)
+<!-- convertborder later -->
By using initiative parameters, you can specify the workspace or any other details once for all of the policy definitions in the initiative. -
-![Screenshot that shows initiative parameters on the Parameters tab.](media/diagnostic-settings-policy/initiative-parameters.png)
+<!-- convertborder later -->
## Remediation The initiative will be applied to each virtual machine as it's created. A [remediation task](../../governance/policy/how-to/remediate-resources.md) deploys the policy definitions in the initiative to existing resources, so you can create diagnostic settings for any resources that were already created. When you create the assignment by using the Azure portal, you have the option of creating a remediation task at the same time. See [Remediate non-compliant resources with Azure Policy](../../governance/policy/how-to/remediate-resources.md) for details on the remediation.-
-![Screenshot that shows initiative remediation for a Log Analytics workspace.](media/diagnostic-settings-policy/initiative-remediation.png)
+<!-- convertborder later -->
## Troubleshooting
azure-monitor Metrics Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-troubleshoot.md
Use this article if you run into issues with creating, customizing, or interpret
## Chart shows no data
-Sometimes the charts might show no data after selecting correct resources and metrics. This behavior can be caused by several of the following reasons:
+Sometimes the charts might show no data after you select the correct resources and metrics. Several of the following reasons can cause this behavior:
### Microsoft.Insights resource provider isn't registered for your subscription
Exploring metrics requires *Microsoft.Insights* resource provider registered in
### You don't have sufficient access rights to your resource
-In Azure, access to metrics is controlled by [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). You must be a member of [monitoring reader](../../role-based-access-control/built-in-roles.md#monitoring-reader), [monitoring contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor), or [contributor](../../role-based-access-control/built-in-roles.md#contributor) to explore metrics for any resource.
+In Azure, [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) controls access to metrics. You must be a member of [monitoring reader](../../role-based-access-control/built-in-roles.md#monitoring-reader), [monitoring contributor](../../role-based-access-control/built-in-roles.md#monitoring-contributor), or [contributor](../../role-based-access-control/built-in-roles.md#contributor) to explore metrics for any resource.
**Solution:** Ensure that you have sufficient permissions for the resource from which you're exploring metrics. ### Your resource didn't emit metrics during the selected time range
-Some resources don’t constantly emit their metrics. For example, Azure won't collect metrics for stopped virtual machines. Other resources might emit their metrics only when some condition occurs. For example, a metric showing processing time of a transaction requires at least one transaction. If there were no transactions in the selected time range, the chart will naturally be empty. Additionally, while most of the metrics in Azure are collected every minute, there are some that are collected less frequently. See the metric documentation to get more details about the metric that you're trying to explore.
+Some resources don’t constantly emit their metrics. For example, Azure doesn't collect metrics for stopped virtual machines. Other resources might emit their metrics only when some condition occurs. For example, a metric showing processing time of a transaction requires at least one transaction. If there were no transactions in the selected time range, the chart is naturally empty. Additionally, while most of the metrics in Azure are collected every minute, there are some that are collected less frequently. See the metric documentation to get more details about the metric that you're trying to explore.
**Solution:** Change the time of the chart to a wider range. You may start from “Last 30 days” using a larger time granularity (or relying on the “Automatic time granularity” option).
Some resources don’t constantly emit their metrics. For example, Azure won't c
[Most metrics in Azure are stored for 93 days](../essentials/data-platform-metrics.md#retention-of-metrics). However, you can only query for no more than 30 days worth of data on any single chart. This limitation doesn't apply to [log-based metrics](../app/pre-aggregated-metrics-log-metrics.md#log-based-metrics).
-**Solution:** If you see a blank chart or your chart only displays part of metric data, verify that the difference between start- and end- dates in the time picker doesn't exceed the 30-day interval. Once you have selected a 30 day interval, you can [pan](metrics-charts.md#pan) the chart to view the full retention window.
+**Solution:** If you see a blank chart or your chart only displays part of metric data, verify that the difference between the start and end dates in the time picker doesn't exceed the 30-day interval. Once you select a 30-day interval, you can [pan](metrics-charts.md#pan) the chart to view the full retention window.
### All metric values were outside of the locked y-axis range
This problem may happen when your dashboard was created with a metric that was l
## Chart shows dashed line
-Azure metrics charts use dashed line style to indicate that there's a missing value (also known as “null value”) between two known time grain data points. For example, if in the time selector you picked “1 minute” time granularity but the metric was reported at 07:26, 07:27, 07:29, and 07:30 (note a minute gap between second and third data points), then a dashed line will connect 07:27 and 07:29 and a solid line will connect all other data points. The dashed line drops down to zero when the metric uses **count** and **sum** aggregation. For the **avg**, **min** or **max** aggregations, the dashed line connects two nearest known data points. Also, when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.
- ![Screenshot that shows how when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.](./media/metrics-troubleshoot/dashed-line.png)
+Azure metrics charts use dashed line style to indicate that there's a missing value (also known as “null value”) between two known time grain data points. For example, if in the time selector you picked “1 minute” time granularity but the metric was reported at 07:26, 07:27, 07:29, and 07:30 (note a minute gap between second and third data points), then a dashed line connects 07:27 and 07:29 and a solid line connects all other data points. The dashed line drops down to zero when the metric uses **count** and **sum** aggregation. For the **avg**, **min** or **max** aggregations, the dashed line connects two nearest known data points. Also, when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.
+ :::image type="content" source="./media/metrics-troubleshoot/dashed-line.png" lightbox="./media/metrics-troubleshoot/dashed-line.png" alt-text="Screenshot that shows how when the data is missing on the rightmost or leftmost side of the chart, the dashed line expands to the direction of the missing data point.":::
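The null-value handling described above can be sketched as follows. This is an illustrative Python sketch of the charting rule for a 1-minute grain, not Azure Monitor's actual rendering code:

```python
from datetime import datetime, timedelta

def render_points(samples, aggregation):
    """Expand sparse per-minute samples onto a full 1-minute grid.

    Missing minutes become 0 for 'sum'/'count' (the dashed line drops
    to zero) and None for 'avg'/'min'/'max' (the chart connects the two
    nearest known points with a dashed segment).
    """
    times = sorted(samples)
    grid, t = [], times[0]
    while t <= times[-1]:
        if t in samples:
            grid.append((t, samples[t]))
        elif aggregation in ("sum", "count"):
            grid.append((t, 0))
        else:
            grid.append((t, None))  # gap rendered as a dashed segment
        t += timedelta(minutes=1)
    return grid

# The example from the text: values reported at 07:26, 07:27, 07:29, 07:30.
samples = {
    datetime(2023, 1, 1, 7, 26): 5,
    datetime(2023, 1, 1, 7, 27): 7,
    # 07:28 missing: the one-minute gap
    datetime(2023, 1, 1, 7, 29): 6,
    datetime(2023, 1, 1, 7, 30): 4,
}
```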
**Solution:** This behavior is by design. It's useful for identifying missing data points. The line chart is a superior choice for visualizing trends of high-density metrics but may be difficult to interpret for metrics with sparse values, especially when correlating values with the time grain is important. The dashed line makes these charts easier to read, but if your chart is still unclear, consider viewing your metrics with a different chart type. For example, a scatter plot chart for the same metric clearly shows each time grain by visualizing a dot only when there's a value and skipping the data point altogether when the value is missing:
- ![Screenshot that highlights the Scatter chart menu option.](./media/metrics-troubleshoot/scatter-plot.png)
+ <!-- convertborder later -->
+ :::image type="content" source="./media/metrics-troubleshoot/scatter-plot.png" lightbox="./media/metrics-troubleshoot/scatter-plot.png" alt-text="Screenshot that highlights the Scatter chart menu option." border="false":::
> [!NOTE] > If you still prefer a line chart for your metric, moving the mouse over the chart may help you assess the time granularity by highlighting the data point at the location of the mouse pointer. ## Units of measure in metrics charts
-Azure monitor metrics uses SI based prefixes. Metrics will only be using IEC prefixes if the resource provider has chosen an appropriate unit for a metric.
+Azure Monitor metrics uses SI-based prefixes. Metrics only use IEC prefixes if the resource provider chooses an appropriate unit for a metric.
For example: The resource provider Network interface (resource name: rarana-vm816) has no metric unit defined for "Packets Sent". The prefix used for the metric value here is k, representing kilo (1000), an SI prefix.
-![Screenshot that shows metric value with prefix kilo.](./media/metrics-troubleshoot/prefix-si.png)
The resource provider Storage account (resource name: ibabichvm) has metric unit defined for "Blob Capacity" as bytes. Hence, the prefix used is mebi (1024^2), an IEC prefix.
-![Screenshot that shows metric value with prefix mebi.](./media/metrics-troubleshoot/prefix-iec.png)
SI prefixes use decimal multiples (powers of 1,000); IEC prefixes use binary multiples (powers of 1,024).
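The difference is easy to demonstrate. The following small Python sketch (illustrative only, not Azure Monitor's internal formatting code) applies SI and IEC prefixes to raw values:

```python
def format_si(value):
    """SI (decimal) prefixes: k = 1000, M = 1000**2, G = 1000**3."""
    for prefix, factor in (("G", 1000**3), ("M", 1000**2), ("k", 1000)):
        if value >= factor:
            return f"{value / factor:g}{prefix}"
    return f"{value:g}"

def format_iec(value):
    """IEC (binary) prefixes: Ki = 1024, Mi = 1024**2, Gi = 1024**3."""
    for prefix, factor in (("Gi", 1024**3), ("Mi", 1024**2), ("Ki", 1024)):
        if value >= factor:
            return f"{value / factor:g}{prefix}"
    return f"{value:g}"

# The same order of magnitude reads differently under each convention:
# format_si(2_000_000)  -> "2M"   (2 x 1000^2)
# format_iec(2_097_152) -> "2Mi"  (2 x 1024^2)
```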
## Chart shows unexpected drop in values
-In many cases, the perceived drop in the metric values is a misunderstanding of the data shown on the chart. You can be misled by a drop in sums or counts when the chart shows the most-recent minutes because the last metric data points havenΓÇÖt been received or processed by Azure yet. Depending on the service, the latency of processing metrics can be within a couple minutes range. For charts showing a recent time range with a 1- or 5- minute granularity, a drop of the value over the last few minutes becomes more noticeable:
- ![Screenshot that shows a drop of the value over the last few minutes.](./media/metrics-troubleshoot/unexpected-dip.png)
+In many cases, the perceived drop in the metric values is a misunderstanding of the data shown on the chart. You can be misled by a drop in sums or counts when the chart shows the most recent minutes, because Azure hasn't received or processed the last metric data points yet. Depending on the service, the latency of processing metrics can be in the range of a couple of minutes. For charts showing a recent time range with a 1- or 5-minute granularity, a drop of the value over the last few minutes becomes more noticeable:
+ <!-- convertborder later -->
+ :::image type="content" source="./media/metrics-troubleshoot/unexpected-dip.png" lightbox="./media/metrics-troubleshoot/unexpected-dip.png" alt-text="Screenshot that shows a drop of the value over the last few minutes." border="false":::
-**Solution:** This behavior is by design. We believe that showing data as soon as we receive it's beneficial even when the data is *partial* or *incomplete*. Doing so allows you to make important conclusion sooner and start investigation right away. For example, for a metric that shows the number of failures, seeing a partial value X tells you that there were at least X failures on a given minute. You can start investigating the problem right away, rather than wait to see the exact count of failures that happened on this minute, which might not be as important. The chart will update once we receive the entire set of data, but at that time it may also show new incomplete data points from more recent minutes.
+**Solution:** This behavior is by design. We believe that showing data as soon as we receive it is beneficial even when the data is *partial* or *incomplete*. Doing so allows you to draw important conclusions sooner and start investigating right away. For example, for a metric that shows the number of failures, seeing a partial value X tells you that there were at least X failures in a given minute. You can start investigating the problem right away, rather than wait to see the exact count of failures that happened in this minute, which might not be as important. The chart updates once we receive the entire set of data, but at that time it may also show new incomplete data points from more recent minutes.
## Cannot pick Guest namespace and metrics
Virtual machines and virtual machine scale sets have two categories of metrics:
By default, Guest (classic) metrics are stored in Azure Storage account, which you pick from the **Diagnostic settings** tab of your resource. If Guest metrics aren't collected or metrics explorer cannot access them, you'll only see the **Virtual Machine Host** metric namespace:
-![metric image](./media/metrics-troubleshoot/vm-metrics.png)
**Solution:** If you don't see **Guest (classic)** namespace and metrics in metrics explorer:
azure-monitor Monitor Azure Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/monitor-azure-resource.md
The **Overview** page includes details about the resource and often its current
To learn how to use Metrics Explorer, see [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).
-![Screenshot that shows the Overview page.](media/monitor-azure-resource/overview-page.png)
### Activity log The **Activity log** menu item lets you view entries in the [activity log](../essentials/activity-log.md) for the current resource.-
+<!-- convertborder later -->
## Alerts
To learn how to create alert rules and view alerts, see [Create a metric alert f
The **Metrics** menu item opens [Metrics Explorer](./metrics-getting-started.md). You can use it to work with individual metrics or combine multiple metrics to identify correlations and trends. This is the same Metrics Explorer that opens when you select one of the charts on the **Overview** page. To learn how to use Metrics Explorer, see [Analyze metrics for an Azure resource](../essentials/tutorial-metrics.md).-
+<!-- convertborder later -->
## Diagnostic settings
To learn how to create a diagnostic setting, see [Collect and analyze resource l
The **Insights** menu item opens the insight for the resource if the Azure service has one. [Insights](../monitor-reference.md) provide a customized monitoring experience built on the Azure Monitor data platform and standard features. For a list of insights that are available and links to their documentation, see [Insights](../insights/insights-overview.md) and [core solutions](/previous-versions/azure/azure-monitor/insights/solutions).-
+<!-- convertborder later -->
## Next steps
azure-monitor Prometheus Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/prometheus-workbooks.md
Azure Monitor workspaces include an exploration workbook to query your Prometheu
1. From the Azure Monitor workspace overview page, select **Prometheus explorer**
-![Screenshot that shows Azure Monitor workspace menu selection.](./media/prometheus-workbooks/prometheus-explorer-menu.png)
2. Or select the **Workbooks** menu item, and in the Azure Monitor workspace gallery, select the **Prometheus Explorer** workbook tile.
-![Screenshot that shows Azure Monitor workspace gallery.](./media/prometheus-workbooks/prometheus-gallery.png)
A workbook has the following input options: - **Time Range**. Select the period of time that you want to include in your query. Select **Custom** to set a start and end time. - **PromQL**. Enter the PromQL query to retrieve your data. For more information about PromQL, see [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/#querying-prometheus). - **Graph**, **Grid**, and **Dimensions** tabs. Switch between a graphic, tabular, and dimensional view of the query output.
-![Screenshot that shows PromQL explorer.](./media/prometheus-workbooks/prometheus-explorer.png)
## Create a Prometheus workbook
Workbooks support many visualizations and Azure integrations. For more informati
1. Select **New**. 1. In the new workbook, select **Add**, and select **Add query** from the dropdown. 1. Azure Workbooks use [data sources](../visualize/workbooks-data-sources.md#prometheus-preview) to set the source scope of the data they present. To query Prometheus metrics, select the **Data source** dropdown, and choose **Prometheus**. 1. From the **Azure Monitor workspace** dropdown, select your workspace. 1. Select your query type from the **Prometheus query type** dropdown.
Workbooks support many visualizations and Azure integrations. For more informati
1. Select the **Run Query** button. 1. Select **Done Editing** at the bottom of the section and save your work.
-![Screenshot that shows sample PromQL query.](./media/prometheus-workbooks/prometheus-query.png)
## Troubleshooting
azure-monitor Aiops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/aiops-machine-learning.md
Last updated 02/28/2023
-#customer-intent: As a DevOps manager or data scientist, I want to understand which AIOps features Azure Monitor offers and how to implement a machine learning pipeline on data in Azure Monitor Logs so that I can use artifical intelligence to to improve service quality and reliability of my IT environment.
+#customer-intent: As a DevOps manager or data scientist, I want to understand which AIOps features Azure Monitor offers and how to implement a machine learning pipeline on data in Azure Monitor Logs so that I can use artificial intelligence to improve service quality and reliability of my IT environment.
# Detect and mitigate potential issues using AIOps and machine learning in Azure Monitor
azure-monitor Basic Logs Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/basic-logs-query.md
Creating a query using Basic Logs is the same as any other query in Log Analytic
In the Azure portal, select **Monitor** > **Logs** > **Tables**. In the list of tables, you can identify Basic Logs tables by their unique icon: -
-![Screenshot of the Basic Logs table icon in the table list.](./media/basic-logs-configure/table-icon.png)
+<!-- convertborder later -->
You can also hover over a table name for the table information view, which will specify that the table is configured as Basic Logs:-
-![Screenshot of the Basic Logs table indicator in the table details.](./media/basic-logs-configure/table-info.png)
+<!-- convertborder later -->
When you add a table to the query, Log Analytics will identify a Basic Logs table and align the authoring experience accordingly. The following example shows when you attempt to use an operator that isn't supported by Basic Logs.
-![Screenshot of Query on Basic Logs limitations.](./media/basic-logs-query/query-validator.png)
# [API](#tab/api-1)
azure-monitor Custom Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-fields.md
Last updated 03/31/2023
The **Custom Fields** feature of Azure Monitor allows you to extend existing records in your Log Analytics workspace by adding your own searchable fields. Custom fields are automatically populated from data extracted from other properties in the same record.
-![Diagram shows an original record associated with a modified record in a Log Analytics workspace with property value pairs added to the original property in the modified record.](media/custom-fields/overview.png)
For example, the sample record below has useful data buried in the event description. Extracting this data into a separate property makes it available for such actions as sorting and filtering.-
-![Sample extract](media/custom-fields/sample-extract.png)
+<!-- convertborder later -->
> [!NOTE] > In the Preview, you are limited to 500 custom fields in your workspace. This limit will be expanded when this feature reaches general availability.
There are two ways to remove a custom field. The first is the **Remove** option
The following section walks through a complete example of creating a custom field. This example extracts the service name in Windows events that indicate a service changing state. This relies on events created by Service Control Manager during system startup on Windows computers. If you want to follow this example, you must be [collecting Information events for the System log](../agents/data-sources-windows-events.md). We enter the following query to return all events from Service Control Manager that have an Event ID of 7036, which is the event that indicates a service starting or stopping.-
-![Screenshot showing a query for an event source and ID.](media/custom-fields/query.png)
+<!-- convertborder later -->
We then right-click on any record with event ID 7036 and select **Extract fields from \`Event`**.-
-![Screenshot showing the Copy and Extract fields options, which are available when you right-click a record from the list of results.](media/custom-fields/extract-fields.png)
+<!-- convertborder later -->
The **Field Extraction Wizard** opens with the **EventLog** and **EventID** fields selected in the **Main Example** column. This indicates that the custom field will be defined for events from the System log with an event ID of 7036. This is sufficient so we don’t need to select any other fields.-
-![Main example](media/custom-fields/main-example.png)
+<!-- convertborder later -->
We highlight the name of the service in the **RenderedDescription** property and use **Service** to identify the service name. The custom field will be called **Service_CF**. The field type in this case is a string, so we can leave that unchanged.-
-![Field Title](media/custom-fields/field-title.png)
+<!-- convertborder later -->
We see that the service name is identified properly for some records but not for others. The **Search Results** show that part of the name for the **WMI Performance Adapter** wasn’t selected. The **Summary** shows that one record identified **Modules Installer** instead of **Windows Modules Installer**. -
-![Screenshot showing portions of the service name highlighted in the Search Results pane and one incorrect service name highlighted in the Summary.](media/custom-fields/search-results-01.png)
+<!-- convertborder later -->
We start with the **WMI Performance Adapter** record. We click its edit icon and then **Modify this highlight**. -
-![Modify highlight](media/custom-fields/modify-highlight.png)
+<!-- convertborder later -->
We increase the highlight to include the word **WMI** and then rerun the extract. -
-![Additional example](media/custom-fields/additional-example-01.png)
+<!-- convertborder later -->
We can see that the entries for **WMI Performance Adapter** have been corrected, and Log Analytics also used that information to correct the records for **Windows Module Installer**.-
-![Screenshot showing the full service name highlighted in the Search Results pane and the correct service names highlighted in the Summary.](media/custom-fields/search-results-02.png)
+<!-- convertborder later -->
We can now run a query that verifies **Service_CF** is created but is not yet added to any records. That's because the custom field doesn't work against existing records so we need to wait for new records to be collected.-
-![Initial count](media/custom-fields/initial-count.png)
+<!-- convertborder later -->
After some time has passed so new events are collected, we can see that the **Service_CF** field is now being added to records that match our criteria.-
-![Final results](media/custom-fields/final-results.png)
+<!-- convertborder later -->
We can now use the custom field like any other record property. To illustrate this, we create a query that groups by the new **Service_CF** field to inspect which services are the most active.-
-![Group by query](media/custom-fields/query-group.png)
+<!-- convertborder later -->
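The extraction that the wizard learns from highlighted examples can be approximated with a hand-written regular expression. In the following Python sketch, the field name **Service_CF** and the sample descriptions come from this walkthrough; the pattern itself is an assumption about the event text:

```python
import re

# Event 7036 descriptions look like:
# "The <service name> service entered the <state> state."
PATTERN = re.compile(
    r"^The (?P<Service_CF>.+?) service entered the (?P<state>\w+) state\.$"
)

def extract_service(rendered_description):
    """Return the service name, or None if the description doesn't match."""
    m = PATTERN.match(rendered_description)
    return m.group("Service_CF") if m else None
```

Like the wizard's corrected highlight, the lazy `.+?` group captures multi-word names such as **WMI Performance Adapter** in full.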
## Next steps * Learn about [log queries](./log-query-overview.md) to build queries using custom fields for criteria.
azure-monitor Custom Logs Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/custom-logs-migrate.md
The migration procedure described in this article assumes you have:
The Log Ingestion API requires you to create two new types of resources, which the HTTP Data Collector API doesn't require: -- [Data collection endpoints](../essentials/data-collection-endpoint-overview.md), from which the the data you collect is ingested into the pipeline for processing.
+- [Data collection endpoints](../essentials/data-collection-endpoint-overview.md), from which the data you collect is ingested into the pipeline for processing.
- [Data collection rules](../essentials/data-collection-rule-overview.md), which define [data transformations](../essentials/data-collection-transformations.md) and the destination table to which the data is ingested. ## Migrate existing custom tables or create new tables
azure-monitor Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/customer-managed-keys.md
You can apply Customer-managed key configuration to a new cluster, or existing c
> [!IMPORTANT] > Customer-managed key capability is regional. Your Azure Key Vault, cluster and linked workspaces must be in the same region, but they can be in different subscriptions.-
-[![Customer-managed key overview](media/customer-managed-keys/cmk-overview.png "Screenshot of Customer-managed key diagram.")](media/customer-managed-keys/cmk-overview.png#lightbox)
+<!-- convertborder later -->
1. Key Vault 2. Log Analytics cluster resource having managed identity with permissions to Key Vault. The identity is propagated to the underlying dedicated cluster storage
Customer-managed key configuration isn't supported in Azure portal currently and
A [portfolio of Azure Key Management products](../../key-vault/managed-hsm/mhsm-control-data.md#portfolio-of-azure-key-management-products) lists the vaults and managed HSMs that can be used. Create or use an existing Azure Key Vault in the region where the cluster is planned, and generate or import a key to be used for logs encryption. The Azure Key Vault must be configured as recoverable, to protect your key and the access to your data in Azure Monitor. You can verify this configuration under properties in your Key Vault; both **Soft delete** and **Purge protection** should be enabled.-
-[![Soft delete and purge protection settings](media/customer-managed-keys/soft-purge-protection.png "Screenshot of Key Vault soft delete and purge protection properties")](media/customer-managed-keys/soft-purge-protection.png#lightbox)
+<!-- convertborder later -->
These settings can be updated in Key Vault via CLI and PowerShell:
There are two permission models in Key Vault to grant access to your cluster and
- Select principal, depending on the identity type used in the cluster (system or user-assigned managed identity) - System-assigned managed identity: enter the cluster name or cluster principal ID - User-assigned managed identity: enter the identity name-
- [<img src="media/customer-managed-keys/grant-key-vault-permissions-8bit.png" alt="Screenshot of Grant Key Vault access policy permissions." title="Grant Key Vault access policy permissions" width="80%"/>](media/customer-managed-keys/grant-key-vault-permissions-8bit.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/customer-managed-keys/grant-key-vault-permissions-8bit.png" lightbox="media/customer-managed-keys/grant-key-vault-permissions-8bit.png" alt-text="Screenshot of Grant Key Vault access policy permissions." border="false":::
The **Get** permission is required to verify that your Key Vault is configured as recoverable to protect your key and the access to your Azure Monitor data.
This step updates dedicated cluster storage with the key and version to use for
>- Key rotation can be automatic or require an explicit key update. See [Key rotation](#key-rotation) to determine the approach that's suitable for you before updating the key identifier details in the cluster.
>- A cluster update should not include both identity and key identifier details in the same operation. If you need to update both, perform the update in two consecutive operations.
-[![Grant Key Vault permissions](media/customer-managed-keys/key-identifier-8bit.png "Screenshot of Key Vault key identifier details")](media/customer-managed-keys/key-identifier-8bit.png#lightbox)
Update KeyVaultProperties in cluster with key identifier details.
The query language used in Log Analytics is expressive and can contain sensitive
## Customer-managed key for Workbooks

With the considerations mentioned for [Customer-managed key for saved queries and log alerts](#customer-managed-key-for-saved-queries-and-log-alerts), Azure Monitor enables you to store Workbook queries encrypted with your key in your own Storage Account when you select **Save content to an Azure Storage Account** in the Workbook **Save** operation.-
-[ ![Screenshot of Workbook save.](media/customer-managed-keys/cmk-workbook.png) ](media/customer-managed-keys/cmk-workbook.png#lightbox)
+<!-- convertborder later -->
> [!NOTE]
> Queries remain encrypted with a Microsoft key ("MMK") in the following scenarios regardless of Customer-managed key configuration: Azure dashboards, Azure Logic Apps, Azure Notebooks, and Automation Runbooks.
azure-monitor Data Collector Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-collector-api.md
You can use the HTTP Data Collector API to send log data to a Log Analytics work
All data in the Log Analytics workspace is stored as a record with a particular record type. You format your data to send to the HTTP Data Collector API as multiple records in JavaScript Object Notation (JSON). When you submit the data, an individual record is created in the repository for each record in the request payload.
-![Screenshot illustrating the HTTP Data Collector overview.](media/data-collector-api/overview.png)
## Create a request

To use the HTTP Data Collector API, you create a POST request that includes the data to send in JSON. The next three tables list the attributes that are required for each request. We describe each attribute in more detail later in the article.
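As a rough illustration of the request's authorization scheme, the `Authorization` header is an HMAC-SHA256 signature over a canonical string built from the method, content length, content type, date, and resource path. The sketch below is a minimal, hedged example; the workspace ID and shared key shown in usage are placeholders, and `build_signature` is a hypothetical helper name.

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, shared_key: str,
                    date_rfc1123: str, content_length: int) -> str:
    """Build the SharedKey authorization header value for the
    HTTP Data Collector API (sketch; inputs are placeholders)."""
    # Canonical string: method, length, content type, date header, resource
    string_to_hash = (
        f"POST\n{content_length}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    )
    decoded_key = base64.b64decode(shared_key)  # workspace key is base64
    signature = base64.b64encode(
        hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                 hashlib.sha256).digest()
    ).decode("utf-8")
    return f"SharedKey {workspace_id}:{signature}"
```

The resulting value is sent in the `Authorization` header alongside `x-ms-date` and the `Log-Type` header that names the custom record type.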
The data type that Azure Monitor uses for each property depends on whether the r
* If the record type does exist, Azure Monitor attempts to create a new record based on existing properties. If the data type for a property in the new record doesn't match and can't be converted to the existing type, or if the record includes a property that doesn't exist, Azure Monitor creates a new property that has the relevant suffix. For example, the following submission entry would create a record with three properties, **number_d**, **boolean_b**, and **string_s**:-
-![Screenshot of sample record 1.](media/data-collector-api/record-01.png)
+<!-- convertborder later -->
If you were to submit this next entry, with all values formatted as strings, the properties wouldn't change because the values can be converted to the existing data types.-
-![Screenshot of sample record 2.](media/data-collector-api/record-02.png)
+<!-- convertborder later -->
But if you then make this next submission, Azure Monitor would create the new properties **boolean_d** and **string_d** because these values can't be converted to the existing types.-
-![Screenshot of sample record 3.](media/data-collector-api/record-03.png)
+<!-- convertborder later -->
If you then submit the following entry, before the record type is created, Azure Monitor would create a record with three properties, **number_s**, **boolean_s**, and **string_s**. In this entry, each of the initial values is formatted as a string:-
-![Screenshot of sample record 4.](media/data-collector-api/record-04.png)
+<!-- convertborder later -->
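The suffixing behavior described above can be sketched in a few lines of Python. This is a hypothetical helper for illustration only, not part of any SDK: numeric values get **_d**, booleans get **_b**, and strings get **_s**.

```python
def suffix_properties(record: dict) -> dict:
    """Map each property to a suffixed name by value type:
    _d for numbers, _b for booleans, _s for strings (sketch)."""
    out = {}
    for name, value in record.items():
        if isinstance(value, bool):          # bool must be checked before int
            out[name + "_b"] = value
        elif isinstance(value, (int, float)):
            out[name + "_d"] = value
        else:
            out[name + "_s"] = str(value)
    return out

# A submission like the first example yields number_d, boolean_b, string_s
result = suffix_properties({"number": 100, "boolean": True, "string": "phrase"})
```

If the same properties arrive later with all values formatted as strings, every name would instead receive the **_s** suffix, matching the last example above.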
## Reserved properties

The following properties are reserved and shouldn't be used in a custom record type. You'll receive an error if your payload includes any of these property names:
azure-monitor Data Platform Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-platform-logs.md
The following table describes some of the ways that you can use Azure Monitor Lo
| Export | Configure [automated export of log data](./logs-data-export.md) to an Azure Storage account or Azure Event Hubs.<br>Build a workflow to retrieve log data and copy it to an external location by using [Azure Logic Apps](../../connectors/connectors-azure-monitor-logs.md). |
| Bring your own analysis | [Analyze data in Azure Monitor Logs using a notebook](../logs/notebooks-azure-monitor-logs.md) to create streamlined, multi-step processes on top of data you collect in Azure Monitor Logs. This is especially useful for purposes such as [building and running machine learning pipelines](../logs/aiops-machine-learning.md#create-your-own-machine-learning-pipeline-on-data-in-azure-monitor-logs), advanced analysis, and troubleshooting guides (TSGs) for Support needs. |
-![Diagram that shows an overview of Azure Monitor Logs.](media/data-platform-logs/logs-overview.png)
## Data collection

After you create a [Log Analytics workspace](#log-analytics-workspaces), you must configure sources to send their data. No data is collected automatically.
You can:
Insights include prebuilt queries to support their views and workbooks. For a list of where log queries are used and references to tutorials and other documentation to get you started, see [Log queries in Azure Monitor](./log-query-overview.md).-
-![Screenshot that shows queries in Log Analytics.](media/data-platform-logs/log-analytics.png)
+<!-- convertborder later -->
## Relationship to Azure Data Explorer

Azure Monitor Logs is based on Azure Data Explorer. A Log Analytics workspace is roughly the equivalent of a database in Azure Data Explorer. Tables are structured the same, and both use KQL. For information on KQL, see [Kusto Query Language (KQL) overview](/azure/data-explorer/kusto/query/).
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
This article describes how to configure data retention and archiving.
Each workspace has a default retention setting that's applied to all tables. You can configure a different retention setting on individual tables. During the interactive retention period, data is available for monitoring, troubleshooting, and analytics. When you no longer use the logs, but still need to keep the data for compliance or occasional investigation, archive the logs to save costs.
To set the default workspace retention:
1. Select **Usage and estimated costs** in the left pane.
1. Select **Data Retention** at the top of the page.
- :::image type="content" source="media/manage-cost-storage/manage-cost-change-retention-01.png" alt-text="Screenshot that shows changing the workspace data retention setting.":::
+ :::image type="content" source="media/manage-cost-storage/manage-cost-change-retention-01.png" lightbox="media/manage-cost-storage/manage-cost-change-retention-01.png" alt-text="Screenshot that shows changing the workspace data retention setting.":::
1. Move the slider to increase or decrease the number of days, and then select **OK**.
The default retention for Application Insights resources is 90 days. You can sel
To change the retention, from your Application Insights resource, go to the **Usage and estimated costs** page and select the **Data retention** option.
-![Screenshot that shows where to change the data retention period.](../app/media/pricing/pricing-005.png)
When you lower the retention, a several-day grace period begins before the oldest data is removed.
azure-monitor Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-security.md
Azure Log Analytics meets the following requirements:
## Cloud computing security data flow

The following diagram shows a cloud security architecture as the flow of information from your company and how it's secured as it moves to Azure Monitor, ultimately seen by you in the Azure portal. More information about each step follows the diagram.-
-![Image of Azure Monitor Logs data collection and security](./media/data-security/log-analytics-data-security-diagram.png)
+<!-- convertborder later -->
### 1. Sign up for Azure Monitor and collect data

For your organization to send data to Azure Monitor Logs, you configure a Windows or Linux agent running on Azure virtual machines, or on virtual or physical computers in your environment or other cloud provider. If you use Operations Manager, you configure the Operations Manager agent from the management group. Users (which might be you, other individual users, or a group of people) create one or more Log Analytics workspaces, and register agents by using one of the following accounts:
azure-monitor Delete Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/delete-workspace.md
You can delete a workspace by using [PowerShell](/powershell/module/azurerm.oper
1. If you want to permanently delete the workspace and remove the option to later recover it, select the **Delete the workspace permanently** checkbox.
1. Enter the name of the workspace to confirm and then select **Delete**.
- ![Screenshot that shows confirming the deletion of a workspace.](media/delete-workspace/workspace-delete.png)
+ :::image type="content" source="media/delete-workspace/workspace-delete.png" lightbox="media/delete-workspace/workspace-delete.png" alt-text="Screenshot that shows confirming the deletion of a workspace.":::
### PowerShell ```PowerShell
You can recover your workspace during the soft-delete period, including its data
1. Sign in to the [Azure portal](https://portal.azure.com).
1. In the Azure portal, select **All services**. In the list of resources, enter **Log Analytics**. As you begin typing, the list filters based on your input. Select **Log Analytics workspaces**. You see the list of workspaces you have in the selected scope.
1. Select **Recover** on the top left menu to open a page with workspaces in a soft-delete state that can be recovered.-
- ![Screenshot that shows the Log Analytics workspaces screen and Open recycle bin on the menu bar.](media/delete-workspace/recover-menu.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/delete-workspace/recover-menu.png" lightbox="media/delete-workspace/recover-menu.png" alt-text="Screenshot that shows the Log Analytics workspaces screen and Open recycle bin on the menu bar." border="false":::
1. Select the workspace. Then select **Recover** to recover the workspace.-
- ![Screenshot that shows the Recycle bin with a workspace and the Recover button.](media/delete-workspace/recover-workspace.png)
+ <!-- convertborder later -->
+ :::image type="content" source="media/delete-workspace/recover-workspace.png" lightbox="media/delete-workspace/recover-workspace.png" alt-text="Screenshot that shows the Recycle bin with a workspace and the Recover button." border="false":::
### PowerShell ```PowerShell
azure-monitor Get Started Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/get-started-queries.md
SecurityEvent
```

Descending is the default sorting order, so you would usually omit the `desc` argument. The output looks like this example.-
-![Screenshot that shows the top 10 records sorted in descending order.](media/get-started-queries/top10.png)
+<!-- convertborder later -->
## The where operator: Filter on a condition

Filters, as indicated by their name, filter the data by a specific condition. Filtering is the most common way to limit query results to relevant information.
You can specify a time range by using the time picker or a time filter.
### Use the time picker

The time picker is displayed next to the **Run** button and indicates that you're querying records from only the last 24 hours. This default time range is applied to all queries. To get records from only the last hour, select **Last hour** and then run the query again.-
-![Screenshot that shows the time picker and its list of time-range commands.](media/get-started-queries/timepicker.png)
+<!-- convertborder later -->
### Add a time filter to the query
SecurityEvent
```

The preceding example generates the following output:-
-![Screenshot that shows the query "project" results list.](media/get-started-queries/project.png)
+<!-- convertborder later -->
You can also use `project` to rename columns and define new ones. The next example uses `project` to do the following:
Perf
```

To make the output clearer, you can choose to display it as a time chart, which shows the available memory over time.-
-![Screenshot that shows the values of a query memory over time.](media/get-started-queries/chart.png)
+<!-- convertborder later -->
## Frequently asked questions
azure-monitor Log Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-overview.md
Whether you work with the results of your queries interactively or use them with
To start Log Analytics in the Azure portal, on the **Azure Monitor** menu, select **Logs**. You'll also see this option on the menu for most Azure resources. No matter where you start Log Analytics, the tool is the same. But the menu you use to start Log Analytics determines the data that's available. If you start Log Analytics from the **Azure Monitor** menu or the **Log Analytics workspaces** menu, you'll have access to all the records in a workspace. If you select **Logs** from another type of resource, your data will be limited to log data for that resource. For more information, see [Log query scope and time range in Azure Monitor Log Analytics](./scope.md).-
-[![Screenshot that shows starting Log Analytics.](media/log-analytics-overview/start-log-analytics.png)](media/log-analytics-overview/start-log-analytics.png#lightbox)
+<!-- convertborder later -->
When you start Log Analytics, a dialog appears that contains [example queries](../logs/queries.md). The queries are categorized by solution. Browse or search for queries that match your requirements. You might find one that does exactly what you need. You can also load one to the editor and modify it as required. Browsing through example queries is a good way to learn how to write your own queries.
If you want to start with an empty script and write it yourself, close the examp
The following image identifies four Log Analytics components.
-[![Screenshot that shows the Log Analytics interface with four features identified.](media/log-analytics-overview/log-analytics.png)](media/log-analytics-overview/log-analytics.png#lightbox)
### Top action bar
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
To create a new workspace, see [Create a Log Analytics workspace in the Azure po
Each workspace contains multiple tables that are organized into separate columns with multiple rows of data. Each table is defined by a unique set of columns. Rows of data provided by the data source share those columns. Log queries define columns of data to retrieve and provide output to different features of Azure Monitor and other services that use workspaces.
-[![Diagram that shows the Azure Monitor Logs structure.](media/data-platform-logs/logs-structure.png)](media/data-platform-logs/logs-structure.png#lightbox)
> [!WARNING]
> Table names are used for billing purposes, so they should not contain sensitive information.
To access archived data, you must first retrieve data from it in an Analytics Lo
| [Search jobs](search-jobs.md) | Retrieve data matching particular criteria. |
| [Restore](restore.md) | Retrieve data from a particular time range. |

## Permissions
azure-monitor Log Standard Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-standard-columns.md
Last updated 02/18/2022
# Standard columns in Azure Monitor Logs

Data in Azure Monitor Logs is [stored as a set of records in either a Log Analytics workspace or Application Insights application](../logs/data-platform-logs.md), each with a particular data type that has a unique set of columns. Many data types have standard columns that are common across multiple types. This article describes these columns and provides examples of how you can use them in queries.
-Workspace-based applications in Application Insights store their data in a Log Analytics workspace and use the same standard columns as other other tables in the workspace. Classic applications store their data separately and have different standard columns as specified in this article.
+Workspace-based applications in Application Insights store their data in a Log Analytics workspace and use the same standard columns as other tables in the workspace. Classic applications store their data separately and have different standard columns as specified in this article.
> [!NOTE]
> Some of the standard columns will not show in the schema view or IntelliSense in Log Analytics, and they won't show in query results unless you explicitly specify the column in the output.
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Data in Log Analytics is available for the retention period defined in your work
After you've configured data export rules in a Log Analytics workspace, new data for tables in rules is exported from the Azure Monitor pipeline to your Storage Account or Event Hubs as it arrives.
-[![Diagram that shows a data export flow.](media/logs-data-export/data-export-overview.png "Diagram that shows a data export flow.")](media/logs-data-export/data-export-overview.png#lightbox)
Data is exported without a filter. For example, when you configure a data export rule for a *SecurityEvent* table, all data sent to the *SecurityEvent* table is exported starting from the configuration time. Alternatively, you can filter or modify exported data by configuring [transformations](./../essentials/data-collection-transformations.md) in your workspace, which apply to incoming data, before it's sent to your Log Analytics workspaces and to export destinations.
Blobs are stored in 5-minute folders in the following path structure: *Workspace
The format of blobs in a Storage Account is in [JSON lines](/previous-versions/azure/azure-monitor/essentials/resource-logs-blob-format), where each record is delimited by a new line, with no outer records array and no commas between JSON records.
-[![Screenshot that shows data format in a blob.](media/logs-data-export/storage-data.png "Screenshot that shows data format in a blob.")](media/logs-data-export/storage-data-expand.png#lightbox)
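Because each exported blob is plain JSON lines, it can be read back without any Azure SDK. A minimal sketch (the helper name and sample records below are illustrative, not from the exported schema):

```python
import json

def parse_json_lines(blob_text: str) -> list:
    """Parse JSON-lines blob content: one JSON record per line,
    no outer array, no commas between records (sketch)."""
    return [json.loads(line)
            for line in blob_text.splitlines()
            if line.strip()]

# Hypothetical two-record blob body for illustration
sample = ('{"TimeGenerated": "2023-11-04T02:00:00Z", "Type": "SecurityEvent"}\n'
          '{"TimeGenerated": "2023-11-04T02:05:00Z", "Type": "SecurityEvent"}')
records = parse_json_lines(sample)
```

Each line parses independently, which is why the format needs no outer array or separating commas.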
### Event Hubs
Register-AzResourceProvider -ProviderNamespace Microsoft.insights
### Allow trusted Microsoft services

If you've configured your Storage Account to allow access from selected networks, you need to add an exception to allow Azure Monitor to write to the account. From **Firewalls and virtual networks** for your Storage Account, select **Allow Azure services on the trusted services list to access this Storage Account**.-
-[![Screenshot that shows the option Allow Azure services on the trusted services list.](media/logs-data-export/storage-account-network.png "Screenshot that shows the option Allow Azure services on the trusted services list.")](media/logs-data-export/storage-account-network.png#lightbox)
+<!-- convertborder later -->
### Monitor destinations
A data export rule defines the destination and tables for which data is exported
1. On the **Log Analytics workspace** menu in the Azure portal, select **Data Export** under the **Settings** section. Select **New export rule** at the top of the pane.
- [![Screenshot that shows the data export entry point.](media/logs-data-export/export-create-1.png "Screenshot that shows the data export entry point.")](media/logs-data-export/export-create-1.png#lightbox)
+ :::image type="content" source="media/logs-data-export/export-create-1.png" lightbox="media/logs-data-export/export-create-1.png" alt-text="Screenshot that shows the data export entry point.":::
1. Follow the steps, and then select **Create**.
Use the following command to create a data export rule to a specific Event Hub b
1. On the **Log Analytics workspace** menu in the Azure portal, select **Data Export** under the **Settings** section.
- [![Screenshot that shows the Data Export screen.](media/logs-data-export/export-view-1.png "Screenshot that shows the Data Export screen.")](media/logs-data-export/export-view-1.png#lightbox)
+ :::image type="content" source="media/logs-data-export/export-view-1.png" lightbox="media/logs-data-export/export-view-1.png" alt-text="Screenshot that shows the Data Export screen.":::
1. Select a rule for a configuration view.-
- <img src="media/logs-data-export/export-view-2.png" alt="Screenshot of data export rule view." title= "Data export rule configuration view" width="65%"/>
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-data-export/export-view-2.png" lightbox="media/logs-data-export/export-view-2.png" alt-text="Screenshot of data export rule view." border="false":::
# [PowerShell](#tab/powershell)
The template option doesn't apply.
You can disable export rules to stop the export for a certain period, such as during testing. On the **Log Analytics workspace** menu in the Azure portal, select **Data Export** under the **Settings** section. Select the **Status** toggle to disable or enable the export rule.
-[![Screenshot that shows disabling the data export rule.](media/logs-data-export/export-disable.png "Screenshot that shows disabling the data export rule.")](media/logs-data-export/export-disable.png#lightbox)
# [PowerShell](#tab/powershell)
You can disable export rules to stop export when testing is performed and you do
On the **Log Analytics workspace** menu in the Azure portal, select **Data Export** under the **Settings** section. Select the ellipsis to the right of the rule and select **Delete**.
-[![Screenshot that shows deleting the data export rule.](media/logs-data-export/export-delete.png "Screenshot that shows deleting the data export rule.")](media/logs-data-export/export-delete.png#lightbox)
# [PowerShell](#tab/powershell)
The template option doesn't apply.
On the **Log Analytics workspace** menu in the Azure portal, select **Data Export** under the **Settings** section to view all export rules in the workspace.
-[![Screenshot that shows the data export rules view.](media/logs-data-export/export-view.png "Screenshot that shows the data export rules view.")](media/logs-data-export/export-view.png#lightbox)
# [PowerShell](#tab/powershell)
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
The method discussed in this article describes a scheduled export from a log que
## Overview

This procedure uses the [Azure Monitor Logs connector](/connectors/azuremonitorlogs), which lets you run a log query from a logic app and use its output in other actions in the workflow. The [Azure Blob Storage connector](/connectors/azureblob) is used in this procedure to send the query output to storage.-
-[![Screenshot that shows a Logic Apps overview.](media/logs-export-logic-app/logic-app-overview.png "Screenshot that shows a Logic Apps flow.")](media/logs-export-logic-app/logic-app-overview.png#lightbox)
+<!-- convertborder later -->
When you export data from a Log Analytics workspace, limit the amount of data processed by your Logic Apps workflow. Filter and aggregate your log data in the query to reduce the required data. For example, if you need to export sign-in events, filter for the required events and project only the required fields:
Use the procedure in [Create a container](../../storage/blobs/storage-quickstart
### Create a logic app workflow

1. Go to **Logic Apps** in the Azure portal and select **Add**. Select a **Subscription**, **Resource group**, and **Region** to store the new logic app. Then give it a unique name. You can turn on the **Log Analytics** setting to collect information about runtime data and events as described in [Set up Azure Monitor Logs and collect diagnostics data for Azure Logic Apps](../../logic-apps/monitor-workflows-collect-diagnostic-data.md). This setting isn't required for using the Azure Monitor Logs connector.-
- [![Screenshot that shows creating a logic app.](media/logs-export-logic-app/create-logic-app.png "Screenshot that shows creating a Logic Apps resource.")](media/logs-export-logic-app/create-logic-app.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/create-logic-app.png" lightbox="media/logs-export-logic-app/create-logic-app.png" alt-text="Screenshot that shows creating a logic app." border="false":::
1. Select **Review + create** and then select **Create**. After the deployment is finished, select **Go to resource** to open the **Logic Apps Designer**.

### Create a trigger for the workflow

Under **Start with a common trigger**, select **Recurrence**. This setting creates a logic app workflow that automatically runs at a regular interval. In the **Frequency** box of the action, select **Day**. In the **Interval** box, enter **1** to run the workflow once per day.-
-[![Screenshot that shows a Recurrence action.](media/logs-export-logic-app/recurrence-action.png "Screenshot that shows creating a recurrence action.")](media/logs-export-logic-app/recurrence-action.png#lightbox)
+<!-- convertborder later -->
### Add an Azure Monitor Logs action
The Azure Monitor Logs action lets you specify the query to run. The log query u
You're prompted to select a tenant to grant access to the Log Analytics workspace with the account that the workflow will use to run the query.

1. Select **+ New step** to add an action that runs after the recurrence action. Under **Choose an action**, enter **azure monitor**. Then select **Azure Monitor Logs**.-
- [![Screenshot that shows an Azure Monitor Logs action.](media/logs-export-logic-app/select-azure-monitor-connector.png "Screenshot that shows creating a Azure Monitor Logs action.")](media/logs-export-logic-app/select-azure-monitor-connector.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/select-azure-monitor-connector.png" lightbox="media/logs-export-logic-app/select-azure-monitor-connector.png" alt-text="Screenshot that shows an Azure Monitor Logs action." border="false":::
1. Select **Azure Log Analytics – Run query and list results**.-
- [![Screenshot that shows Azure Monitor Logs is highlighted under Choose an action.](media/logs-export-logic-app/select-query-action-list.png "Screenshot that shows a new action being added to a step in the Logic Apps Designer.")](media/logs-export-logic-app/select-query-action-list.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/select-query-action-list.png" lightbox="media/logs-export-logic-app/select-query-action-list.png" alt-text="Screenshot that shows Azure Monitor Logs is highlighted under Choose an action." border="false":::
1. Select the **Subscription** and **Resource Group** for your Log Analytics workspace. Select **Log Analytics Workspace** for the **Resource Type**. Then select the workspace name under **Resource Name**.
You're prompted to select a tenant to grant access to the Log Analytics workspac
```

1. The **Time Range** specifies the records that will be included in the query based on the **TimeGenerated** column. The value should be greater than the time range selected in the query. Because this query isn't using the **TimeGenerated** column, the **Set in query** option isn't available. For more information about the time range, see [Query scope](./scope.md). Select **Last 4 hours** for the **Time Range**. This setting ensures that any records with an ingestion time later than **TimeGenerated** will be included in the results.-
- [![Screenshot that shows the settings for the new Azure Monitor Logs action named Run query and visualize results.](media/logs-export-logic-app/run-query-list-action.png "Screenshot that shows the settings for the Azure Monitor Logs action named Run query.")](media/logs-export-logic-app/run-query-list-action.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/run-query-list-action.png" lightbox="media/logs-export-logic-app/run-query-list-action.png" alt-text="Screenshot that shows the settings for the new Azure Monitor Logs action named Run query and visualize results." border="false":::
### Add a Parse JSON action (optional)
You can use a sample output from the **Run query and list results** step.
```

1. Select **+ New step** and then select **+ Add an action**. Under **Choose an operation**, enter **json** and then select **Parse JSON**.-
- [![Screenshot that shows selecting a Parse JSON operator.](media/logs-export-logic-app/select-parse-json.png "Screenshot that shows the Parse JSON operator.")](media/logs-export-logic-app/select-parse-json.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/select-parse-json.png" lightbox="media/logs-export-logic-app/select-parse-json.png" alt-text="Screenshot that shows selecting a Parse JSON operator." border="false":::
1. Select the **Content** box to display a list of values from previous activities. Select **Body** from the **Run query and list results** action. This output is from the log query.-
- [![Screenshot that shows selecting a Body.](media/logs-export-logic-app/select-body.png "Screenshot that shows a Parse JSON Content setting with the output Body from the previous step.")](media/logs-export-logic-app/select-body.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/select-body.png" lightbox="media/logs-export-logic-app/select-body.png" alt-text="Screenshot that shows selecting a Body." border="false":::
1. Copy the sample record saved earlier. Select **Use sample payload to generate schema** and paste.-
- [![Screenshot that shows parsing a JSON payload.](media/logs-export-logic-app/parse-json-payload.png "Screenshot that shows a Parse JSON schema.")](media/logs-export-logic-app/parse-json-payload.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/parse-json-payload.png" lightbox="media/logs-export-logic-app/parse-json-payload.png" alt-text="Screenshot that shows parsing a JSON payload." border="false":::
### Add the Compose action

The **Compose** action takes the parsed JSON output and creates the object that you need to store in the blob.

1. Select **+ New step**, and then select **+ Add an action**. Under **Choose an operation**, enter **compose**. Then select the **Compose** action.-
- [![Screenshot that shows selecting a Compose action.](media/logs-export-logic-app/select-compose.png "Screenshot that shows a Compose action.")](media/logs-export-logic-app/select-compose.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/select-compose.png" lightbox="media/logs-export-logic-app/select-compose.png" alt-text="Screenshot that shows selecting a Compose action." border="false":::
1. Select the **Inputs** box to display a list of values from previous activities. Select **Body** from the **Parse JSON** action. This parsed output is from the log query.-
- [![Screenshot that shows selecting a body for a Compose action.](media/logs-export-logic-app/select-body-compose.png "Screenshot that shows a body for Compose action.")](media/logs-export-logic-app/select-body-compose.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/select-body-compose.png" lightbox="media/logs-export-logic-app/select-body-compose.png" alt-text="Screenshot that shows selecting a body for a Compose action." border="false":::
### Add the Create blob action

The **Create blob** action writes the composed JSON to storage.

1. Select **+ New step**, and then select **+ Add an action**. Under **Choose an operation**, enter **blob**. Then select the **Create blob** action.
-
- [![Screenshot that shows selecting the Create Blob action.](media/logs-export-logic-app/select-create-blob.png "Screenshot that shows creating a Blob storage action.")](media/logs-export-logic-app/select-create-blob.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/select-create-blob.png" lightbox="media/logs-export-logic-app/select-create-blob.png" alt-text="Screenshot that shows selecting the Create Blob action." border="false":::
1. Enter a name for the connection to your storage account in **Connection Name**. Then select the folder icon in the **Folder path** box to select the container in your storage account. Select **Blob name** to see a list of values from previous activities. Select **Expression** and enter an expression that matches your time interval. For this query, which is run hourly, the following expression sets the blob name per previous hour:

   ```json
   subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour')
   ```
-
- [![Screenshot that shows a blob expression.](media/logs-export-logic-app/blob-expression.png "Screenshot that shows a Blob action connection.")](media/logs-export-logic-app/blob-expression.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/blob-expression.png" lightbox="media/logs-export-logic-app/blob-expression.png" alt-text="Screenshot that shows a blob expression." border="false":::
1. Select the **Blob content** box to display a list of values from previous activities. Then select **Outputs** in the **Compose** section.-
- [![Screenshot that shows creating a blob expression.](media/logs-export-logic-app/create-blob.png "Screenshot that shows a Blob action output configuration.")](media/logs-export-logic-app/create-blob.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/logs-export-logic-app/create-blob.png" lightbox="media/logs-export-logic-app/create-blob.png" alt-text="Screenshot that shows creating a blob expression." border="false":::
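The previous-hour blob-naming expression used in the **Create blob** step above can be mirrored outside Logic Apps; here's a minimal Python sketch (the function name is illustrative, not part of any Azure API):

```python
from datetime import datetime, timedelta

def previous_hour_blob_name(now: datetime) -> str:
    """Mirror the Logic Apps expression
    subtractFromTime(formatDateTime(utcNow(),'yyyy-MM-ddTHH:00:00'), 1,'Hour'):
    truncate the current time to the top of the hour, then subtract one hour."""
    top_of_hour = now.replace(minute=0, second=0, microsecond=0)
    return (top_of_hour - timedelta(hours=1)).strftime("%Y-%m-%dT%H:00:00")
```

For a run at 02:25, for example, the blob is named for the 01:00 hour, which lines up with the hourly recurrence of the workflow.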
### Test the workflow

To test the workflow, select **Run**. If the workflow has errors, they're indicated on the step with the problem. You can view the executions and drill in to each step to view the input and output to investigate failures. See [Troubleshoot and diagnose workflow failures in Azure Logic Apps](../../logic-apps/logic-apps-diagnosing-failures.md), if necessary.
-
-[![Screenshot that shows Runs history.](media/logs-export-logic-app/runs-history.png "Screenshot that shows trigger run history.")](media/logs-export-logic-app/runs-history.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="media/logs-export-logic-app/runs-history.png" lightbox="media/logs-export-logic-app/runs-history.png" alt-text="Screenshot that shows Runs history." border="false":::
### View logs in storage

Go to the **Storage accounts** menu in the Azure portal and select your storage account. Select the **Blobs** tile. Then select the container you specified in the **Create blob** action. Select one of the blobs and then select **Edit blob**.
-
-[![Screenshot that shows blob data.](media/logs-export-logic-app/blob-data.png "Screenshot that shows sample data exported to a blob.")](media/logs-export-logic-app/blob-data.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="media/logs-export-logic-app/blob-data.png" lightbox="media/logs-export-logic-app/blob-data.png" alt-text="Screenshot that shows blob data." border="false":::
### Logic App template
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
The *access control mode* is a setting on each workspace that defines how permis
View the current workspace access control mode on the **Overview** page for the workspace in the **Log Analytics workspace** menu.
-![Screenshot that shows the workspace access control mode.](media/manage-access/view-access-control-mode.png)
Change this setting on the **Properties** page of the workspace. If you don't have permissions to configure the workspace, changing the setting is disabled.
-![Screenshot that shows changing workspace access mode.](media/manage-access/change-access-control-mode.png)
# [PowerShell](#tab/powershell)
azure-monitor Move Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/move-workspace.md
Run the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription/)
1. Open the menu for the resource group where any solutions are installed.
1. Select the solutions to remove.
1. Select **Delete Resources** and then confirm the resources to be removed by selecting **Delete**.
-
- [![Screenshot that shows deleting solutions.](media/move-workspace/delete-solutions.png)](media/move-workspace/delete-solutions.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/move-workspace/delete-solutions.png" lightbox="media/move-workspace/delete-solutions.png" alt-text="Screenshot that shows deleting solutions." border="false":::
### [REST API](#tab/rest-api)
To remove the **Start/Stop VMs** solution, you also need to remove the alert rul
- AutoStop_VM_Child
- ScheduledStartStop_Parent
- SequencedStartStop_Parent
-
- [![Screenshot that shows deleting rules.](media/move-workspace/delete-rules.png)](media/move-workspace/delete-rules.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/move-workspace/delete-rules.png" lightbox="media/move-workspace/delete-rules.png" alt-text="Screenshot that shows deleting rules." border="false":::
### [REST API](#tab/rest-api)
Not supported.
1. Select a destination **Subscription** and **Resource group**. If you're moving the workspace to another resource group in the same subscription, you won't see the **Subscription** option.
1. Select **OK** to move the workspace and selected resources.
- [![Screenshot that shows the Overview pane in the Log Analytics workspace with options to change the resource group and subscription name.](media/move-workspace/portal.png)](media/move-workspace/portal.png#lightbox)
+ :::image type="content" source="media/move-workspace/portal.png" lightbox="media/move-workspace/portal.png" alt-text="Screenshot that shows the Overview pane in the Log Analytics workspace with options to change the resource group and subscription name.":::
### [ REST API](#tab/rest-api)
azure-monitor Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-configure.md
In this section, we review the step-by-step process of setting up a private link
1. Give the AMPLS a name. Use a meaningful and clear name like *AppServerProdTelem*.
1. Select **Review + create**.
- ![Screenshot that shows creating an Azure Monitor Private Link Scope.](./media/private-link-security/ampls-create-1d.png)
+ :::image type="content" source="./media/private-link-security/ampls-create-1d.png" lightbox="./media/private-link-security/ampls-create-1d.png" alt-text="Screenshot that shows creating an Azure Monitor Private Link Scope.":::
1. Let the validation pass and select **Create**.
Connect Azure Monitor resources like Log Analytics workspaces, Application Insig
1. In your AMPLS, select **Azure Monitor Resources** in the menu on the left. Select **Add**.
1. Add the workspace or component. Selecting **Add** opens a dialog where you can select Azure Monitor resources. You can browse through your subscriptions and resource groups. You can also enter their names to filter down to them. Select the workspace or component and select **Apply** to add them to your scope.
- ![Screenshot that shows selecting a scope.](./media/private-link-security/ampls-select-2.png)
+ :::image type="content" source="./media/private-link-security/ampls-select-2.png" lightbox="./media/private-link-security/ampls-select-2.png" alt-text="Screenshot that shows selecting a scope.":::
> [!NOTE]
> Deleting Azure Monitor resources requires that you first disconnect them from any AMPLS objects they're connected to. It's not possible to delete resources connected to an AMPLS.
So far we've covered the configuration of your network. But you should also cons
Go to the Azure portal. On your resource's menu, find **Network Isolation** on the left side. This page controls which networks can reach the resource through a private link and whether other networks can reach it or not.
-![Screenshot that shows Network Isolation.](./media/private-link-security/ampls-network-isolation.png)
### Connected Azure Monitor Private Link Scopes

Here you can review and configure the resource's connections to an AMPLS. Connecting to an AMPLS allows traffic from the virtual network connected to each AMPLS to reach the resource. It has the same effect as connecting it from the scope as we did in the section [Connect Azure Monitor resources](#connect-azure-monitor-resources).
This zone also covers the resource-specific endpoints for [data collection endpo
* `<unique-dce-identifier>.<regionname>.handler.control`: Private configuration endpoint, part of a DCE resource.
* `<unique-dce-identifier>.<regionname>.ingest`: Private ingestion endpoint, part of a DCE resource.
-
-[![Screenshot that shows Private DNS zone monitor-azure-com.](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-with-endpoint.png)](./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded-with-endpoint.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="./media/private-link-security/dns-zone-privatelink-monitor-azure-com-with-endpoint.png" lightbox="./media/private-link-security/dns-zone-privatelink-monitor-azure-com-expanded-with-endpoint.png" alt-text="Screenshot that shows Private DNS zone monitor-azure-com." border="false":::
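The DCE endpoint patterns listed above are simple hostname templates, so they can be sketched as a small helper; the function name and sample values below are illustrative only:

```python
def dce_private_endpoints(dce_identifier: str, region: str) -> dict:
    """Build the two documented DCE hostname patterns:
    <unique-dce-identifier>.<regionname>.handler.control (private configuration)
    and <unique-dce-identifier>.<regionname>.ingest (private ingestion)."""
    return {
        "configuration": f"{dce_identifier}.{region}.handler.control",
        "ingestion": f"{dce_identifier}.{region}.ingest",
    }
```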
#### Log Analytics endpoints

> [!IMPORTANT]
Log Analytics uses four DNS zones:
The following screenshot shows endpoints mapped for an AMPLS with two workspaces in East US and one workspace in West Europe. Notice the East US workspaces share the IP addresses. The West Europe workspace endpoint is mapped to a different IP address. The blob endpoint doesn't appear in this image but it's configured.
-[![Screenshot that shows private link compressed endpoints.](./media/private-link-security/dns-zone-privatelink-compressed-endpoints.png)](./media/private-link-security/dns-zone-privatelink-compressed-endpoints.png#lightbox)
### Validate that you're communicating over a private link
azure-monitor Private Link Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-design.md
In the following diagram, virtual network 10.0.1.x connects to AMPLS1, which cre
To avoid this conflict, create only a single AMPLS object per DNS.
-![Diagram that shows DNS overrides in multiple virtual networks.](./media/private-link-security/dns-overrides-multiple-vnets.png)
### Hub-and-spoke networks

Hub-and-spoke networks should use a single private link connection set on the hub (main) network, and not on each spoke virtual network.
-![Diagram that shows a hub-and-spoke single private link.](./media/private-link-security/hub-and-spoke-with-single-private-endpoint-with-data-collection-endpoint.png)
> [!NOTE]
> You might prefer to create separate private links for your spoke virtual networks, for example, to allow each virtual network to access a limited set of monitoring resources. In such cases, you can create a dedicated private endpoint and AMPLS for each virtual network. *You must also verify they don't share the same DNS zones to avoid DNS overrides*.
By using private link access modes, you can control how private links affect you
Choosing the proper access mode is critical to ensuring continuous, uninterrupted network traffic. Each of these modes can be set for ingestion and queries, separately:

* **Private Only**: Allows the virtual network to reach only private link resources (resources in the AMPLS). That's the most secure mode of work. It prevents data exfiltration by blocking traffic out of the AMPLS to Azure Monitor resources.
-![Diagram that shows the AMPLS Private Only access mode.](./media/private-link-security/ampls-private-only-access-mode.png)
* **Open**: Allows the virtual network to reach both private link resources and resources not in the AMPLS (if they [accept traffic from public networks](./private-link-design.md#control-network-access-to-your-resources)). The Open access mode doesn't prevent data exfiltration, but it still offers the other benefits of private links. Traffic to private link resources is sent through private endpoints, validated, and sent over the Microsoft backbone. The Open mode is useful for a mixed mode of work (accessing some resources publicly and others over a private link) or during a gradual onboarding process.
-![Diagram that shows the AMPLS Open access mode.](./media/private-link-security/ampls-open-access-mode.png)
Access modes are set separately for ingestion and queries. For example, you can set the Private Only mode for ingestion and the Open mode for queries. Apply caution when you select your access mode. Using the Private Only access mode will block traffic to resources not in the AMPLS across all networks that share the same DNS, regardless of subscription or tenant. The exception is Log Analytics ingestion requests, which is explained. If you can't add all Azure Monitor resources to the AMPLS, start by adding select resources and applying the Open access mode. Switch to the Private Only mode for maximum security *only after you've added all Azure Monitor resources to your AMPLS*.
For configuration details and examples, see [Use APIs and the command line](./pr
The access modes set on the AMPLS resource affect all networks, but you can override these settings for specific networks. In the following diagram, VNet1 uses the Open mode and VNet2 uses the Private Only mode. Requests from VNet1 can reach Workspace 1 and Component 2 over a private link. Requests can reach Component 3 only if it [accepts traffic from public networks](./private-link-design.md#control-network-access-to-your-resources). VNet2 requests can't reach Component 3.
-![Diagram that shows mixed access modes.](./media/private-link-security/ampls-mixed-access-modes.png)
## Consider AMPLS limits
In the following diagram:
* Workspace 2 connects to AMPLS A and AMPLS B by using two of the five possible AMPLS connections.
* AMPLS B is connected to private endpoints of two virtual networks (VNet2 and VNet3) by using two of the 10 possible private endpoint connections.
-![Diagram that shows AMPLS limits.](./media/private-link-security/ampls-limits.png)
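The limits illustrated above (assumed from the text: up to five AMPLS connections per workspace, up to 10 private endpoint connections per AMPLS) can be summarized in a short sketch; the names are illustrative, not part of any Azure API:

```python
# Assumed limits taken from the diagram description above.
MAX_AMPLS_PER_WORKSPACE = 5
MAX_PRIVATE_ENDPOINTS_PER_AMPLS = 10

def within_ampls_limits(ampls_per_workspace: int, endpoints_per_ampls: int) -> bool:
    """Return True if a planned topology stays within both documented limits."""
    return (ampls_per_workspace <= MAX_AMPLS_PER_WORKSPACE
            and endpoints_per_ampls <= MAX_PRIVATE_ENDPOINTS_PER_AMPLS)
```

In the example topology, Workspace 2 (two AMPLS connections) and AMPLS B (two private endpoint connections) are both well within these limits.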
## Control network access to your resources

Your Log Analytics workspaces or Application Insights components can be set to:
azure-monitor Private Link Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-link-security.md
For more information, see [Key benefits of Private Link](../../private-link/priv
## How it works: Main principles

An Azure Monitor private link connects a private endpoint to a set of Azure Monitor resources made up of Log Analytics workspaces and Application Insights resources. That set is called an Azure Monitor Private Link Scope.
-![Diagram that shows basic resource topology.](./media/private-link-security/private-link-basic-topology.png)
An AMPLS:
azure-monitor Private Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/private-storage.md
For the storage account to connect to your private link, it must:
* Be located in the same region as the workspace it's linked to.
* Allow Azure Monitor to access the storage account. To allow only specific networks to access your storage account, select the exception **Allow trusted Microsoft services to access this storage account**.
- ![Screenshot that shows Storage account trust Microsoft services.](./media/private-storage/storage-trust.png)
+ :::image type="content" source="./media/private-storage/storage-trust.png" lightbox="./media/private-storage/storage-trust.png" alt-text="Screenshot that shows Storage account trust Microsoft services.":::
If your workspace handles traffic from other networks, configure the storage account to allow incoming traffic coming from the relevant networks/internet.
To configure your Azure Storage account to use CMKs with Key Vault, use the [Azu
### Use the Azure portal

On the Azure portal, open your workspace menu and select **Linked storage accounts**. A pane shows the linked storage accounts by the use cases previously mentioned (ingestion over Private Link, applying CMKs to saved queries or to alerts).
-![Screenshot that shows the Linked storage accounts pane.](./media/private-storage/all-linked-storage-accounts.png)
Selecting an item on the table opens its storage account details, where you can set or update the linked storage account for this type.
-![Screenshot that shows the Link storage account pane.](./media/private-storage/link-a-storage-account-blade.png)
You can use the same account for different use cases if you prefer.

### Use the Azure CLI or REST API
azure-monitor Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/queries.md
Select queries from the query interface, which is available from two different l
When you open Log Analytics, the **Queries** dialog automatically appears. If you don't want this dialog to automatically appear, turn off the **Always show Queries** toggle.
-![Screenshot that shows the Queries screen.](media/queries/query-start.png)
Each query is represented by a card. You can quickly scan through the queries to find what you need. You can run the query directly from the dialog or choose to load it to the query editor for modification. You can also access it by selecting **Queries** in the upper-right corner.-
-[![Screenshot that shows the Queries button.](media/queries/queries-button.png)](media/queries/queries-button.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="media/queries/queries-button.png" lightbox="media/queries/queries-button.png" alt-text="Screenshot that shows the Queries button." border="false":::
### Query sidebar

You can access the same functionality of the dialog experience from the **Queries** pane on the left sidebar of Log Analytics. Hover over a query name to get the query description and more functionality.
-
-[![Screenshot that shows the Query sidebar.](media/queries/query-sidebar.png)](media/queries/query-sidebar.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="media/queries/query-sidebar.png" lightbox="media/queries/query-sidebar.png" alt-text="Screenshot that shows the Query sidebar." border="false":::
## Find and filter queries
The options in this section are available in both the dialog and sidebar query e
Change the grouping of the queries by selecting the **group by** dropdown list. The grouping values also act as an active table of contents. Selecting one of the values on the left side of the screen scrolls the **Queries** view directly to the item selected. If your organization created query packs with tags, the custom tags will be included in this list.
-[![Screenshot that shows the Example queries screen group by dropdown list.](media/queries/example-query-groupby.png)](media/queries/example-query-groupby.png#lightbox)
### [Filter](#tab/filter)

You can also filter the queries according to the **group by** values mentioned earlier. In the **Example queries** dialog, the filters are found at the top.
-[![Screenshot that shows an Example queries screen filter.](media/queries/example-query-filter.png)](media/queries/example-query-filter.png#lightbox)
### [Combine group by and filter](#tab/combinegroupbyandfilter)
azure-monitor Query Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-audit.md
Query auditing is enabled with a [diagnostic setting](../essentials/diagnostic-s
Access the diagnostic setting for a Log Analytics workspace in the Azure portal in either of the following locations:

- From the **Azure Monitor** menu, select **Diagnostic settings**, and then locate and select the workspace.
-
- [![Diagnostic settings Azure Monitor](media/query-audit/diagnostic-setting-monitor.png) ](media/query-audit/diagnostic-setting-monitor.png#lightbox)
+ <!-- convertborder later -->
+ :::image type="content" source="media/query-audit/diagnostic-setting-monitor.png" lightbox="media/query-audit/diagnostic-setting-monitor.png" alt-text="Screenshot of diagnostic settings Azure Monitor." border="false":::
- From the **Log Analytics workspaces** menu, select the workspace, and then select **Diagnostic settings**.
- [![Diagnostic settings Log Analytics workspace](media/query-audit/diagnostic-setting-workspace.png) ](media/query-audit/diagnostic-setting-workspace.png#lightbox)
+ :::image type="content" source="media/query-audit/diagnostic-setting-workspace.png" lightbox="media/query-audit/diagnostic-setting-workspace.png" alt-text="Screenshot of diagnostic settings Log Analytics workspace.":::
### Resource Manager template You can get an example Resource Manager template from [Diagnostic setting for Log Analytics workspace](../essentials/resource-manager-diagnostic-settings.md#diagnostic-setting-for-a-log-analytics-workspace).
azure-monitor Query Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/query-packs.md
You can set the permissions on a query pack when you view it in the Azure portal
## View query packs

You can view and manage query packs in the Azure portal from the **Log Analytics query packs** menu. Select a query pack to view and edit its permissions. This article describes how to create a query pack by using the API.
-
-[![Screenshot that shows query packs.](media/query-packs/view-query-pack.png)](media/query-packs/view-query-pack.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="media/query-packs/view-query-pack.png" lightbox="media/query-packs/view-query-pack.png" alt-text="Screenshot that shows query packs." border="false":::
## Default query pack

Azure Monitor automatically creates a query pack called `DefaultQueryPack` in each subscription in a resource group called `LogAnalyticsDefaultResources` when you save your first query. You can save queries to this query pack or create other query packs depending on your requirements.
azure-monitor Save Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/save-query.md
When you save a query, it's stored in a query pack, which has benefits over the
## Save a query

To save a query to a query pack, select **Save as query** from the **Save** dropdown in Log Analytics.
-
-[![Screenshot that shows the Save query menu.](media/save-query/save-query.png)](media/save-query/save-query.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="media/save-query/save-query.png" lightbox="media/save-query/save-query.png" alt-text="Screenshot that shows the Save query menu." border="false":::
When you save a query to a query pack, the following dialog box appears where you can provide values for the query properties. The query properties are used for filtering and grouping of similar queries to help you find the query you're looking for. For a detailed description of each property, see [Query properties](queries.md#query-properties). Most users should leave the option to **Save to the default query pack**, which will save the query in the [default query pack](query-packs.md#default-query-pack). Clear this checkbox if there are other query packs in your subscription. For information on how to create a new query pack, see [Query packs in Azure Monitor Logs](query-packs.md).-
-[![Screenshot that shows the Save as query dialog.](media/save-query/save-query-dialog.png)](media/save-query/save-query-dialog.png#lightbox)
+<!-- convertborder later -->
+:::image type="content" source="media/save-query/save-query-dialog.png" lightbox="media/save-query/save-query-dialog.png" alt-text="Screenshot that shows the Save as query dialog." border="false":::
## Edit a query

You might want to edit a query that you've already saved. You might want to change the query itself or modify any of its properties. After you open an existing query in Log Analytics, you can edit it by selecting **Edit query details** from the **Save** dropdown. Now you can save the edited query with the same properties or modify any properties before saving.
azure-monitor Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/scope.md
When you run a [log query](../logs/log-query-overview.md) in [Log Analytics in t
[!INCLUDE [log-analytics-query-permissions](../../../includes/log-analytics-query-permissions.md)]

## Query scope
-The query scope defines the records that are evaluated by the query. This will usually include all records in a single Log Analytics workspace or Application Insights application. Log Analytics also allows you to set a scope for a particular monitored Azure resource. This allows a resource owner to focus only on their data, even if that resource writes to multiple workspaces.
+The query scope defines the records that the query evaluates. This definition will usually include all records in a single Log Analytics workspace or Application Insights application. Log Analytics also allows you to set a scope for a particular monitored Azure resource. This allows a resource owner to focus only on their data, even if that resource writes to multiple workspaces.
The scope is always displayed at the top left of the Log Analytics window. An icon indicates whether the scope is a Log Analytics workspace or an Application Insights application. No icon indicates another Azure resource.
-![Scope displayed in portal](media/scope/scope.png)
-The scope is determined by the method you use to start Log Analytics, and in some cases you can change the scope by clicking on it. The following table lists the different types of scope used and different details for each.
+The method you use to start Log Analytics determines the scope, and in some cases you can change the scope by clicking on it. The following table lists the different types of scope used and different details for each.
> [!IMPORTANT]
> If you're using a workspace-based application in Application Insights, then its data is stored in a Log Analytics workspace with all other log data. For backward compatibility you will get the classic Application Insights experience when you select the application as your scope. To see this data in the Log Analytics workspace, set the scope to the workspace.
The scope is determined by the method you use to start Log Analytics, and in som
|:|:|:|:|
| Log Analytics workspace | All records in the Log Analytics workspace. | Select **Logs** from the **Azure Monitor** menu or the **Log Analytics workspaces** menu. | Can change scope to any other resource type. |
| Application Insights application | All records in the Application Insights application. | Select **Logs** from the **Application Insights** menu for the application. | Can only change scope to another Application Insights application. |
-| Resource group | Records created by all resources in the resource group. May include data from multiple Log Analytics workspaces. | Select **Logs** from the resource group menu. | Cannot change scope.|
-| Subscription | Records created by all resources in the subscription. May include data from multiple Log Analytics workspaces. | Select **Logs** from the subscription menu. | Cannot change scope. |
-| Other Azure resources | Records created by the resource. May include data from multiple Log Analytics workspaces. | Select **Logs** from the resource menu.<br>OR<br>Select **Logs** from the **Azure Monitor** menu and then select a new scope. | Can only change scope to same resource type. |
+| Resource group | Records created by all resources in the resource group. Can include data from multiple Log Analytics workspaces. | Select **Logs** from the resource group menu. | Can't change scope.|
+| Subscription | Records created by all resources in the subscription. Can include data from multiple Log Analytics workspaces. | Select **Logs** from the subscription menu. | Can't change scope. |
+| Other Azure resources | Records created by the resource. Can include data from multiple Log Analytics workspaces. | Select **Logs** from the resource menu.<br>OR<br>Select **Logs** from the **Azure Monitor** menu and then select a new scope. | Can only change scope to same resource type. |
### Limitations when scoped to a resource
When the query scope is a Log Analytics workspace or an Application Insights app
- Query explorer
- New alert rule
-You can't use the following commands in a query when scoped to a resource since the query scope will already include any workspaces with data for that resource or set of resources:
+You can't use the following commands in a query when scoped to a resource since the query scope already includes any workspaces with data for that resource or set of resources:
- [app](../logs/app-expression.md)
- [workspace](../logs/workspace-expression.md)

## Query scope limits
-Setting the scope to a resource or set of resources is a particularly powerful feature of Log Analytics since it allows you to automatically consolidate distributed data in a single query. It can significantly affect performance though if data needs to be retrieved from workspaces across multiple Azure regions.
+Setting the scope to a resource or set of resources is a powerful feature of Log Analytics since it allows you to automatically consolidate distributed data in a single query. It can significantly affect performance though if data needs to be retrieved from workspaces across multiple Azure regions.
Log Analytics helps protect against excessive overhead from queries that span workspaces in multiple regions by issuing a warning or error when a certain number of regions are being used.
-Your query will receive a warning if the scope includes workspaces in 5 or more regions. it will still run, but it may take excessive time to complete.
+Your query receives a warning if the scope includes workspaces in 5 or more regions. It will still run, but it might take excessive time to complete.
-
-![Query warning](media/scope/query-warning.png)
+<!-- convertborder later -->
+:::image type="content" source="media/scope/query-warning.png" lightbox="media/scope/query-warning.png" alt-text="Screenshot that shows a query warning." border="false":::
-Your query will be blocked from running if the scope includes workspaces in 20 or more regions. In this case you will be prompted to reduce the number of workspace regions and attempt to run the query again. The dropdown will display all of the regions in the scope of the query, and you should reduce the number of regions before attempting to run the query again.
-
-![Query failed](media/scope/query-failed.png)
+Your query will be blocked from running if the scope includes workspaces in 20 or more regions. In this case, you'll be prompted to reduce the number of workspace regions and attempt to run the query again. The dropdown will display all of the regions in the scope of the query, and you should reduce the number of regions before attempting to run the query again.
+<!-- convertborder later -->
+:::image type="content" source="media/scope/query-failed.png" lightbox="media/scope/query-failed.png" alt-text="Screenshot that shows a failed query." border="false":::
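The region thresholds described here (a warning at 5 or more workspace regions, and a block at 20 or more) can be summarized in a short sketch; the function name and return labels are illustrative, not part of any Azure API:

```python
def region_scope_status(region_count: int) -> str:
    """Classify a query scope by the number of workspace regions it spans,
    per the documented Log Analytics thresholds."""
    if region_count >= 20:
        return "blocked"   # you're prompted to reduce the number of regions
    if region_count >= 5:
        return "warning"   # the query still runs but might take excessive time
    return "ok"
```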
## Time range
The time range specifies the set of records that are evaluated for the query bas
Set the time range by selecting it from the time picker at the top of the Log Analytics window. You can select a predefined period or select **Custom** to specify a specific time range.-
-![Time picker](media/scope/time-picker.png)
+<!-- convertborder later -->
+:::image type="content" source="media/scope/time-picker.png" lightbox="media/scope/time-picker.png" alt-text="Screenshot that shows the time picker." border="false":::
If you set a filter in the query that uses the standard time column as shown in the table above, the time picker changes to **Set in query**, and the time picker is disabled. In this case, it's most efficient to put the filter at the top of the query so that any subsequent processing only needs to work with the filtered records.
-![Filtered query](media/scope/query-filtered.png)
-
+<!-- convertborder later -->
+:::image type="content" source="media/scope/query-filtered.png" lightbox="media/scope/query-filtered.png" alt-text="Screenshot that shows a filtered query." border="false":::
-If you use the [workspace](../logs/workspace-expression.md) or [app](../logs/app-expression.md) command to retrieve data from another workspace or classic application, the time picker may behave differently. If the scope is a Log Analytics workspace and you use **app**, or if the scope is a classic Application Insights application and you use **workspace**, then Log Analytics may not understand that the column used in the filter should determine the time filter.
+If you use the [workspace](../logs/workspace-expression.md) or [app](../logs/app-expression.md) command to retrieve data from another workspace or classic application, the time picker might behave differently. If the scope is a Log Analytics workspace and you use **app**, or if the scope is a classic Application Insights application and you use **workspace**, then Log Analytics might not understand that the column used in the filter should determine the time filter.
In the following example, the scope is set to a Log Analytics workspace. The query uses **workspace** to retrieve data from another Log Analytics workspace. The time picker changes to **Set in query** because it sees a filter that uses the expected **TimeGenerated** column.
-![Query with workspace](media/scope/query-workspace.png)
+<!-- convertborder later -->
If the query uses **app** to retrieve data from a classic Application Insights application though, Log Analytics doesn't recognize the **timestamp** column in the filter, and the time picker remains unchanged. In this case, both filters are applied. In the example, only records created in the last 24 hours are included in the query even though it specifies 7 days in the **where** clause.
-![Query with app](media/scope/query-app.png)
+<!-- convertborder later -->
## Next steps
azure-monitor Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md
Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
azure-monitor Profiler Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-troubleshooting.md
Search for trace messages and custom events sent by Profiler to your Application
1. In your Application Insights resource, select **Search** from the top menu.
- :::image type="content" source="./media/profiler-troubleshooting/search-trace-messages.png" alt-text="Screenshot that shows selecting the Search button from the Application Insights resource.":::
+ :::image type="content" source="./media/profiler-troubleshooting/search-trace-messages.png" lightbox="./media/profiler-troubleshooting/search-trace-messages.png" alt-text="Screenshot that shows selecting the Search button from the Application Insights resource.":::
1. Use the following search string to find the relevant data:
Search for trace messages and custom events sent by Profiler to your Application
stopprofiler OR startprofiler OR upload OR ServiceProfilerSample ```
- :::image type="content" source="./media/profiler-troubleshooting/search-results.png" alt-text="Screenshot that shows the search results from aforementioned search string.":::
+ :::image type="content" source="./media/profiler-troubleshooting/search-results.png" lightbox="./media/profiler-troubleshooting/search-results.png" alt-text="Screenshot that shows the search results from the preceding search string.":::
The preceding search results include two examples of searches from two AI resources:
For Profiler to work properly, make sure:
If **ApplicationInsightsProfiler3** doesn't show up, restart your App Service application.
- :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job.png" alt-text="Screenshot that shows the WebJobs pane, which displays the name, status, and last runtime of jobs.":::
+ :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job.png" lightbox="./media/profiler-troubleshooting/profiler-web-job.png" alt-text="Screenshot that shows the WebJobs pane, which displays the name, status, and last runtime of jobs.":::
1. To view the details of the WebJob, including the log, select the **ApplicationInsightsProfiler3** link. The **Continuous WebJob Details** pane opens.
- :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job-log.png" alt-text="Screenshot that shows the Continuous WebJob Details pane.":::
+ :::image type="content" source="./media/profiler-troubleshooting/profiler-web-job-log.png" lightbox="./media/profiler-troubleshooting/profiler-web-job-log.png" alt-text="Screenshot that shows the Continuous WebJob Details pane.":::
If Profiler still isn't working for you, download the log and [submit an Azure support ticket](https://azure.microsoft.com/support/).
It ends like `https://<kudu-url>/DiagnosticServices`.
A status page similar to the following example appears.
-![Screenshot that shows the Diagnostic Services status page.](../app/media/diagnostic-services-site-extension/status-page.png)
> [!NOTE] > Codeless installation of Application Insights Profiler follows the .NET Core support policy. For more information about supported runtimes, see [.NET Core support policy](https://dotnet.microsoft.com/platform/support/policy/dotnet-core).
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
Access the single machine analysis experience from the **Monitoring** section of
|:|:| | Overview page | Select the **Monitoring** tab to display alerts, [platform metrics](../essentials/data-platform-metrics.md), and other monitoring information for the virtual machine host. You can see the number of active alerts on the tab. In the **Monitoring** tab, you get a quick view of:<br><br>**Alerts:** the alerts fired in the last 24 hours, with some important statistics about those alerts. If you do not have any alerts set up for this VM, there is a link to help you quickly create new alerts for your VM.<br><br>**Key metrics:** the trend over different time periods for important metrics, such as CPU, network, and disk. Because these are host metrics though, counters from the guest operating system such as memory aren't included. Select a graph to work with this data in [metrics explorer](../essentials/analyze-metrics.md) where you can perform different aggregations, and add more counters for analysis. | | Activity log | See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for the current virtual machine. Use this log to view the recent activity of the machine, such as any configuration changes and when it was stopped and started.
-| Insights | Displays VM insights views if If the VM is enabled for [VM insights](../vm/vminsights-overview.md).<br><br>Select the **Performance** tab to view trends of critical performance counters over different periods of time. When you open VM insights from the virtual machine menu, you also have a table with detailed metrics for each disk. For details on how to use the Map view for a single machine, see [Chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm).<br><br>If *processes and dependencies* is enabled for the VM, select the **Map** tab to view the running processes on the machine, dependencies on other machines, and external processes. For details on how to use the Map view for a single machine, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm).<br><br>If the VM is not enabled for VM insights, it offers the option to enable VM insights. |
+| Insights | Displays VM insights views if the VM is enabled for [VM insights](../vm/vminsights-overview.md).<br><br>Select the **Performance** tab to view trends of critical performance counters over different periods of time. When you open VM insights from the virtual machine menu, you also have a table with detailed metrics for each disk. For details on how to use the Map view for a single machine, see [Chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm).<br><br>If *processes and dependencies* is enabled for the VM, select the **Map** tab to view the running processes on the machine, dependencies on other machines, and external processes. For details on how to use the Map view for a single machine, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm).<br><br>If the VM is not enabled for VM insights, it offers the option to enable VM insights. |
| Alerts | View [alerts](../alerts/alerts-overview.md) for the current virtual machine. These alerts only use the machine as the target resource, so there might be other alerts associated with it. You might need to use the **Alerts** option in the Azure Monitor menu to view alerts for all resources. For details, see [Monitor virtual machines with Azure Monitor - Alerts](monitor-virtual-machine-alerts.md). | | Metrics | Open metrics explorer with the scope set to the machine. This option is the same as selecting one of the performance charts from the **Overview** page except that the metric isn't already added. | | Diagnostic settings | Enable and configure the [diagnostics extension](../agents/diagnostics-extension-overview.md) for the current virtual machine. This option is different than the **Diagnostic settings** option for other Azure resources. This is a [legacy agent](monitor-virtual-machine-agent.md#legacy-agents) that has been replaced by the [Azure Monitor agent](monitor-virtual-machine-agent.md). |
azure-netapp-files Configure Application Volume Group Sap Hana Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md
The following list describes all the possible volume types for application volum
## Prepare your environment
-1. **Networking:** You need to decide on the networking architecture. To use Azure NetApp Files, you need to create create a VNet that will host a delegated subnet for the Azure NetApp Files storage endpoints (IPs). To ensure that the size of this subnet is large enough, see [Considerations about delegating a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations).
+1. **Networking:** You need to decide on the networking architecture. To use Azure NetApp Files, you need to create a VNet that will host a delegated subnet for the Azure NetApp Files storage endpoints (IPs). To ensure that the size of this subnet is large enough, see [Considerations about delegating a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md#considerations).
1. Create a VNet. 2. Create a virtual machine (VM) subnet and delegated subnet for Azure NetApp Files. 1. **Storage Account and Capacity Pool:** A storage account is the entry point to consume Azure NetApp Files. At least one storage account needs to be created. Within a storage account, a capacity pool is the logical unit to create volumes. Application volume groups require a capacity pool with a manual QoS. It should be created with a size and service level that meets your HANA requirements.
azure-netapp-files Convert Nfsv3 Nfsv41 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/convert-nfsv3-nfsv41.md
This section shows you how to convert the NFSv4.1 volume to NFSv3.
`mount -v | grep /path/to/vol1` `vol1:/path/to/vol1 on /path type nfs (rw,intr,tcp,nfsvers=3,rsize=16384,wsize=16384,addr=192.168.1.1)`.
-7. Change the read-only export policy back to the original export policy. See See [Configure export policy for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md).
+7. Change the read-only export policy back to the original export policy. See [Configure export policy for NFS or dual-protocol volumes](azure-netapp-files-configure-export-policy.md).
8. Verify access using root and non-root users.
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
This feature is used for installing SQL Server in certain scenarios where a non-administrator AD DS domain account must temporarily be granted elevated security privilege. >[!NOTE]
- > Using the Security privilege users feature relies on the [SMB Continuous Availability Shares feature](azure-netapp-files-create-volumes-smb.md#continuous-availability). SMB Continuous Availability is **not** supported on custom applications. It is is only supported for workloads using Citrix App Laying, [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md), and Microsoft SQL Server (not Linux SQL Server).
+ > Using the Security privilege users feature relies on the [SMB Continuous Availability Shares feature](azure-netapp-files-create-volumes-smb.md#continuous-availability). SMB Continuous Availability is **not** supported on custom applications. It is only supported for workloads using Citrix App Layering, [FSLogix user profile containers](../virtual-desktop/create-fslogix-profile-container.md), and Microsoft SQL Server (not Linux SQL Server).
> [!IMPORTANT] > Using the **Security privilege users** feature requires that you submit a waitlist request through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using this feature.
azure-netapp-files Cross Region Replication Display Health Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cross-region-replication-display-health-status.md
You can view replication status on the source volume or the destination volume.
This is the state after you break the peering relationship. The destination volume is `'RW'` and snapshots are present. * **Relationship status** – Shows one of the following values: * *Idle*:
- No transfer operation is in progress and future transfers are not disabled.
+ No transfer operation is in progress and future transfers aren't disabled.
* *Transferring*:
- A transfer operation is in progress and future transfers are not disabled.
+ A transfer operation is in progress and future transfers aren't disabled.
* **Replication schedule** – Shows how frequently incremental mirroring updates will be performed when the initialization (baseline copy) is complete. * **Total progress** – Shows the total number of cumulative bytes transferred over the lifetime of the relationship. This amount is the actual bytes transferred, and it might differ from the logical space that the source and destination volumes report.
You can view replication status on the source volume or the destination volume.
## Set alert rules to monitor replication
-Follow the following steps to create [alert rules in Azure Monitor](../azure-monitor/alerts/alerts-overview.md) to help you monitor the status of cross-region replication:
+Create [alert rules in Azure Monitor](../azure-monitor/alerts/alerts-overview.md) to help you monitor the status of cross-region replication:
-1. From Azure Monitor, select **Alerts**.
-2. From the Alerts window, select the **Create** dropdown and select **Create new alert rule**.
-3. From the Scope tab of the Create an Alert Rule page, select **Select scope**. The **Select a Resource** page appears.
-4. From the Resource tab, find the **Volumes** resource type.
-5. From the Condition tab, select **Add condition**. From there, find a signal called "**is volume replication healthy**".
-6. There you'll see **Condition of the relationship, 1 or 0** and the **Configure Signal Logic** window is displayed.
-7. To check if the replication is _unhealthy_:
- * Set **Operator** to `Less than`.
- * Set **Aggregation type** to `Average`.
- * Set **Threshold** value to `1`.
- * Set **Unit** to `Count`.
-8. To check if the replication is _healthy_:
- * Set **Operator** to `Greater than or equal to`.
- * Set **Aggregation** type to `Average`.
- * Set **Threshold** value to `1`.
- * Set **Unit** to `Count`.
-9. Select **Review + create**. The alert rule is ready for use.
+1. In Azure Monitor, select **Alerts**.
+2. From the **Alerts** window, select the **Create** dropdown then **Alert rule**.
+3. From the **Scope** tab of the **Create an Alert Rule** page, choose **Select scope**. The **Select a Resource** page appears.
+4. From the **Browse** tab, enter "Volumes" in the **Search to filter items...** field.
+5. Select the target volume you'd like to monitor and select **Apply**.
+6. From the **Condition** tab, use the **Signal name** dropdown to select **See all signals**. Identify the **Volume replication lag time** signal then select **Apply**.
+7. Confirm **Greater than** is selected for the **Operator** field.
+8. For the **Threshold** value field, enter the number of seconds equal to your replication schedule plus 20%. For example:
+ * If your replication schedule is 10 minutes, enter 720 (10 minutes * 60 seconds * 1.2).
+ * If your replication schedule is hourly, enter 4,320 (60 minutes * 60 seconds * 1.2).
+ * If your replication schedule is daily, enter 103,680 (24 hours * 60 minutes * 60 seconds * 1.2).
+9. Select **Review + create**. The alert rule is ready for use.
:::image type="content" source="../media/azure-netapp-files/alert-config-signal-logic.png" alt-text="Screenshot of the Azure interface that shows the configure signal logic step with a backdrop of the Create alert rule page." lightbox="../media/azure-netapp-files/alert-config-signal-logic.png":::
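The threshold arithmetic in step 8 (replication schedule plus 20%) can be sketched as a small helper. This is a hedged Python illustration only; the function name is hypothetical and nothing here is part of the Azure portal or its APIs:

```python
# Hypothetical helper: convert a replication schedule (in seconds) to the
# alert threshold described above, i.e. the schedule plus a 20% margin.
def lag_threshold_seconds(schedule_seconds: int) -> int:
    # Integer math avoids float rounding (86400 * 1.2 isn't exact in binary).
    return schedule_seconds * 12 // 10

ten_minutes = lag_threshold_seconds(10 * 60)      # 720
hourly = lag_threshold_seconds(60 * 60)           # 4,320
daily = lag_threshold_seconds(24 * 60 * 60)       # 103,680
```

The same values appear in the examples above; any other schedule follows the same rule.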
azure-netapp-files Understand Guidelines Active Directory Domain Service Site https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-guidelines-active-directory-domain-service-site.md
In addition to multiple domain controllers assigned to the AD DS site configured
>[!NOTE] >It's essential that all the domain controllers and subnets assigned to the Azure NetApp Files AD DS site must be well connected (less than 10ms RTT latency) and reachable by the network interfaces used by the Azure NetApp Files volumes. >
->If you're using using Standard network features, you should ensure that any User Defined Routes (UDRs) or Network Security Group (NSG) rules do not block Azure NetApp Files network communication with AD DS domain controllers assigned to the Azure NetApp Files AD DS site.
+>If you're using Standard network features, you should ensure that any User Defined Routes (UDRs) or Network Security Group (NSG) rules do not block Azure NetApp Files network communication with AD DS domain controllers assigned to the Azure NetApp Files AD DS site.
> >If you're using Network Virtual Appliances or firewalls (such as Palo Alto Networks or Fortinet firewalls), they must be configured to not block network traffic between Azure NetApp Files and the AD DS domain controllers and subnets assigned to the Azure NetApp Files AD DS site.
azure-netapp-files Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-availability-zones.md
Latency is subject to availability zone latency for within availability zone acc
## Azure regions with availability zones
-For a list of regions that that currently support availability zones, see [Azure regions with availability zone support](../reliability/availability-zones-service-support.md).
+For a list of regions that currently support availability zones, see [Azure regions with availability zone support](../reliability/availability-zones-service-support.md).
## Next steps
azure-portal Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md
Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
azure-resource-manager Bicep Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-cli.md
Title: Bicep CLI commands and overview
description: Describes the commands that you can use in the Bicep CLI. These commands include building Azure Resource Manager templates from Bicep. Previously updated : 10/13/2023 Last updated : 11/03/2023 # Bicep CLI commands
The module with reference "br:exampleregistry.azurecr.io/bicep/modules/storage:v
When you get this error, either run the `build` command without the `--no-restore` switch or run `bicep restore` first.
-To use the `--no-restore` switch, you must have Bicep CLI version **0.4.1008 or later**.
+To use the `--no-restore` switch, you must have [Bicep CLI version 0.4.X or higher](./install.md).
## build-params
The `publish` command adds a module to a registry. The Azure container registry
After publishing the file to the registry, you can [reference it in a module](modules.md#file-in-registry).
-To use the publish command, you must have Bicep CLI version **0.4.1008 or later**. To use the `--documentationUri`/`-d` parameter, you must have Bicep CLI version **0.14.46 or later**.
+To use the publish command, you must have [Bicep CLI version 0.4.X or higher](./install.md). To use the `--documentationUri`/`-d` parameter, you must have [Bicep CLI version 0.14.X or higher](./install.md).
To publish a module to a registry, use:
When your Bicep file uses modules that are published to a registry, the `restore
To restore external modules to the local cache, the account must have the correct profile and permissions to access the registry. You can configure the profile and credential precedence for authenticating to the registry in the [Bicep config file](./bicep-config-modules.md#configure-profiles-and-credentials).
-To use the restore command, you must have Bicep CLI version **0.4.1008 or later**. This command is currently only available when calling the Bicep CLI directly. It's not currently available through the Azure CLI command.
+To use the restore command, you must have [Bicep CLI version 0.4.X or higher](./install.md). This command is currently only available when calling the Bicep CLI directly. It's not currently available through the Azure CLI command.
To manually restore the external modules for a file, use:
az bicep version
The command shows the version number. ```azurecli
-Bicep CLI version 0.20.4 (c9422e016d)
+Bicep CLI version 0.22.6 (d62b94db31)
``` To call this command directly through the Bicep CLI, use:
azure-resource-manager Bicep Functions Date https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-date.md
Title: Bicep functions - date
description: Describes the functions to use in a Bicep file to work with dates. Previously updated : 10/12/2023 Last updated : 11/03/2023 # Date functions for Bicep
An ISO 8601 datetime string.
### Remarks
-This function requires **Bicep version 0.5.6 or later**.
+This function requires [Bicep CLI version 0.5.X or higher](./install.md).
### Example
An integer that represents the number of seconds from midnight on January 1, 197
### Remarks
-This function requires **Bicep version 0.5.6 or later**.
+This function requires [Bicep CLI version 0.5.X or higher](./install.md).
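As a rough cross-check (Python, not Bicep; the datetime value below is a hypothetical input, not one from this article), the integer this function returns is the standard Unix timestamp:

```python
from datetime import datetime, timezone

# Hypothetical input datetime; dateTimeToEpoch returns the equivalent count
# of seconds since midnight on January 1, 1970 (UTC).
dt = datetime(2023, 3, 1, 4, 0, 0, tzinfo=timezone.utc)
epoch = int(dt.timestamp())  # 1677643200
```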
### Examples
azure-resource-manager Bicep Functions Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-files.md
Title: Bicep functions - files
description: Describes the functions to use in a Bicep file to load content from a file. Previously updated : 04/21/2023 Last updated : 11/03/2023 # File functions for Bicep
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
Use this function when you have binary content you would like to include in deployment. Rather than manually encoding the file to a base64 string and adding it to your Bicep file, load the file with this function. The file is loaded when the Bicep file is compiled to a JSON template. You can't use variables in the file path because they haven't been resolved when compiling to the template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
-This function requires **Bicep version 0.4.412 or later**.
+This function requires [Bicep CLI version 0.4.X or higher](./install.md).
The maximum allowed size of the file is **96 KB**.
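A hedged Python analog of what the compiler does with the file contents (the file path and payload here are made up for illustration; in Bicep the encoding happens at compile time, not at deployment):

```python
import base64
import os
import tempfile

# Stand-in for the binary file you'd reference from the Bicep file.
with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
    f.write(b"\x00\x01binary payload")
    path = f.name

# loadFileAsBase64 embeds this base64 string as a hard-coded value in the
# compiled JSON template.
with open(path, "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

os.remove(path)
```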
Use this function when you have JSON content or minified JSON content that is st
In VS Code, the properties of the loaded object are available to IntelliSense. For example, you can create a file with values to share across many Bicep files. An example is shown in this article.
-This function requires **Bicep version 0.7.4 or later**.
+This function requires [Bicep CLI version 0.7.X or higher](./install.md).
The maximum allowed size of the file is **1,048,576 characters**, including line endings.
Use this function when you have YAML content or minified YAML content that is st
In VS Code, the properties of the loaded object are available to IntelliSense. For example, you can create a file with values to share across many Bicep files. An example is shown in this article.
-This function requires **Bicep version >0.16.2**.
+This function requires [Bicep CLI version 0.16.X or higher](./install.md).
The maximum allowed size of the file is **1,048,576 characters**, including line endings.
Use this function when you have content that is stored in a separate file. You c
Use the [`loadJsonContent()`](#loadjsoncontent) function to load JSON files.
-This function requires **Bicep version 0.4.412 or later**.
+This function requires [Bicep CLI version 0.4.X or higher](./install.md).
The maximum allowed size of the file is **131,072 characters**, including line endings.
azure-resource-manager Bicep Functions Lambda https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-lambda.md
Title: Bicep functions - lambda description: Describes the lambda functions to use in a Bicep file.- - Previously updated : 03/15/2023 Last updated : 11/03/2023 # Lambda functions for Bicep
This article describes the lambda functions to use in Bicep. [Lambda expressions
``` > [!NOTE]
-> The lambda functions are only supported in Bicep CLI version 0.10.61 or newer.
+> The lambda functions are only supported in [Bicep CLI version 0.10.X or higher](./install.md).
## Limitations
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
Previously updated : 06/22/2023 Last updated : 11/03/2023 # Resource functions for Bicep
You can call a list function for any resource type with an operation that starts
The syntax for this function varies by the name of the list operation. The returned values also vary by operation. Bicep doesn't currently support completions and validation for `list*` functions.
-With **Bicep version 0.4.412 or later**, you call the list function by using the [accessor operator](operators-access.md#function-accessor). For example, `storageAccount.listKeys()`.
+With [Bicep CLI version 0.4.X or higher](./install.md), you call the list function by using the [accessor operator](operators-access.md#function-accessor). For example, `storageAccount.listKeys()`.
A [namespace qualifier](bicep-functions.md#namespaces-for-functions) isn't needed because the function is used with a resource type.
The possible uses of `list*` are shown in the following table.
| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2023-03-15-preview/notebook-workspaces/list-connection-info?tabs=HTTP) | | Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) |
-| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/domains/list-shared-access-keys) |
-| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/topics/list-shared-access-keys) |
+| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane/domains/list-shared-access-keys) |
+| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane/topics/list-shared-access-keys) |
| Microsoft.EventHub/namespaces/authorizationRules | [listkeys](/rest/api/eventhub) | | Microsoft.EventHub/namespaces/disasterRecoveryConfigs/authorizationRules | [listkeys](/rest/api/eventhub) | | Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listkeys](/rest/api/eventhub) |
The possible uses of `list*` are shown in the following table.
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |
-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2023-04-01/compute/list-keys) |
-| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2023-04-01/compute/list-nodes) |
-| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2023-04-01/workspaces/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/compute/list-keys) |
+| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/compute/list-nodes) |
+| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/workspaces/list-keys) |
| Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) |
azure-resource-manager Bicep Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-string.md
The output from the preceding example with the default values is:
| firstOutput | String | "one,two,three" | | secondOutput | String | "one;two;three" |
-This function requires **Bicep version 0.8.2 or later**.
+This function requires [Bicep CLI version 0.8.X or higher](./install.md).
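The two outputs in the table above correspond to joining the same items with different delimiters; a quick Python analog (not the Bicep example itself):

```python
# The same three items joined with the two delimiters shown in the table.
items = ["one", "two", "three"]
first_output = ",".join(items)   # "one,two,three"
second_output = ";".join(items)  # "one;two;three"
```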
<a id="json"></a>
azure-resource-manager Bicep Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-import.md
Title: Imports in Bicep
description: Describes how to import shared functionality and namespaces in Bicep. Previously updated : 09/21/2023 Last updated : 11/03/2023 # Imports in Bicep
This article describes the syntax you use to export and import shared functional
## Exporting types, variables and functions (Preview) > [!NOTE]
-> [Bicep version 0.23 or newer](./install.md) is required to use this feature. The experimental feature `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). For user-defined functions, the experimental feature `userDefinedFunctions` must also be enabled.
+> [Bicep CLI version 0.23.X or higher](./install.md) is required to use this feature. The experimental feature `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). For user-defined functions, the experimental feature `userDefinedFunctions` must also be enabled.
The `@export()` decorator is used to indicate that a given statement can be imported by another file. This decorator is only valid on type, variable and function statements. Variable statements marked with `@export()` must be compile-time constants.
The syntax for exporting functionality for use in other Bicep files is:
## Import types, variables and functions (Preview) > [!NOTE]
-> [Bicep version 0.23.X or newer](./install.md) is required to use this feature. The experimental feature `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). For user-defined functions, the experimental feature `userDefinedFunctions` must also be enabled.
+> [Bicep CLI version 0.23.X or higher](./install.md) is required to use this feature. The experimental feature `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features). For user-defined functions, the experimental feature `userDefinedFunctions` must also be enabled.
The syntax for importing functionality from another Bicep file is:
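As a sketch of the import side, assuming a sibling file `types.bicep` that marks `myStringType` and `sayHello` with `@export()`:

```bicep
// Import specific symbols, optionally renaming them with 'as'.
import { myStringType, sayHello as greet } from 'types.bicep'

// Or import everything under a namespace.
import * as shared from 'types.bicep'

param name myStringType = 'Bicep'
output greeting string = greet(name)
```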
azure-resource-manager Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/data-types.md
Title: Data types in Bicep
description: Describes the data types that are available in Bicep Previously updated : 07/07/2023 Last updated : 11/03/2023 # Data types in Bicep
Within Bicep, you can use these data types:
## Arrays
-Arrays start with a left bracket (`[`) and end with a right bracket (`]`). In Bicep, an array can be declared in single line or multiple lines. Commas (`,`) are used between values in single-line declarations, but not used in multiple-line declarations, You can mix and match single-line and multiple-line declarations. The multiple-line declaration requires **Bicep version 0.7.4 or later**.
+Arrays start with a left bracket (`[`) and end with a right bracket (`]`). In Bicep, an array can be declared in a single line or multiple lines. Commas (`,`) are used between values in single-line declarations, but not used in multiple-line declarations. You can mix and match single-line and multiple-line declarations. The multiple-line declaration requires [Bicep CLI version 0.7.X or higher](./install.md).
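For instance, a minimal sketch of both declaration styles:

```bicep
// Single-line: commas between values.
var singleLineArray = ['one', 'two', 'three']

// Multiple-line: no commas, one value per line.
var multiLineArray = [
  'one'
  'two'
  'three'
]
```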
```bicep var multiLineArray = [
Floating point, decimal or binary formats aren't currently supported.
## Objects
-Objects start with a left brace (`{`) and end with a right brace (`}`). In Bicep, an object can be declared in single line or multiple lines. Each property in an object consists of key and value. The key and value are separated by a colon (`:`). An object allows any property of any type. Commas (`,`) are used between properties for single-line declarations, but not used between properties for multiple-line declarations. You can mix and match single-line and multiple-line declarations. The multiple-line declaration requires **Bicep version 0.7.4 or later**.
+Objects start with a left brace (`{`) and end with a right brace (`}`). In Bicep, an object can be declared in single line or multiple lines. Each property in an object consists of key and value. The key and value are separated by a colon (`:`). An object allows any property of any type. Commas (`,`) are used between properties for single-line declarations, but not used between properties for multiple-line declarations. You can mix and match single-line and multiple-line declarations. The multiple-line declaration requires [Bicep CLI version 0.7.X or higher](./install.md).
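For instance, a minimal sketch of both object declaration styles:

```bicep
// Single-line: commas between properties.
var singleLineObject = { name: 'test name', id: '123-abc' }

// Multiple-line: no commas, one property per line.
var multiLineObject = {
  name: 'test name'
  id: '123-abc'
  isCurrent: true
  tier: 1
}
```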
```bicep param singleLineObject object = {name: 'test name', id: '123-abc', isCurrent: true, tier: 1}
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-cli.md
Title: Deploy resources with Azure CLI and Bicep files | Microsoft Docs description: Use Azure Resource Manager and Azure CLI to deploy resources to Azure. The resources are defined in a Bicep file. Previously updated : 10/10/2023 Last updated : 11/03/2023
The evaluation of parameters follows a sequential order, meaning that if a value
Rather than passing parameters as inline values in your script, you might find it easier to use a parameters file, either a [Bicep parameters file](#bicep-parameter-files) or a [JSON parameters file](#json-parameter-files) that contains the parameter values. The parameters file must be a local file. External parameters files aren't supported with Azure CLI. For more information about the parameters file, see [Create Resource Manager parameters file](./parameter-files.md).
-With Azure CLI version 2.53.0 or later, and Bicep CLI version 0.22.6 or later, you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there's no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch will result in an "Only a .bicep template is allowed with a .bicepparam file" error.
+With Azure CLI version 2.53.0 or later, and [Bicep CLI version 0.22.X or higher](./install.md), you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there's no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch will result in an "Only a .bicep template is allowed with a .bicepparam file" error.
The following example shows a parameters file named _storage.bicepparam_. The file is in the same directory where the command is run.
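The file's contents aren't reproduced in this digest, but a minimal sketch of the pattern (the parameter and resource group names below are hypothetical) looks like:

```bicep
// storage.bicepparam (sketch)
using './main.bicep'

param storageAccountName = 'mystorage001'
```

Because the `using` statement identifies the template, the deployment command needs only the `--parameters` switch:

```azurecli
az deployment group create \
  --resource-group exampleGroup \
  --parameters storage.bicepparam
```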
azure-resource-manager Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deploy-powershell.md
Title: Deploy resources with PowerShell and Bicep
description: Use Azure Resource Manager and Azure PowerShell to deploy resources to Azure. The resources are defined in a Bicep file. Previously updated : 06/05/2023 Last updated : 11/03/2023 # Deploy resources with Bicep and Azure PowerShell
New-AzResourceGroupDeployment -ResourceGroupName testgroup `
Rather than passing parameters as inline values in your script, you might find it easier to use a parameters file, either a `.bicepparam` file or a JSON parameters file, that contains the parameter values. The Bicep parameters file must be a local file.
-With Azure PowerShell version 10.4.0 or later, and Bicep CLI version 0.22.6 or later, you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there is no need to provide the `-TemplateFile` switch when specifying a Bicep parameter file for the `-TemplateParameterFile` switch.
+With Azure PowerShell version 10.4.0 or later, and [Bicep CLI version 0.22.X or higher](./install.md), you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there is no need to provide the `-TemplateFile` switch when specifying a Bicep parameter file for the `-TemplateParameterFile` switch.
The following example shows a parameters file named _storage.bicepparam_. The file is in the same directory where the command is run.
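As a sketch of the pattern (the resource group name below is hypothetical), the `using` statement in _storage.bicepparam_ means only the `-TemplateParameterFile` switch is needed:

```azurepowershell
New-AzResourceGroupDeployment `
  -ResourceGroupName exampleGroup `
  -TemplateParameterFile storage.bicepparam
```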
azure-resource-manager Deployment Stacks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/deployment-stacks.md
Currently not implemented.
You get a warning similar to the following: ```warning
-The deployment stack 'myStack' you're trying to create already already exists in the current subscription/management group/resource group. Do you want to overwrite it? Detaching: resources, resourceGroups (Y/N)
+The deployment stack 'myStack' you're trying to create already exists in the current subscription/management group/resource group. Do you want to overwrite it? Detaching: resources, resourceGroups (Y/N)
``` For more information, see [Create deployment stacks](#create-deployment-stacks).
azure-resource-manager File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/file.md
Title: Bicep file structure and syntax
description: Describes the structure and properties of a Bicep file using declarative syntax. Previously updated : 09/11/2023 Last updated : 11/03/2023 # Understand the structure and syntax of Bicep files
The preceding example is equivalent to the following JSON.
## Multiple-line declarations
-You can now use multiple lines in function, array and object declarations. This feature requires **Bicep version 0.7.4 or later**.
+You can now use multiple lines in function, array and object declarations. This feature requires [Bicep CLI version 0.7.X or higher](./install.md).
In the following example, the `resourceGroup()` definition is broken into multiple lines.
azure-resource-manager Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/install.md
Title: Set up Bicep development and deployment environments description: How to configure Bicep development and deployment environments Previously updated : 09/21/2023 Last updated : 11/03/2023
bicep --help
``` > [!NOTE]
-> The installation of Bicep CLI version 0.16 or newer does not need Gatekeeper exception. However, [nightly builds](#install-the-nightly-builds) of the Bicep CLI still require the exception.
+> The installation of [Bicep CLI version 0.16.X or higher](./install.md) does not need Gatekeeper exception. However, [nightly builds](#install-the-nightly-builds) of the Bicep CLI still require the exception.
### Windows
azure-resource-manager Loops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/loops.md
Title: Iterative loops in Bicep
description: Use loops to iterate over collections in Bicep Previously updated : 10/17/2023 Last updated : 11/03/2023 # Iterative loops in Bicep
resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-06-01
## Reference resource/module collections
-The ARM template [`references`](../templates/template-functions-resource.md#references) function returns an array of objects representing a resource collection's runtime states. In Bicep, there is no explicit references function. Instead, symbolic collection usage is employed directly, and during code generation, Bicep translates it to an ARM template that utilizes the ARM template references function. For the translation feature that transforms symbolic collections into ARM templates using the references function, it is necessary to have Bicep CLI version 0.20.4 or a more recent version. Additionally, in the [`bicepconfig.json`](./bicep-config.md#enable-experimental-features) file, the `symbolicNameCodegen` setting should be presented and set to `true`.
+The ARM template [`references`](../templates/template-functions-resource.md#references) function returns an array of objects representing a resource collection's runtime states. In Bicep, there is no explicit references function. Instead, symbolic collection usage is employed directly, and during code generation, Bicep translates it to an ARM template that utilizes the ARM template references function. For the translation feature that transforms symbolic collections into ARM templates using the references function, it is necessary to have [Bicep CLI version 0.20.X or higher](./install.md). Additionally, in the [`bicepconfig.json`](./bicep-config.md#enable-experimental-features) file, the `symbolicNameCodegen` setting should be present and set to `true`.
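A sketch of the `bicepconfig.json` entry that enables this setting:

```json
{
  "experimentalFeaturesEnabled": {
    "symbolicNameCodegen": true
  }
}
```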
The outputs of the two samples in [Integer index](#integer-index) can be written as:
azure-resource-manager Msbuild Bicep File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/msbuild-bicep-file.md
You'll need the latest versions of the following software:
If your existing continuous integration (CI) pipeline relies on [MSBuild](/visualstudio/msbuild/msbuild), you can use MSBuild tasks and CLI packages to convert Bicep files into ARM template JSON.
-The functionality relies on the following NuGet packages. The latest NuGet package versions match the latest Bicep version.
+The functionality relies on the following NuGet packages. The latest NuGet package versions match the latest Bicep CLI version.
| Package Name | Description | | - |- |
azure-resource-manager Outputs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/outputs.md
output stringOutput string = user['user-name']
## Conditional output
-When the value to return depends on a condition in the deployment, use the the `?` operator.
+When the value to return depends on a condition in the deployment, use the `?` operator.
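For instance, a minimal sketch (the parameter name and values here are hypothetical):

```bicep
param deployStorage bool = true

// Returns one of two values depending on the condition.
output storageMessage string = deployStorage ? 'storage deployed' : 'storage skipped'
```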
```bicep output <name> <data-type> = <condition> ? <true-value> : <false-value>
azure-resource-manager Parameter Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/parameter-files.md
Title: Create parameters files for Bicep deployment
description: Create parameters file for passing in values during deployment of a Bicep file Previously updated : 10/10/2023 Last updated : 11/03/2023 # Create parameters files for Bicep deployment
using './main.bicep'
param intFromEnvironmentVariables = int(readEnvironmentVariable('intEnvVariableName')) ```
-You can define and use variables. Bicep CLI version 0.21.1 or newer is required for using variables in .bicepparam file. Here are some examples:
+You can define and use variables. [Bicep CLI version 0.21.X or higher](./install.md) is required for using variables in .bicepparam file. Here are some examples:
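As a sketch (the variable and parameter names below are hypothetical), variables declared in the `.bicepparam` file can be reused across parameter values:

```bicep
using './main.bicep'

// Variables are local to the .bicepparam file.
var namePrefix = 'contoso'
var environment = 'dev'

param storageAccountName = '${namePrefix}${environment}sa'
```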
```bicep using './main.bicep'
From Azure CLI, you can pass a parameter file with your Bicep file deployment.
# [Bicep parameters file](#tab/Bicep)
-With Azure CLI version 2.53.0 or later, and Bicep CLI version 0.22.6 or later, you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there is no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch will result in an "Only a .bicep template is allowed with a .bicepparam file" error.
+With Azure CLI version 2.53.0 or later, and [Bicep CLI version 0.22.X or higher](./install.md), you can deploy a Bicep file by utilizing a Bicep parameter file. With the `using` statement within the Bicep parameters file, there is no need to provide the `--template-file` switch when specifying a Bicep parameter file for the `--parameters` switch. Including the `--template-file` switch will result in an "Only a .bicep template is allowed with a .bicepparam file" error.
```azurecli
az deployment group create \
-For more information, see [Deploy resources with Bicep and Azure CLI](./deploy-cli.md#parameters).
+For more information, see [Deploy resources with Bicep and Azure CLI](./deploy-cli.md#parameters).
From Azure PowerShell, pass a local parameters file using the `TemplateParameterFile` parameter.
azure-resource-manager User Defined Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-data-types.md
Title: User-defined types in Bicep
description: Describes how to define and use user-defined data types in Bicep. Previously updated : 09/26/2023 Last updated : 11/03/2023 # User-defined data types in Bicep Learn how to use user-defined data types in Bicep.
-[Bicep version 0.12.1 or newer](./install.md) is required to use this feature.
+[Bicep CLI version 0.12.X or higher](./install.md) is required to use this feature.
## User-defined data type syntax
resource storageAccount 'Microsoft.Storage/storageAccounts@2022-09-01' = {
## Declare tagged union type
-To declare a custom tagged union data type within a Bicep file, you can place a discriminator decorator above a user-defined type declartion. [Bicep version 0.21.1 or newer](./install.md) is required to use this decorator. The syntax is:
+To declare a custom tagged union data type within a Bicep file, you can place a discriminator decorator above a user-defined type declaration. [Bicep CLI version 0.21.X or higher](./install.md) is required to use this decorator. The syntax is:
```bicep @discriminator('<propertyName>')
param serviceConfig ServiceConfig = { type: 'bar', value: true }
output config object = serviceConfig ```
-The parameter value is validated based on the discriminated property value. In the preceeding example, if the *serviceConfig* parameter value is of type *foo*, it undergoes validation using the *FooConfig*type. Likewise, if the parameter value is of type *bar*, validation is performed using the *BarConfig* type, and this pattern continues for other types as well.
+The parameter value is validated based on the discriminated property value. In the preceding example, if the *serviceConfig* parameter value is of type *foo*, it undergoes validation using the *FooConfig* type. Likewise, if the parameter value is of type *bar*, validation is performed using the *BarConfig* type, and this pattern continues for other types as well.
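Putting the pieces together, a minimal self-contained sketch of a tagged union built from the *FooConfig* and *BarConfig* types referenced in this example:

```bicep
type FooConfig = {
  type: 'foo'
  value: int
}

type BarConfig = {
  type: 'bar'
  value: bool
}

// The 'type' property discriminates which member type applies.
@discriminator('type')
type ServiceConfig = FooConfig | BarConfig

param serviceConfig ServiceConfig = { type: 'bar', value: true }

output config object = serviceConfig
```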
## Import types between Bicep files (Preview)
-[Bicep version 0.21.1 or newer](./install.md) is required to use this compile-time import feature. The experimental flag `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features).
+[Bicep CLI version 0.21.X or higher](./install.md) is required to use this compile-time import feature. The experimental flag `compileTimeImports` must be enabled from the [Bicep config file](./bicep-config.md#enable-experimental-features).
Only user-defined data types that bear the `@export()` decorator can be imported to other templates. Currently, this decorator can only be used on `type` statements.
azure-resource-manager User Defined Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/user-defined-functions.md
Title: User-defined functions in Bicep
description: Describes how to define and use user-defined functions in Bicep. Previously updated : 11/02/2023 Last updated : 11/03/2023 # User-defined functions in Bicep (Preview) Within your Bicep file, you can create your own functions. These functions are available for use in your Bicep files. User-defined functions are separate from the [standard Bicep functions](./bicep-functions.md) that are automatically available within your Bicep files. Create your own functions when you have complicated expressions that are used repeatedly in your Bicep files.
-[Bicep version 0.20 or newer](./install.md) is required to use this feature.
+[Bicep CLI version 0.20.X or higher](./install.md) is required to use this feature.
## Enable the preview feature
The outputs from the preceding examples are:
| nameArray | Array | ["John"] | | addNameArray | Array | ["Mary","Bob","John"] |
-With [Bicep version 0.23 or newer](./install.md), you have the flexibility to invoke another user-defined function within a user-defined function. In the preceding example, with the function definition of `sayHelloString`, you can redefine the `sayHelloObject` function as:
+With [Bicep CLI version 0.23.X or higher](./install.md), you have the flexibility to invoke another user-defined function within a user-defined function. In the preceding example, with the function definition of `sayHelloString`, you can redefine the `sayHelloObject` function as:
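For instance, a sketch consistent with the function names in this example, where one user-defined function invokes another:

```bicep
func sayHelloString(name string) string => 'Hi ${name}!'

// Redefined to delegate to sayHelloString.
func sayHelloObject(name string) object => {
  hello: sayHelloString(name)
}
```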
```bicep func sayHelloObject(name string) object => {
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md
Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
azure-resource-manager Deploy Marketplace App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-marketplace-app-quickstart.md
After the managed application deployment is finished, you can verify the resourc
1. The managed resource group shows the resources that were deployed and the deployments that created the resources.
- :::image type="content" source="media/deploy-marketplace-app-quickstart/mrg-apps.png" alt-text="Screenshot of the managed resource group that that highlights the deployments and list of deployed resources.":::
+ :::image type="content" source="media/deploy-marketplace-app-quickstart/mrg-apps.png" alt-text="Screenshot of the managed resource group that highlights the deployments and list of deployed resources.":::
1. To review the publisher's permissions in the managed resource group, select **Access Control (IAM)** > **Role assignments**.
azure-resource-manager Microsoft Common Serviceprincipalselector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/microsoft-common-serviceprincipalselector.md
After you register a new application, use the **Authentication Type** to enter a
### Use existing application
-To use an existing application, choose **Select Existing** and then select **Make selection**. Use the **Select an application** dialog box to search for the application's name. From the results, select the the application and then the **Select** button. After you select an application, the control displays the **Authentication Type** to enter a password or certificate thumbprint.
+To use an existing application, choose **Select Existing** and then select **Make selection**. Use the **Select an application** dialog box to search for the application's name. From the results, select the application and then the **Select** button. After you select an application, the control displays the **Authentication Type** to enter a password or certificate thumbprint.
:::image type="content" source="./media/managed-application-elements/microsoft-common-serviceprincipal-existing.png" alt-text="Screenshot of Microsoft.Common.ServicePrincipalSelector with select existing application option and authentication type displayed.":::
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md
Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
azure-resource-manager Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md
Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
azure-signalr Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md
Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
azure-web-pubsub Concept Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-disaster-recovery.md
For regional disaster recovery, we recommend the following two approaches:
## High available architecture for Web PubSub service There are two typical patterns using Web PubSub service:
-* One is client-server pattern that [clients send events the the server](./quickstarts-event-notifications-from-clients.md) and [server pushes messages to the clients](./quickstarts-push-messages-from-server.md).
+* One is client-server pattern that [clients send events to the server](./quickstarts-event-notifications-from-clients.md) and [server pushes messages to the clients](./quickstarts-push-messages-from-server.md).
* Another is a client-client pattern in which [clients pub/sub messages through the Web PubSub service to other clients](./quickstarts-pubsub-among-clients.md). The following sections describe different ways for these two patterns to implement disaster recovery
backup Backup Azure Backup Server Import Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-server-import-export.md
The information in this section helps you complete the offline-backup workflow s
3. Provide the inputs on the **Use your Own Disk** page.
- ![Screenshot shows how how to add details to use your own disk.](./media/backup-azure-backup-server-import-export/use-your-own-disk.png)
+ ![Screenshot shows how to add details to use your own disk.](./media/backup-azure-backup-server-import-export/use-your-own-disk.png)
The description of the inputs is as follows:
backup Backup Azure Restore Files From Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-restore-files-from-vm.md
If you run the script on a computer with restricted access, ensure there's acces
> [!NOTE] > > If the backed-up VM is Windows, the geo-name will be mentioned in the generated password.<br><br>
-> For eg, if the generated password is *ContosoVM_wcus_GUID*, then then geo-name is wcus and the URL would be: <`https://pod01-rec2.wcus.backup.windowsazure.com`><br><br>
+> For example, if the generated password is *ContosoVM_wcus_GUID*, then the geo-name is wcus and the URL would be: <`https://pod01-rec2.wcus.backup.windowsazure.com`><br><br>
> > > If the backed up VM is Linux, then the script file you downloaded in step 1 [above](#step-1-generate-and-download-script-to-browse-and-recover-files) will have the **geo-name** in the name of the file. Use that **geo-name** to fill in the URL. The downloaded script name will begin with: \'VMname\'\_\'geoname\'_\'GUID\'.<br><br>
backup Backup Mabs Sql Azure Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-sql-azure-stack.md
To create a backup policy to protect SQL Server databases to Azure, follow these
14. Once you review the policy details in the **Summary** screen, select **Create group** to complete the workflow. You can select **Close** and monitor the job progress in Monitoring workspace.
- ![Screenshot shows the the in-progress job state of the Protection Group creation.](./media/backup-azure-backup-sql/pg-summary.png)
+ ![Screenshot shows the in-progress job state of the Protection Group creation.](./media/backup-azure-backup-sql/pg-summary.png)
## Run an on-demand backup
To run an on-demand backup of a SQL Server database, follow these steps:
![Screenshot shows the Protection Group members.](./media/backup-azure-backup-sql/sqlbackup-recoverypoint.png) 2. Right-click the database and select **Create Recovery Point**.
- ![Screenshot shows how to start creating the online online Recovery Point.](./media/backup-azure-backup-sql/sqlbackup-createrp.png)
+ ![Screenshot shows how to start creating the online Recovery Point.](./media/backup-azure-backup-sql/sqlbackup-createrp.png)
3. Choose **Online Protection** in the drop-down menu and select **OK** to start creation of a recovery point in Azure. ![Screenshot shows how to choose the Online Protection option.](./media/backup-azure-backup-sql/sqlbackup-azure.png)
backup Multi User Authorization Concept https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/multi-user-authorization-concept.md
The following table lists the scenarios for creating your Resource Guard and vau
**Usage scenario** | **Protection due to MUA** | **Ease of implementation** | **Notes** | | | | Vault and Resource Guard are **in the same subscription.** </br> The Backup admin doesn't have access to the Resource Guard. | Least isolation between the Backup admin and the Security admin. | Relatively easy to implement since only one subscription is required. | Ensure that resource-level permissions/roles are correctly assigned.
-Vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin doesn't have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that that permissions/ roles are correctly assigned for the resource or the subscription.
+Vault and Resource Guard are **in different subscriptions but the same tenant.** </br> The Backup admin doesn't have access to the Resource Guard or the corresponding subscription. | Medium isolation between the Backup admin and the Security admin. | Relatively medium ease of implementation since two subscriptions (but a single tenant) are required. | Ensure that permissions/ roles are correctly assigned for the resource or the subscription.
Vault and Resource Guard are **in different tenants.** </br> The Backup admin doesn't have access to the Resource Guard, the corresponding subscription, or the corresponding tenant.| Maximum isolation between the Backup admin and the Security admin, hence, maximum security. | Relatively difficult to test since it requires two tenants or directories to test. | Ensure that permissions/roles are correctly assigned for the resource, the subscription, or the directory. ## Next steps
backup Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md
Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
backup Use Restapi Update Vault Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/use-restapi-update-vault-properties.md
This article describes how to update backup related configurations in Azure Reco
Deleting backups of a protected item is a significant operation that has to be monitored. To protect against accidental deletions, Azure Recovery Services vault has a soft-delete capability. This capability allows you to restore deleted backups, if necessary, within a time period after the deletion.
-But there are scenarios in which this capability isn't required. An Azure Recovery Services vault can't be deleted if there are backup items within it, even soft-deleted ones. This may pose a problem if the vault needs to be immediately deleted. For for example: deployment operations often clean up the created resources in the same workflow. A deployment can create a vault, configure backups for an item, do a test restore and then proceed to delete the backup items and the vault. If the vault deletion fails, the entire deployment might fail. Disabling soft-delete is the only way to guarantee immediate deletion.
+But there are scenarios in which this capability isn't required. An Azure Recovery Services vault can't be deleted if there are backup items within it, even soft-deleted ones. This may pose a problem if the vault needs to be immediately deleted. For example: deployment operations often clean up the created resources in the same workflow. A deployment can create a vault, configure backups for an item, do a test restore and then proceed to delete the backup items and the vault. If the vault deletion fails, the entire deployment might fail. Disabling soft-delete is the only way to guarantee immediate deletion.
So you need to carefully choose whether or not to disable soft-delete for a particular vault depending on the scenario. For more information, see the [soft-delete article](backup-azure-security-feature-cloud.md).
batch Batch Sig Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-sig-images.md
Using a Shared Image configured for your scenario can provide several advantages
> > The image can be in a different region as long as it has replicas in the same region as your Batch account.
-If you use a Microsoft Entra application to create a custom image pool with an Azure Compute Gallery image, that application must have been granted an [Azure built-in role](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) that gives it access to the the Shared Image. You can grant this access in the Azure portal by navigating to the Shared Image, selecting **Access control (IAM)** and adding a role assignment for the application.
+If you use a Microsoft Entra application to create a custom image pool with an Azure Compute Gallery image, that application must have been granted an [Azure built-in role](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) that gives it access to the Shared Image. You can grant this access in the Azure portal by navigating to the Shared Image, selecting **Access control (IAM)** and adding a role assignment for the application.
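The portal steps above have a scriptable equivalent. This sketch assumes placeholder IDs and uses the built-in Reader role as one example of a role that grants read access to the image; substitute the role your scenario requires:

```python
# Hypothetical application object ID and gallery image resource ID.
# The image resource ID is the scope the role assignment applies to.
app_object_id = "11111111-1111-1111-1111-111111111111"
image_scope = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/myResourceGroup"
    "/providers/Microsoft.Compute/galleries/myGallery"
    "/images/myImageDefinition"
)

# Azure CLI equivalent of the portal role assignment described above.
command = (
    "az role assignment create "
    f"--assignee {app_object_id} "
    "--role Reader "
    f"--scope {image_scope}"
)
```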
## Prepare a Shared Image
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
The automated cleanup for the working directory will be blocked if you run a ser
- Learn about the [Batch service workflow and primary resources](batch-service-workflow-features.md) such as pools, nodes, jobs, and tasks. - Learn about [default Azure Batch quotas, limits, and constraints, and how to request quota increases](batch-quota-limit.md).-- Learn how to to [detect and avoid failures in pool and node background operations ](batch-pool-node-error-checking.md).
+- Learn how to [detect and avoid failures in pool and node background operations](batch-pool-node-error-checking.md).
batch Nodes And Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/nodes-and-pools.md
If you add a certificate to an existing pool, you must reboot its compute nodes
## Next steps - Learn about [jobs and tasks](jobs-and-tasks.md).-- Learn how to to [detect and avoid failures in pool and node background operations ](batch-pool-node-error-checking.md).
+- Learn how to [detect and avoid failures in pool and node background operations](batch-pool-node-error-checking.md).
batch Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md
Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 10/23/2023 Last updated : 11/03/2023
cdn Monitoring And Access Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/monitoring-and-access-log.md
For more information, see [Azure Monitor metrics](../azure-monitor/essentials/da
| TotalLatency | The total time from the client request received by CDN **until the last response byte sent from CDN to client**. |Endpoint </br> Client country. </br> Client region. </br> HTTP status. </br> HTTP status group. | > [!NOTE]
-> If a request to the the origin timeout, the value for HttpStatusCode is set to **0**.
+> If a request to the origin times out, the value for HttpStatusCode is set to **0**.
**Bytes Hit Ratio = (egress from edge - egress from origin)/egress from edge**
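The formula can be expressed as a small helper; the traffic figures in the example are hypothetical:

```python
def byte_hit_ratio(egress_from_edge: float, egress_from_origin: float) -> float:
    """Fraction of bytes served from the CDN cache rather than the origin."""
    return (egress_from_edge - egress_from_origin) / egress_from_edge

# 100 GB left the edge, of which only 20 GB had to be fetched from the origin:
ratio = byte_hit_ratio(100.0, 20.0)  # 0.8, i.e. an 80% hit ratio
```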
chaos-studio Chaos Studio Tutorial Aad Outage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aad-outage-portal.md
Now you can create your experiment from a pre-filled experiment template. A chao
[![Screenshot that shows the experiment templates screen, with the AAD outage template button highlighted.](images/tutorial-aad-outage-select.png)](images/tutorial-aad-outage-select.png#lightbox) 1. Add a name for your experiment that complies with resource naming guidelines. Select **Next: Permissions**.
- [![Screenshot that shows the experiment basics screen, with the permissions tab button button highlighted.](images/tutorial-aad-outage-basics.png)](images/tutorial-aad-outage-basics.png#lightbox)
+ [![Screenshot that shows the experiment basics screen, with the permissions tab button highlighted.](images/tutorial-aad-outage-basics.png)](images/tutorial-aad-outage-basics.png#lightbox)
1. For your chaos experiment to run successfully, it must have [sufficient permissions on target resources](chaos-studio-permissions-security.md). Select a system-assigned managed identity or a user-assigned managed identity for your experiment. You can choose to enable custom role assignment if you would like Chaos Studio to add the necessary permissions to run (in the form of a custom role) to your experiment's identity. Select **Next: Experiment designer**.
- [![Screenshot that shows the experiment permissions screen, with the experiment designer tab button button highlighted.](images/tutorial-aad-outage-permissions.png)](images/tutorial-aad-outage-permissions.png#lightbox)
+ [![Screenshot that shows the experiment permissions screen, with the experiment designer tab button highlighted.](images/tutorial-aad-outage-permissions.png)](images/tutorial-aad-outage-permissions.png#lightbox)
1. Within the **NSG Security Rule (version 1.1)** fault, select **Edit**. [![Screenshot that shows the experiment designer screen, with the edit button within the NSG fault highlighted.](images/tutorial-aad-outage-edit-fault.png)](images/tutorial-aad-outage-edit-fault.png#lightbox)
chaos-studio Sample Policy Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/sample-policy-targets.md
# Azure Policy samples for adding resources to Azure Chaos Studio
-This article includes sample [Azure Policy](../governance/policy/overview.md) definitions that create [targets and capabilities](chaos-studio-targets-capabilities.md) for a specific resource type. You can automatically add resources to Azure Chaos Studio. First, you [deploy these samples as custom policy definitions](../governance/policy/tutorials/create-custom-policy-definition.md). Then you [assign the policy](../governance/policy/assign-policy-portal.md) to a scope.
+This article includes sample [Azure Policy](../governance/policy/overview.md) definitions that create [targets and capabilities](chaos-studio-targets-capabilities.md) for a specific resource type. You can automatically add resources to Azure Chaos Studio. First, you [deploy these samples as custom policy definitions](../governance/policy/tutorials/create-and-manage.md). Then you [assign the policy](../governance/policy/assign-policy-portal.md) to a scope.
In these samples, we add service-direct targets and capabilities for each [supported resource type](chaos-studio-fault-providers.md) by using [targets and capabilities](chaos-studio-targets-capabilities.md).
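As a rough illustration of the shape of such a definition (not one of the linked samples), a `deployIfNotExists` policy pairs a resource-type condition with a deployment that creates the Chaos Studio target; every value below is illustrative:

```python
import json

# Skeleton of an Azure Policy definition of the kind the samples provide.
# Field values are placeholders, not a complete working policy.
policy_definition = {
    "mode": "Indexed",
    "policyRule": {
        "if": {
            # Match the resource type you want onboarded to Chaos Studio.
            "field": "type",
            "equals": "Microsoft.Compute/virtualMachines",
        },
        "then": {
            "effect": "deployIfNotExists",
            "details": {
                # The embedded deployment would create Microsoft.Chaos/targets
                # (and capability child resources) alongside the matched resource.
                "type": "Microsoft.Chaos/targets",
            },
        },
    },
}

serialized = json.dumps(policy_definition, indent=2)
```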
chaos-studio Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/troubleshooting.md
From the **Experiments** list in the Azure portal, select the experiment name to
This error might happen if you added the agent by using the Azure portal, which has a known issue. Enabling an agent-based target doesn't assign the user-assigned managed identity to the VM or virtual machine scale set.
-To resolve this problem, go to the VM or virtual machine scale set in the Azure portal and go to **Identity**. Open the **User assigned** tab and add your user-assigned identity to the VM. After you're finished, you might need to reboot the VM for the agent to connect.
+To resolve this problem, go to the VM or virtual machine scale set in the Azure portal and go to **Identity**. Open the **User assigned** tab and add your user-assigned identity to the VM. After you're finished, you might need to reboot the VM for the agent to connect.
+
+## Problems when setting up a managed identity
+
+### When I try to add a system-assigned/user-assigned managed identity to my existing experiment, it fails to save.
+
+If you try to add a user-assigned or system-assigned managed identity to an experiment that **already** has a managed identity assigned to it, the experiment fails to deploy. Delete the existing user-assigned or system-assigned managed identity on the experiment **first**, and then add the desired managed identity.
cloud-services-extended-support Sample Create Cloud Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/sample-create-cloud-service.md
# Create a new Azure Cloud Service (extended support)
-These samples cover various ways to to create a new Azure Cloud Service (extended support) deployment.
+These samples cover various ways to create a new Azure Cloud Service (extended support) deployment.
## Create new Cloud Service with single role
cloud-services-extended-support Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/support-help.md
Here are suggestions for where you can get help when developing your Azure Cloud
## Self help troubleshooting
-For common issues and and workarounds, see [Azure Cloud Services troubleshooting documentation](/troubleshoot/azure/cloud-services/welcome-cloud-services) and [Frequently asked questions](faq.yml)
+For common issues and workarounds, see [Azure Cloud Services troubleshooting documentation](/troubleshoot/azure/cloud-services/welcome-cloud-services) and [Frequently asked questions](faq.yml)
## Post a question on Microsoft Q&A
cloud-services Resource Health For Cloud Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/resource-health-for-cloud-services.md
Since Role Instances are basically VMs and the health check for VMs is reused fo
| Provisioning failure | We're sorry, your virtual machine isn't available due to unexpected provisioning problems. The provisioning of your virtual machine has failed due to an unexpected error | | Live Migration | This virtual machine is paused because of a memory-preserving Live Migration operation. The virtual machine typically resumes within 10 seconds. No additional action is required from you at this time | | Live Migration | This virtual machine is paused because of a memory-preserving Live Migration operation. The virtual machine typically resumes within 10 seconds. No additional action is required from you at this time |
-| Remote disk disconnected | We're sorry, your virtual machine is unavailable because because of connectivity loss to the remote disk. We're working to reestablish disk connectivity. No additional action is required from you at this time |
+| Remote disk disconnected | We're sorry, your virtual machine is unavailable because of connectivity loss to the remote disk. We're working to reestablish disk connectivity. No additional action is required from you at this time |
| Azure service issue | Your virtual machine is impacted by Azure service issue | | Network issue | This virtual machine is impacted by a top-of-rack network device | | Unavailable | Your virtual machine is unavailable. We are currently unable to determine the reason for this downtime |
cognitive-services Bing Autosuggest Upgrade Guide V5 To V7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/bing-autosuggest-upgrade-guide-v5-to-v7.md
- Title: Upgrade Bing Autosuggest API v5 to v7-
-description: Identifies the parts of your Bing Autosuggest application that you need to update to use version 7.
------- Previously updated : 02/20/2019--
-# Autosuggest API upgrade guide
--
-This upgrade guide identifies the changes between version 5 and version 7 of the Bing Autosuggest API. Use this guide to help update your application to use version 7.
-
-## Breaking changes
-
-### Endpoints
--- The endpoint's version number changed from v5 to v7. For example, https:\//api.cognitive.microsoft.com/bing/\*\*v7.0**/Suggestions.-
-### Error response objects and error codes
--- All failed requests should now include an `ErrorResponse` object in the response body.--- Added the following fields to the `Error` object.
- - `subCode`&mdash;Partitions the error code into discrete buckets, if possible
- - `moreDetails`&mdash;Additional information about the error described in the `message` field
--- Replaced the v5 error codes with the following possible `code` and `subCode` values.-
-|Code|SubCode|Description
-|-|-|-
-|ServerError|UnexpectedError<br/>ResourceError<br/>NotImplemented|Bing returns ServerError whenever any of the sub-code conditions occur. The response includes these errors if the HTTP status code is 500.
-|InvalidRequest|ParameterMissing<br/>ParameterInvalidValue<br/>HttpNotAllowed<br/>Blocked|Bing returns InvalidRequest whenever any part of the request is not valid. For example, a required parameter is missing or a parameter value is not valid.<br/><br/>If the error is ParameterMissing or ParameterInvalidValue, the HTTP status code is 400.<br/><br/>If the error is HttpNotAllowed, the HTTP status code is 410.
-|RateLimitExceeded||Bing returns RateLimitExceeded whenever you exceed your queries per second (QPS) or queries per month (QPM) quota.<br/><br/>Bing returns HTTP status code 429 if you exceeded QPS and 403 if you exceeded QPM.
-|InvalidAuthorization|AuthorizationMissing<br/>AuthorizationRedundancy|Bing returns InvalidAuthorization when Bing cannot authenticate the caller. For example, the `Ocp-Apim-Subscription-Key` header is missing or the subscription key is not valid.<br/><br/>Redundancy occurs if you specify more than one authentication method.<br/><br/>If the error is InvalidAuthorization, the HTTP status code is 401.
-|InsufficientAuthorization|AuthorizationDisabled<br/>AuthorizationExpired|Bing returns InsufficientAuthorization when the caller does not have permissions to access the resource. This can occur if the subscription key has been disabled or has expired. <br/><br/>If the error is InsufficientAuthorization, the HTTP status code is 403.
--- The following maps the previous error codes to the new codes. If you've taken a dependency on v5 error codes, update your code accordingly.-
-|Version 5 code|Version 7 code.subCode
-|-|-
-|RequestParameterMissing|InvalidRequest.ParameterMissing
-RequestParameterInvalidValue|InvalidRequest.ParameterInvalidValue
-ResourceAccessDenied|InsufficientAuthorization
-ExceededVolume|RateLimitExceeded
-ExceededQpsLimit|RateLimitExceeded
-Disabled|InsufficientAuthorization.AuthorizationDisabled
-UnexpectedError|ServerError.UnexpectedError
-DataSourceErrors|ServerError.ResourceError
-AuthorizationMissing|InvalidAuthorization.AuthorizationMissing
-HttpNotAllowed|InvalidRequest.HttpNotAllowed
-UserAgentMissing|InvalidRequest.ParameterMissing
-NotImplemented|ServerError.NotImplemented
-InvalidAuthorization|InvalidAuthorization
-InvalidAuthorizationMethod|InvalidAuthorization
-MultipleAuthorizationMethod|InvalidAuthorization.AuthorizationRedundancy
-ExpiredAuthorizationToken|InsufficientAuthorization.AuthorizationExpired
-InsufficientScope|InsufficientAuthorization
-Blocked|InvalidRequest.Blocked
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Use and display requirements](../bing-web-search/use-display-requirements.md)
cognitive-services Get Suggestions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/concepts/get-suggestions.md
- Title: Suggesting search terms with the Bing Autosuggest API-
-description: This article discusses the concept of suggesting query terms using the Bing Autosuggest API and the impact of query length on relevance.
------ Previously updated : 02/20/2019---
-# Suggesting query terms
---
-Typically, you'd call the Bing Autosuggest API each time a user types a new character in your application's search box. The completeness of the query string impacts the relevance of the suggested query terms that the API returns. The more complete the query string, the more relevant the list of suggested query terms is. For example, the suggestions that the API may return for `s` are likely to be less relevant than the queries it returns for `sailing dinghies`.
-
-## Example request
-
-The following example shows a request that returns the suggested query strings for *sail*. Remember to URL encode the user's partial query term when you set the [q](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#query) query parameter. For example, if the user entered *sailing les*, set `q` to `sailing+les` or `sailing%20les`.
-
-```http
-GET https://api.cognitive.microsoft.com/bing/v7.0/suggestions?q=sail&mkt=en-us HTTP/1.1
-Ocp-Apim-Subscription-Key: 123456789ABCDE
-X-MSEdge-ClientIP: 999.999.999.999
-X-Search-Location: lat:47.60357;long:-122.3295;re:100
-X-MSEdge-ClientID: <blobFromPriorResponseGoesHere>
-Host: api.cognitive.microsoft.com
-```
-
-The following response contains a list of [SearchAction](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#searchaction) objects that contain the suggested query terms.
-
-```json
-{
- "url" : "https:\/\/www.bing.com\/search?q=sailing+lessons+seattle&FORM=USBAPI",
- "displayText" : "sailing lessons seattle",
- "query" : "sailing lessons seattle",
- "searchKind" : "WebSearch"
-}, ...
-```
-
-## Using suggested query terms
-
-Each suggestion includes a `displayText`, `query`, and `url` field. The `displayText` field contains the suggested query that you use to populate your search box's drop-down list. You must display all suggestions that the response includes, and in the given order.
-
-The following example shows a drop-down search box with suggested query terms from the Bing Autosuggest API.
-
-![Autosuggest drop-down search box list](../media/cognitive-services-bing-autosuggest-api/bing-autosuggest-drop-down-list.PNG)
-
-If the user selects a suggested query from the drop-down list, you'd use the query term in the `query` field to call the [Bing Web Search API](../../bing-web-search/overview.md) and display the results yourself. Or, you could use the URL in the `url` field to send the user to the Bing search results page instead.
-
-## Next steps
-
-* [What is the Bing Autosuggest API?](../get-suggested-search-terms.md)
cognitive-services Sending Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/concepts/sending-requests.md
- Title: "Sending requests to the Bing Autosuggest API"-
-description: The Bing Autosuggest API returns a list of suggested queries based on the partial query string in the search box. Learn more about sending requests.
------- Previously updated : 06/27/2019---
-# Sending requests to the Bing Autosuggest API
--
-If your application sends queries to any of the Bing Search APIs, you can use the Bing Autosuggest API to improve your users' search experience. The Bing Autosuggest API returns a list of suggested queries based on the partial query string in the search box. As characters are entered into a search box in your application, you can display suggestions in a drop-down list. Use this article to learn more about sending requests to this API.
-
-## Bing Autosuggest API Endpoint
-
-The **Bing Autosuggest API** includes one endpoint, which returns a list of suggested queries from a partial search term.
-
-To get suggested queries using the Bing API, send a `GET` request to the following endpoint. Use the headers and URL parameters to define further specifications.
-
-**Endpoint:** Returns search suggestions as JSON results that are relevant to the user's input defined by `?q=""`.
-
-```http
-GET https://api.cognitive.microsoft.com/bing/v7.0/Suggestions
-```
-
-For details about headers, parameters, market codes, response objects, errors, etc., see the [Bing Autosuggest API v7](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference) reference.
-
-The **Bing** APIs support search actions that return results according to their type. All search endpoints return results as JSON response objects.
-All endpoints support queries that return a specific language and/or location by longitude, latitude, and search radius.
-
-For complete information about the parameters supported by each endpoint, see the reference pages for each type.
-For examples of basic requests using the Autosuggest API, see [Autosuggest Quickstarts](/azure/cognitive-services/bing-autosuggest/get-suggested-search-terms).
-
-## Bing Autosuggest API requests
-
-> [!NOTE]
-> * Requests to the Bing Autosuggest API must use the HTTPS protocol.
-
-We recommend that all requests originate from a server. Distributing the key as part of a client application provides more opportunity for malicious third-party access. Additionally, making calls from a server provides a single upgrade point for future updates.
-
-The request must specify the [q](/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#query) query parameter, which contains the user's partial search term. Although it's optional, the request should also specify the [mkt](/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#mkt) query parameter, which identifies the market where you want the results to come from. For a list of optional query parameters, see [Query Parameters](/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#query-parameters). All query parameter values must be URL encoded.
-
-The request must specify the [Ocp-Apim-Subscription-Key](/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#subscriptionkey) header. Although optional, you are encouraged to also specify the following headers:
--- [User-Agent](/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#useragent)-- [X-MSEdge-ClientID](/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#clientid)-- [X-Search-ClientIP](/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#clientip)-- [X-Search-Location](/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#location)-
-The client IP and location headers are important for returning location aware content.
-
-For a list of all request and response headers, see [Headers](/rest/api/cognitiveservices/bing-autosuggest-api-v5-reference#headers).
-
-> [!NOTE]
-> When you call the Bing Autosuggest API from JavaScript, your browser's built-in security features might prevent you from accessing the values of these headers.
-
-To resolve this, you can make the Bing Autosuggest API request through a CORS proxy. The response from such a proxy has an `Access-Control-Expose-Headers` header that filters response headers and makes them available to JavaScript.
-
-It's easy to install a CORS proxy to allow our [tutorial app](../tutorials/autosuggest.md) to access the optional client headers. First, if you don't already have it, [install Node.js](https://nodejs.org/en/download/). Then enter the following command at a command prompt.
-
-```console
-npm install -g cors-proxy-server
-```
-
-Next, change the Bing Autosuggest API endpoint in the HTML file to:
-
-```http
-http://localhost:9090/https://api.cognitive.microsoft.com/bing/v7.0/Suggestions
-```
-
-Finally, start the CORS proxy with the following command:
-
-```console
-cors-proxy-server
-```
-
-Leave the command window open while you use the tutorial app; closing the window stops the proxy. In the expandable HTTP Headers section below the search results, you can now see the `X-MSEdge-ClientID` header (among others) and verify that it is the same for each request.
-
-Requests should include all suggested query parameters and headers.
-
-The following example shows a request that returns the suggested query strings for *sail*.
-
-> ```http
-> GET https://api.cognitive.microsoft.com/bing/v7.0/suggestions?q=sail&mkt=en-us HTTP/1.1
-> Ocp-Apim-Subscription-Key: 123456789ABCDE
-> X-MSEdge-ClientIP: 999.999.999.999
-> X-Search-Location: lat:47.60357;long:-122.3295;re:100
-> X-MSEdge-ClientID: <blobFromPriorResponseGoesHere>
-> Host: api.cognitive.microsoft.com
-> ```
-
-If it's your first time calling any of the Bing APIs, don't include the client ID header. Only include the client ID header if you've previously called a Bing API and Bing returned a client ID for the user and device combination.
-
-The following web suggestion group is a response to the above request. The group contains a list of search query suggestions, with each suggestion including a `displayText`, `query`, and `url` field.
-
-The `displayText` field contains the suggested query that you'd use to populate your search box's drop-down list. You must display all suggestions that the response includes, and in the given order.
-
-If the user selects a query from the drop-down list, you can use it to call one of the [Bing Search APIs](../../bing-web-search/bing-api-comparison.md?bc=%2fen-us%2fazure%2fbread%2ftoc.json&toc=%2fen-us%2fazure%2fcognitive-services%2fbing-autosuggest%2ftoc.json) and display the results yourself, or send the user to the Bing results page using the returned `url` field.
--
-```json
-BingAPIs-TraceId: 76DD2C2549B94F9FB55B4BD6FEB6AC
-X-MSEdge-ClientID: 1C3352B306E669780D58D607B96869
-BingAPIs-Market: en-US
-
-{
- "_type" : "Suggestions",
- "queryContext" : {
- "originalQuery" : "sail"
- },
- "suggestionGroups" : [{
- "name" : "Web",
- "searchSuggestions" : [{
- "url" : "https:\/\/www.bing.com\/search?q=sailing+lessons+seattle&FORM=USBAPI",
- "displayText" : "sailing lessons seattle",
- "query" : "sailing lessons seattle",
- "searchKind" : "WebSearch"
- },
- {
- "url" : "https:\/\/www.bing.com\/search?q=sailor+moon+news&FORM=USBAPI",
- "displayText" : "sailor moon news",
- "query" : "sailor moon news",
- "searchKind" : "WebSearch"
- },
- {
- "url" : "https:\/\/www.bing.com\/search?q=sailor+jack%27s+lincoln+city&FORM=USBAPI",
- "displayText" : "sailor jack's lincoln city",
- "query" : "sailor jack's lincoln city",
- "searchKind" : "WebSearch"
- },
- {
- "url" : "https:\/\/www.bing.com\/search?q=sailing+anarchy&FORM=USBAPI",
- "displayText" : "sailing anarchy",
- "query" : "sailing anarchy",
- "searchKind" : "WebSearch"
- },
- {
- "url" : "https:\/\/www.bing.com\/search?q=sailboats+for+sale&FORM=USBAPI",
- "displayText" : "sailboats for sale",
- "query" : "sailboats for sale",
- "searchKind" : "WebSearch"
- },
- {
- "url" : "https:\/\/www.bing.com\/search?q=sailstn.mylabsplus.com&FORM=USBAPI",
- "displayText" : "sailstn.mylabsplus.com",
- "query" : "sailstn.mylabsplus.com",
- "searchKind" : "WebSearch"
- },
- {
- "url" : "https:\/\/www.bing.com\/search?q=sailusfood&FORM=USBAPI",
- "displayText" : "sailusfood",
- "query" : "sailusfood",
- "searchKind" : "WebSearch"
- },
- {
- "url" : "https:\/\/www.bing.com\/search?q=sailboats+for+sale+seattle&FORM=USBAPI",
- "displayText" : "sailboats for sale seattle",
- "query" : "sailboats for sale seattle",
- "searchKind" : "WebSearch"
- }]
- }]
-}
-```
-
-## Next steps
--- [What is Bing Autosuggest?](../get-suggested-search-terms.md)-- [Bing Autosuggest API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference)-- [Getting suggested search terms from the Bing Autosuggest API](get-suggestions.md)
cognitive-services Get Suggested Search Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/get-suggested-search-terms.md
- Title: What is Bing Autosuggest?-
-description: The Bing Autosuggest API returns a list of suggested queries based on the partial query string in the search box.
------- Previously updated : 12/18/2019--
-# What is Bing Autosuggest?
--
-If your application sends queries to any of the Bing Search APIs, you can use the Bing Autosuggest API to improve your users' search experience. The Bing Autosuggest API returns a list of suggested queries based on the partial query string in the search box. As characters are entered into the search box, you can display suggestions in a drop-down list.
-
-## Bing Autosuggest API features
-
-| Feature | Description |
-|--||
-| [Suggest search terms in real-time](concepts/get-suggestions.md) | Improve your app experience by using the Autosuggest API to display suggested search terms as they're typed. |
-
-## Workflow
-
-The Bing Autosuggest API is a RESTful web service, easy to call from any programming language that can make HTTP requests and parse JSON.
-
-1. Create an [Azure AI services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free.
-2. Send a request to this API each time a user types a new character in your application's search box.
-3. Process the API response by parsing the returned JSON message.
-
-Typically, you'd call this API each time the user types a new character in your application's search box. As more characters are entered, the API will return more relevant suggested search queries. For example, the suggestions the API might return for a single `s` are likely to be less relevant than ones for `sail`.
-
-The following example shows a drop-down search box with suggested query terms from the Bing Autosuggest API.
-
-![Autosuggest drop-down search box list](./media/cognitive-services-bing-autosuggest-api/bing-autosuggest-drop-down-list.PNG)
-
-When a user selects a suggestion from the drop-down list, you can use it to begin searching with one of the Bing Search APIs, or directly go to the Bing search results page.
-
-## Next steps
-
-To get started quickly with your first request, see [Making Your First Query](quickstarts/csharp.md).
-
-Familiarize yourself with the [Bing Autosuggest API v7](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference) reference. The reference contains the list of endpoints, headers, and query parameters that you'd use to request suggested query terms, and the definitions of the response objects.
-
-Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
--
-Learn how to search the web by using the [Bing Web Search API](../bing-web-search/overview.md), and explore the other [Bing Search APIs](../bing-web-search/index.yml).
-
-Be sure to read [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) so you don't break any of the rules about using the search results.
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/language-support.md
- Title: Language support - Bing Autosuggest API-
-description: A list of supported languages and regions for the Bing Autosuggest API.
------- Previously updated : 02/20/2019---
-# Language and region support for the Bing Autosuggest API
--
-The following lists the languages supported by the Bing Autosuggest API.
-
-| Language | Language code |
-|:-- |:-:|
-| Arabic | `ar` |
-| Chinese (People's Republic of China) | `zh-CN` |
-| Chinese (Hong Kong SAR) | `zh-HK` |
-| Chinese (Taiwan) | `zh-TW` |
-| Danish | `da` |
-| Dutch (Belgium) | `nl-BE` |
-| Dutch (Netherlands) | `nl-NL` |
-| English (Australia) | `en-AU` |
-| English (Canada) | `en-CA` |
-| English (India) | `en-IN` |
-| English (Indonesia) | `en-ID` |
-| English (Malaysia) | `en-MY` |
-| English (New Zealand) | `en-NZ` |
-| English (Philippines) | `en-PH` |
-| English (South Africa) | `en-ZA` |
-| English (United Kingdom) | `en-GB` |
-| English (United States) | `en-US` |
-| Finnish | `fi` |
-| French (Belgium) | `fr-BE` |
-| French (Canada) | `fr-CA` |
-| French (France) | `fr-FR` |
-| French (Switzerland) | `fr-CH` |
-| German (Austria) | `de-AT` |
-| German (Germany) | `de-DE` |
-| German (Switzerland) | `de-CH` |
-| Italian | `it` |
-| Japanese | `ja` |
-| Korean | `ko` |
-| Norwegian | `no` |
-| Polish | `pl` |
-| Portuguese (Brazil) | `pt-BR`|
-| Portuguese (Portugal) | `pt-PT`|
-| Russian | `ru` |
-| Spanish (Argentina) | `es-AR` |
-| Spanish (Chile) | `es-CL` |
-| Spanish (Mexico) | `es-MX` |
-| Spanish (Spain) | `es-ES` |
-| Spanish (United States) | `es-US` |
-| Swedish | `sv` |
-| Turkish | `tr` |
-
-## See also
-
-- [Azure AI services documentation](../../ai-services/index.yml)
-- [Azure AI services product information](https://azure.microsoft.com/services/cognitive-services/)
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/client-libraries.md
- Title: 'Quickstart: Use the Bing Autosuggest client library'-
-description: The Autosuggest API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests and get back results.
---
-zone_pivot_groups: programming-languages-set-fifteen
- Previously updated: 04/06/2020
-# Quickstart: Use the Bing Autosuggest client library
-------
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/csharp.md
- Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and C#"-
-description: "Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and C#."
- Previously updated: 05/06/2020
-# Quickstart: Suggest search queries with the Bing Autosuggest REST API and C#
--
-Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple C# application sends a partial search query to the API, and returns suggestions for searches. While this application is written in C#, the API is a RESTful Web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/dotnet/Search/BingAutosuggestv7.cs).
-
-## Prerequisites
-
-* Any edition of [Visual Studio 2017 or later](https://www.visualstudio.com/downloads/).
-* If you're using Linux/macOS, you can run this application using [Mono](https://www.mono-project.com/).
--
-## Create a Visual Studio solution
-
-1. Create a new console solution in Visual Studio. Then, add the following namespaces into the main code file.
-
- ```csharp
- using System;
- using System.Collections.Generic;
- using System.Net.Http;
- using System.Net.Http.Headers;
- using System.Text;
- ```
-
-2. In a new class, create variables for your API host and path, subscription key, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and a partial search query. Use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```csharp
- static string host = "https://api.cognitive.microsoft.com";
- static string path = "/bing/v7.0/Suggestions";
- static string market = "en-US";
- static string key = "your-api-key";
-
- static string query = "sail";
- ```
--
-## Create and send an API request
-
-1. Create a function called `Autosuggest()` to send a request to the API. Create a new `HttpClient()`, and add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
- ```csharp
- async static void Autosuggest()
- {
- HttpClient client = new HttpClient();
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
- //..
- }
- ```
-
-2. In the same function, create a request URI by combining your API host and path. Append your market to the `mkt=` parameter, and your URL-encoded query to the `query=` parameter.
-
-    ```csharp
-    string uri = host + path + "?mkt=" + market + "&query=" + System.Net.WebUtility.UrlEncode(query);
-    ```
-
-3. Send the request to the URI constructed in the previous step, and print the response.
-
- ```csharp
- HttpResponseMessage response = await client.GetAsync(uri);
-
- string contentString = await response.Content.ReadAsStringAsync();
- Console.WriteLine(contentString);
- ```
-
-4. In the main method of your program, call `Autosuggest()`.
-
- ```csharp
- static void Main(string[] args)
- {
- Autosuggest();
- Console.ReadLine();
- }
- ```
-
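The URI built in the steps above is easy to get wrong once a query contains spaces or special characters. As a quick offline sanity check, the same construction can be sketched in Python (illustrative only; `build_uri` is a hypothetical helper, and the host, path, and parameter names mirror the C# snippet above):

```python
from urllib.parse import urlencode

# Values mirroring the C# sample above.
host = "https://api.cognitive.microsoft.com"
path = "/bing/v7.0/Suggestions"
market = "en-US"

def build_uri(query: str) -> str:
    # urlencode escapes the query, so spaces and special characters are safe.
    return host + path + "?" + urlencode({"mkt": market, "query": query})

print(build_uri("sail boats"))
# → https://api.cognitive.microsoft.com/bing/v7.0/Suggestions?mkt=en-US&query=sail+boats
```

Queries that contain reserved characters such as `&` are escaped as well, so they can't break the query string apart.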
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "Suggestions",
- "queryContext": {
- "originalQuery": "sail"
- },
- "suggestionGroups": [
- {
- "name": "Web",
- "searchSuggestions": [
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dgvtP9TS9NwhajSapY2Se6y1eCbP2fq_GiP2n-cxi6OY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailrite%26FORM%3dUSBAPI\u0026p\u003dDevEx,5003.1",
- "displayText": "sailrite",
- "query": "sailrite",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dBTS0G6AakxntIl9rmbDXtk1n6rQpsZZ99aQ7ClE7dTY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsail%2bsand%2bpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5004.1",
- "displayText": "sail sand point",
- "query": "sail sand point",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dc0QOA_j6swCZJy9FxqOwke2KslJE7ZRmMooGClAuCpY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboats%2bfor%2bsale%26FORM%3dUSBAPI\u0026p\u003dDevEx,5005.1",
- "displayText": "sailboats for sale",
- "query": "sailboats for sale",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dmnMdREUH20SepmHQH1zlh9Hy_w7jpOlZFm3KG2R_BoA\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailing%2banarchy%26FORM%3dUSBAPI\u0026p\u003dDevEx,5006.1",
- "displayText": "sailing anarchy",
- "query": "sailing anarchy",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dWLFO-B1GG5qtBGnoU1Bizz02YKkg5fgAQtHwhXn4z8I\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5007.1",
- "displayText": "sailpoint",
- "query": "sailpoint",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dquBMwmKlGwqC5wAU0K7n416plhWcR8zQCi7r-Fw9Y0w\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailflow%26FORM%3dUSBAPI\u0026p\u003dDevEx,5008.1",
- "displayText": "sailflow",
- "query": "sailflow",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003d0udadFl0gCTKCp0QmzQTXS3_y08iO8FpwsoKPHPS6kw\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboatdata%26FORM%3dUSBAPI\u0026p\u003dDevEx,5009.1",
- "displayText": "sailboatdata",
- "query": "sailboatdata",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003deSSt0MRSbl2V0RFPSuVd-gC7fGOT4717pz55EBUgPec\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailor%2b2025%26FORM%3dUSBAPI\u0026p\u003dDevEx,5010.1",
- "displayText": "sailor 2025",
- "query": "sailor 2025",
- "searchKind": "WebSearch"
- }
- ]
- }
- ]
-}
-```
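The suggestions themselves sit under `suggestionGroups[].searchSuggestions[]` in this response. A short Python sketch (illustrative; `extract_suggestions` is a hypothetical helper, and the field names come from the example above) that collects just the `displayText` values:

```python
import json

def extract_suggestions(response_json: str) -> list:
    # Walk suggestionGroups -> searchSuggestions and collect displayText values.
    data = json.loads(response_json)
    return [
        suggestion["displayText"]
        for group in data.get("suggestionGroups", [])
        for suggestion in group.get("searchSuggestions", [])
    ]

# A trimmed-down version of the example response above.
sample = """{
  "_type": "Suggestions",
  "queryContext": {"originalQuery": "sail"},
  "suggestionGroups": [{
    "name": "Web",
    "searchSuggestions": [
      {"displayText": "sailrite", "query": "sailrite", "searchKind": "WebSearch"},
      {"displayText": "sail sand point", "query": "sail sand point", "searchKind": "WebSearch"}
    ]
  }]
}"""

print(extract_suggestions(sample))  # → ['sailrite', 'sail sand point']
```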
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Bing Autosuggest tutorial](../tutorials/autosuggest.md)
-
-## See also
-
-- [What is Bing Autosuggest?](../get-suggested-search-terms.md)
-- [Bing Autosuggest API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference)
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/java.md
- Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and Java"-
-description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and Java.
- Previously updated: 05/06/2020
-# Quickstart: Suggest search queries with the Bing Autosuggest REST API and Java
--
-Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple Java application sends a partial search query to the API, and returns suggestions for searches. While this application is written in Java, the API is a RESTful Web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/java/Search/BingAutosuggestv7.java).
-
-## Prerequisites
-
-* The [Java Development Kit (JDK)](https://www.oracle.com/technetwork/java/javase/downloads/)
-* The [Gson library](https://github.com/google/gson)
--
-## Create and initialize a project
-
-1. Create a new Java project in your favorite IDE or editor, and import the following libraries.
-
- ```java
- import java.io.*;
- import java.net.*;
- import java.util.*;
- import javax.net.ssl.HttpsURLConnection;
- import com.google.gson.Gson;
- import com.google.gson.GsonBuilder;
- import com.google.gson.JsonObject;
- import com.google.gson.JsonParser;
- ```
-
-2. Create variables for your subscription key, the API host and path, your [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and a search query. Use the global endpoint below, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```java
- static String subscriptionKey = "enter key here";
- static String host = "https://api.cognitive.microsoft.com";
- static String path = "/bing/v7.0/Suggestions";
- static String mkt = "en-US";
- static String query = "sail";
- ```
--
-## Format the response
-
-Create a method named `prettify()` to format the response returned from the Bing Autosuggest API. Use the Gson library's `JsonParser` to take in a JSON string and convert it into an object. Then, use `GsonBuilder()` and `toJson()` to create the formatted string.
-
-```java
-// pretty-printer for JSON; uses GSON parser to parse and re-serialize
-public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonObject json = parser.parse(json_text).getAsJsonObject();
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
-}
-```
-
-## Construct and send the search request
-
-1. Create a new method named `get_suggestions()` and perform the following steps:
-
- 1. URL-encode your search query, and then create a parameters string by appending the market code to the `mkt=` parameter and the encoded query to the `q=` parameter.
-
- ```java
-
- public static String get_suggestions () throws Exception {
- String encoded_query = URLEncoder.encode (query, "UTF-8");
- String params = "?mkt=" + mkt + "&q=" + encoded_query;
- //...
- }
- ```
-
- 2. Create a new URL for the request with the API host, path, and parameters that you created in the previous step.
-
- ```java
- //...
- URL url = new URL (host + path + params);
- //...
- ```
-
- 3. Create an `HttpsURLConnection` object, and use `openConnection()` to create a connection. Set the request method to `GET`, and add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
- ```java
- //...
- HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
- connection.setRequestMethod("GET");
- connection.setRequestProperty("Ocp-Apim-Subscription-Key", subscriptionKey);
- connection.setDoOutput(true);
- //...
- ```
-
- 4. Store the API response in a `StringBuilder`. After the response has been captured, close the `InputStreamReader` stream, and return the response.
-
- ```java
- //...
- StringBuilder response = new StringBuilder ();
- BufferedReader in = new BufferedReader(
- new InputStreamReader(connection.getInputStream()));
- String line;
- while ((line = in.readLine()) != null) {
- response.append(line);
- }
- in.close();
-
- return response.toString();
- ```
-
-2. In the main function of your application, call `get_suggestions()`, and print the response by using `prettify()`.
-
- ```java
- public static void main(String[] args) {
- try {
- String response = get_suggestions ();
- System.out.println (prettify (response));
- }
- catch (Exception e) {
- System.out.println (e);
- }
- }
- ```
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "Suggestions",
- "queryContext": {
- "originalQuery": "sail"
- },
- "suggestionGroups": [
- {
- "name": "Web",
- "searchSuggestions": [
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dgvtP9TS9NwhajSapY2Se6y1eCbP2fq_GiP2n-cxi6OY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailrite%26FORM%3dUSBAPI\u0026p\u003dDevEx,5003.1",
- "displayText": "sailrite",
- "query": "sailrite",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dBTS0G6AakxntIl9rmbDXtk1n6rQpsZZ99aQ7ClE7dTY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsail%2bsand%2bpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5004.1",
- "displayText": "sail sand point",
- "query": "sail sand point",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dc0QOA_j6swCZJy9FxqOwke2KslJE7ZRmMooGClAuCpY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboats%2bfor%2bsale%26FORM%3dUSBAPI\u0026p\u003dDevEx,5005.1",
- "displayText": "sailboats for sale",
- "query": "sailboats for sale",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dmnMdREUH20SepmHQH1zlh9Hy_w7jpOlZFm3KG2R_BoA\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailing%2banarchy%26FORM%3dUSBAPI\u0026p\u003dDevEx,5006.1",
- "displayText": "sailing anarchy",
- "query": "sailing anarchy",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dWLFO-B1GG5qtBGnoU1Bizz02YKkg5fgAQtHwhXn4z8I\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5007.1",
- "displayText": "sailpoint",
- "query": "sailpoint",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dquBMwmKlGwqC5wAU0K7n416plhWcR8zQCi7r-Fw9Y0w\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailflow%26FORM%3dUSBAPI\u0026p\u003dDevEx,5008.1",
- "displayText": "sailflow",
- "query": "sailflow",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003d0udadFl0gCTKCp0QmzQTXS3_y08iO8FpwsoKPHPS6kw\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboatdata%26FORM%3dUSBAPI\u0026p\u003dDevEx,5009.1",
- "displayText": "sailboatdata",
- "query": "sailboatdata",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003deSSt0MRSbl2V0RFPSuVd-gC7fGOT4717pz55EBUgPec\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailor%2b2025%26FORM%3dUSBAPI\u0026p\u003dDevEx,5010.1",
- "displayText": "sailor 2025",
- "query": "sailor 2025",
- "searchKind": "WebSearch"
- }
- ]
- }
- ]
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a single-page web app](../tutorials/autosuggest.md)
-
-- [What is Bing Autosuggest?](../get-suggested-search-terms.md)
-- [Bing Autosuggest API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference)
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/nodejs.md
- Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and Node.js"-
-description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and Node.js.
- Previously updated: 05/06/2020
-# Quickstart: Suggest search queries with the Bing Autosuggest REST API and Node.js
--
-Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple Node.js application sends a partial search query to the API, and returns suggestions for searches. While this application is written in JavaScript, the API is a RESTful Web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/nodejs/Search/BingAutosuggestv7.js).
-
-## Prerequisites
-
-* [Node.js 6](https://nodejs.org/en/download/) or later
--
-## Create a new application
-
-1. Create a new JavaScript file in your favorite IDE or editor, enable strict mode, and require the `https` module.
-
- ```javascript
- 'use strict';
-
- let https = require ('https');
- ```
-
-2. Create variables for the API endpoint host and path, your subscription key, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and a search term. Use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```javascript
- // Replace the subscriptionKey string value with your valid subscription key.
- let subscriptionKey = 'enter key here';
-
- let host = 'api.cognitive.microsoft.com';
- let path = '/bing/v7.0/Suggestions';
-
- let mkt = 'en-US';
- let query = 'sail';
- ```
-
-## Construct the search request and query
-
-1. Create a parameters string for your query by appending the market code to the `mkt=` parameter, and your URL-encoded query to the `q=` parameter.
-
-    ```javascript
-    let params = '?mkt=' + mkt + '&q=' + encodeURIComponent(query);
-    ```
-
-2. Create a function called `get_suggestions()`. Use the variables from the last steps to format a search URL for the API request. Your search term must be URL-encoded before being sent to the API.
-
- ```javascript
- let get_suggestions = function () {
- let request_params = {
- method : 'GET',
- hostname : host,
- path : path + params,
- headers : {
- 'Ocp-Apim-Subscription-Key' : subscriptionKey,
- }
- };
- //...
- }
- ```
-
- 1. In the same function, use the `https` module to send your query to the API. `response_handler` is defined in the next section.
-
- ```javascript
- //...
- let req = https.request(request_params, response_handler);
- req.end();
- ```
-
-## Create a search handler
-
-1. Define a function named `response_handler` that takes the HTTP response, `response`, as a parameter.
-Do the following steps within this function:
-
- 1. Define a variable to contain the body of the JSON response.
-
- ```javascript
- let response_handler = function (response) {
- let body = '';
- };
- ```
-
- 2. Append each chunk to the body of the response when the `data` event fires.
-
- ```javascript
- response.on ('data', function (d) {
- body += d;
- });
- ```
-
- 3. When the `end` event is emitted, use `JSON.parse()` and `JSON.stringify()` to print the formatted response.
-
- ```javascript
- response.on ('end', function () {
- let body_ = JSON.parse (body);
- let body__ = JSON.stringify (body_, null, ' ');
- console.log (body__);
- });
- response.on ('error', function (e) {
- console.log ('Error: ' + e.message);
- });
- ```
-
-2. Call `get_suggestions()` to send the request to the Bing Autosuggest API.
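The handler's accumulate-then-parse flow (`data` events append chunks; the `end` event parses the complete body) is independent of Node.js. A minimal offline sketch of the same pattern in Python (illustrative only; the chunk boundaries are arbitrary):

```python
import json

# Simulated 'data' events: the response body arrives in arbitrary chunks.
chunks = ['{"_type": "Sugg', 'estions", "queryContext": ', '{"originalQuery": "sail"}}']

body = ""
for chunk in chunks:        # each 'data' event appends to the body
    body += chunk

result = json.loads(body)   # the 'end' event parses the complete body
print(result["queryContext"]["originalQuery"])  # → sail
```

Parsing only after `end` matters because an individual chunk is rarely valid JSON on its own.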
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "Suggestions",
- "queryContext": {
- "originalQuery": "sail"
- },
- "suggestionGroups": [
- {
- "name": "Web",
- "searchSuggestions": [
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dgvtP9TS9NwhajSapY2Se6y1eCbP2fq_GiP2n-cxi6OY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailrite%26FORM%3dUSBAPI\u0026p\u003dDevEx,5003.1",
- "displayText": "sailrite",
- "query": "sailrite",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dBTS0G6AakxntIl9rmbDXtk1n6rQpsZZ99aQ7ClE7dTY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsail%2bsand%2bpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5004.1",
- "displayText": "sail sand point",
- "query": "sail sand point",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dc0QOA_j6swCZJy9FxqOwke2KslJE7ZRmMooGClAuCpY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboats%2bfor%2bsale%26FORM%3dUSBAPI\u0026p\u003dDevEx,5005.1",
- "displayText": "sailboats for sale",
- "query": "sailboats for sale",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dmnMdREUH20SepmHQH1zlh9Hy_w7jpOlZFm3KG2R_BoA\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailing%2banarchy%26FORM%3dUSBAPI\u0026p\u003dDevEx,5006.1",
- "displayText": "sailing anarchy",
- "query": "sailing anarchy",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dWLFO-B1GG5qtBGnoU1Bizz02YKkg5fgAQtHwhXn4z8I\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5007.1",
- "displayText": "sailpoint",
- "query": "sailpoint",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dquBMwmKlGwqC5wAU0K7n416plhWcR8zQCi7r-Fw9Y0w\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailflow%26FORM%3dUSBAPI\u0026p\u003dDevEx,5008.1",
- "displayText": "sailflow",
- "query": "sailflow",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003d0udadFl0gCTKCp0QmzQTXS3_y08iO8FpwsoKPHPS6kw\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboatdata%26FORM%3dUSBAPI\u0026p\u003dDevEx,5009.1",
- "displayText": "sailboatdata",
- "query": "sailboatdata",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003deSSt0MRSbl2V0RFPSuVd-gC7fGOT4717pz55EBUgPec\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailor%2b2025%26FORM%3dUSBAPI\u0026p\u003dDevEx,5010.1",
- "displayText": "sailor 2025",
- "query": "sailor 2025",
- "searchKind": "WebSearch"
- }
- ]
- }
- ]
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a single-page web app](../tutorials/autosuggest.md)
-
-- [What is Bing Autosuggest?](../get-suggested-search-terms.md)
-- [Bing Autosuggest API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference)
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/php.md
- Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and PHP"-
-description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and PHP.
- Previously updated: 05/06/2020
-# Quickstart: Suggest search queries with the Bing Autosuggest REST API and PHP
--
-Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple PHP application sends a partial search query to the API, and returns suggestions for searches. While this application is written in PHP, the API is a RESTful Web service compatible with most programming languages.
-
-## Prerequisites
-
-* [PHP 5.6.x](https://php.net/downloads.php) or later
--
-## Get Autosuggest results
-
-1. Create a new PHP project in your favorite IDE.
-2. Add the code provided below.
-3. Replace the `subscriptionKey` value with an access key that's valid for your subscription.
-4. Use the global endpoint in the code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-5. Run the program.
-
-```php
-<?php
-
-// NOTE: Be sure to uncomment the following line in your php.ini file.
-// ;extension=php_openssl.dll
-
-// **********************************************
-// *** Update or verify the following values. ***
-// **********************************************
-
-// Replace the subscriptionKey string value with your valid subscription key.
-$subscriptionKey = 'enter key here';
-
-$host = "https://api.cognitive.microsoft.com";
-$path = "/bing/v7.0/Suggestions";
-
-$mkt = "en-US";
-$query = "sail";
-
-function get_suggestions ($host, $path, $key, $mkt, $query) {
-
-    $params = '?mkt=' . $mkt . '&q=' . urlencode($query);
-
- $headers = "Content-type: text/json\r\n" .
- "Ocp-Apim-Subscription-Key: $key\r\n";
-
- // NOTE: Use the key 'http' even if you are making an HTTPS request. See:
- // https://php.net/manual/en/function.stream-context-create.php
- $options = array (
- 'http' => array (
- 'header' => $headers,
- 'method' => 'GET'
- )
- );
- $context = stream_context_create ($options);
- $result = file_get_contents ($host . $path . $params, false, $context);
- return $result;
-}
-
-$result = get_suggestions ($host, $path, $subscriptionKey, $mkt, $query);
-
-echo json_encode (json_decode ($result), JSON_PRETTY_PRINT);
-?>
-```
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "Suggestions",
- "queryContext": {
- "originalQuery": "sail"
- },
- "suggestionGroups": [
- {
- "name": "Web",
- "searchSuggestions": [
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dgvtP9TS9NwhajSapY2Se6y1eCbP2fq_GiP2n-cxi6OY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailrite%26FORM%3dUSBAPI\u0026p\u003dDevEx,5003.1",
- "displayText": "sailrite",
- "query": "sailrite",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dBTS0G6AakxntIl9rmbDXtk1n6rQpsZZ99aQ7ClE7dTY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsail%2bsand%2bpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5004.1",
- "displayText": "sail sand point",
- "query": "sail sand point",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dc0QOA_j6swCZJy9FxqOwke2KslJE7ZRmMooGClAuCpY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboats%2bfor%2bsale%26FORM%3dUSBAPI\u0026p\u003dDevEx,5005.1",
- "displayText": "sailboats for sale",
- "query": "sailboats for sale",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dmnMdREUH20SepmHQH1zlh9Hy_w7jpOlZFm3KG2R_BoA\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailing%2banarchy%26FORM%3dUSBAPI\u0026p\u003dDevEx,5006.1",
- "displayText": "sailing anarchy",
- "query": "sailing anarchy",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dWLFO-B1GG5qtBGnoU1Bizz02YKkg5fgAQtHwhXn4z8I\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5007.1",
- "displayText": "sailpoint",
- "query": "sailpoint",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dquBMwmKlGwqC5wAU0K7n416plhWcR8zQCi7r-Fw9Y0w\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailflow%26FORM%3dUSBAPI\u0026p\u003dDevEx,5008.1",
- "displayText": "sailflow",
- "query": "sailflow",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003d0udadFl0gCTKCp0QmzQTXS3_y08iO8FpwsoKPHPS6kw\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboatdata%26FORM%3dUSBAPI\u0026p\u003dDevEx,5009.1",
- "displayText": "sailboatdata",
- "query": "sailboatdata",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003deSSt0MRSbl2V0RFPSuVd-gC7fGOT4717pz55EBUgPec\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailor%2b2025%26FORM%3dUSBAPI\u0026p\u003dDevEx,5010.1",
- "displayText": "sailor 2025",
- "query": "sailor 2025",
- "searchKind": "WebSearch"
- }
- ]
- }
- ]
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Bing Autosuggest tutorial](../tutorials/autosuggest.md)
-
-## See also
-
-- [What is Bing Autosuggest?](../get-suggested-search-terms.md)
-- [Bing Autosuggest API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference)
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/python.md
- Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and Python"-
-description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and Python.
- Previously updated: 05/06/2020
-# Quickstart: Suggest search queries with the Bing Autosuggest REST API and Python
--
-Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple Python application sends a partial search query to the API, and returns suggestions for searches. While this application is written in Python, the API is a RESTful Web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/python/Search/BingAutosuggestv7.py).
-
-## Prerequisites
-
-* [Python 3.x](https://www.python.org/downloads/)
--
-## Create a new application
-
-1. Create a new Python file in your favorite IDE or editor. Add the following imports:
-
- ```python
- import http.client, urllib.parse, json
- ```
-
-2. Create variables for your API host and path, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and partial search query. Use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```python
- subscriptionKey = 'enter key here'
- host = 'api.cognitive.microsoft.com'
- path = '/bing/v7.0/Suggestions'
- mkt = 'en-US'
- query = 'sail'
- ```
-
-3. Create a parameters string by appending your market code to the `mkt=` parameter, and appending your query to the `q=` parameter.
-
- ```python
- params = '?mkt=' + mkt + '&q=' + query
- ```
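   The raw string concatenation above works for simple queries. If the query can contain spaces or other reserved characters, `urllib.parse.urlencode` (from the standard library already imported in step 1) builds an equivalent, safely percent-encoded parameter string:

   ```python
   import urllib.parse

   # The mkt and q values are percent-encoded for you, so queries that
   # contain spaces or other reserved characters stay valid.
   params = '?' + urllib.parse.urlencode({'mkt': 'en-US', 'q': 'sail boats'})
   print(params)  # ?mkt=en-US&q=sail+boats
   ```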
-
-## Create and send an API request
-
-1. Add your subscription key to an `Ocp-Apim-Subscription-Key` header.
-
- ```python
- headers = {'Ocp-Apim-Subscription-Key': subscriptionKey}
- ```
-
-2. In a function named `get_suggestions()`, connect to the API using `HTTPSConnection()`, send the `GET` request containing your request parameters, and return the response body.
-
-    ```python
-    def get_suggestions():
-        conn = http.client.HTTPSConnection(host)
-        conn.request("GET", path + params, None, headers)
-        response = conn.getresponse()
-        return response.read()
-    ```
-
-3. Get and print the JSON response.
-
- ```python
-    result = get_suggestions()
-    print(json.dumps(json.loads(result), indent=4))
- ```
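For reference, the snippets above assemble into the following single script. This is a sketch: `'enter key here'` is a placeholder that must be replaced with your own subscription key before the commented-out lines are run.

```python
import http.client, urllib.parse, json

subscriptionKey = 'enter key here'  # placeholder; use your real key
host = 'api.cognitive.microsoft.com'
path = '/bing/v7.0/Suggestions'
mkt = 'en-US'
query = 'sail'
params = '?mkt=' + mkt + '&q=' + query
headers = {'Ocp-Apim-Subscription-Key': subscriptionKey}

def get_suggestions():
    # Connect, send the GET request, and return the raw JSON body.
    conn = http.client.HTTPSConnection(host)
    conn.request("GET", path + params, None, headers)
    response = conn.getresponse()
    return response.read()

# Uncomment after setting a valid key:
# result = get_suggestions()
# print(json.dumps(json.loads(result), indent=4))
```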
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "Suggestions",
- "queryContext": {
- "originalQuery": "sail"
- },
- "suggestionGroups": [
- {
- "name": "Web",
- "searchSuggestions": [
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dgvtP9TS9NwhajSapY2Se6y1eCbP2fq_GiP2n-cxi6OY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailrite%26FORM%3dUSBAPI\u0026p\u003dDevEx,5003.1",
- "displayText": "sailrite",
- "query": "sailrite",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dBTS0G6AakxntIl9rmbDXtk1n6rQpsZZ99aQ7ClE7dTY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsail%2bsand%2bpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5004.1",
- "displayText": "sail sand point",
- "query": "sail sand point",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dc0QOA_j6swCZJy9FxqOwke2KslJE7ZRmMooGClAuCpY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboats%2bfor%2bsale%26FORM%3dUSBAPI\u0026p\u003dDevEx,5005.1",
- "displayText": "sailboats for sale",
- "query": "sailboats for sale",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dmnMdREUH20SepmHQH1zlh9Hy_w7jpOlZFm3KG2R_BoA\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailing%2banarchy%26FORM%3dUSBAPI\u0026p\u003dDevEx,5006.1",
- "displayText": "sailing anarchy",
- "query": "sailing anarchy",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dWLFO-B1GG5qtBGnoU1Bizz02YKkg5fgAQtHwhXn4z8I\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5007.1",
- "displayText": "sailpoint",
- "query": "sailpoint",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dquBMwmKlGwqC5wAU0K7n416plhWcR8zQCi7r-Fw9Y0w\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailflow%26FORM%3dUSBAPI\u0026p\u003dDevEx,5008.1",
- "displayText": "sailflow",
- "query": "sailflow",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003d0udadFl0gCTKCp0QmzQTXS3_y08iO8FpwsoKPHPS6kw\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboatdata%26FORM%3dUSBAPI\u0026p\u003dDevEx,5009.1",
- "displayText": "sailboatdata",
- "query": "sailboatdata",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003deSSt0MRSbl2V0RFPSuVd-gC7fGOT4717pz55EBUgPec\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailor%2b2025%26FORM%3dUSBAPI\u0026p\u003dDevEx,5010.1",
- "displayText": "sailor 2025",
- "query": "sailor 2025",
- "searchKind": "WebSearch"
- }
- ]
- }
- ]
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a single-page web app](../tutorials/autosuggest.md)
-
-## See also
-- [What is Bing Autosuggest?](../get-suggested-search-terms.md)
-- [Bing Autosuggest API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference)
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/quickstarts/ruby.md
- Title: "Quickstart: Suggest search queries with the Bing Autosuggest REST API and Ruby"
-description: Learn how to quickly start suggesting search terms in real time with the Bing Autosuggest API and Ruby.
- Previously updated: 05/06/2020
-# Quickstart: Suggest search queries with the Bing Autosuggest REST API and Ruby
--
-Follow this quickstart to learn how to make calls to the Bing Autosuggest API and read the JSON response. This simple Ruby application sends a partial search query to the API, and returns suggestions for searches. While this application is written in Ruby, the API is a RESTful Web service compatible with most programming languages.
--
-## Prerequisites
-
-* [Ruby 2.4](https://www.ruby-lang.org/en/downloads/) or later.
--
-## Create a new application
-
-1. Create a new Ruby file in your favorite IDE or editor. Add the following requirements:
-
- ```ruby
- require 'net/https'
- require 'uri'
- require 'json'
- ```
-
-2. Create variables for your API host and path, [market code](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference#market-codes), and partial search query. Use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```ruby
- subscriptionKey = 'enter your key here'
- host = 'https://api.cognitive.microsoft.com'
- path = '/bing/v7.0/Suggestions'
- mkt = 'en-US'
- query = 'sail'
- ```
-
-3. Create a parameters string by appending your market code to the `mkt=` parameter, and appending your query to the `q=` parameter. Then, construct your request URI by combining the API host, path, and the parameters string.
-
- ```ruby
- params = '?mkt=' + mkt + '&q=' + query
-    uri = URI(host + path + params)
- ```
-
-## Create and send an API request
-
-1. Create a request with your URI, and add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
- ```ruby
- request = Net::HTTP::Get.new(uri)
- request['Ocp-Apim-Subscription-Key'] = subscriptionKey
- ```
-
-2. Send the request, and store the response.
-
- ```ruby
- response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
-        http.request(request)
- end
- ```
-
-3. Print the JSON response.
-
- ```ruby
-    puts JSON.pretty_generate(JSON.parse(response.body))
- ```
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "Suggestions",
- "queryContext": {
- "originalQuery": "sail"
- },
- "suggestionGroups": [
- {
- "name": "Web",
- "searchSuggestions": [
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dgvtP9TS9NwhajSapY2Se6y1eCbP2fq_GiP2n-cxi6OY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailrite%26FORM%3dUSBAPI\u0026p\u003dDevEx,5003.1",
- "displayText": "sailrite",
- "query": "sailrite",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dBTS0G6AakxntIl9rmbDXtk1n6rQpsZZ99aQ7ClE7dTY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsail%2bsand%2bpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5004.1",
- "displayText": "sail sand point",
- "query": "sail sand point",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dc0QOA_j6swCZJy9FxqOwke2KslJE7ZRmMooGClAuCpY\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboats%2bfor%2bsale%26FORM%3dUSBAPI\u0026p\u003dDevEx,5005.1",
- "displayText": "sailboats for sale",
- "query": "sailboats for sale",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dmnMdREUH20SepmHQH1zlh9Hy_w7jpOlZFm3KG2R_BoA\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailing%2banarchy%26FORM%3dUSBAPI\u0026p\u003dDevEx,5006.1",
- "displayText": "sailing anarchy",
- "query": "sailing anarchy",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dWLFO-B1GG5qtBGnoU1Bizz02YKkg5fgAQtHwhXn4z8I\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailpoint%26FORM%3dUSBAPI\u0026p\u003dDevEx,5007.1",
- "displayText": "sailpoint",
- "query": "sailpoint",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003dquBMwmKlGwqC5wAU0K7n416plhWcR8zQCi7r-Fw9Y0w\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailflow%26FORM%3dUSBAPI\u0026p\u003dDevEx,5008.1",
- "displayText": "sailflow",
- "query": "sailflow",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003d0udadFl0gCTKCp0QmzQTXS3_y08iO8FpwsoKPHPS6kw\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboatdata%26FORM%3dUSBAPI\u0026p\u003dDevEx,5009.1",
- "displayText": "sailboatdata",
- "query": "sailboatdata",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG\u003d2ACC4FE8B02F4AACB9182A6502B0E556\u0026CID\u003d1D546424A4CB64AF2D386F26A5CD6583\u0026rd\u003d1\u0026h\u003deSSt0MRSbl2V0RFPSuVd-gC7fGOT4717pz55EBUgPec\u0026v\u003d1\u0026r\u003dhttps%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailor%2b2025%26FORM%3dUSBAPI\u0026p\u003dDevEx,5010.1",
- "displayText": "sailor 2025",
- "query": "sailor 2025",
- "searchKind": "WebSearch"
- }
- ]
- }
- ]
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a single-page web app](../tutorials/autosuggest.md)
-
-## See also
-- [What is Bing Autosuggest?](../get-suggested-search-terms.md)
-- [Bing Autosuggest API v7 reference](/rest/api/cognitiveservices-bingsearch/bing-autosuggest-api-v7-reference)
cognitive-services Autosuggest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Autosuggest/tutorials/autosuggest.md
- Title: "Tutorial: Get search suggestions using the Bing Autosuggest API"
-description: In this tutorial, you build a web page that lets users query the Bing Autosuggest API and displays the query results.
- Previously updated: 03/05/2019
-# Tutorial: Get search suggestions on a web page
--
-In this tutorial, we'll build a Web page that allows users to query the Bing Autosuggest API.
-
-This tutorial shows you how to:
-
-> [!div class="checklist"]
-> - Make a simple query to the Bing Autosuggest API
-> - Display query results
-
-## Prerequisites
-
-To follow along with the tutorial, you need a subscription key for the Bing Autosuggest API. If you don't have one, [create a Bing Autosuggest resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesBingAutosuggest-v7) in the Azure portal.
-
-## Create a new Web page
-
-Open a text editor. Create a new file named, for example, autosuggest.html.
-
-## HTML header
-
-Add the HTML header information and begin the script section as follows.
-
-```html
-<!DOCTYPE html>
-<html>
-<head>
- <meta charset="UTF-8">
- <title>Bing Autosuggest</title>
-
-<style type="text/css">
- html, body, div, p, h1, h2 {font-family: Verdana, "Lucida Sans", sans-serif;}
-
- html, body, div, p {font-weight: normal;}
- h1, h2 {font-weight: bold;}
- sup {font-weight: normal;}
-
- html, body, div, p {font-size: 12px;}
- h1 {font-size: 20px;}
- h2 {font-size: 16px;}
- h1, h2 {clear: left;}
-
-    img#logo {float: right;}
-</style>
-
-<script type="text/javascript">
-```
-
-## getSubscriptionKey function
-
-The getSubscriptionKey function returns the Bing Autosuggest API key. It retrieves the key from
-local storage or a cookie, or prompts the user for it if needed.
-
-Begin the getSubscriptionKey function and declare the cookie name as follows.
-
-```html
-getSubscriptionKey = function() {
-
- var COOKIE = "bing-autosuggest-api-key"; // name used to store API key in key/value storage
-```
-
-The findCookie helper function returns the value of the specified cookie; if the cookie is not
-found, it returns an empty string.
-
-```html
- function findCookie(name) {
- var cookies = document.cookie.split(";");
- for (var i = 0; i < cookies.length; i++) {
- var keyvalue = cookies[i].split("=");
- if (keyvalue[0].trim() === name) {
- return keyvalue[1];
- }
- }
- return "";
- }
-```
-
-The getSubscriptionKeyCookie helper function looks for the Bing Autosuggest API key in a cookie.
-If the cookie is missing or doesn't contain a valid key, it prompts the user for the key, stores it in a cookie, and returns the key value.
-
-```html
- function getSubscriptionKeyCookie() {
- var key = findCookie(COOKIE);
- while (key.length !== 32) {
- key = prompt("Enter Bing Autosuggest API subscription key:", "").trim();
- var expiry = new Date();
- expiry.setFullYear(expiry.getFullYear() + 2);
- document.cookie = COOKIE + "=" + key.trim() + "; expires=" + expiry.toUTCString();
- }
- return key;
- }
-```
-
-The getSubscriptionKeyLocalStorage helper function first tries to retrieve the Bing Autosuggest
-API key from local storage. If the key is not found, it prompts the user for the key value,
-stores it in local storage, and then returns it.
-
-```html
- function getSubscriptionKeyLocalStorage() {
- var key = localStorage.getItem(COOKIE) || "";
- while (key.length !== 32)
- key = prompt("Enter Bing Autosuggest API subscription key:", "").trim();
- localStorage.setItem(COOKIE, key)
- return key;
- }
-```
-
-The getSubscriptionKey helper function takes one parameter, **invalidate**. If **invalidate** is
-**true**, getSubscriptionKey deletes the cookie that contains the Bing Autosuggest API key. If
-**invalidate** is **false**, getSubscriptionKey returns the value of the Bing Autosuggest API key.
-
-```html
- function getSubscriptionKey(invalidate) {
- if (invalidate) {
- try {
- localStorage.removeItem(COOKIE);
- } catch (e) {
- document.cookie = COOKIE + "=";
- }
- } else {
- try {
- return getSubscriptionKeyLocalStorage();
- } catch (e) {
- return getSubscriptionKeyCookie();
- }
- }
- }
-```
-
-Return the getSubscriptionKey helper function as the result of the outer getSubscriptionKey
-function. Close the definition of the outer getSubscriptionKey function.
-
-```html
- return getSubscriptionKey;
-
-}();
-```
-
-## Helper functions
-
-The pre helper function escapes the HTML special characters in the specified text and wraps the
-result in a [pre](https://www.w3schools.com/tags/tag_pre.asp) HTML tag.
-
-```html
-function pre(text) {
- return "<pre>" + text.replace(/&/g, "&amp;").replace(/</g, "&lt;") + "</pre>"
-}
-```
-
-The renderSearchResults function displays the specified results from the Bing Autosuggest API, using JSON pretty printing.
-
-```html
-function renderSearchResults(results) {
- document.getElementById("results").innerHTML = pre(JSON.stringify(results, null, 2));
-}
-```
-
-The renderErrorMessage function displays the specified error message and error code.
-
-```html
-function renderErrorMessage(message, code) {
- if (code)
- document.getElementById("results").innerHTML = "<pre>Status " + code + ": " + message + "</pre>";
- else
- document.getElementById("results").innerHTML = "<pre>" + message + "</pre>";
-}
-```
-
-## bingAutosuggest function
-
-The bingAutosuggest function is called each time the user enters text in the HTML form field.
-It takes two parameters: the contents of the HTML form field, and the Bing Autosuggest API key.
-
-```html
-function bingAutosuggest(query, key) {
-```
-
-Specify the Bing Autosuggest API endpoint and declare an XMLHttpRequest object, which we will
-use to send requests. You can use the global endpoint below, or the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
-```html
- var endpoint = "https://api.cognitive.microsoft.com/bing/v7.0/Suggestions";
-
- var request = new XMLHttpRequest();
-
- try {
- request.open("GET", endpoint + "?q=" + encodeURIComponent(query));
- }
- catch (e) {
- renderErrorMessage("Bad request");
- return false;
- }
-```
-
-Set the **Ocp-Apim-Subscription-Key** header to the value of the Bing Autosuggest API key.
-
-```html
- request.setRequestHeader("Ocp-Apim-Subscription-Key", key);
-```
-
-Handle the response from the endpoint. If the status is 200 (OK), display the results; otherwise,
-display the error information.
-
-```html
- request.addEventListener("load", function() {
- if (this.status === 200) {
- renderSearchResults(JSON.parse(this.responseText));
- }
- else {
- if (this.status === 401) getSubscriptionKey(true);
- renderErrorMessage(this.statusText, this.status);
- }
- });
-```
-
-Also handle possible error events from the XMLHttpRequest object.
-
-```html
- request.addEventListener("error", function() {
- renderErrorMessage("Network error");
- });
-
- request.addEventListener("abort", function() {
- renderErrorMessage("Request aborted");
- });
-```
-
-Send the request. Close the bingAutosuggest function, the **script** tag, and the **head** tag.
-
-```html
- request.send();
- return false;
-}
-</script>
-
-</head>
-```
-
-## HTML body
-
-When the Web page loads, make sure we have the Bing Autosuggest API key, prompting the user for it if needed.
-
-```html
-<body onload="document.forms.bing.query.focus(); getSubscriptionKey();">
-```
-
-Display the Bing logo.
-
-```html
-<img id="logo" align=base src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAHgAAAAyCAIAAAAYxYiPAAAAA3NCSVQICAjb4U/gAAARMElEQVR42u2bCVRUV5rHi8VxaeNuOumYTs706aTTZrp7TqbTk5g+9kn3OZN0pjudpZM5SfdJzEzPyZmO1gbIJhmNmijy6hUFsisCgsqigoCt7IoKgoDgUgXILntR+/aWzHfvfQUFFEURsU8cKe/hFFL16r3f++53/9//uyXSWUwjZgPDshzHcy4PnuMXHvP4EJ1qufpPyRHby3Iv93XqbDY7y7IC9QU48wr6RMtVEb1NpJAvoeQvpVF7L5c0jQ6ZHAwJcH6B+HyBzm6pEymkIlomouUiWiqiJCvpwDdOxCdfr+nV6x0Mwy+gnqeIJqAxa3iikJDhEyX5fmx4eZcGJ+yFxz2DPg6pQwA9eQBuSnJC3bCQPe4/6ChxjqbxAVQgnHM8OKBzW5s4lucfsOSxAHoWPh4eggRy/ubprQzL6a1Wo83KfZuWl5lBU39v0CDeQcDbGQa0PB7jT4RfHawDJD562bTzERiznI1l4xurX0yNfCVdcUbTAtAXQE+PSnbEYgkoyfmkOGNL8dEtxZkwPhFGFjz/tCR7b+35su5WrcXCuq1gOa5ZO7Q6eruIBuEk/WH8zj6LaQH0dNB8t8X03dgIqJ6cQyainENBhmSJQvxi2v4j12tMqIydFN3wy8XuO0sOSNEVUZI1ypA23cgCaDegewTQAlYfGNTEQCWVQkrO1l8h+eu5E2M2m+u5AfRBq+Xf0unFlHSxUv5BQZqRcSyAdg/60dgd+NPFf8hPiaotPQCjpnR/bWnExcI/5h96KmmXHyqsUGbwo+S7Lp2zu0Y0immuR6/NbLqSc7NhxGb59qyGXoMm6/59Bt0rgEYcY+svsOz4IscxHJhdXK/REFRZsISENiX9fkx4q0E3nqnRKxFrbIux5I3fnhL8Rp038o77u2iluxbjo7Fh+HwkqmvVnBt1wVoZ9rPibB8KQCPc6Tfr3cmQb6HX4QH0gW0ENATIHe2gwW5lp4rb+wZaKVE2uAWNgraqp2OJkqRsyb7qc+OgJ+tuMhG5mWS6kGsEhc4730TeJ/zXN1X9bh4zg4bhAlpSfPS149Gqa1U3RgeMdlCraCqji55f0GZIHeEkoqMbqqdXd/j3r2/ptd+JDhQpUbLec6GYnQyaQY46KlsQLpfcgZx2koI4IScRSQ6vtzIM1DhjVovJbnOgtCOkHo+qH+t+JPAdAERvMessZrPdzuBqYNLxcQ3lFWh4Y2mnelmU2EcpWR8T+ubJ5JTmq61jWjPjmF683V/QuLRuHBlcCuKPkvlFSVKba3ERw5HbAJjKutU5rU25msbmgT7X0zE5HPmtzdmaxhx1Y59eR25Jl24sqeHynwozXj2m2pRJv5EXF1p++lJfp4VhZpy1+H/hzzqrtayrNbQ8/628xFcyqV8di34vL2XfxfMtw/1WtEywl3o7cjXXc2431fZ2zgI6D0CjIzN6u+Pl1AOiaCJRpb5Rkqfid/65MCNPfb3PqIeIwPGN/t1X0CwSFmx6S70f0nmyNcqgOu0AClyeJbcB5N4v0ykQLT6UJLAkx/XG95j0j0YH+dAS36itJ243WR3M0VsNG5N2+0fB2itGKzC6amQRr1WGhFadGXWmymmzioPbWdvf87vchOWwTlBEO4iJePc/INkQu2NfXaXWbn8//7Nsr17X0N9T1aWBErSkSwNlt2Z0SG+DpOCm8fJ/b7k8gBQkHh4AAAAASUVORK5CYII=">
-```
-
-Create an HTML form with a text field. Handle the `oninput` event and call the `bingAutosuggest()`
-function, passing the contents of the text field and the Bing Autosuggest API key.
-
-```html
-<form name="bing" oninput="return bingAutosuggest(this.query.value, getSubscriptionKey())">
- <h2>Autosuggest</h2>
- <input type="text" name="query" size="80" placeholder="Autosuggest" autocomplete=off>
-</form>
-```
-
-Add the HTML **div** tag that we use to display the results. The JavaScript we defined
-previously refers to this **div** tag.
-
-```html
-<h2>Results</h2>
-<div id="results">
-<p>None yet.</p>
-
-</div>
-
-</body>
-</html>
-```
-
-Save the file.
-
-## Display results
-
-Open the Web page in your browser. At the prompt, enter your Bing Autosuggest API subscription key. Then enter a query (for example, "sail") in the **Autosuggest** text box. As you type, the Web page automatically updates to display the Autosuggest results.
-
-```json
-{
- "_type": "Suggestions",
- "queryContext": {
- "originalQuery": "sail"
- },
- "suggestionGroups": [
- {
- "name": "Web",
- "searchSuggestions": [
- {
- "url": "https://www.bing.com/cr?IG=30C49D910FAE478288D54A8DBC5D66F1&CID=122B759B00966D02199C7E9001906C30&rd=1&h=vheQSvKZylM3dlX_B9bQ8-hQEsEJo8zDD2y7H1nsBjE&v=1&r=https%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailor%2bbrinkley%2bcook%26FORM%3dUSBAPI&p=DevEx,5003.1",
- "displayText": "sailor brinkley cook",
- "query": "sailor brinkley cook",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG=30C49D910FAE478288D54A8DBC5D66F1&CID=122B759B00966D02199C7E9001906C30&rd=1&h=EStLqAfxGCa44Ur3jEMXBv-Qp-lXUSFJbkBfnUdKKDg&v=1&r=https%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailor%2bbrinkley%26FORM%3dUSBAPI&p=DevEx,5004.1",
- "displayText": "sailor brinkley",
- "query": "sailor brinkley",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG=30C49D910FAE478288D54A8DBC5D66F1&CID=122B759B00966D02199C7E9001906C30&rd=1&h=gvtP9TS9NwhajSapY2Se6y1eCbP2fq_GiP2n-cxi6OY&v=1&r=https%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailrite%26FORM%3dUSBAPI&p=DevEx,5005.1",
- "displayText": "sailrite",
- "query": "sailrite",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG=30C49D910FAE478288D54A8DBC5D66F1&CID=122B759B00966D02199C7E9001906C30&rd=1&h=c0QOA_j6swCZJy9FxqOwke2KslJE7ZRmMooGClAuCpY&v=1&r=https%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboats%2bfor%2bsale%26FORM%3dUSBAPI&p=DevEx,5006.1",
- "displayText": "sailboats for sale",
- "query": "sailboats for sale",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG=30C49D910FAE478288D54A8DBC5D66F1&CID=122B759B00966D02199C7E9001906C30&rd=1&h=mnMdREUH20SepmHQH1zlh9Hy_w7jpOlZFm3KG2R_BoA&v=1&r=https%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailing%2banarchy%26FORM%3dUSBAPI&p=DevEx,5007.1",
- "displayText": "sailing anarchy",
- "query": "sailing anarchy",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG=30C49D910FAE478288D54A8DBC5D66F1&CID=122B759B00966D02199C7E9001906C30&rd=1&h=0udadFl0gCTKCp0QmzQTXS3_y08iO8FpwsoKPHPS6kw&v=1&r=https%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailboatdata%26FORM%3dUSBAPI&p=DevEx,5008.1",
- "displayText": "sailboatdata",
- "query": "sailboatdata",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG=30C49D910FAE478288D54A8DBC5D66F1&CID=122B759B00966D02199C7E9001906C30&rd=1&h=BTS0G6AakxntIl9rmbDXtk1n6rQpsZZ99aQ7ClE7dTY&v=1&r=https%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsail%2bsand%2bpoint%26FORM%3dUSBAPI&p=DevEx,5009.1",
- "displayText": "sail sand point",
- "query": "sail sand point",
- "searchKind": "WebSearch"
- },
- {
- "url": "https://www.bing.com/cr?IG=30C49D910FAE478288D54A8DBC5D66F1&CID=122B759B00966D02199C7E9001906C30&rd=1&h=quBMwmKlGwqC5wAU0K7n416plhWcR8zQCi7r-Fw9Y0w&v=1&r=https%3a%2f%2fwww.bing.com%2fsearch%3fq%3dsailflow%26FORM%3dUSBAPI&p=DevEx,5010.1",
- "displayText": "sailflow",
- "query": "sailflow",
- "searchKind": "WebSearch"
- }
- ]
- }
- ]
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Use and display requirements](../../bing-web-search/use-display-requirements.md)
cognitive-services Call Endpoint Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-csharp.md
- Title: "Quickstart: Call your Bing Custom Search endpoint using C# | Microsoft Docs"
-description: Use this quickstart to begin requesting search results from your Bing Custom Search instance in C#.
- Previously updated: 05/08/2020
-# Quickstart: Call your Bing Custom Search endpoint using C#
--
-Use this quickstart to learn how to request search results from your Bing Custom Search instance. Although this application is written in C#, the Bing Custom Search API is a RESTful web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/dotnet/Search/BingCustomSearchv7.cs).
-
-## Prerequisites
-- A Bing Custom Search instance. For more information, see [Quickstart: Create your first Bing Custom Search instance](quick-start.md).
-- [Microsoft .NET Core](https://dotnet.microsoft.com/download).
-- Any edition of [Visual Studio 2019 or later](https://www.visualstudio.com/downloads/).
-- If you're using Linux/macOS, this application can be run using [Mono](https://www.mono-project.com/).
-- The [Bing Custom Search](https://www.nuget.org/packages/Microsoft.Azure.CognitiveServices.Search.CustomSearch/2.0.0) NuGet package.
- To install this package in Visual Studio:
- 1. Right-click your project in **Solution Explorer**, and then select **Manage NuGet Packages**.
- 2. Search for and select *Microsoft.Azure.CognitiveServices.Search.CustomSearch*, and then install the package.
-
- When you install the Bing Custom Search NuGet package, Visual Studio also installs the following packages:
- - **Microsoft.Rest.ClientRuntime**
- - **Microsoft.Rest.ClientRuntime.Azure**
- - **Newtonsoft.Json**
---
-## Create and initialize the application
-
-1. Create a new C# console application in Visual Studio. Then, add the following using directives to your project:
-
- ```csharp
- using System;
- using System.Net.Http;
- using System.Web;
- using Newtonsoft.Json;
- ```
-
-2. Create the following classes to store the search results returned by the Bing Custom Search API:
-
- ```csharp
- public class BingCustomSearchResponse {
- public string _type{ get; set; }
- public WebPages webPages { get; set; }
- }
-
- public class WebPages {
- public string webSearchUrl { get; set; }
- public int totalEstimatedMatches { get; set; }
- public WebPage[] value { get; set; }
- }
-
- public class WebPage {
- public string name { get; set; }
- public string url { get; set; }
- public string displayUrl { get; set; }
- public string snippet { get; set; }
- public DateTime dateLastCrawled { get; set; }
- public string cachedPageUrl { get; set; }
- }
- ```
-
-3. In the main method of your project, create the following variables for your Bing Custom Search API subscription key, search instance's custom configuration ID, and search term:
-
- ```csharp
- var subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
- var customConfigId = "YOUR-CUSTOM-CONFIG-ID";
- var searchTerm = args.Length > 0 ? args[0]:"microsoft";
- ```
-
-4. Construct the request URL by appending your search term to the `q=` query parameter, and your search instance's custom configuration ID to the `customconfig=` parameter. Separate the parameters with an ampersand (`&`). For the `url` variable value, you can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```csharp
- var url = "https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search?" +
- "q=" + searchTerm + "&" +
- "customconfig=" + customConfigId;
- ```
-
-## Send and receive a search request
-
-1. Create a request client, and add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
- ```csharp
- var client = new HttpClient();
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", subscriptionKey);
- ```
-
-2. Perform the search request and get the response as a JSON object.
-
- ```csharp
- var httpResponseMessage = client.GetAsync(url).Result;
- var responseContent = httpResponseMessage.Content.ReadAsStringAsync().Result;
- BingCustomSearchResponse response = JsonConvert.DeserializeObject<BingCustomSearchResponse>(responseContent);
- ```
-## Process and view the results
-- Iterate over the response object to display information about each search result, including its name, URL, and the date the webpage was last crawled.
- ```csharp
- for(int i = 0; i < response.webPages.value.Length; i++) {
- var webPage = response.webPages.value[i];
-
- Console.WriteLine("name: " + webPage.name);
- Console.WriteLine("url: " + webPage.url);
- Console.WriteLine("displayUrl: " + webPage.displayUrl);
- Console.WriteLine("snippet: " + webPage.snippet);
- Console.WriteLine("dateLastCrawled: " + webPage.dateLastCrawled);
- Console.WriteLine();
- }
- Console.WriteLine("Press any key to exit...");
- Console.ReadKey();
- ```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a Custom Search web app](./tutorials/custom-search-web-page.md)
cognitive-services Call Endpoint Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-java.md
- Title: "Quickstart: Call your Bing Custom Search endpoint using Java | Microsoft Docs"
-description: Use this quickstart to begin requesting search results from your Bing Custom Search instance in Java.
- Previously updated: 05/08/2020
-# Quickstart: Call your Bing Custom Search endpoint using Java
--
-Use this quickstart to learn how to request search results from your Bing Custom Search instance. Although this application is written in Java, the Bing Custom Search API is a RESTful web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/java/Search/BingCustomSearchv7.java).
-
-## Prerequisites
-- A Bing Custom Search instance. For more information, see [Quickstart: Create your first Bing Custom Search instance](quick-start.md).
-- The latest [Java Development Kit](https://www.oracle.com/technetwork/java/javase/downloads/index.html).
-- The [Gson library](https://github.com/google/gson).
-## Create and initialize the application
-
-1. Create a new Java project in your favorite IDE or editor, and import the following libraries:
-
- ```java
- import java.io.InputStream;
- import java.net.URL;
- import java.net.URLEncoder;
- import java.util.HashMap;
- import java.util.List;
- import java.util.Map;
- import java.util.Scanner;
- import javax.net.ssl.HttpsURLConnection;
- import com.google.gson.Gson;
- import com.google.gson.GsonBuilder;
- import com.google.gson.JsonObject;
- import com.google.gson.JsonParser;
- ```
-
-2. Create a class named `CustomSrchJava`, and then create variables for your subscription key, custom search endpoint, and search instance's custom configuration ID. You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
- ```java
- public class CustomSrchJava {
- static String host = "https://api.cognitive.microsoft.com";
- static String path = "/bingcustomsearch/v7.0/search";
- static String subscriptionKey = "YOUR-SUBSCRIPTION-KEY";
- static String customConfigId = "YOUR-CUSTOM-CONFIG-ID";
- static String searchTerm = "Microsoft";
- ...
- ```
-
-3. Create another class named `SearchResults` to contain the response from your Bing Custom Search instance.
-
- ```java
- class SearchResults {
- HashMap<String, String> relevantHeaders;
- String jsonResponse;
- SearchResults(HashMap<String, String> headers, String json) {
- relevantHeaders = headers;
- jsonResponse = json;
- }
- }
- ```
-
-4. Create a function named `prettify()` to format the JSON response from the Bing Custom Search API.
-
- ```java
- // pretty-printer for JSON; uses GSON parser to parse and re-serialize
- public static String prettify(String json_text) {
- JsonParser parser = new JsonParser();
- JsonObject json = parser.parse(json_text).getAsJsonObject();
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
- ```
-
-## Send and receive a search request
-
-1. Create a function named `SearchWeb()` that sends a request and returns a `SearchResults` object. Create the request URL by combining your custom configuration ID, query, and endpoint information. Add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
- ```java
- public class CustomSrchJava {
- ...
- public static SearchResults SearchWeb (String searchQuery) throws Exception {
- // construct the URL for your search request (endpoint + query string)
- URL url = new URL(host + path + "?q=" + URLEncoder.encode(searchQuery, "UTF-8") + "&CustomConfig=" + customConfigId);
- HttpsURLConnection connection = (HttpsURLConnection)url.openConnection();
- connection.setRequestProperty("Ocp-Apim-Subscription-Key", subscriptionKey);
- ...
- ```
-
-2. Create a stream and store the JSON response in a `SearchResults` object.
-
- ```java
- public class CustomSrchJava {
- ...
- public static SearchResults SearchWeb (String searchQuery) throws Exception {
- ...
- // receive the JSON body
- InputStream stream = connection.getInputStream();
- String response = new Scanner(stream).useDelimiter("\\A").next();
-
- // construct result object for return
- SearchResults results = new SearchResults(new HashMap<String, String>(), response);
-
- stream.close();
- return results;
- }
- ```
-
-3. Print the JSON response.
-
- ```java
- System.out.println("\nJSON Response:\n");
- System.out.println(prettify(result.jsonResponse));
- ```
-
-4. Run the program.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a Custom Search web app](./tutorials/custom-search-web-page.md)
cognitive-services Call Endpoint Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-nodejs.md
- Title: "Quickstart: Call your Bing Custom Search endpoint using Node.js | Microsoft Docs"
-description: Use this quickstart to begin requesting search results from your Bing Custom Search instance using Node.js.
- Previously updated: 05/08/2020
-# Quickstart: Call your Bing Custom Search endpoint using Node.js
-Use this quickstart to learn how to request search results from your Bing Custom Search instance. Although this application is written in JavaScript, the Bing Custom Search API is a RESTful web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/nodejs/Search/BingCustomSearchv7.js).
-
-## Prerequisites
-- A Bing Custom Search instance. For more information, see [Quickstart: Create your first Bing Custom Search instance](quick-start.md).
-- [The Node.js JavaScript runtime](https://www.nodejs.org/).
-- The [JavaScript request library](https://github.com/request/request).
-## Create and initialize the application
-- Create a new JavaScript file in your favorite IDE or editor, and add a `require()` statement for the request library. Create variables for your subscription key, custom configuration ID, and search term.
- ```javascript
- var request = require("request");
-
- var subscriptionKey = 'YOUR-SUBSCRIPTION-KEY';
- var customConfigId = 'YOUR-CUSTOM-CONFIG-ID';
- var searchTerm = 'microsoft';
- ```
-
-## Send and receive a search request
-
-1. Create a variable to store the information being sent in your request. Construct the request URL by appending your search term to the `q=` query parameter, and your search instance's custom configuration ID to the `customconfig=` parameter. Separate the parameters with an ampersand (`&`). You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```javascript
- var info = {
- url: 'https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search?' +
- 'q=' + searchTerm + "&" +
- 'customconfig=' + customConfigId,
- headers: {
- 'Ocp-Apim-Subscription-Key' : subscriptionKey
- }
- }
- ```
-
-1. Use the JavaScript request library to send a search request to your Bing Custom Search instance, and print information about the results, including each result's name, URL, and the date the webpage was last crawled.
-
- ```javascript
- request(info, function(error, response, body){
- var searchResponse = JSON.parse(body);
- for(var i = 0; i < searchResponse.webPages.value.length; ++i){
- var webPage = searchResponse.webPages.value[i];
- console.log('name: ' + webPage.name);
- console.log('url: ' + webPage.url);
- console.log('displayUrl: ' + webPage.displayUrl);
- console.log('snippet: ' + webPage.snippet);
- console.log('dateLastCrawled: ' + webPage.dateLastCrawled);
- console.log();
- }
- });
- ```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a Custom Search web app](./tutorials/custom-search-web-page.md)
cognitive-services Call Endpoint Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/call-endpoint-python.md
- Title: "Quickstart: Call your Bing Custom Search endpoint using Python | Microsoft Docs"
-description: Use this quickstart to begin requesting search results from your Bing Custom Search instance using Python.
- Previously updated: 05/08/2020
-# Quickstart: Call your Bing Custom Search endpoint using Python
-Use this quickstart to learn how to request search results from your Bing Custom Search instance. Although this application is written in Python, the Bing Custom Search API is a RESTful web service compatible with most programming languages. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/python/Search/BingCustomSearchv7.py).
-
-## Prerequisites
-- A Bing Custom Search instance. For more information, see [Quickstart: Create your first Bing Custom Search instance](quick-start.md).
-- [Python](https://www.python.org/) 2.x or 3.x.
-## Create and initialize the application
-- Create a new Python file in your favorite IDE or editor, and add the following import statements. Create variables for your subscription key, custom configuration ID, and search term.
- ```python
- import json
- import requests
-
- subscriptionKey = "YOUR-SUBSCRIPTION-KEY"
- customConfigId = "YOUR-CUSTOM-CONFIG-ID"
- searchTerm = "microsoft"
- ```
-
-## Send and receive a search request
-
-1. Construct the request URL by appending your search term to the `q=` query parameter, and your search instance's custom configuration ID to the `customconfig=` parameter. Separate the parameters with an ampersand (`&`). You can use the global endpoint in the following code, or use the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```python
- url = 'https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search?' + 'q=' + searchTerm + '&' + 'customconfig=' + customConfigId
- ```
-
-2. Send the request to your Bing Custom Search instance, and print the returned search results.
-
- ```python
- r = requests.get(url, headers={'Ocp-Apim-Subscription-Key': subscriptionKey})
- print(r.text)
- ```
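Beyond printing the raw JSON, you can unpack the same fields the other quickstarts display. The following is a minimal sketch, assuming the response body uses the `webPages.value` shape shown in the C# and Node.js examples; the sample dictionary and the helper name are illustrative, not real API output.

```python
# Hypothetical helper: extract (name, url, dateLastCrawled) tuples from a
# parsed Bing Custom Search response dict. The "webPages" -> "value" path
# mirrors the shape used in the other quickstarts in this article set.
def summarize_results(response):
    pages = response.get("webPages", {}).get("value", [])
    return [(p.get("name"), p.get("url"), p.get("dateLastCrawled")) for p in pages]

# Illustrative sample response body (not real API output).
sample = {
    "webPages": {
        "value": [
            {"name": "Microsoft", "url": "https://www.microsoft.com/",
             "dateLastCrawled": "2020-05-01T00:00:00"}
        ]
    }
}

for name, url, crawled in summarize_results(sample):
    print(name, url, crawled)
```

In the quickstart itself, you would pass `r.json()` to the helper instead of the sample dictionary.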
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a Custom Search web app](./tutorials/custom-search-web-page.md)
cognitive-services Define Custom Suggestions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/define-custom-suggestions.md
- Title: Define Custom Autosuggest suggestions - Bing Custom Search
-description: Custom Autosuggest returns a list of suggested search query strings that are relevant to your search experience.
- Previously updated: 02/12/2019
-# Configure your custom autosuggest experience
-Custom Autosuggest returns a list of suggested search query strings that are relevant to your search experience. The suggested query strings are based on a partial query string that the user provides in the search box. The list will contain a maximum of 10 suggestions.
-
-You specify whether to return only custom suggestions or to also include Bing suggestions. If you include Bing suggestions, custom suggestions appear before them. If you provide enough relevant suggestions, the returned list might not include any Bing suggestions at all. Bing suggestions are always scoped to the context of your Custom Search instance.
-
-To configure search query suggestions for your instance, click the **Autosuggest** tab.
-
-> [!NOTE]
-> To use this feature, you must subscribe to Custom Search at the appropriate level (see [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/bing-custom-search/)).
-
-It can take up to 24 hours for suggestions to be reflected in the serving endpoint (API or hosted UI).
-
-## Enable Bing suggestions
-
-To enable Bing suggestions, toggle the **Automatic Bing suggestions** slider to the on position. The slider becomes blue.
-
-## Add your own suggestions
-
-To add your own query string suggestions, add them to the list under **User-defined suggestions**. After adding a suggestion in the list, press the enter key or click the **+** icon. You can specify the suggestion in any language. You can add a maximum of 5,000 query string suggestions.
-
-## Upload suggestions
-
-As an option, you can upload a list of suggestions from a file. The file must contain one search query string per line. To upload the file, click the upload icon and select the file to upload. The service extracts the suggestions from the file and adds them to the list.
-
-## Remove suggestions
-
-To remove a query string suggestion, click the remove icon next to the suggestion you want to remove.
-
-## Block suggestions
-
-If you include Bing suggestions, you can add a list of search query strings you don't want Bing to return. To add blocked query strings, click **Show blocked suggestions**. Add the query string to the list and press the enter key or click the **+** icon. You can add a maximum of 50 blocked query strings.
->[!NOTE]
->It may take up to 24 hours for Custom Autosuggest configuration changes to take effect.
--
-## Enabling Autosuggest in Hosted UI
-
-To enable query string suggestions for your hosted UI, click **Hosted UI**. Scroll down to the **Additional Configuration** section. Under **Web search**, select **On** for **Enable autosuggest**. To enable Autosuggest, you must select a layout that includes a search box.
--
-## Calling the Autosuggest API
-
-To get suggested query strings using the Bing Custom Search API, send a `GET` request to the following endpoint.
-
-```
-GET https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/Suggestions
-```
-
-The response contains a list of `SearchAction` objects that contain the suggested query strings.
-
-```
- {
- "displayText" : "sailing lessons seattle",
- "query" : "sailing lessons seattle",
- "searchKind" : "CustomSearch"
- },
-```
-
-Each suggestion includes a `displayText` and `query` field. The `displayText` field contains the suggested query string that you use to populate your search box's dropdown list.
-
-If the user selects a suggested query string from the dropdown list, use the query string in the `query` field when calling the [Bing Custom Search API](overview.md).
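As a sketch of how a client might consume these objects, the following assumes the suggestions have already been parsed into a list of dictionaries shaped like the excerpt above; the function name is hypothetical and not part of the API.

```python
# Hypothetical helper: turn SearchAction objects (shaped like the JSON
# excerpt above) into (display string, query to send) pairs. displayText
# populates the dropdown; query is used for the follow-up search call.
def to_dropdown_entries(search_actions):
    return [(a["displayText"], a["query"])
            for a in search_actions
            if a.get("searchKind") == "CustomSearch"]

# Illustrative sample matching the excerpt in this article.
actions = [
    {"displayText": "sailing lessons seattle",
     "query": "sailing lessons seattle",
     "searchKind": "CustomSearch"},
]
print(to_dropdown_entries(actions))
```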
--
-## Next steps
-- [Get custom suggestions]()
-- [Search your custom instance](./search-your-custom-view.md)
-- [Configure and consume custom hosted UI](./hosted-ui.md)
cognitive-services Define Your Custom View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/define-your-custom-view.md
- Title: Configure your Bing Custom Search experience | Microsoft Docs
-description: The portal lets you create a search instance that specifies the slices of the web; domains, subpages, and webpages.
- Previously updated: 02/12/2019
-# Configure your Bing Custom Search experience
-A Custom Search instance lets you tailor the search experience to include content only from websites that your users care about. Instead of performing a web-wide search, Bing searches only the slices of the web that interest you. To create your custom view of the web, use the Bing Custom Search [portal](https://www.customsearch.ai).
-
-The portal lets you create a search instance that specifies the slices of the web (domains, subpages, and webpages) that you want Bing to search, and those that you don't want it to search. The portal can also suggest content that you may want to include.
-
-Use the following when defining your slices of the web:
-
-| Slice name | Description |
-|||
-| Domain | A domain slice includes all content found within an internet domain. For example, `www.microsoft.com`. Omitting `www.` causes Bing to also search the domain's subdomains. For example, if you specify `microsoft.com`, Bing also returns results from `support.microsoft.com` or `technet.microsoft.com`. |
-| Subpage | A subpage slice includes all content found in the subpage and paths below it. You may specify a maximum of two subpages in the path. For example, `www.microsoft.com/en-us/windows/` |
-| Webpage | A webpage slice can include only that webpage in a custom search. You can optionally specify whether to include subpages. |
-
-> [!IMPORTANT]
-> All domains, subpages, and webpages that you specify must be public and indexed by Bing. If you own a public site that you want to include in the search, and Bing hasn't indexed it, see the Bing [webmaster documentation](https://www.bing.com/webmaster/help/webmaster-guidelines-30fba23a) for details about getting Bing to index it. Also, see the webmaster documentation for details about getting Bing to update your crawled site if the index is out of date.
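The subdomain rule for domain slices can be sketched as a small matcher. This is an illustrative helper, not part of any Bing API; it only mirrors the matching behavior the table describes (a spec with `www.` matches only that host, while a bare domain also matches its subdomains).

```python
# Hypothetical helper mirroring the domain-slice rule described above.
def host_in_domain_slice(host, slice_spec):
    host, slice_spec = host.lower(), slice_spec.lower()
    if slice_spec.startswith("www."):
        # Fully qualified host: exact match only.
        return host == slice_spec
    # Bare domain: match the domain itself and any subdomain.
    return host == slice_spec or host.endswith("." + slice_spec)

print(host_in_domain_slice("support.microsoft.com", "microsoft.com"))      # True
print(host_in_domain_slice("support.microsoft.com", "www.microsoft.com"))  # False
```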
-
-## Add slices of the web to your custom search instance
-
-When you create your custom search instance, you can specify the slices of the web (domains, subpages, and webpages) that you want included in, or blocked from, your search results.
-
-If you know the slices you want to include in your custom search instance, add them to your instance's **Active** list.
-
-If you're not sure which slices to include, you can send search queries to Bing in the **Preview** pane and select the slices that you want. To do this:
-
-1. Select **Bing** from the dropdown list in the Preview pane, and enter a search query.
-
-2. Click **Add site** next to the result you want to include. Then click **OK**.
-
->[!NOTE]
-> [!INCLUDE[publish or revert](./includes/publish-revert.md)]
-
-<a name="active-and-blocked-lists"></a>
-
-### Customize your search experience with Active and Blocked lists
-
-You can access the list of active and blocked slices by clicking on the **Active** and **Blocked** tabs in your custom search instance. Slices added to the active list will be included in your custom search. Blocked slices won't be searched, and won't appear in your search results.
-
-To specify the slices of the web you want Bing to search, click the **Active** tab and add one or more URLs. To edit or delete URLs, use the options under the **Controls** column.
-
-When adding URLs to the **Active** list you can add single URLs, or multiple URLs at once by uploading a text file using the upload icon.
-
-![The Bing Custom Search Active tab](media/file-upload-icon.png)
-
-To upload a file, create a text file and specify a single domain, subpage, or webpage per line. Your file will be rejected if it isn't formatted correctly.
-
-> [!NOTE]
-> * You can only upload a file to the **Active** list. You cannot use it to add slices to the **Blocked** list.
-> * If the **Blocked** list contains a domain, subpage, or webpage that you specified in the upload file, it will be removed from the **Blocked** list, and added to the **Active** list.
-> * Duplicate entries in your upload file will be ignored by Bing Custom Search.
-
-### Get website suggestions for your search experience
-
-After adding web slices to the **Active** list, the Bing Custom Search portal will generate website and subpage suggestions at the bottom of the tab. These are slices that Bing Custom Search thinks you might want to include. Click **Refresh** to get updated suggestions after updating your custom search instance's settings. This section is only visible if suggestions are available.
-
-## Search for images and videos
-
-You can search for images and videos similarly to web content by using the [Bing Custom Image Search API](/rest/api/cognitiveservices-bingsearch/bing-custom-images-api-v7-reference) or the [Bing Custom Video Search API](/rest/api/cognitiveservices-bingsearch/bing-custom-videos-api-v7-reference). You can display these results with the [hosted UI](hosted-ui.md), or the APIs.
-
-These APIs are similar to the non-custom [Bing Image Search](../bing-image-search/overview.md) and [Bing Video Search](../bing-video-search/overview.md) APIs, but search the entire web, and do not require the `customConfig` query parameter. See these documentation sets for more information on working with images and videos.
-
-## Test your search instance with the Preview pane
-
-You can test your search instance by using the preview pane on the portal's right side to submit search queries and view the results.
-
-1. Below the search box, select **My Instance**. You can compare the results from your search experience to Bing by selecting **Bing**.
-2. Select a safe search filter and which market to search (see [Query Parameters](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference#query-parameters)).
-3. Enter a query and press enter or click the search icon to view the results from the current configuration. You can change the type of search you perform by clicking **Web**, **Image**, or **Video** to get corresponding results.
-
-<a name="adjustrank"></a>
-
-## Adjust the rank of specific search results
-
-The portal enables you to adjust the search ranking of content from specific domains, subpages, and webpages. After sending a search query in the preview pane, each search result contains a list of adjustments you can make for it:
-
-| Adjustment | Description |
-||-|
-| Block | Moves the domain, subpage, or webpage to the Blocked list. Bing will exclude content from the selected site from appearing in the search results. |
-| Boost | Boosts content from the domain or subpage to be higher in the search results. |
-| Demote | Demotes content from the domain or subpage to be lower in the search results. You select whether to demote content from the domain or subpage that the webpage belongs to. |
-| Pin to top | Moves the domain, subpage, or webpage to the **Pinned** list. This forces the webpage to appear as the top search result for a given search query. |
-
-Adjusting rank is not available for image or video searches.
-
-### Boosting and demoting search results
-
-You can super boost, boost, or demote any domain or subpage in the **Active** list. By default, all slices are added with no ranking adjustments. Slices of the web that are Super boosted or Boosted are ranked higher in the search results (with super boost ranking higher than boost). Items that are demoted are ranked lower in the search results.
-
-You can super boost, boost, or demote items by using the **Ranking Adjust** controls in the **Active** list, or by using the Boost and Demote controls in the Preview pane. The service adds the slice to your Active list and adjusts the ranking accordingly.
-
-> [!NOTE]
-> Boosting and demoting domains and subpages is one of many methods Bing Custom Search uses to determine the order of search results. Because of other factors influencing the ranking of different web content, the effects of adjusting rank may vary. Use the Preview pane to test the effects of adjusting the rank of your search results.
-
-Super boost, boost, and demote are not available for image and video searches.
-
-## Pin slices to the top of search results
-
-The portal also lets you pin URLs to the top of search results for specific search terms, using the **Pinned** tab. Enter a URL and query to specify the webpage that will appear as the top result. Note that you can pin a maximum of one webpage per search query, and only indexed webpages will be displayed in searches. Pinning results is not available for image or video searches.
-
-You can pin a webpage to the top in two ways:
-
-* In the **Pinned** tab, enter the URL of the webpage to pin to the top, and its corresponding query.
-
-* In the **Preview** pane, enter a search query and click search. Find the webpage you want to pin for your query, and click **Pin to top**. The webpage and query will be added to the **Pinned** list.
-
-### Specify the pin's match condition
-
-By default, webpages are only pinned to the top of search results when a user's query string exactly matches one listed in the **Pinned** list. You can change this behavior by specifying one of the following match conditions:
-
-> [!NOTE]
-> All comparisons between the user's search query, and the pin's search query are case insensitive.
-
-| Value | Description |
-||-|
-| Starts with | The pin is a match if the user's query string starts with the pin's query string. |
-| Ends with | The pin is a match if the user's query string ends with the pin's query string. |
-| Contains | The pin is a match if the user's query string contains the pin's query string. |
--
-To change the pin's match condition, click the pin's edit icon. In the **Query match condition** column, click the dropdown list and select the new condition to use. Then, click the save icon to save the change.
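The match conditions above can be sketched as a small comparison function. This is an illustrative helper, not part of the Bing API; it assumes case-insensitive comparison per the note, with "Exact" standing in for the default behavior described before the table.

```python
# Hypothetical helper mirroring the pin match conditions in the table above.
# All comparisons are case-insensitive, per the note in this article.
def pin_matches(user_query, pin_query, condition="Exact"):
    u, p = user_query.lower(), pin_query.lower()
    return {
        "Exact": u == p,             # default: exact match
        "Starts with": u.startswith(p),
        "Ends with": u.endswith(p),
        "Contains": p in u,
    }[condition]

print(pin_matches("Sailing Lessons Seattle", "sailing", "Starts with"))  # True
```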
-
-### Change the order of your pinned sites
-
-To change the order of your pins, you can drag and drop them, or edit their order number by clicking the "edit" icon in the **Controls** column of the **Pinned** list.
-
-If multiple pins satisfy a match condition, Bing Custom Search will use the one highest in the list.
-
-## View statistics
-
-If you subscribed to Custom Search at the appropriate level (see the [pricing pages](https://azure.microsoft.com/pricing/details/cognitive-services/bing-custom-search/)), a **Statistics** tab is added to your production instances. The statistics tab shows details about how your Custom Search endpoints are used, including call volume, top queries, geographic distribution, response codes, and safe search. You can filter details using the provided controls.
-
-## Usage guidelines
-- For each custom search instance, the maximum number of ranking adjustments that you may make to **Active** and **Blocked** slices is limited to 400.
-- Adding a slice to the Active or Blocked tabs counts as one ranking adjustment.
-- Boosting and demoting count as two ranking adjustments.
-- For each custom search instance, the maximum number of pins that you may make is limited to 200.
-## Next steps
-- [Call your custom search](./search-your-custom-view.md)
-- [Configure your hosted UI experience](./hosted-ui.md)
-- [Use decoration markers to highlight text](../bing-web-search/hit-highlighting.md)
-- [Page webpages](../bing-web-search/paging-search-results.md)
cognitive-services Endpoint Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/endpoint-custom.md
- Title: Bing Custom Search endpoint
-description: Create tailored search experiences for topics that you care about. Users see search results tailored to the content they care about.
- Previously updated: 03/04/2019
-# Custom Search
-
-Bing Custom Search enables you to create tailored search experiences for topics that you care about. Your users see search results tailored to the content they care about instead of having to page through search results that have irrelevant content.
-
-## Custom Search Endpoint
-To get results using the Bing Custom Search API, send a `GET` request to the following endpoint. Use the headers and URL parameters to define further specifications.
-
-Endpoint: Returns search results as JSON that are relevant to the user's query defined by `?q=""`.
-```
- GET https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search
-```
-
-For examples that describe how to set up Custom Search sources, see the [tutorial](./tutorials/custom-search-web-page.md). For details about headers, parameters, market codes, response objects, errors, etc., see the [Bing Custom Search API v7](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference) reference.
-
-## Custom Search Response JSON
-A custom search request returns results as JSON objects, see [Response objects](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference#response-objects).
-
-## Custom Autosuggest
-The Custom Autosuggest API lets you send a partial search query term to Bing and get back a list of suggested queries that you can configure. With Custom Autosuggest, you define the suggestions that the API returns, and optionally specify whether to include suggestions generated by Bing.
-
-## Custom Autosuggest Endpoint
-To request custom query suggestions, send a GET request to:
-
-```
-https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/Suggestions
-```
-
-For information about defining custom suggestions, see [Define custom search suggestions](define-custom-suggestions.md).
-
-## Custom Image Search
-The Custom Image Search API lets you send a search query to Bing and get back a list of relevant images from your Custom Search instance.
-
-## Custom Image Search Endpoint
-To request images from your Custom Search instance, send a GET request to the following URL:
-
-```
-https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/images/search
-```
-
-For information about configuring a Custom Search instance, see [Configure your custom search experience](./define-your-custom-view.md).
-
-## Next steps
-The **Bing** APIs support search actions that return results according to their type. All search endpoints return results as JSON response objects. All endpoints support queries that return results for a specific language and/or location, specified by longitude, latitude, and search radius.
-
-For complete information about the parameters supported by each endpoint, see the reference pages for each type. For examples of basic requests using the Custom Search API, see [Custom Search Quick-starts](/azure/cognitive-services/bing-custom-search/quick-start).
cognitive-services Get Images From Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/get-images-from-instance.md
- Title: Get images from your custom view - Bing Custom Search
-description: High-level overview about using Bing Custom Search to get images from your custom view of the Web.
- Previously updated: 09/10/2018
-# Get images from your custom view
--
-Bing Custom Images Search lets you enrich your custom search experience with images. Similar to web results, custom search supports searching for images in your instance's list of websites. You can get the images by using the Bing Custom Images Search API or through the Hosted UI feature. The Hosted UI feature is simple to use, and is recommended for getting your search experience up and running in short order. For information about configuring your Hosted UI to include images, see [Configure your hosted UI experience](hosted-ui.md).
-
-If you want more control over displaying the search results, you can use the Bing Custom Images Search API. Because calling the API is similar to calling the Bing Image Search API, check out [Bing Image Search](../bing-image-search/overview.md) for examples of calling the API. But before you do that, familiarize yourself with the [Custom Images Search API reference](/rest/api/cognitiveservices-bingsearch/bing-custom-images-api-v7-reference) content. The main differences are the supported query parameters (you must include the `customConfig` query parameter) and the endpoint you send requests to.
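As a sketch, a Custom Images request URL might be assembled like this. The endpoint path and the required `customConfig` parameter come from this article set; the configuration ID is a placeholder, and no real request is sent.

```python
from urllib.parse import urlencode

# Assemble a Custom Images request URL. The customConfig parameter is the
# key difference from the non-custom Bing Image Search API; the value here
# is a placeholder, not a real configuration ID.
endpoint = "https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/images/search"
params = {"q": "microsoft", "customConfig": "YOUR-CUSTOM-CONFIG-ID"}
url = endpoint + "?" + urlencode(params)
print(url)
# A real call would also send the subscription key in the
# Ocp-Apim-Subscription-Key request header.
```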
-
-<!--
-## Next steps
-
-[Call your custom view](search-your-custom-view.md)
-->
cognitive-services Get Videos From Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/get-videos-from-instance.md
- Title: Get videos from your custom view - Bing Custom Search
-description: High-level overview about using Bing Custom Search to get videos from your custom view of the Web.
- Previously updated: 09/10/2018
-# Get videos from your custom view
--
-Bing Custom Videos Search lets you enrich your custom search experience with videos. Similar to web results, custom search supports searching for videos in your instance's list of websites. You can get the videos by using the Bing Custom Videos Search API or through the Hosted UI feature. The Hosted UI feature is simple to use, and is recommended for getting your search experience up and running in short order. For information about configuring your Hosted UI to include videos, see [Configure your hosted UI experience](hosted-ui.md).
-
-If you want more control over displaying the search results, you can use the Bing Custom Videos Search API. Because calling the API is similar to calling the Bing Video Search API, check out [Bing Video Search](../bing-video-search/overview.md) for examples of calling the API. But before you do that, familiarize yourself with the [Custom Videos Search API reference](/rest/api/cognitiveservices-bingsearch/bing-custom-videos-api-v7-reference) content. The main differences are the supported query parameters (you must include the `customConfig` query parameter) and the endpoint you send requests to.
-
-<!--
-## Next steps
-
-[Call your custom view](search-your-custom-view.md)
-->
cognitive-services Hosted Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/hosted-ui.md
- Title: Configure a hosted UI for Bing Custom Search | Microsoft Docs
-description: Use this article to configure and integrate a hosted UI for Bing Custom Search.
- Previously updated: 02/12/2019
-# Configure your hosted UI experience
--
-Bing Custom Search provides a hosted UI that you can easily integrate into your webpages and web applications as a JavaScript code snippet. Using the Bing Custom Search portal, you can configure the layout, color, and search options of the UI.
---
-## Configure the custom hosted UI
-
-To configure a hosted UI for your web applications, follow these steps. As you make changes, the pane on the right will give you a preview of your UI. The displayed search results are not actual results for your instance.
-
-1. Sign in to the Bing Custom Search [portal](https://customsearch.ai).
-
-2. Select your Bing Custom Search instance.
-
-3. Click the **Hosted UI** tab.
-
-4. Select a layout.
-
- - Search bar and results (default): Displays a search box with search results below it.
- - Results only: Displays search results only, without a search box. When using this layout, you must provide the search query (`&q=<query string>`). Add the query parameter to the request URL in the JavaScript snippet, or the HTML endpoint link.
- - Pop-over: Provides a search box and displays the search results in a sliding overlay.
-
-5. Select a color theme. You can customize the colors to fit your application by clicking **Customize theme**. To change a color, either enter the color's RGB HEX value (for example, `#366eb8`), or click on the color preview.
-
- You can preview your changes on the right side of the portal. Clicking **Reset to default** will revert your changes to the default colors for the selected theme.
-
- > [!NOTE]
- > Consider accessibility when choosing colors.
-
-6. Under **Additional Configurations**, provide values as appropriate for your app. These settings are optional. To see the effect of applying or removing them, see the preview pane on the right. The available options are described in the [Configuration options](#configuration-options) section later in this article.
-
-7. Enter the search subscription key or choose one from the dropdown list. The dropdown list is populated with keys from your Azure account's subscriptions. See [Azure AI services API account](../cognitive-services-apis-create-account.md).
-
-8. If you enabled autosuggest, enter the autosuggest subscription key or choose one from the dropdown list. The dropdown list is populated with keys from your Azure account's subscriptions. Custom Autosuggest requires a specific subscription tier; see the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/bing-custom-search/) page.
--
-## Consume custom UI
-
-To consume the hosted UI, either:
-
-- Include the script in your web page:
-
- ```html
- <html>
- <body>
- <script type="text/javascript"
- id="bcs_js_snippet"
- src="https://ui.customsearch.ai/api/ux/rendering-js?customConfig=<YOUR-CUSTOM-CONFIG-ID>&market=en-US&safeSearch=Moderate&version=latest&q=">
- </script>
- </body>
- </html>
- ```
-
-- Or, use the following URL in a web browser.
-
- `https://ui.customsearch.ai/hosted?customConfig=YOUR-CUSTOM-CONFIG-ID`
-
- > [!NOTE]
- > Add the following query parameters to the URL as needed. For information about these parameters, see [Custom Search API](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference#query-parameters) reference.
- >
- > - q
- > - mkt
- > - safesearch
- > - setlang
-
- > [!IMPORTANT]
- > The page cannot display your privacy statement or other notices and terms. Suitability for your use may vary.
-
-For additional information, including your Custom Configuration ID, go to **Endpoints** under the **Production** tab.
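The HTML endpoint and its optional query parameters (q, mkt, safesearch, setlang) can be assembled programmatically. A minimal sketch in Python using only the standard library; the configuration ID is a placeholder:

```python
from urllib.parse import urlencode

HOSTED_UI_BASE = "https://ui.customsearch.ai/hosted"

def hosted_ui_url(custom_config_id, **params):
    """Assemble the hosted UI HTML-endpoint URL.

    Optional query parameters (q, mkt, safesearch, setlang) pass through
    as keyword arguments; customConfig is always required.
    """
    query = {"customConfig": custom_config_id, **params}
    return f"{HOSTED_UI_BASE}?{urlencode(query)}"

url = hosted_ui_url("YOUR-CUSTOM-CONFIG-ID", q="surfing", mkt="en-US")
```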
-
-## Configuration options
-
-You can configure the behavior of your hosted UI by clicking **Additional Configurations**, and providing values. These settings are optional. To see the effect of applying or removing them, see the preview pane on the right.
-
-### Web search configurations
-
-- Web results enabled: Determines if web search is enabled (you will see a Web tab at the top of the page).
-- Enable autosuggest: Determines if custom autosuggest is enabled (see [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/bing-custom-search/) for the additional cost).
-- Web results per page: The number of web search results to display at a time (the maximum is 50 results per page).
-- Image caption: Determines if images are displayed with search results.
-
-The following configurations are shown if you click **Show advanced configurations**:
-
-- Highlight words: Determines if results are displayed with search terms in bold.
-- Link target: Determines if the webpage opens in a new browser tab (Blank) or the same browser tab (self) when the user clicks a search result.
-
-### Image search configurations
-
-- Image results enabled: Determines if image search is enabled (you will see an Images tab at the top of the page).
-- Image results per page: The number of image search results to display at a time (the maximum is 150 results per page).
-
-The following configuration is shown if you click **Show advanced configurations**.
-
-- Enable filters: Adds filters that the user can use to filter the images that Bing returns. For example, the user can filter the results for only animated GIFs.-
-### Video search configurations
-
-- Video results enabled: Determines if video search is enabled (you will see a Videos tab at the top of the page).
-- Video results per page: The number of video search results to display at a time (the maximum is 150 results per page).
-
-The following configuration is shown if you click **Show advanced configurations**.
-
-- Enable filters: Adds filters that the user can use to filter the videos that Bing returns. For example, the user can filter the results for videos with a specific resolution or videos discovered in the last 24 hours.-
-### Miscellaneous configurations
-
-- Page title: Text displayed in the title area of the search results page (not for the pop-over layout).
-- Toolbar theme: Determines the background color of the title area of the search results page.
-
-The following configurations are shown if you click **Show advanced configurations**.
-
-|Configuration |Description |
-|||
-|Search box text placeholder | Text displayed in the search box prior to input. |
-|Title link URL | Target for the title link. |
-|Logo URL | Image displayed next to the title. |
-|Favicon | Icon displayed in the browser's title bar. |
-
-The following configurations apply only if you consume the Hosted UI through the HTML endpoint (they don't apply if you use the JavaScript snippet).
-
-- Page title
-- Toolbar theme
-- Title link URL
-- Logo URL
-- Favicon URL
-
-## Next steps
-
-- [Use decoration markers to highlight text](../bing-web-search/hit-highlighting.md)
-- [Page through search results](../bing-web-search/paging-search-results.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/language-support.md
- Title: Language support - Bing Custom Search API-
-description: A list of supported languages and regions for the Bing Custom Search API.
------- Previously updated : 09/25/2018---
-# Language and region support for the Bing Custom Search API
--
-The Bing Custom Search API supports more than three dozen countries/regions, many with more than one language.
-
-Although it's optional, the request should specify the [mkt](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference#mkt) query parameter, which identifies the market where you want the results to come from. For a list of optional query parameters, see [Query Parameters](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference#query-parameters).
-
-You can specify a country/region using the `cc` query parameter. If you specify a country/region, you must also specify one or more language codes using the `Accept-Language` header. The supported languages vary by country/region; they are given for each country/region in the **Markets** table.
-
-The `Accept-Language` header and the `setLang` query parameter are mutually exclusive; do not specify both. For details, see [Accept-Language](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference#acceptlanguage).
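The locale rules above (cc requires Accept-Language; Accept-Language and setLang are mutually exclusive) can be enforced before a request is sent. A sketch, assuming you assemble query parameters and headers as plain dictionaries:

```python
def build_locale_options(mkt=None, cc=None, accept_language=None, set_lang=None):
    """Return (params, headers) for a Custom Search request, enforcing the
    documented locale rules: cc requires Accept-Language, and the
    Accept-Language header and setLang parameter are mutually exclusive."""
    if cc and not accept_language:
        raise ValueError("cc requires one or more languages in Accept-Language")
    if accept_language and set_lang:
        raise ValueError("Accept-Language and setLang are mutually exclusive")
    params = {k: v for k, v in
              {"mkt": mkt, "cc": cc, "setLang": set_lang}.items() if v}
    headers = {"Accept-Language": accept_language} if accept_language else {}
    return params, headers

params, headers = build_locale_options(mkt="en-US")
```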
-
-## Countries/Regions
-
-|Country/region|Code|
-|-|-|
-|Argentina|AR|
-|Australia|AU|
-|Austria|AT|
-|Belgium|BE|
-|Brazil|BR|
-|Canada|CA|
-|Chile|CL|
-|Denmark|DK|
-|Finland|FI|
-|France|FR|
-|Germany|DE|
-|Hong Kong SAR|HK|
-|India|IN|
-|Indonesia|ID|
-|Italy|IT|
-|Japan|JP|
-|Korea|KR|
-|Malaysia|MY|
-|Mexico|MX|
-|Netherlands|NL|
-|New Zealand|NZ|
-|Norway|NO|
-|China|CN|
-|Poland|PL|
-|Portugal|PT|
-|Philippines|PH|
-|Russia|RU|
-|Saudi Arabia|SA|
-|South Africa|ZA|
-|Spain|ES|
-|Sweden|SE|
-|Switzerland|CH|
-|Taiwan|TW|
-|Türkiye|TR|
-|United Kingdom|GB|
-|United States|US|
--
-## Markets
-
-|Country/region|Language|Market Code|
-|-|--|--|
-|Argentina|Spanish|es-AR|
-|Australia|English|en-AU|
-|Austria|German|de-AT|
-|Belgium|Dutch|nl-BE|
-|Belgium|French|fr-BE|
-|Brazil|Portuguese|pt-BR|
-|Canada|English|en-CA|
-|Canada|French|fr-CA|
-|Chile|Spanish|es-CL|
-|Denmark|Danish|da-DK|
-|Finland|Finnish|fi-FI|
-|France|French|fr-FR|
-|Germany|German|de-DE|
-|Hong Kong SAR|Traditional Chinese|zh-HK|
-|India|English|en-IN|
-|Indonesia|English|en-ID|
-|Italy|Italian|it-IT|
-|Japan|Japanese|ja-JP|
-|Korea|Korean|ko-KR|
-|Malaysia|English|en-MY|
-|Mexico|Spanish|es-MX|
-|Netherlands|Dutch|nl-NL|
-|New Zealand|English|en-NZ|
-|Norway|Norwegian|no-NO|
-|China|Chinese|zh-CN|
-|Poland|Polish|pl-PL|
-|Portugal|Portuguese|pt-PT|
-|Philippines|English|en-PH|
-|Russia|Russian|ru-RU|
-|Saudi Arabia|Arabic|ar-SA|
-|South Africa|English|en-ZA|
-|Spain|Spanish|es-ES|
-|Sweden|Swedish|sv-SE|
-|Switzerland|French|fr-CH|
-|Switzerland|German|de-CH|
-|Taiwan|Traditional Chinese|zh-TW|
-|Türkiye|Turkish|tr-TR|
-|United Kingdom|English|en-GB|
-|United States|English|en-US|
-|United States|Spanish|es-US|
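As the table shows, each market code is a language tag joined to a country/region code. A small helper for splitting an `mkt` value into those parts (the market set here is only a subset of the table, for illustration):

```python
# A few markets from the table above; extend as needed.
MARKETS = {"es-AR", "en-AU", "pt-BR", "zh-HK", "tr-TR", "en-US", "es-US"}

def split_market(mkt):
    """Split an mkt value such as 'zh-HK' into (language, country/region)."""
    if mkt not in MARKETS:
        raise ValueError(f"unsupported market: {mkt}")
    language, _, region = mkt.partition("-")
    return language, region

language, region = split_market("pt-BR")  # ('pt', 'BR')
```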
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/overview.md
- Title: What is the Bing Custom Search API?-
-description: The Bing Custom Search API enables you to create tailored search experiences for topics that you care about.
------- Previously updated : 12/18/2019---
-# What is the Bing Custom Search API?
--
-The Bing Custom Search API enables you to create tailored ad-free search experiences for topics that you care about. You can specify the domains and webpages for Bing to search, as well as pin, boost, or demote specific content to create a custom view of the web and help your users quickly find relevant search results.
-
-## Features
-
-|Feature |Description |
-|||
-|[Custom real-time search suggestions](define-custom-suggestions.md) | Provide search suggestions that can be displayed as a dropdown list as your users type. |
-|[Custom image search experiences](get-images-from-instance.md) | Enable your users to search for images from the domains and websites specified in your custom search instance. |
-|[Custom video search experiences](get-videos-from-instance.md) | Enable your users to search for videos from the domains and sites specified in your custom search instance. |
-|[Share your custom search instance](share-your-custom-search.md) | Collaboratively edit and test your search instance by sharing it with members of your team. |
-|[Configure a UI for your applications and websites](hosted-ui.md) | Provides a hosted UI that you can easily integrate into your webpages and web applications as a JavaScript code snippet. |
-
-## Workflow
-
-You can create a customized search instance by using the [Bing Custom Search portal](https://customsearch.ai). The portal enables you to create a custom search instance that specifies the domains, websites, and webpages that you want Bing to search, along with the ones that you don't want it to search. You can also use the portal to preview the search experience, adjust the search rankings that the API provides, and optionally configure a searchable user interface to be rendered in your websites and applications.
-
-After creating your search instance, you can integrate it (and optionally, a user interface) into your website or application by calling the Bing Custom Search API:
-
-![Image showing that you can connect to Bing custom search via the API](media/BCS-Overview.png "How Bing Custom Search works.")
--
-## Next steps
-
-To get started quickly, see [Create your first Bing Custom Search instance](quick-start.md).
-
-For details about customizing your search instance, see [Define a custom search instance](define-your-custom-view.md).
-
-Be sure to read [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) for using search results in your services and applications.
-
-Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
-
-Familiarize yourself with the reference content for each of the custom search endpoints. The reference contains the endpoints, headers, and query parameters that you'd use to request search results. It also includes definitions of the response objects.
-
-- [Custom Search API](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference)
-- [Custom Image API](/rest/api/cognitiveservices-bingsearch/bing-custom-images-api-v7-reference)
-- [Custom Video API](/rest/api/cognitiveservices-bingsearch/bing-custom-videos-api-v7-reference)
-- [Custom Autosuggest API](/rest/api/cognitiveservices-bingsearch/bing-custom-autosuggest-api-v7-reference)
cognitive-services Quick Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/quick-start.md
- Title: "Quickstart: Create a first Bing Custom Search instance"-
-description: Use this quickstart to create a custom Bing instance that can search domains and webpages that you define.
------ Previously updated : 03/24/2020----
-# Quickstart: Create your first Bing Custom Search instance
--
-To use Bing Custom Search, you need to create a custom search instance that defines your view or slice of the web. This instance contains the public domains, websites, and webpages that you want to search, along with any ranking adjustments you may want.
-
-To create the instance, use the [Bing Custom Search portal](https://customsearch.ai).
-
-![A picture of the Bing Custom Search portal](media/blockedCustomSrch.png)
-
-## Prerequisites
--
-## Create a custom search instance
-
-To create a Bing Custom Search instance:
-
-1. Click **Get Started** on the [Bing Custom Search portal](https://customsearch.ai) webpage, and sign in with your Microsoft account.
-
-2. Click **New Instance**, and enter a descriptive name. You can change the name of your instance at any time.
-
-3. On the **Active** tab under **Search Experience**, enter the URL of one or more websites you want to include in your search.
-
- > [!NOTE]
- > Bing Custom Search instances will only return results for domains and webpages that are public and have been indexed by Bing.
-
-4. You can use the right side of the Bing Custom Search portal to enter a query and examine the search results returned by your search instance. If no results are returned, try entering a different URL.
-
-5. Click **Publish** to publish your changes to the production environment, and update the instance's endpoints.
-
-6. Click on the **Production** tab under **Endpoints**, and copy your **Custom Configuration ID**. You need this ID to call the Custom Search API by appending it to the `customconfig=` query parameter in your calls.
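As a sketch of how the Custom Configuration ID and subscription key come together in a call, the following prepares (but does not send) a request. The endpoint host and the `Ocp-Apim-Subscription-Key` header follow the v7 reference; the key and configuration ID are placeholders:

```python
import urllib.request
from urllib.parse import urlencode

def make_search_request(subscription_key, custom_config_id, query):
    """Prepare a Custom Search API request carrying the customconfig
    query parameter and the subscription key header."""
    params = urlencode({"q": query, "customconfig": custom_config_id})
    url = f"https://api.cognitive.microsoft.com/bingcustomsearch/v7.0/search?{params}"
    return urllib.request.Request(
        url, headers={"Ocp-Apim-Subscription-Key": subscription_key})

req = make_search_request("YOUR-KEY", "YOUR-CONFIG-ID", "surfing")
```

Sending the request with `urllib.request.urlopen(req)` returns the JSON search response.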
--
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Quickstart: Call your Bing Custom Search endpoint](./call-endpoint-csharp.md)
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/quickstarts/client-libraries.md
- Title: "Quickstart: Use the Bing Custom Search client library"-
-description: The Custom Search API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests and getting back results.
---
-zone_pivot_groups: programming-languages-set-eleven
--- Previously updated : 02/27/2020---
-# Quickstart: Use the Bing Custom Search client library
----------
cognitive-services Search Your Custom View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/search-your-custom-view.md
- Title: Search a custom view - Bing Custom Search-
-description: After you've configured your custom search experience, you can test it from within the Bing Custom Search portal.
------- Previously updated : 02/03/2020---
-# Call your Bing Custom Search instance from the Portal
--
-After you've configured your custom search experience, you can test it from within the Bing Custom Search [portal](https://customsearch.ai).
-
-![a screenshot of the Bing custom search portal](media/portal-search-screen.png)
-## Create a search query
-
-After you've signed into the Bing Custom Search [portal](https://customsearch.ai), select your search instance and click the **Production** tab. Under **Endpoints**, select an API endpoint (for example, Web API). Your subscription determines what endpoints are shown.
-
-To create a search query, enter the parameter values for your endpoint. Note that the parameters displayed in the portal may change depending on the endpoint you choose. See the [Custom Search API reference](/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference#query-parameters) for more information. To change the subscription your search instance uses, add the appropriate subscription key, and update the appropriate market and/or language parameters.
-
-Some important parameters are below:
--
-|Parameter |Description |
-|||
-|Query | The search term to search for. Only available for the Web, Image, Video, and Autosuggest endpoints. |
-|Custom Configuration ID | The configuration ID of the selected Custom Search instance. This field is read only. |
-|Market | The market that results will originate from. Only available for the Web, Image, Video, and Hosted UI endpoints. |
-|Subscription Key | The subscription key to test with. You can select a key from the dropdown list or enter one manually. |
-
-Clicking **Additional Parameters** reveals the following parameters:
-
-|Parameter |Description |
-|||
-|Safe Search | A filter used to exclude webpages with adult content. Only available for the Web, Image, Video, and Hosted UI endpoints. Note that Bing Custom Video Search only supports two values: `moderate` and `strict`. |
-|User Interface Language | The language used for user interface strings. For example, if you enable images and videos in Hosted UI, the **Image** and **Video** tabs use the specified language. |
-|Count | The number of search results to return in the response. Available only for Web, Image, and Video endpoints. |
-|Offset | The number of search results to skip before returning results. Available only for Web, Image, and Video endpoints. |
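The Count and Offset parameters page through results together: to fetch page N, skip N times the page size. A small sketch of that arithmetic:

```python
def page_params(page, page_size=10):
    """Compute the count/offset pair for 0-based result page `page`.

    Offset is the number of results to skip, so page N starts after
    N * page_size results.
    """
    return {"count": page_size, "offset": page * page_size}

third_page = page_params(2)  # {'count': 10, 'offset': 20}
```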
-
-After you've specified all required options, click **Call** to view the JSON response in the right pane. If you select the Hosted UI endpoint, you can test the search experience in the bottom pane.
-
-## Change your Bing Custom Search subscription
-
-You can change the subscription associated with your Bing Custom Search instance without creating a new instance. To have API calls sent and charged to a new subscription, create a new Bing Custom Search resource in the Azure portal. Use the new subscription key in your API requests, along with your instance's custom configuration ID.
-
-## Next steps
-
-- [Call your custom view with C#](./call-endpoint-csharp.md)
-- [Call your custom view with Java](./call-endpoint-java.md)
-- [Call your custom view with NodeJs](./call-endpoint-nodejs.md)
-- [Call your custom view with Python](./call-endpoint-python.md)
-- [Call your custom view with the C# SDK](./quickstarts/client-libraries.md?pivots=programming-language-csharp)
cognitive-services Share Your Custom Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/share-your-custom-search.md
- Title: Share your custom search - Bing Custom Search-
-description: Easily allow collaborative editing and testing of your instance by sharing it with members of your team.
------- Previously updated : 03/04/2019---
-# Share your Custom Search instance
--
-You can easily allow collaborative editing and testing of your instance by sharing it with members of your team. You can share your instance with anyone using just their email address. To share an instance:
-
-- Sign in to [Custom Search](https://customsearch.ai).
-- Select a Custom Search instance.
-- Click the settings icon (appears as a gear).
-- Under **Share Your Instance**, enter the email address of the person to share your instance with and click **Share**.
-
-After you add an email address, it appears in the **Instance shared with** list. Repeat the process for each person you want to share your instance with.
-
-The person you share with doesn't need an existing Custom Search account to be added to the list, but they will need to sign up for Custom Search before they can make configuration changes. After you share an instance with someone, they'll see it in their list of Custom Search instances. Only one person can modify an instance at a time; if you try to modify an instance that someone else is editing, a warning is shown. An instance can be shared with a maximum of 10 users.
-
-## Stop sharing
-
-To stop sharing an instance with someone, use the remove icon to remove their email address from the list. This also removes the instance from their list of instances.
-
-## Next steps
--- [Configure your Custom Autosuggest experience](define-custom-suggestions.md)
cognitive-services Custom Search Web Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Custom-Search/tutorials/custom-search-web-page.md
- Title: "Tutorial: Create a custom search web page - Bing Custom Search"-
-description: Learn how to configure a custom Bing search instance and integrate it into a web page with this tutorial.
------- Previously updated : 03/05/2019---
-# Tutorial: Build a Custom Search web page
--
-Bing Custom Search enables you to create tailored search experiences for topics that you care about. For example, if you own a martial arts website that provides a search experience, you can specify the domains, sub-sites, and webpages that Bing searches. Your users see search results tailored to the content they care about instead of paging through general search results that may contain irrelevant content.
-
-This tutorial demonstrates how to configure a custom search instance and integrate it into a new web page.
-
-The tasks covered are:
-
-> [!div class="checklist"]
-> - Create a custom search instance
-> - Add active entries
-> - Add blocked entries
-> - Add pinned entries
-> - Integrate custom search into a web page
-
-## Prerequisites
-
-- To follow along with the tutorial, you need a subscription key for the Bing Custom Search API. To get a key, [Create a Bing Custom Search resource](https://portal.azure.com/#create/Microsoft.CognitiveServicesBingCustomSearch) in the Azure portal.
-- If you don't already have Visual Studio 2017 or later installed, you can download and use the **free** [Visual Studio 2019 Community Edition](https://www.visualstudio.com/downloads/).
-
-## Create a custom search instance
-
-To create a Bing Custom Search instance:
-
-1. Open an internet browser.
-
-2. Navigate to the custom search [portal](https://customsearch.ai).
-
-3. Sign in to the portal using a Microsoft account (MSA). If you don't have an MSA, click **Create a Microsoft account**. If it's your first time using the portal, it will ask for permissions to access your data. Click **Yes**.
-
-4. After signing in, click **New custom search**. In the **Create a new custom search instance** window, enter a name that's meaningful and describes the type of content the search returns. You can change the name at any time.
-
- ![Screenshot of the Create a new custom search instance box](../media/newCustomSrch.png)
-
-5. Click **OK**, then specify a URL and whether to include its subpages.
-
- ![Screenshot of URL definition page](../media/newCustomSrch1-a.png)
--
-## Add active entries
-
-To include results from specific websites or URLs, add them to the **Active** tab.
-
-1. On the **Configuration** page, click the **Active** tab and enter the URL of one or more websites you want to include in your search.
-
- ![Screenshot of the Definition Editor active tab](../media/customSrchEditor.png)
-
-2. To confirm that your instance returns results, enter a query in the preview pane on the right. Bing returns only results for public websites that it has indexed.
-
-## Add blocked entries
-
-To exclude results from specific websites or URLs, add them to the **Blocked** tab.
-
-1. On the **Configuration** page, click the **Blocked** tab and enter the URL of one or more websites you want to exclude from your search.
-
- ![Screenshot of the Definition Editor blocked tab](../media/blockedCustomSrch.png)
--
-2. To confirm that your instance doesn't return results from the blocked websites, enter a query in the preview pane on the right.
-
-## Add pinned entries
-
-To pin a specific webpage to the top of the search results, add the webpage and query term to the **Pinned** tab. The **Pinned** tab contains a list of webpage and query term pairs that specify the webpage that appears as the top result for a specific query. The webpage is pinned only if the user's query string matches the pin's query string based on the pin's match condition. Only indexed webpages are displayed in searches. For more information, see [Define your custom view](../define-your-custom-view.md#pin-slices-to-the-top-of-search-results).
-
-1. On the **Configuration** page, click the **Pinned** tab and enter the webpage and query term of the webpage that you want returned as the top result.
-
-2. By default, the user's query string must exactly match your pin's query string for Bing to return the webpage as the top result. To change the match condition, edit the pin (click the pencil icon), click **Exact** in the **Query match condition** column, and select the match condition that's right for your application.
-
- ![Screenshot of the Definition Editor pinned tab](../media/pinnedCustomSrch.png)
-
-3. To confirm that your instance returns the specified webpage as the top result, enter the query term you pinned in the preview pane on the right.
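The match-condition logic above can be sketched in a few lines. **Exact** is the documented default; the other condition names below are illustrative assumptions, not necessarily the portal's exact labels:

```python
def pin_matches(pin_query, user_query, condition="Exact"):
    """Sketch of pin matching: does the user's query satisfy the pin's
    query string under the given match condition? Comparison is
    case-insensitive here as a simplifying assumption."""
    p, u = pin_query.strip().lower(), user_query.strip().lower()
    if condition == "Exact":
        return u == p
    if condition == "Starts with":
        return u.startswith(p)
    if condition == "Ends with":
        return u.endswith(p)
    raise ValueError(f"unknown condition: {condition}")
```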
-
-## Configure Hosted UI
-
-Custom Search provides a hosted UI to render the JSON response of your custom search instance. To define your UI experience:
-
-1. Click the **Hosted UI** tab.
-
-2. Select a layout.
-
- ![Screenshot of the Hosted UI select layout step](./media/custom-search-hosted-ui-select-layout.png)
-
-3. Select a color theme.
-
- ![Screenshot of the Hosted UI select color theme](./media/custom-search-hosted-ui-select-color-theme.png)
-
- If you need to fine-tune the color theme to better integrate with your web app, click **Customize theme**. Not all color configurations apply to all layout themes. To change a color, enter the color's RGB HEX value (for example, `#366eb8`) in the corresponding text box. Or, click the color button and then click the shade that works for you. Always think about accessibility when selecting colors.
-
- ![Screenshot of the Hosted UI customize color theme](./media/custom-search-hosted-ui-customize-color-theme.png)
-
-
-4. Specify additional configuration options.
-
- ![Screenshot of the Hosted UI additional configurations step](./media/custom-search-hosted-ui-additional-configurations.png)
-
- To get advanced configurations, click **Show advanced configurations**. This adds configurations such as *Link target* to Web search options, *Enable filters* to Image and Video options, and *Search box text placeholder* to Miscellaneous options.
-
- ![Screenshot of the Hosted UI advanced configurations step](./media/custom-search-hosted-ui-advanced-configurations.png)
-
-5. Select your subscription keys from the dropdown lists. Or, you can enter the subscription key manually.
-
- ![Screenshot of the Hosted UI subscription key](./media/custom-search-hosted-ui-subscription-key.png)
--
-<a name="consuminghostedui"></a>
-## Consuming Hosted UI
-
-There are two ways to consume the hosted UI.
-
-- Option 1: Integrate the provided JavaScript snippet into your application.
-- Option 2: Use the HTML endpoint provided.
-
-The remainder of this tutorial illustrates **Option 1: JavaScript snippet**.
-
-## Set up your Visual Studio solution
-
-1. Open **Visual Studio** on your computer.
-
-2. On the **File** menu, select **New**, and then choose **Project**.
-
-3. In the **New Project** window, select **Visual C# / Web / ASP.NET Core Web Application**, name your project, and then click **OK**.
-
- ![Screenshot of new project window](./media/custom-search-new-project.png)
-
-4. In the **New ASP.NET Core Web Application** window, select **Web Application** and click **OK**.
-
- ![Screenshot of new webapp window](./media/custom-search-new-webapp.png)
-
-## Edit index.cshtml
-
-1. In the **Solution Explorer**, expand **Pages** and double-click **index.cshtml** to open the file.
-
- ![Screenshot of solution explorer with pages expanded and index.cshtml selected](./media/custom-search-visual-studio-webapp-solution-explorer-index.png)
-
-2. In index.cshtml, delete everything starting from line 7 and below.
-
- ```razor
- @page
- @model IndexModel
- @{
- ViewData["Title"] = "Home page";
- }
- ```
-
-3. Add a line break element and a div to act as a container.
-
- ```html
- @page
- @model IndexModel
- @{
- ViewData["Title"] = "Home page";
- }
- <br />
- <div id="customSearch"></div>
- ```
-
-4. In the **Hosted UI** page, scroll down to the section titled **Consuming the UI**. Click *Endpoints* to access the JavaScript snippet. You can also get to the snippet by clicking **Production** and then the **Hosted UI** tab.
-
-
-5. Paste the script element into the container you added.
-
- ``` html
- @page
- @model IndexModel
- @{
- ViewData["Title"] = "Home page";
- }
- <br />
- <div id="customSearch">
- <script type="text/javascript"
- id="bcs_js_snippet"
- src="https://ui.customsearch.ai/api/ux/rendering-js?customConfig=<YOUR-CUSTOM-CONFIG-ID>&market=en-US&safeSearch=Moderate&version=latest&q=">
- </script>
- </div>
- ```
-
-6. In the **Solution Explorer**, right-click **wwwroot** and click **View in Browser**.
-
- ![Screenshot of solution explorer selecting View in Browser from the wwwroot context menu](./media/custom-search-webapp-view-in-browser.png)
-
-Your new custom search web page should look similar to this:
-
-![Screenshot of custom search web page](./media/custom-search-webapp-browse-index.png)
-
-Performing a search renders results like this:
-
-![Screenshot of custom search results](./media/custom-search-webapp-results.png)
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Call Bing Custom Search endpoint (C#)](../call-endpoint-csharp.md)
cognitive-services Search For Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/concepts/search-for-entities.md
- Title: Search for entities with the Bing Entity Search API-
-description: Use the Bing Entity Search API to extract and search for entities and places from search queries.
------ Previously updated : 02/01/2019---
-# Search for entities with the Bing Entity Search API
--
-## Suggest search terms with the Bing Autosuggest API
-
-If you provide a search box where the user enters their search term, use the [Bing Autosuggest API](../../bing-autosuggest/get-suggested-search-terms.md) to improve the experience. The API returns suggested query strings based on partial search terms as the user types.
-
-After the user enters their search term, URL encode the term before setting the [q](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#query) query parameter. For example, if the user enters *Marcus Appel*, set `q` to *Marcus+Appel* or *Marcus%20Appel*.
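As a minimal sketch, the URL encoding can be done with the Java standard library (the class name here is just for illustration; `URLEncoder` produces the `+` form for spaces):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodeQuery {
    public static void main(String[] args) {
        String term = "Marcus Appel";
        // URLEncoder encodes a space as '+', which is valid in a query string.
        String q = URLEncoder.encode(term, StandardCharsets.UTF_8);
        System.out.println(q); // Marcus+Appel
    }
}
```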
-
-If the search term contains a spelling mistake, the search response includes a [QueryContext](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#querycontext) object. The object shows the original spelling and the corrected spelling that Bing used for the search.
-
-```json
-"queryContext": {
- "originalQuery": "hollo wrld",
- "alteredQuery": "hello world",
- "alterationOverrideQuery": "+hollo wrld",
- "adultIntent": false
-}
-```
-
-## The Bing Entity Search API response
-
-The API response contains a [SearchResponse](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#searchresponse) object. If Bing finds an entity or place that's relevant, the object includes the `entities` field, `places` field, or both. Otherwise, the response object does not include either field.
-> [!NOTE]
-> Entity responses support multiple markets, but the Places response supports only US Business locations.
-
-The `entities` field is an [EntityAnswer](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference) object that contains a list of [Entity](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#entity) objects (see the `value` field). The list may contain a single dominant entity, multiple disambiguation entities, or both.
-
-A dominant entity is returned when Bing believes it to be the only entity that satisfies the request (there is no ambiguity as to which entity satisfies the request). If multiple entities could satisfy the request, the list contains more than one disambiguation entity. For example, if the request uses the generic title of a movie franchise, the list likely contains disambiguation entities. But, if the request specifies a specific title from the franchise, the list likely contains a single dominant entity.
-
-Entities include well-known personalities such as singers, actors, athletes, and models; places and landmarks such as Mount Rainier or the Lincoln Memorial; and things such as a banana, goldendoodle, book, or movie title. The [entityPresentationInfo](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#entitypresentationinfo) field contains hints that identify the entity's type, for example, whether it's a person, movie, animal, or attraction. For a list of possible types, see [Entity Types](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#entity-types).
-
-```json
-"entityPresentationInfo": {
- "entityScenario": "DominantEntity",
- "entityTypeHints": ["Attraction"],
- "entityTypeDisplayHint": "Mountain"
-}, ...
-```
-
-The following shows a response that includes a dominant and disambiguation entity.
-
-```json
-{
- "_type": "SearchResponse",
- "queryContext": {
- "originalQuery": "Mount Rainier"
- },
- "entities": {
- "value": [{
- "contractualRules": [{
- "_type": "ContractualRules/LicenseAttribution",
- "targetPropertyName": "description",
- "mustBeCloseToContent": true,
- "license": {
- "name": "CC-BY-SA",
- "url": "https://creativecommons.org/licenses/by-sa/3.0/"
- },
- "licenseNotice": "Text under CC-BY-SA license"
- },
- {
- "_type": "ContractualRules/LinkAttribution",
- "targetPropertyName": "description",
- "mustBeCloseToContent": true,
- "text": "contoso.com",
- "url": "http://contoso.com/mount_rainier"
- },
- {
- "_type": "ContractualRules/MediaAttribution",
- "targetPropertyName": "image",
- "mustBeCloseToContent": true,
- "url": "http://contoso.com/mount-rainier"
- }],
- "webSearchUrl": "https://www.bing.com/search?q=Mount%20Rainier...",
- "name": "Mount Rainier",
- "url": "http://www.northwindtraders.com/",
- "image": {
- "name": "Mount Rainier",
- "thumbnailUrl": "https://www.bing.com/th?id=A4ae343983daa4...",
- "provider": [{
- "_type": "Organization",
- "url": "http://contoso.com/mount_rainier"
- }],
- "hostPageUrl": "http://contoso.com/commons/7/72/mount_rain...",
- "width": 110,
- "height": 110
- },
- "description": "Mount Rainier is 14,411 ft tall and the highest mountain...",
- "entityPresentationInfo": {
- "entityScenario": "DominantEntity",
- "entityTypeHints": ["Attraction"]
- },
- "bingId": "38b9431e-cf91-93be-0584-c42a3ecbfdc7"
- },
- {
- "contractualRules": [{
- "_type": "ContractualRules/MediaAttribution",
- "targetPropertyName": "image",
- "mustBeCloseToContent": true,
- "url": "http://contoso.com/mount_rainier_national_park"
- }],
- "webSearchUrl": "https://www.bing.com/search?q=Mount%20Rainier%20National...",
- "name": "Mount Rainier National Park",
- "url": "http://worldwideimporters.com/",
- "image": {
- "name": "Mount Rainier National Park",
- "thumbnailUrl": "https://www.bing.com/th?id=A91bdc5a1b648a695a39...",
- "provider": [{
- "_type": "Organization",
- "url": "http://contoso.com/mount_rainier_national_park"
- }],
- "hostPageUrl": "http://contoso.com/en/7/7a...",
- "width": 50,
- "height": 50
- },
- "description": "Mount Rainier National Park is a United States National Park...",
- "entityPresentationInfo": {
- "entityScenario": "DisambiguationItem",
- "entityTypeHints": ["Organization"]
- },
- "bingId": "29d4b681-227a-3924-7bb1-8a54e8666b8c"
- }]
- }
-}
-```
-
-The entity includes a `name`, `description`, and `image` field. When you display these fields in your user experience, you must attribute them. The `contractualRules` field contains a list of attributions that you must apply. The contractual rule identifies the field that the attribution applies to. For information about applying attribution, see [Attribution](#data-attribution).
-
-```json
-"contractualRules": [{
- "_type": "ContractualRules/LicenseAttribution",
- "targetPropertyName": "description",
- "mustBeCloseToContent": true,
- "license": {
- "name": "CC-BY-SA",
- "url": "https://creativecommons.org/licenses/by-sa/3.0/"
- },
- "licenseNotice": "Text under CC-BY-SA license"
-},
-{
- "_type": "ContractualRules/LinkAttribution",
- "targetPropertyName": "description",
- "mustBeCloseToContent": true,
- "text": "contoso.com",
- "url": "http://contoso.com/wiki/Mount_Rainier"
-},
-{
- "_type": "ContractualRules/MediaAttribution",
- "targetPropertyName": "image",
- "mustBeCloseToContent": true,
- "url": "http://contoso.com/wiki/Mount_Rainier"
-}], ...
-```
-
-When you display the entity information (name, description, and image), you must also use the URL in the `webSearchUrl` field to link to the Bing search results page that contains the entity.
-
-## Find places
-
-The `places` field is a [LocalEntityAnswer](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference) object that contains a list of [Place](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#place) objects (see [Entity Types](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#entity-types) for more information). The list contains one or more local entities that satisfy the request.
-
-Places include restaurants, hotels, and other local businesses. The [entityPresentationInfo](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#entitypresentationinfo) field contains hints that identify the local entity's type, such as Place, LocalBusiness, and Restaurant. Each successive hint in the array narrows the entity's type. For a list of possible types, see [Entity Types](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#entity-types).
-
-```json
-"entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": ["Place",
- "LocalBusiness",
- "Restaurant"]
-}, ...
-```
-> [!NOTE]
-> Entity responses support multiple markets, but the Places response supports only US Business locations.
-
-Location-aware entity queries such as *restaurant near me* require the user's location to provide accurate results. Your requests should always use the X-Search-Location and X-MSEdge-ClientIP headers to specify the user's location. If Bing thinks the query would benefit from the user's location, it sets the `askUserForLocation` field of [QueryContext](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#querycontext) to **true**.
-
-```json
-{
- "_type": "SearchResponse",
- "queryContext": {
- "originalQuery": "Sinful Bakery and Cafe",
- "askUserForLocation": true
- },
- ...
-}
-```
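As a sketch, the `X-Search-Location` value takes the documented form `lat:<latitude>;long:<longitude>;re:<radius in meters>`, and can be assembled with a small helper (the class and method names are hypothetical):

```java
public class LocationHeader {
    // Builds the X-Search-Location header value: a semicolon-delimited list of
    // key:value pairs, where lat/long are degrees and re is the radius of
    // accuracy in meters.
    static String searchLocation(double latitude, double longitude, int radiusMeters) {
        return "lat:" + latitude + ";long:" + longitude + ";re:" + radiusMeters;
    }

    public static void main(String[] args) {
        // Downtown Seattle, accurate to 100 meters.
        System.out.println(searchLocation(47.60357, -122.3295, 100));
        // lat:47.60357;long:-122.3295;re:100
    }
}
```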
-
-A place result includes the place's name, address, telephone number, and URL to the entity's website. When you display the entity information, you must also use the URL in the `webSearchUrl` field to link to the Bing search results page that contains the entity.
-
-```json
-"places": {
- "value": [{
- "_type": "Restaurant",
- "webSearchUrl": "https://www.bing.com/search?q=Sinful%20Bakery...",
- "name": "Liberty's Delightful Sinful Bakery & Cafe",
- "url": "http://libertysdelightfulsinfulbakeryandcafe.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": ["Place",
- "LocalBusiness",
- "Restaurant"]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98112",
- "addressCountry": "US",
- "neighborhood": "Madison Park"
- },
- "telephone": "(800) 555-1212"
- }]
-}
-```
-
-> [!NOTE]
-> You, or a third party on your behalf, may not use, retain, store, cache, share, or distribute any data from the Entities API for the purpose of testing, developing, training, distributing or making available any non-Microsoft service or feature.
-
-## Data attribution
-
-Bing Entity API responses contain information owned by third parties. You are responsible for ensuring that your use is appropriate, for example by complying with any Creative Commons license that your user experience may rely on.
-
-If an answer or result includes the `contractualRules`, `attributions`, or `provider` fields, you must attribute the data. If the answer does not include any of these fields, no attribution is required. If the answer includes the `contractualRules` field and the `attributions` and/or `provider` fields, you must use the contractual rules to attribute the data.
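That precedence can be summarized in a small helper (a simplified sketch; the boolean flags stand in for checking whether the parsed answer contains each field):

```java
public class AttributionCheck {
    // Decides which fields to use for attribution. Contractual rules always
    // take precedence over the attributions and provider fields.
    static String attributionSource(boolean hasContractualRules,
                                    boolean hasAttributions,
                                    boolean hasProvider) {
        if (hasContractualRules) {
            return "contractualRules";
        }
        if (hasAttributions || hasProvider) {
            return "attributions/provider";
        }
        return "none"; // no attribution required
    }

    public static void main(String[] args) {
        // contractualRules present alongside a provider field: the rules win.
        System.out.println(attributionSource(true, false, true));
        // No attribution fields at all: no attribution required.
        System.out.println(attributionSource(false, false, false));
    }
}
```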
-
-The following example shows an entity that includes a MediaAttribution contractual rule and an Image that includes a `provider` field. The MediaAttribution rule identifies the image as the target of the rule, so you'd ignore the image's `provider` field and instead use the MediaAttribution rule to provide attribution.
-
-```json
-"value": [{
- "contractualRules": [
- ...
- {
- "_type": "ContractualRules/MediaAttribution",
- "targetPropertyName": "image",
- "mustBeCloseToContent": true,
- "url": "http://contoso.com/mount_rainier"
- }
- ],
- ...
- "image": {
- "name": "Mount Rainier",
- "thumbnailUrl": "https://www.bing.com/th?id=A46378861201...",
- "provider": [{
- "_type": "Organization",
- "url": "http://contoso.com/mount_rainier"
- }],
- "hostPageUrl": "http://www.graphicdesigninstitute.com/Uploaded...",
- "width": 110,
- "height": 110
- },
- ...
-}]
-```
-
-If a contractual rule includes the `targetPropertyName` field, the rule applies only to the targeted field. Otherwise, the rule applies to the parent object that contains the `contractualRules` field.
-
-In the following example, the `LinkAttribution` rule includes the `targetPropertyName` field, so the rule applies to the `description` field. For rules that apply to specific fields, you must include a line immediately following the targeted data that contains a hyperlink to the provider's website. For example, to attribute the description, include a line immediately following the description text that contains a hyperlink to the data on the provider's website; in this case, create a link to contoso.com.
-
-```json
-"entities": {
- "value": [{
- ...
- "description": "Marcus Appel is a former American....",
- ...
- "contractualRules": [{
- "_type": "ContractualRules/LinkAttribution",
- "targetPropertyName": "description",
- "mustBeCloseToContent": true,
- "text": "contoso.com",
- "url": "http://contoso.com/cr?IG=B8AD73..."
- },
- ...
-
-```
-
-### License attribution
-
-If the list of contractual rules includes a [LicenseAttribution](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#licenseattribution) rule, you must display the notice on the line immediately following the content that the license applies to. The `LicenseAttribution` rule uses the `targetPropertyName` field to identify the property that the license applies to.
-
-The following shows an example that includes a `LicenseAttribution` rule.
-
-![License attribution](../media/cognitive-services-bing-entities-api/licenseattribution.png)
-
-The license notice that you display must include a hyperlink to the website that contains information about the license. Typically, you make the name of the license a hyperlink. For example, if the notice is **Text under CC-BY-SA license** and CC-BY-SA is the name of the license, you would make CC-BY-SA a hyperlink.
-
-### Link and text attribution
-
-The [LinkAttribution](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#linkattribution) and [TextAttribution](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#textattribution) rules are typically used to identify the provider of the data. The `targetPropertyName` field identifies the field that the rule applies to.
-
-To attribute the providers, include a line immediately following the content that the attributions apply to (for example, the targeted field). The line should be clearly labeled to indicate that the providers are the source of the data. For example, "Data from: contoso.com". For `LinkAttribution` rules, you must create a hyperlink to the provider's website.
-
-The following shows an example that includes `LinkAttribution` and `TextAttribution` rules.
-
-![Link text attribution](../media/cognitive-services-bing-entities-api/linktextattribution.png)
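A sketch of building such an attribution line as HTML (the method name and example values are hypothetical; a real rule's `text` and `url` come from the API response):

```java
public class LinkAttributionLine {
    // Builds the "Data from:" line for a LinkAttribution rule, hyperlinking
    // the rule's text to the rule's url. HTML is one possible output format.
    static String attributionHtml(String text, String url) {
        return "Data from: <a href=\"" + url + "\">" + text + "</a>";
    }

    public static void main(String[] args) {
        System.out.println(attributionHtml("contoso.com", "http://contoso.com/"));
        // Data from: <a href="http://contoso.com/">contoso.com</a>
    }
}
```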
-
-### Media attribution
-
-If the entity includes an image and you display it, you must provide a click-through link to the provider's website. If the entity includes a [MediaAttribution](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#mediaattribution) rule, use the rule's URL to create the click-through link. Otherwise, use the URL included in the image's `provider` field to create the click-through link.
-
-The following shows an example that includes an image's `provider` field and contractual rules. Because the example includes the contractual rule, you ignore the image's `provider` field and apply the `MediaAttribution` rule.
-
-![Media attribution](../media/cognitive-services-bing-entities-api/mediaattribution.png)
-
-### Search or search-like experience
-
-Just like with the Bing Web Search API, the Bing Entity Search API may only be used as a result of a direct user query or search, or as a result of an action within an app or experience that logically can be interpreted as a user's search request. For illustration purposes, the following are some examples of acceptable search or search-like experiences.
-
-- User enters a query directly into a search box in an app
-- User selects specific text or image and requests "more information" or "additional information"
-- User asks a search bot about a particular topic
-- User dwells on a particular object or entity in a visual search type scenario
-If you are not sure whether your experience can be considered a search-like experience, we recommend that you check with Microsoft.
-
-## Throttling requests
--
-## Next steps
-
-* Try a [Quickstart](../quickstarts/csharp.md) to get started searching for entities with the Bing Entity Search API.
cognitive-services Sending Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/concepts/sending-requests.md
- Title: "Sending search requests to the Bing Entity Search API"
-description: The Bing Entity Search API sends a search query to Bing and gets results that include entities and places.
- Previously updated: 06/27/2019
-# Sending search requests to the Bing Entity Search API
--
-The Bing Entity Search API sends a search query to Bing and gets results that include entities and places. Place results include restaurants, hotels, or other local businesses. For places, the query can specify the name of the local business or it can ask for a list (for example, restaurants near me). Entity results include persons, places, or things. Places in this context are tourist attractions, states, countries/regions, and so on.
-
-## The endpoint
-
-To get entity and place search results, send a GET request to the following endpoint:
-
-```
-https://api.cognitive.microsoft.com/bing/v7.0/entities
-```
-
-Requests must use the HTTPS protocol.
-
-We recommend that all requests originate from a server. Distributing the key as part of a client application provides more opportunity for a malicious third party to access it. Also, making calls from a server provides a single upgrade point for future versions of the API.
-
-## Specifying query parameters and headers
-
-The request must specify the [q](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#query) query parameter, which contains the user's search term. The request must also specify the [mkt](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#mkt) query parameter, which identifies the market where you want the results to come from. For a list of optional query parameters, see [Query Parameters](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#query-parameters). URL encode all query parameters.
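For example, a minimal sketch of building the request URL with the required `q` and `mkt` parameters, URL encoded (class and method names are just for illustration):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EntityRequestUrl {
    static final String ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/entities";

    // Appends the required q and mkt query parameters, URL encoding each value.
    static String requestUrl(String query, String market) {
        return ENDPOINT
            + "?q=" + URLEncoder.encode(query, StandardCharsets.UTF_8)
            + "&mkt=" + URLEncoder.encode(market, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(requestUrl("mount rainier", "en-US"));
        // https://api.cognitive.microsoft.com/bing/v7.0/entities?q=mount+rainier&mkt=en-US
    }
}
```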
-
-The request must specify the [Ocp-Apim-Subscription-Key](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#subscriptionkey) header. Although optional, you are encouraged to also specify the following headers:
-
-- [User-Agent](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#useragent)
-- [X-MSEdge-ClientID](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#clientid)
-- [X-MSEdge-ClientIP](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#clientip)
-- [X-Search-Location](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#location)
-The client IP and location headers are important for returning location-aware content.
-
-For a list of all request and response headers, see [Headers](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#headers).
-
-## The request
-
-The following shows an entities request that includes all the suggested query parameters and headers.
-
-```
-GET https://api.cognitive.microsoft.com/bing/v7.0/entities?q=mount+rainier&mkt=en-us HTTP/1.1
-Ocp-Apim-Subscription-Key: 123456789ABCDE
-User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows Phone 8.0; Trident/6.0; IEMobile/10.0; ARM; Touch; NOKIA; Lumia 822)
-X-MSEdge-ClientIP: 999.999.999.999
-X-Search-Location: lat:47.60357;long:-122.3295;re:100
-X-MSEdge-ClientID: <blobFromPriorResponseGoesHere>
-Host: api.cognitive.microsoft.com
-```
-
-If it's your first time calling any of the Bing APIs, don't include the client ID header. Only include the client ID if you've previously called a Bing API and Bing returned a client ID for the user and device combination.
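That logic can be sketched as a small helper that only sends the header once a client ID has been received from a prior response (class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

public class ClientIdTracker {
    // The client ID Bing returned for this user/device, or null before the first call.
    private String clientId;

    // Builds the headers for the next request; X-MSEdge-ClientID is only
    // included once Bing has issued a client ID.
    Map<String, String> requestHeaders(String subscriptionKey) {
        Map<String, String> headers = new HashMap<>();
        headers.put("Ocp-Apim-Subscription-Key", subscriptionKey);
        if (clientId != null) {
            headers.put("X-MSEdge-ClientID", clientId);
        }
        return headers;
    }

    // Call this with the X-MSEdge-ClientID header value from each response.
    void recordResponseClientId(String responseClientId) {
        if (responseClientId != null) {
            clientId = responseClientId;
        }
    }

    public static void main(String[] args) {
        ClientIdTracker tracker = new ClientIdTracker();
        // First call: no client ID yet, so the header is omitted.
        System.out.println(tracker.requestHeaders("KEY").containsKey("X-MSEdge-ClientID"));
        tracker.recordResponseClientId("1C3352B306E669780D58D607B96869");
        // Subsequent calls include the ID Bing returned.
        System.out.println(tracker.requestHeaders("KEY").get("X-MSEdge-ClientID"));
    }
}
```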
-
-## The response
-
-The following shows the response to the previous request. The example also shows the Bing-specific response headers. For information about the response object, see [SearchResponse](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#searchresponse).
--
-```json
-BingAPIs-TraceId: 76DD2C2549B94F9FB55B4BD6FEB6AC
-X-MSEdge-ClientID: 1C3352B306E669780D58D607B96869
-BingAPIs-Market: en-US
-
-{
- "_type" : "SearchResponse",
- "queryContext" : {
- "originalQuery" : "mount rainier"
- },
- "entities" : {
- "queryScenario" : "DominantEntity",
- "value" : [{
- "contractualRules" : [{
- "_type" : "ContractualRules\/LicenseAttribution",
- "targetPropertyName" : "description",
- "mustBeCloseToContent" : true,
- "license" : {
- "name" : "CC-BY-SA",
- "url" : "http:\/\/creativecommons.org\/licenses\/by-sa\/3.0\/"
- },
- "licenseNotice" : "Text under CC-BY-SA license"
- },
- {
- "_type" : "ContractualRules\/LinkAttribution",
- "targetPropertyName" : "description",
- "mustBeCloseToContent" : true,
- "text" : "en.wikipedia.org",
- "url" : "http:\/\/en.wikipedia.org\/wiki\/Mount_Rainier"
- },
- {
- "_type" : "ContractualRules\/MediaAttribution",
- "targetPropertyName" : "image",
- "mustBeCloseToContent" : true,
- "url" : "http:\/\/en.wikipedia.org\/wiki\/Mount_Rainier"
- }],
- "webSearchUrl" : "https:\/\/www.bing.com\/search?q=Mount%20Rainier...",
- "name" : "Mount Rainier",
- "image" : {
- "name" : "Mount Rainier",
- "thumbnailUrl" : "https:\/\/www.bing.com\/th?id=A21890c0e1f...",
- "provider" : [{
- "_type" : "Organization",
- "url" : "http:\/\/en.wikipedia.org\/wiki\/Mount_Rainier"
- }],
- "hostPageUrl" : "http:\/\/upload.wikimedia.org\/wikipedia...",
- "width" : 110,
- "height" : 110
- },
- "description" : "Mount Rainier, Mount Tacoma, or Mount Tahoma is the highest...",
- "entityPresentationInfo" : {
- "entityScenario" : "DominantEntity",
- "entityTypeHints" : ["Attraction"],
- "entityTypeDisplayHint" : "Mountain"
- },
- "bingId" : "9ae3e6ca-81ea-6fa1-ffa0-42e1d78906"
- }]
- }
-}
-```
--
-## Next steps
-
-* [Searching for entities with the Bing Entity API](search-for-entities.md)
-* [Bing API Use and Display Requirements](../../bing-web-search/use-display-requirements.md)
cognitive-services Entity Search Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/entity-search-endpoint.md
- Title: The Bing Entity Search API endpoint
-description: The Bing Entity Search API has one endpoint that returns entities from the Web based on a query. These search results are returned in JSON.
- Previously updated: 02/01/2019
-# Bing Entity Search API endpoint
---
-The Bing Entity Search API has one endpoint that returns entities from the Web based on a query. These search results are returned in JSON.
-
-## Get entity results from the endpoint
-
-To get entity results using the **Bing API**, send a `GET` request to the following endpoint. Use [headers](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#headers) and [query parameters](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference#query-parameters) to customize your search request. Search requests can be sent using the `?q=` parameter.
-
-```cURL
- GET https://api.cognitive.microsoft.com/bing/v7.0/entities
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [What is the Bing Entity Search API?](overview.md)
-
-## See also
-
-For more information about headers, parameters, market codes, response objects, errors and more, see the [Bing Entity Search API v7](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference) reference article.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/overview.md
- Title: What is the Bing Entity Search API?
-description: Learn details about the Bing Entity Search API and how to extract and search for entities and places from search queries.
- Previously updated: 12/18/2019
-# What is Bing Entity Search API?
--
-The Bing Entity Search API sends a search query to Bing and gets results that include entities and places. Place results include restaurants, hotels, or other local businesses. Bing returns places if the query specifies the name of the local business or asks for a type of business (for example, restaurants near me). Bing returns entities if the query specifies well-known people, places (tourist attractions, states, countries/regions, etc.), or things.
-
-|Feature |Description |
-|||
-|[Real-time search suggestions](concepts/search-for-entities.md#suggest-search-terms-with-the-bing-autosuggest-api) | Provide search suggestions that can be displayed as a dropdown list as your users type. |
-| [Entity disambiguation](concepts/search-for-entities.md#the-bing-entity-search-api-response) | Get multiple entities for queries with multiple possible meanings. |
-| [Find places](concepts/search-for-entities.md#find-places) | Search for and return information on local businesses and entities. |
-
-## Workflow
-
-The Bing Entity Search API is a RESTful web service, making it easy to call from any programming language that can make HTTP requests and parse JSON. You can use the service through either the REST API or the SDK.
-
-1. Create an [Azure AI services API account](../cognitive-services-apis-create-account.md) with access to the Bing Search APIs. If you don't have an Azure subscription, you can [create an account](https://azure.microsoft.com/free/cognitive-services/) for free.
-2. Send a request to the API, with a valid search query.
-3. Process the API response by parsing the returned JSON message.
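As an illustration of step 3, here is a dependency-free sketch that pulls entity names out of the JSON text with a regular expression (a real application should use a proper JSON parser such as Gson; the class name and sample fragment are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class EntityNames {
    // Extracts "name" values from a response fragment. This regex approach is
    // only to avoid a dependency for illustration; prefer a JSON library.
    static List<String> names(String json) {
        List<String> result = new ArrayList<>();
        Matcher m = Pattern.compile("\"name\"\\s*:\\s*\"([^\"]*)\"").matcher(json);
        while (m.find()) {
            result.add(m.group(1));
        }
        return result;
    }

    public static void main(String[] args) {
        String fragment = "{\"entities\":{\"value\":[{\"name\":\"Mount Rainier\"}]}}";
        System.out.println(names(fragment)); // [Mount Rainier]
    }
}
```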
-
-## Next steps
-
-* Try the [interactive demo](https://azure.microsoft.com/services/cognitive-services/Bing-entity-search-api/) for the Bing Entity Search API.
-* To get started quickly with your first request, try a [Quickstart](quickstarts/csharp.md).
-* The [Bing Entity Search API v7](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference) reference section.
-* The [Bing Use and Display Requirements](../bing-web-search/use-display-requirements.md) specify acceptable uses of the content and information gained through the Bing search APIs.
-* Visit the [Bing Search API hub page](../bing-web-search/overview.md) to explore the other available APIs.
cognitive-services Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/client-libraries.md
- Title: 'Quickstart: Use the Bing Entity Search client library'
-description: The Entity Search API offers client libraries that make it easy to integrate search capabilities into your applications. Use this quickstart to start sending search requests, and get back results.
---
-zone_pivot_groups: programming-languages-set-ten
- Previously updated: 03/06/2020
-# Quickstart: Use the Bing Entity Search client library
-------------
cognitive-services Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/csharp.md
- Title: "Quickstart: Send a search request to the REST API using C# - Bing Entity Search"
-description: "Use this quickstart to send a request to the Bing Entity Search REST API using C#, and receive a JSON response."
- Previously updated: 10/19/2020
-# Quickstart: Send a search request to the Bing Entity Search REST API using C#
--
-Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple C# application sends an entity search query to the API, and displays the response. The source code for this application is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/dotnet/Search/BingEntitySearchv7.cs).
-
-Although this application is written in C#, the API is a RESTful Web service compatible with most programming languages.
--
-## Prerequisites
-- Any edition of [Visual Studio 2017 or later](https://www.visualstudio.com/downloads/).
-- Or if you're using Linux or macOS, you can follow this quickstart using [Visual Studio Code](https://code.visualstudio.com/) and [.NET Core](/dotnet/core/install/macos).
-- [Free Azure account](https://azure.microsoft.com/free/dotnet)
-## Create and initialize a project
-
-1. Create a new C# console solution in Visual Studio.
-1. Add the [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json/) NuGet package.
- 1. Right-click your project in **Solution Explorer**.
- 2. Select **Manage NuGet Packages**.
- 3. Search for and select *Newtonsoft.Json*, and then install the package.
-1. Then, add the following namespaces into the main code file:
-
- ```csharp
- using Newtonsoft.Json;
- using System;
- using System.Net.Http;
- using System.Text;
- ```
-
-2. Create a new class, and add variables for the API endpoint, your subscription key, and the query you want to search. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```csharp
- namespace EntitySearchSample
- {
- class Program
- {
- static string host = "https://api.bing.microsoft.com";
- static string path = "/v7.0/search";
-
- static string market = "en-US";
-
- // NOTE: Replace this example key with a valid subscription key.
- static string key = "ENTER YOUR KEY HERE";
-
- static string query = "italian restaurant near me";
- //...
- }
- }
- ```
-
-## Send a request and get the API response
-
-1. Within the class, create a function called `Search()`. Within this function, create a new `HttpClient` object, and add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
-2. Construct the URI for your request by combining the host and path. Then, add your market and URL-encode your query.
-
-3. Await `client.GetAsync()` to get an HTTP response, and then store the JSON response by awaiting `ReadAsStringAsync()`.
-
-4. Format the JSON string with `JsonConvert.DeserializeObject()` and print it to the console.
-
- ```csharp
- async static void Search()
- {
- //...
- HttpClient client = new HttpClient();
- client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);
-
- string uri = host + path + "?mkt=" + market + "&q=" + System.Net.WebUtility.UrlEncode(query);
-
- HttpResponseMessage response = await client.GetAsync(uri);
-
- string contentString = await response.Content.ReadAsStringAsync();
- dynamic parsedJson = JsonConvert.DeserializeObject(contentString);
- Console.WriteLine(parsedJson);
- }
- ```
-
-5. In the `Main()` method of your application, call the `Search()` function.
-
- ```csharp
- static void Main(string[] args)
- {
- Search();
- Console.ReadLine();
- }
- ```
--
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "SearchResponse",
- "queryContext": {
- "originalQuery": "italian restaurant near me",
- "askUserForLocation": true
- },
- "places": {
- "value": [
- {
- "_type": "LocalBusiness",
- "webSearchUrl": "https://www.bing.com/search?q=sinful+bakery&filters=local...",
- "name": "Liberty's Delightful Sinful Bakery & Cafe",
- "url": "https://www.contoso.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98112",
- "addressCountry": "US",
- "neighborhood": "Madison Park"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- {
- "_type": "Restaurant",
- "webSearchUrl": "https://www.bing.com/search?q=Pickles+and+Preserves...",
- "name": "Munson's Pickles and Preserves Farm",
- "url": "https://www.princi.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness",
- "Restaurant"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98101",
- "addressCountry": "US",
- "neighborhood": "Capitol Hill"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- ]
- }
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a single-page web app](../tutorial-bing-entities-search-single-page-app.md)
-
-* [What is the Bing Entity Search API?](../overview.md)
-* [Bing Entity Search API reference](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference).
cognitive-services Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/java.md
- Title: "Quickstart: Send a search request to the REST API using Java - Bing Entity Search"
-description: Use this quickstart to send a request to the Bing Entity Search REST API using Java, and receive a JSON response.
- Previously updated: 05/08/2020
-# Quickstart: Send a search request to the Bing Entity Search REST API using Java
--
-Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple Java application sends an entity search query to the API, and displays the response.
-
-Although this application is written in Java, the API is a RESTful Web service compatible with most programming languages.
-
-## Prerequisites
-
-* The [Java Development Kit (JDK)](https://www.oracle.com/technetwork/java/javase/downloads/).
-* The [Gson library](https://github.com/google/gson).
---
-## Create and initialize a project
-
-1. Create a new Java project in your favorite IDE or editor, and import the following libraries:
-
- ```java
- import java.io.*;
- import java.net.*;
- import java.util.*;
- import javax.net.ssl.HttpsURLConnection;
- import com.google.gson.Gson;
- import com.google.gson.GsonBuilder;
- import com.google.gson.JsonObject;
- import com.google.gson.JsonParser;
- ```
-
-2. In a new class, create variables for the API endpoint, your subscription key, and a search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```java
- public class EntitySearch {
-
- static String subscriptionKey = "ENTER KEY HERE";
-
- static String host = "https://api.bing.microsoft.com";
- static String path = "/v7.0/search";
-
- static String mkt = "en-US";
- static String query = "italian restaurant near me";
- //...
-
- ```
-
-## Construct a search request string
-
-1. Create a function called `search()` that returns a JSON `String`. URL-encode your search query, and add it to a parameters string with `&q=`. Add your market to the parameters string with `?mkt=`.
-
-2. Create a URL object with your host, path, and parameters strings.
-
- ```java
- //...
- public static String search () throws Exception {
- String encoded_query = URLEncoder.encode (query, "UTF-8");
- String params = "?mkt=" + mkt + "&q=" + encoded_query;
- URL url = new URL (host + path + params);
- //...
- ```
-
-## Send a search request and receive a response
-
-1. In the `search()` function created above, create a new `HttpsURLConnection` object with `url.openConnection()`. Set the request method to `GET`, and add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
- ```java
- //...
- HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();
- connection.setRequestMethod("GET");
- connection.setRequestProperty("Ocp-Apim-Subscription-Key", subscriptionKey);
- connection.setDoOutput(true);
- //...
- ```
-
-2. Create a new `StringBuilder`. Use a new `InputStreamReader` as a parameter when instantiating `BufferedReader` to read the API response.
-
- ```java
- //...
- StringBuilder response = new StringBuilder ();
- BufferedReader in = new BufferedReader(
- new InputStreamReader(connection.getInputStream()));
- //...
- ```
-
-3. Create a `String` to hold each line read from the `BufferedReader`. Loop through the reader, appending each line to the `StringBuilder` response. Then close the reader and return the response string.
-
- ```java
- String line;
-
- while ((line = in.readLine()) != null) {
- response.append(line);
- }
- in.close();
-
- return response.toString();
- ```
-
-## Format the JSON response
-
-1. Create a new function called `prettify` to format the JSON response. Create a new `JsonParser`, call `parse()` on the JSON text, and then store it as a JSON object.
-
-2. Use the Gson library to create a new `GsonBuilder()`, use `setPrettyPrinting().create()` to format the JSON, and then return it.
-
- ```java
- //...
- public static String prettify (String json_text) {
- JsonParser parser = new JsonParser();
- JsonObject json = parser.parse(json_text).getAsJsonObject();
- Gson gson = new GsonBuilder().setPrettyPrinting().create();
- return gson.toJson(json);
- }
- //...
- ```
-
-## Call the search function
-
-From the main method of your project, call `search()`, and use `prettify()` to format the text.
-
- ```java
- public static void main(String[] args) {
- try {
- String response = search ();
- System.out.println (prettify (response));
- }
- catch (Exception e) {
- System.out.println (e);
- }
- }
- ```
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "SearchResponse",
- "queryContext": {
- "originalQuery": "italian restaurant near me",
- "askUserForLocation": true
- },
- "places": {
- "value": [
- {
- "_type": "LocalBusiness",
- "webSearchUrl": "https://www.bing.com/search?q=sinful+bakery&filters=local...",
- "name": "Liberty's Delightful Sinful Bakery & Cafe",
- "url": "https://www.contoso.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98112",
- "addressCountry": "US",
- "neighborhood": "Madison Park"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- {
- "_type": "Restaurant",
- "webSearchUrl": "https://www.bing.com/search?q=Pickles+and+Preserves...",
- "name": "Munson's Pickles and Preserves Farm",
- "url": "https://www.princi.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness",
- "Restaurant"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98101",
- "addressCountry": "US",
- "neighborhood": "Capitol Hill"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- ]
- }
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a single-page web app](../tutorial-bing-entities-search-single-page-app.md)
-
-* [What is the Bing Entity Search API?](../overview.md)
-* [Bing Entity Search API reference](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference).
cognitive-services Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/nodejs.md
- Title: "Quickstart: Send a search request to the REST API using Node.js - Bing Entity Search"
-description: Use this quickstart to send a request to the Bing Entity Search REST API using Node.js and receive a JSON response.
- Previously updated: 05/08/2020
-# Quickstart: Send a search request to the Bing Entity Search REST API using Node.js
--
-Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple JavaScript application sends an entity search query to the API, and displays the response. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/nodejs/Search/BingEntitySearchv7.js).
-
-Although this application is written in JavaScript, the API is a RESTful Web service compatible with most programming languages.
-
-## Prerequisites
-
-* The latest version of [Node.js](https://nodejs.org/en/download/).
-
-* The [JavaScript Request Library](https://github.com/request/request).
--
-## Create and initialize the application
-
-1. Create a new JavaScript file in your favorite IDE or editor, enable strict mode, and require the HTTPS module.
-
- ```javaScript
- 'use strict';
- let https = require ('https');
- ```
-
-2. Create variables for the API endpoint, your subscription key, and search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```javascript
- let subscriptionKey = 'ENTER YOUR KEY HERE';
- let host = 'api.bing.microsoft.com';
- let path = '/v7.0/search';
-
- let mkt = 'en-US';
- let q = 'italian restaurant near me';
- ```
-
-3. Append your market and query parameters to a string called `query`. Be sure to URL-encode your query with `encodeURI()`.
- ```javascript
- let query = '?mkt=' + mkt + '&q=' + encodeURI(q);
- ```
-
-## Handle and parse the response
-
-1. Define a function named `response_handler()` that takes the HTTP response, `response`, as a parameter.
-
-2. Within this function, define a variable to contain the body of the JSON response.
- ```javascript
- let response_handler = function (response) {
- let body = '';
- };
- ```
-
-3. Append to the body of the response as `data` events are received.
- ```javascript
- response.on('data', function (d) {
- body += d;
- });
- ```
-
-4. When the `end` event is signaled, parse the JSON, and print it.
-
- ```javascript
- response.on ('end', function () {
- let json = JSON.stringify(JSON.parse(body), null, ' ');
- console.log (json);
- });
- ```
-
-## Send a request
-
-1. Create a function called `Search()` to send a search request. In it, perform the following steps:
-
-2. Within this function, create a JSON object containing your request parameters. Use `GET` for the method, and add your host and path information. Add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
-3. Use `https.request()` to send the request with the response handler created previously, and your search parameters.
-
- ```javascript
- let Search = function () {
- let request_params = {
- method : 'GET',
- hostname : host,
- path : path + query,
- headers : {
- 'Ocp-Apim-Subscription-Key' : subscriptionKey,
- }
- };
-
- let req = https.request (request_params, response_handler);
- req.end ();
- }
- ```
-
-4. Call the `Search()` function.
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "SearchResponse",
- "queryContext": {
- "originalQuery": "italian restaurant near me",
- "askUserForLocation": true
- },
- "places": {
- "value": [
- {
- "_type": "LocalBusiness",
- "webSearchUrl": "https://www.bing.com/search?q=sinful+bakery&filters=local...",
- "name": "Liberty's Delightful Sinful Bakery & Cafe",
- "url": "https://www.contoso.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98112",
- "addressCountry": "US",
- "neighborhood": "Madison Park"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- {
- "_type": "Restaurant",
- "webSearchUrl": "https://www.bing.com/search?q=Pickles+and+Preserves...",
- "name": "Munson's Pickles and Preserves Farm",
- "url": "https://www.princi.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness",
- "Restaurant"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98101",
- "addressCountry": "US",
- "neighborhood": "Capitol Hill"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- ]
- }
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a single-page web app](../tutorial-bing-entities-search-single-page-app.md)
-
-* [What is the Bing Entity Search API?](../overview.md)
-* [Bing Entity Search API reference](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference).
cognitive-services Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/php.md
- Title: "Quickstart: Send a search request to the REST API using PHP - Bing Entity Search"
-description: Use this quickstart to send a request to the Bing Entity Search REST API using PHP, and receive a JSON response.
- Previously updated: 05/08/2020
-# Quickstart: Send a search request to the Bing Entity Search REST API using PHP
--
-Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple PHP application sends an entity search query to the API, and displays the response.
-
-Although this application is written in PHP, the API is a RESTful Web service compatible with most programming languages.
-
-## Prerequisites
-
-* [PHP 5.6.x](https://php.net/downloads.php) or later
--
-## Search entities
-
-To run this application, follow these steps:
-
-1. Create a new PHP project in your favorite IDE.
-2. Add the code provided below.
-3. Replace the `$subscriptionKey` value with a valid access key for your subscription.
-4. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-5. Run the program.
-
-```php
-<?php
-
-// NOTE: Be sure to uncomment the following line in your php.ini file.
-// ;extension=php_openssl.dll
-
-// **********************************************
-// *** Update or verify the following values. ***
-// **********************************************
-
-// Replace the subscriptionKey string value with your valid subscription key.
-$subscriptionKey = 'ENTER KEY HERE';
-
-$host = "https://api.bing.microsoft.com";
-$path = "/v7.0/search";
-
-$mkt = "en-US";
-$query = "italian restaurants near me";
-
-function search ($host, $path, $key, $mkt, $query) {
-
- $params = '?mkt=' . $mkt . '&q=' . urlencode ($query);
-
- $headers = "Ocp-Apim-Subscription-Key: $key\r\n";
-
- // NOTE: Use the key 'http' even if you are making an HTTPS request. See:
- // https://php.net/manual/en/function.stream-context-create.php
- $options = array (
- 'http' => array (
- 'header' => $headers,
- 'method' => 'GET'
- )
- );
- $context = stream_context_create ($options);
- $result = file_get_contents ($host . $path . $params, false, $context);
- return $result;
-}
-
-$result = search ($host, $path, $subscriptionKey, $mkt, $query);
-
-echo json_encode (json_decode ($result), JSON_PRETTY_PRINT);
-?>
-```
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "SearchResponse",
- "queryContext": {
- "originalQuery": "italian restaurant near me",
- "askUserForLocation": true
- },
- "places": {
- "value": [
- {
- "_type": "LocalBusiness",
- "webSearchUrl": "https://www.bing.com/search?q=sinful+bakery&filters=local...",
- "name": "Liberty's Delightful Sinful Bakery & Cafe",
- "url": "https://www.contoso.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98112",
- "addressCountry": "US",
- "neighborhood": "Madison Park"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- {
- "_type": "Restaurant",
- "webSearchUrl": "https://www.bing.com/search?q=Pickles+and+Preserves...",
- "name": "Munson's Pickles and Preserves Farm",
- "url": "https://www.princi.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness",
- "Restaurant"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98101",
- "addressCountry": "US",
- "neighborhood": "Capitol Hill"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- ]
- }
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a single-page web app](../tutorial-bing-entities-search-single-page-app.md)
-
-* [What is the Bing Entity Search API?](../overview.md)
-* [Bing Entity Search API reference](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference).
cognitive-services Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/python.md
- Title: "Quickstart: Send a search request to the REST API using Python - Bing Entity Search"
-description: Use this quickstart to send a request to the Bing Entity Search REST API using Python, and receive a JSON response.
- Previously updated: 05/08/2020
-# Quickstart: Send a search request to the Bing Entity Search REST API using Python
--
-Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple Python application sends an entity search query to the API, and displays the response. The source code for this sample is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/python/Search/BingEntitySearchv7.py).
-
-Although this application is written in Python, the API is a RESTful Web service compatible with most programming languages.
-
-## Prerequisites
-
-* [Python 3.x](https://www.python.org/downloads/) (this sample uses the Python 3 `http.client` and `urllib.parse` modules)
--
-## Create and initialize the application
-
-1. Create a new Python file in your favorite IDE or editor, and add the following imports. Create variables for your subscription key, endpoint, market, and search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```python
- import http.client, urllib.parse
- import json
-
- subscriptionKey = 'ENTER YOUR KEY HERE'
- host = 'api.bing.microsoft.com'
- path = '/v7.0/search'
- mkt = 'en-US'
- query = 'italian restaurants near me'
- ```
-
-2. Create a request URL by appending your market variable to the `?mkt=` parameter. URL-encode your query and append it to the `&q=` parameter.
-
- ```python
- params = '?mkt=' + mkt + '&q=' + urllib.parse.quote (query)
- ```
-
-## Send a request and get a response
-
-1. Create a function called `get_suggestions()`.
-
-2. In this function, add your subscription key to a dictionary with `Ocp-Apim-Subscription-Key` as a key.
-
-3. Use `http.client.HTTPSConnection()` to create an HTTPS client object. Send a `GET` request using `request()` with your path and parameters, and header information.
-
-4. Store the response with `getresponse()`, and return `response.read()`.
-
- ```python
- def get_suggestions ():
- headers = {'Ocp-Apim-Subscription-Key': subscriptionKey}
- conn = http.client.HTTPSConnection (host)
- conn.request ("GET", path + params, None, headers)
- response = conn.getresponse ()
- return response.read()
- ```
-
-5. Call `get_suggestions()`, and print the JSON response.
-
- ```python
- result = get_suggestions ()
- print (json.dumps(json.loads(result), indent=4))
- ```
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "SearchResponse",
- "queryContext": {
- "originalQuery": "italian restaurant near me",
- "askUserForLocation": true
- },
- "places": {
- "value": [
- {
- "_type": "LocalBusiness",
- "webSearchUrl": "https://www.bing.com/search?q=sinful+bakery&filters=local...",
- "name": "Liberty's Delightful Sinful Bakery & Cafe",
- "url": "https://www.contoso.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98112",
- "addressCountry": "US",
- "neighborhood": "Madison Park"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- {
- "_type": "Restaurant",
- "webSearchUrl": "https://www.bing.com/search?q=Pickles+and+Preserves...",
- "name": "Munson's Pickles and Preserves Farm",
- "url": "https://www.princi.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness",
- "Restaurant"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98101",
- "addressCountry": "US",
- "neighborhood": "Capitol Hill"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- ]
- }
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a single-page web app](../tutorial-bing-entities-search-single-page-app.md)
-
-* [What is the Bing Entity Search API?](../overview.md)
-* [Bing Entity Search API reference](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference).
cognitive-services Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/quickstarts/ruby.md
- Title: "Quickstart: Send a search request to the REST API using Ruby - Bing Entity Search"
-description: Use this quickstart to send a request to the Bing Entity Search REST API using Ruby, and receive a JSON response.
- Previously updated: 05/08/2020
-# Quickstart: Send a search request to the Bing Entity Search REST API using Ruby
--
-Use this quickstart to make your first call to the Bing Entity Search API and view the JSON response. This simple Ruby application sends an entity search query to the API, and displays the response. The source code for this application is available on [GitHub](https://github.com/Azure-Samples/cognitive-services-REST-api-samples/blob/master/ruby/Search/BingEntitySearchv7.rb).
-
-Although this application is written in Ruby, the API is a RESTful Web service compatible with most programming languages.
-
-## Prerequisites
-
-* [Ruby 2.4](https://www.ruby-lang.org/en/downloads/) or later.
--
-## Create and initialize the application
-
-1. In your favorite IDE or code editor, create a new Ruby file and import the following packages:
-
- ```ruby
- require 'net/https'
- require 'cgi'
- require 'json'
- ```
-
-2. Create variables for your API endpoint, your subscription key, and your search query. You can use the global endpoint in the following code, or use the [custom subdomain](../../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
- ```ruby
-    subscriptionKey = 'ENTER YOUR KEY HERE'
-
-    host = 'https://api.bing.microsoft.com'
- path = '/v7.0/search'
-
- mkt = 'en-US'
- query = 'italian restaurants near me'
- ```
-
-## Format and make an API request
-
-1. Create the parameters string for your request by appending your market variable to the `?mkt=` parameter. Encode your query and append it to the `&q=` parameter. Combine your API host, path, and the parameters for your request, and cast them as a URI object.
-
- ```ruby
- params = '?mkt=' + mkt + '&q=' + CGI.escape(query)
- uri = URI (host + path + params)
- ```
-
-2. Use the variables from the last step to create the request. Add your subscription key to the `Ocp-Apim-Subscription-Key` header.
-
- ```ruby
- request = Net::HTTP::Get.new(uri)
- request['Ocp-Apim-Subscription-Key'] = subscriptionKey
- ```
-
-3. Send the request, and print the response.
-
- ```ruby
- response = Net::HTTP.start(uri.host, uri.port, :use_ssl => uri.scheme == 'https') do |http|
- http.request (request)
- end
-
- puts JSON::pretty_generate (JSON (response.body))
- ```
-
-## Example JSON response
-
-A successful response is returned in JSON, as shown in the following example:
-
-```json
-{
- "_type": "SearchResponse",
- "queryContext": {
- "originalQuery": "italian restaurant near me",
- "askUserForLocation": true
- },
- "places": {
- "value": [
- {
- "_type": "LocalBusiness",
- "webSearchUrl": "https://www.bing.com/search?q=sinful+bakery&filters=local...",
- "name": "Liberty's Delightful Sinful Bakery & Cafe",
- "url": "https://www.contoso.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98112",
- "addressCountry": "US",
- "neighborhood": "Madison Park"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- {
- "_type": "Restaurant",
- "webSearchUrl": "https://www.bing.com/search?q=Pickles+and+Preserves...",
- "name": "Munson's Pickles and Preserves Farm",
- "url": "https://www.princi.com/",
- "entityPresentationInfo": {
- "entityScenario": "ListItem",
- "entityTypeHints": [
- "Place",
- "LocalBusiness",
- "Restaurant"
- ]
- },
- "address": {
- "addressLocality": "Seattle",
- "addressRegion": "WA",
- "postalCode": "98101",
- "addressCountry": "US",
- "neighborhood": "Capitol Hill"
- },
- "telephone": "(800) 555-1212"
- },
-
- . . .
- ]
- }
-}
-```
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Build a single-page web app](../tutorial-bing-entities-search-single-page-app.md)
-
-* [What is the Bing Entity Search API?](../overview.md)
-* [Bing Entity Search API reference](/rest/api/cognitiveservices-bingsearch/bing-entities-api-v7-reference).
cognitive-services Rank Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/rank-results.md
- Title: Using ranking to display answers - Bing Entity Search
-description: Learn how to use ranking to display the answers that the Bing Entity Search API returns.
- Previously updated: 02/01/2019
-# Using ranking to display entity search results
--
-Each entity search response includes a [RankingResponse](/rest/api/cognitiveservices/bing-web-api-v7-reference#rankingresponse) answer that specifies how you must display the search results returned by the Bing Entity Search API. The ranking response groups results into pole, mainline, and sidebar content. The pole result is the most important or prominent result and should be displayed first. If you do not display the remaining results in a traditional mainline and sidebar format, you must give the mainline content greater visibility than the sidebar content.
-
-Within each group, the [Items](/rest/api/cognitiveservices/bing-web-api-v7-reference#rankinggroup-items) array identifies the order in which the content must appear. Each item provides two ways to identify the result within an answer.
-
-
-|Field | Description |
-|||
-|`answerType` and `resultIndex` | `answerType` identifies the answer (either Entity or Place) and `resultIndex` identifies a result within that answer (for example, an entity). The index starts at 0.|
-|`value` | `value` contains an ID that matches the ID of either an answer or a result within the answer. Either the answer or its results contain the ID, but not both. |
-
-Using the `answerType` and `resultIndex` is a two-step process. First, use `answerType` to identify the answer that contains the results to display. Then use `resultIndex` to index into that answer's results to get the result to display. (The `answerType` value is the name of the field in the [SearchResponse](/rest/api/cognitiveservices/bing-web-api-v7-reference#searchresponse) object.) If you're supposed to display all the answer's results together, the ranking response item doesn't include the `resultIndex` field.
-
-Using the ID requires you to match the ranking ID with the ID of an answer or one of its results. If an answer object includes an `id` field, display all the answer's results together. For example, if the `Entities` object includes the `id` field, display all the entities together. If the `Entities` object does not include the `id` field, then each entity contains its own `id` field, and the ranking response mixes the entities with the Places results.
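The `answerType`/`resultIndex` lookup can be sketched in JavaScript. This is illustrative only: `resolveRankingItem` is a hypothetical helper, and the mocked response mirrors the Jimi Hendrix example in the next section rather than a real API payload.

```javascript
// Illustrative sketch only: resolveRankingItem applies the
// answerType/resultIndex convention described above.
function resolveRankingItem(searchResponse, item) {
    // answerType names a top-level field of the SearchResponse ("Entities" -> entities).
    const answer = searchResponse[item.answerType.toLowerCase()];
    if (answer === undefined) return null;
    // With a resultIndex, pick one result from the answer's value array;
    // without one, the whole answer's results are displayed together.
    return item.resultIndex !== undefined ? answer.value[item.resultIndex] : answer;
}

// Mocked response and sidebar ranking items.
const searchResponse = {
    entities: { value: [{ name: "Jimi Hendrix" }, { name: "Jimi Hendrix discography" }] }
};
const sidebarItems = [
    { answerType: "Entities", resultIndex: 0 },
    { answerType: "Entities", resultIndex: 1 }
];

// Resolve the items in ranking order; this is the order to render the sidebar in.
const ordered = sidebarItems.map(item => resolveRankingItem(searchResponse, item));
console.log(ordered.map(r => r.name));
```

Walking the resolved results in order then gives the display order the ranking response mandates.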
-
-## Ranking response example
-
-The following shows an example [RankingResponse](/rest/api/cognitiveservices/bing-web-api-v7-reference#rankingresponse).
-
-```json
-{
- "_type": "SearchResponse",
- "queryContext": {
- "originalQuery": "Jimi Hendrix"
- },
- "entities": { ... },
- "rankingResponse": {
- "sidebar": {
- "items": [
- {
- "answerType": "Entities",
- "resultIndex": 0,
- "value": {
- "id": "https://www.bingapis.com/api/v7/#Entities.0"
- }
- },
- {
- "answerType": "Entities",
- "resultIndex": 1,
- "value": {
- "id": "https://www.bingapis.com/api/v7/#Entities.1"
- }
- }
- ]
- }
- }
-}
-```
-
-Based on this ranking response, the sidebar would display the two entity results related to Jimi Hendrix.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Create a single-page web app](tutorial-bing-entities-search-single-page-app.md)
cognitive-services Tutorial Bing Entities Search Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Entities-Search/tutorial-bing-entities-search-single-page-app.md
- Title: "Tutorial: Bing Entity Search single-page web app"
-description: This tutorial shows how to use the Bing Entity Search API in a single-page Web application.
- Previously updated: 03/05/2020
-# Tutorial: Single-page web app
--
-The Bing Entity Search API lets you search the Web for information about *entities* and *places*. You may request either kind of result, or both, in a given query. The definitions of places and entities are provided in the following table.
-
-| Result | Description |
-|-|-|
-|Entities|Well-known people, places, and things that you find by name|
-|Places|Restaurants, hotels, and other local businesses that you find by name *or* by type (Italian restaurants)|
-
-In this tutorial, we build a single-page Web application that uses the Bing Entity Search API to display search results right in the page. The application includes HTML, CSS, and JavaScript components.
-
-The API lets you prioritize results by location. In a mobile app, you can ask the device for its own location. In a Web app, you can use the browser's `navigator.geolocation.getCurrentPosition()` function. But this call works only in secure contexts, and it may not provide a precise location. Also, the user may want to search for entities near a location other than their own.
-
-Our app therefore calls upon the Bing Maps service to obtain latitude and longitude from a user-entered location. The user can then enter the name of a landmark ("Space Needle") or a full or partial address ("New York City"), and the Bing Maps API provides the coordinates.
-
-> [!NOTE]
-> The JSON and HTTP headings at the bottom of the page reveal the JSON response and HTTP request information when clicked. These details are useful when exploring the service.
-
-The tutorial app illustrates how to:
-
-> [!div class="checklist"]
-> * Perform a Bing Entity Search API call in JavaScript
-> * Perform a Bing Maps `locationQuery` API call in JavaScript
-> * Pass search options to the API calls
-> * Display search results
-> * Handle the Bing client ID and API subscription keys
-> * Deal with any errors that might occur
-
-The tutorial page is entirely self-contained; it does not use any external frameworks, style sheets, or even image files. It uses only widely supported JavaScript language features and works with current versions of all major Web browsers.
-
-In this tutorial, we discuss only selected portions of the source code. The full source code is available [on a separate page](). Copy and paste this code into a text editor and save it as `bing.html`.
-
-> [!NOTE]
-> This tutorial is substantially similar to the [single-page Bing Web Search app tutorial](../bing-web-search/tutorial-bing-web-search-single-page-app.md), but deals only with entity search results.
-
-## Prerequisites
-
-To follow along with the tutorial, you need subscription keys for both the Bing Search API and the Bing Maps API.
-
-* An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/)
-* Once you have your Azure subscription:
- * <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesBingSearch-v7" title="Create a Bing Search resource" target="_blank">Create a Bing Search resource </a> in the Azure portal to get your key and endpoint. After it deploys, click **Go to resource**.
-    * <a href="https://www.microsoft.com/maps/create-a-bing-maps-key.aspx" title="Create a Bing Maps resource" target="_blank">Create a Bing Maps resource </a> to get the key to use with the Bing Maps API.
-
-## App components
-
-Like any single-page Web app, the tutorial application includes three parts:
-
-> [!div class="checklist"]
-> * HTML - Defines the structure and content of the page
-> * CSS - Defines the appearance of the page
-> * JavaScript - Defines the behavior of the page
-
-This tutorial doesn't cover most of the HTML or CSS in detail, as they are straightforward.
-
-The HTML contains the search form in which the user enters a query and chooses search options. The form is connected to the JavaScript that actually performs the search by the `<form>` tag's `onsubmit` attribute:
-
-```html
-<form name="bing" onsubmit="return newBingEntitySearch(this)">
-```
-
-The `onsubmit` handler returns `false`, which keeps the form from being submitted to a server. The JavaScript code actually does the work of collecting the necessary information from the form and performing the search.
-
-The search is done in two phases. First, if the user has entered a location restriction, a Bing Maps query is done to convert it into coordinates. The callback for this query then kicks off the Bing Entity Search query.
-
-The HTML also contains the divisions (HTML `<div>` tags) where the search results appear.
-
-## Managing subscription keys
-
-> [!NOTE]
-> This app requires subscription keys for both the Bing Search API and the Bing Maps API.
-
-To avoid having to include the Bing Search and Bing Maps API subscription keys in the code, we use the browser's persistent storage to store them. If either key has not been stored, we prompt for it and store it for later use. If the key is later rejected by the API, we invalidate the stored key so the user is asked for it upon their next search.
-
-We define `storeValue` and `retrieveValue` functions that use either the `localStorage` object (if the browser supports it) or a cookie. Our `getSubscriptionKey()` function uses these functions to store and retrieve the user's key. You can use the global endpoint below, or the [custom subdomain](../../ai-services/cognitive-services-custom-subdomains.md) endpoint displayed in the Azure portal for your resource.
-
-```javascript
-// cookie names for data we store
-SEARCH_API_KEY_COOKIE = "bing-search-api-key";
-MAPS_API_KEY_COOKIE = "bing-maps-api-key";
-CLIENT_ID_COOKIE = "bing-search-client-id";
-
-// API endpoints
-SEARCH_ENDPOINT = "https://api.cognitive.microsoft.com/bing/v7.0/entities";
-MAPS_ENDPOINT = "https://dev.virtualearth.net/REST/v1/Locations";
-
-// ... omitted definitions of storeValue() and retrieveValue()
-
-// get stored API subscription key, or prompt if it's not found
-function getSubscriptionKey(cookie_name, key_length, api_name) {
- var key = retrieveValue(cookie_name);
- while (key.length !== key_length) {
- key = (prompt("Enter " + api_name + " API subscription key:", "") || "").trim(); // treat Cancel (null) as empty
- }
- // always set the cookie in order to update the expiration date
- storeValue(cookie_name, key);
- return key;
-}
-
-function getMapsSubscriptionKey() {
- return getSubscriptionKey(MAPS_API_KEY_COOKIE, 64, "Bing Maps");
-}
-
-function getSearchSubscriptionKey() {
- return getSubscriptionKey(SEARCH_API_KEY_COOKIE, 32, "Bing Search");
-}
-```
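The tutorial omits the definitions of `storeValue()` and `retrieveValue()`. A minimal sketch might look like the following; the 30-day cookie lifetime and the in-memory fallback (used only where neither `localStorage` nor `document.cookie` exists, such as outside a browser) are assumptions, not part of the original app.

```javascript
// In-memory fallback store (assumption; only used outside a browser)
var memoryStore = {};

function storeValue(name, value) {
    if (typeof localStorage !== "undefined") {
        localStorage.setItem(name, value);
    } else if (typeof document !== "undefined") {
        // cookie fallback; 30-day lifetime is an assumption
        var expiry = new Date(Date.now() + 30 * 24 * 60 * 60 * 1000);
        document.cookie = name + "=" + encodeURIComponent(value) +
            ";expires=" + expiry.toUTCString();
    } else {
        memoryStore[name] = value;
    }
}

function retrieveValue(name) {
    if (typeof localStorage !== "undefined") {
        return localStorage.getItem(name) || "";
    }
    if (typeof document !== "undefined") {
        var match = document.cookie.match(new RegExp("(^|;\\s*)" + name + "=([^;]*)"));
        return match ? decodeURIComponent(match[2]) : "";
    }
    return memoryStore[name] || "";
}
```

Both functions return an empty string for a missing key, which is what the `while (key.length !== key_length)` loop in `getSubscriptionKey()` expects.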
-
-The HTML `<body>` tag includes an `onload` attribute that calls `getSearchSubscriptionKey()` and `getMapsSubscriptionKey()` when the page has finished loading. These calls serve to immediately prompt the user for their keys if they haven't yet entered them.
-
-```html
-<body onload="document.forms.bing.query.focus(); getSearchSubscriptionKey(); getMapsSubscriptionKey();">
-```
-
-## Selecting search options
-
-![Bing Entity Search form](media/entity-search-spa-form.png)
-
-The HTML form includes the following controls:
-
-| Control | Description |
-|-|-|
-|`where`|A drop-down menu for selecting the market (location and language) used for the search.|
-|`query`|The text field in which to enter the search terms.|
-|`safe`|A checkbox indicating whether SafeSearch is turned on (restricts "adult" results).|
-|`what`|A menu for choosing to search for entities, places, or both.|
-|`mapquery`|The text field in which the user may enter a full or partial address, a landmark, etc. to help Bing Entity Search return more relevant results.|
-
-> [!NOTE]
-> Places results are currently available only in the United States. The `where` and `what` menus have code to enforce this restriction. If you choose a non-US market while Places is selected in the `what` menu, `what` changes to Anything. If you choose Places while a non-US market is selected in the `where` menu, `where` changes to the US.
-
-Our JavaScript function `bingSearchOptions()` converts these fields to a partial query string for the Bing Search API.
-
-```javascript
-// build query options from the HTML form
-function bingSearchOptions(form) {
-
- var options = [];
- options.push("mkt=" + form.where.value);
- options.push("SafeSearch=" + (form.safe.checked ? "strict" : "off"));
- if (form.what.selectedIndex) options.push("responseFilter=" + form.what.value);
- return options.join("&");
-}
-```
-
-For example, the SafeSearch feature can be `strict`, `moderate`, or `off`, with `moderate` being the default. But our form uses a checkbox, which has only two states. The JavaScript code converts this setting to either `strict` or `off` (we don't use `moderate`).
-
-The `mapquery` field isn't handled in `bingSearchOptions()` because it is used for the Bing Maps location query, not for Bing Entity Search.
-
-## Obtaining a location
-
-The Bing Maps API offers a [`locationQuery` method](//msdn.microsoft.com/library/ff701711.aspx), which we use to find the latitude and longitude of the location the user enters. These coordinates are then passed to the Bing Entity Search API with the user's request. The search results prioritize entities and places that are close to the specified location.
-
-We can't access the Bing Maps API using an ordinary `XMLHttpRequest` query in a Web app because the service does not support cross-origin queries. Fortunately, it supports JSONP (the "P" is for "padded"). A JSONP response is an ordinary JSON response wrapped in a function call. The request is made by inserting a `<script>` tag into the document. (Loading scripts is not subject to browser security policies.)
-
-The `bingMapsLocate()` function creates and inserts the `<script>` tag for the query. The `jsonp=bingMapsCallback` segment of the query string specifies the name of the function to be called with the response.
-
-```javascript
-function bingMapsLocate(where) {
-
- where = where.trim();
- var url = MAPS_ENDPOINT + "?q=" + encodeURIComponent(where) +
- "&jsonp=bingMapsCallback&maxResults=1&key=" + getMapsSubscriptionKey();
-
- var script = document.getElementById("bingMapsResult");
- if (script) script.parentElement.removeChild(script);
-
- // global variable holds reference to timer that will complete the search if the maps query fails
- timer = setTimeout(function() {
- timer = null;
- var form = document.forms.bing;
- bingEntitySearch(form.query.value, "", bingSearchOptions(form), getSearchSubscriptionKey());
- }, 5000);
-
- script = document.createElement("script");
- script.setAttribute("type", "text/javascript");
- script.setAttribute("id", "bingMapsResult");
- script.setAttribute("src", url);
- script.setAttribute("onerror", "bingMapsCallback(null)");
- document.body.appendChild(script);
-
- return false;
-}
-```
-
-> [!NOTE]
-> If the Bing Maps API does not respond, the `bingMapsCallback()` function is never called. Ordinarily, that would mean that `bingEntitySearch()` isn't called, and the entity search results do not appear. To avoid this scenario, `bingMapsLocate()` also sets a timer to call `bingEntitySearch()` after five seconds. There is logic in the callback function to avoid performing the entity search twice.
-
-When the query completes, the `bingMapsCallback()` function is called, as requested.
-
-```javascript
-function bingMapsCallback(response) {
-
- if (timer) { // we beat the timer; stop it from firing
- clearTimeout(timer);
- timer = null;
- } else { // the timer beat us; don't do anything
- return;
- }
-
- var location = "";
- var name = "";
- var radius = 1000;
-
- if (response) {
- try {
- if (response.statusCode === 401) {
- invalidateMapsKey();
- } else if (response.statusCode === 200) {
- var resource = response.resourceSets[0].resources[0];
- var coords = resource.point.coordinates;
- name = resource.name;
-
- // the radius is the largest of the distances between the location and the corners
- // of its bounding box (in case it's not in the center) with a minimum of 1 km
- try {
- var bbox = resource.bbox;
- radius = Math.max(haversineDistance(bbox[0], bbox[1], coords[0], coords[1]),
- haversineDistance(coords[0], coords[1], bbox[2], bbox[1]),
- haversineDistance(bbox[0], bbox[3], coords[0], coords[1]),
- haversineDistance(coords[0], coords[1], bbox[2], bbox[3]), 1000);
- } catch(e) { }
- location = "lat:" + coords[0] + ";long:" + coords[1] + ";re:" + Math.round(radius);
- }
- }
- catch (e) { } // response is unexpected. this isn't fatal, so just don't provide location
- }
-
- var form = document.forms.bing;
- if (name) form.mapquery.value = name;
- bingEntitySearch(form.query.value, location, bingSearchOptions(form), getSearchSubscriptionKey());
-
-}
-```
-
-Along with latitude and longitude, the Bing Entity Search query requires a *radius* that indicates the precision of the location information. We calculate the radius using the *bounding box* provided in the Bing Maps response. The bounding box is a rectangle that surrounds the entire location. For example, if the user enters `NYC`, the result contains roughly the central coordinates of New York City and a bounding box that encompasses the city.
-
-We first calculate the distances from the primary coordinates to each of the four corners of the bounding box using the function `haversineDistance()` (not shown). We use the largest of these four distances as the radius. The minimum radius is a kilometer. This value is also used as a default if no bounding box is provided in the response.
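The tutorial doesn't show `haversineDistance()`. A minimal sketch follows; the argument order (lat1, lon1, lat2, lon2) and the return value in meters match how `bingMapsCallback()` uses it, while the 6,371 km mean Earth radius is an assumed constant.

```javascript
// Great-circle distance in meters between two points given in degrees,
// computed with the haversine formula.
function haversineDistance(lat1, lon1, lat2, lon2) {
    var R = 6371000; // mean Earth radius in meters (assumption)
    var toRad = function(deg) { return deg * Math.PI / 180; };
    var dLat = toRad(lat2 - lat1);
    var dLon = toRad(lon2 - lon1);
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}
```

For example, one degree of longitude at the equator comes out to roughly 111 km, which is consistent with the 1 km minimum radius the callback enforces.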
-
-Having obtained the coordinates and the radius, we then call `bingEntitySearch()` to perform the actual search.
-
-## Performing the search
-
-Given the query, a location, an options string, and the API key, the `bingEntitySearch()` function makes the Bing Entity Search request.
-
-```javascript
-// perform a search given query, location, options string, and API keys
-function bingEntitySearch(query, latlong, options, key) {
-
- // scroll to top of window
- window.scrollTo(0, 0);
- if (!query.trim().length) return false; // empty query, do nothing
-
- showDiv("noresults", "Working. Please wait.");
- hideDivs("pole", "mainline", "sidebar", "_json", "_http", "error");
-
- var request = new XMLHttpRequest();
- var queryurl = SEARCH_ENDPOINT + "?q=" + encodeURIComponent(query) + "&" + options;
-
- // open the request
- try {
- request.open("GET", queryurl);
- }
- catch (e) {
- renderErrorMessage("Bad request (invalid URL)\n" + queryurl);
- return false;
- }
-
- // add request headers
- request.setRequestHeader("Ocp-Apim-Subscription-Key", key);
- request.setRequestHeader("Accept", "application/json");
-
- var clientid = retrieveValue(CLIENT_ID_COOKIE);
- if (clientid) request.setRequestHeader("X-MSEdge-ClientID", clientid);
-
- if (latlong) request.setRequestHeader("X-Search-Location", latlong);
-
- // event handler for successful response
- request.addEventListener("load", handleBingResponse);
-
- // event handler for errors
- request.addEventListener("error", function() {
- renderErrorMessage("Error completing request");
- });
-
- // event handler for aborted request
- request.addEventListener("abort", function() {
- renderErrorMessage("Request aborted");
- });
-
- // send the request
- request.send();
- return false;
-}
-```
-
-Upon completion of the HTTP request, JavaScript calls our `load` event handler, the `handleBingResponse()` function, to process the response.
-
-```javascript
-// handle Bing search request results
-function handleBingResponse() {
- hideDivs("noresults");
-
- var json = this.responseText.trim();
- var jsobj = {};
-
- // try to parse JSON results
- try {
- if (json.length) jsobj = JSON.parse(json);
- } catch(e) {
- renderErrorMessage("Invalid JSON response");
- }
-
- // show raw JSON and HTTP request
- showDiv("json", preFormat(JSON.stringify(jsobj, null, 2)));
- showDiv("http", preFormat("GET " + this.responseURL + "\n\nStatus: " + this.status + " " +
- this.statusText + "\n" + this.getAllResponseHeaders()));
-
- // if HTTP response is 200 OK, try to render search results
- if (this.status === 200) {
- var clientid = this.getResponseHeader("X-MSEdge-ClientID");
- if (clientid) storeValue(CLIENT_ID_COOKIE, clientid);
- if (json.length) {
- if (jsobj._type === "SearchResponse") {
- renderSearchResults(jsobj);
- } else {
- renderErrorMessage("No search results in JSON response");
- }
- } else {
- renderErrorMessage("Empty response (are you sending too many requests too quickly?)");
- }
- if (divHidden("pole") && divHidden("mainline") && divHidden("sidebar"))
- showDiv("noresults", "No results.<p><small>Looking for restaurants or other local businesses? Those currently aren't supported outside the US.</small>");
- }
-
- // Any other HTTP status is an error
- else {
- // 401 is unauthorized; force re-prompt for API key for next request
- if (this.status === 401) invalidateSearchKey();
-
- // some error responses don't have a top-level errors object, so gin one up
- var errors = jsobj.errors || [jsobj];
- var errmsg = [];
-
- // display HTTP status code
- errmsg.push("HTTP Status " + this.status + " " + this.statusText + "\n");
-
- // add all fields from all error responses
- for (var i = 0; i < errors.length; i++) {
- if (i) errmsg.push("\n");
- for (var k in errors[i]) errmsg.push(k + ": " + errors[i][k]);
- }
-
- // also display Bing Trace ID if it isn't blocked by CORS
- var traceid = this.getResponseHeader("BingAPIs-TraceId");
- if (traceid) errmsg.push("\nTrace ID " + traceid);
-
- // and display the error message
- renderErrorMessage(errmsg.join("\n"));
- }
-}
-```
-
-> [!IMPORTANT]
-> A successful HTTP request does *not* necessarily mean that the search itself succeeded. If an error occurs in the search operation, the Bing Entity Search API returns a non-200 HTTP status code and includes error information in the JSON response. Additionally, if the request was rate-limited, the API returns an empty response.
-
-Much of the code in both of the preceding functions is dedicated to error handling. Errors may occur at the following stages:
-
-|Stage|Potential error(s)|Handled by|
-|-|-|-|
-|Building JavaScript request object|Invalid URL|`try`/`catch` block|
-|Making the request|Network errors, aborted connections|`error` and `abort` event handlers|
-|Performing the search|Invalid request, invalid JSON, rate limits|tests in `load` event handler|
-
-Errors are handled by calling `renderErrorMessage()` with any details known about the error. If the response passes the full gauntlet of error tests, we call `renderSearchResults()` to display the search results in the page.
-
-## Displaying search results
-
-The Bing Entity Search API [requires you to display results in a specified order](../bing-web-search/use-display-requirements.md). Since the API may return two different kinds of responses, it is not enough to iterate through the top-level `Entities` or `Places` collection in the JSON response and display those results. (If you want only one type of result, use the `responseFilter` query parameter.)
-
-Instead, we use the `rankingResponse` collection in the search results to order the results for display. This object refers to items in the `Entities` and/or `Places` collections.
-
-`rankingResponse` may contain up to three collections of search results, designated `pole`, `mainline`, and `sidebar`.
-
-`pole`, if present, is the most relevant search result and should be displayed prominently. `mainline` refers to the bulk of the search results. Mainline results should be displayed immediately after `pole` (or first, if `pole` is not present).
-
-Finally, `sidebar` refers to auxiliary search results. They may be displayed in an actual sidebar or simply after the mainline results. We have chosen the latter for our tutorial app.
-
-Each item in a `rankingResponse` collection refers to the actual search result items in two different, but equivalent, ways.
-
-| Item | Description |
-|-|-|
-|`id`|The `id` looks like a URL, but should not be used for links. The `id` of a ranking result matches the `id` of either a search result item in an answer collection, *or* an entire answer collection (such as `Entities`).
-|`answerType`<br>`resultIndex`|The `answerType` refers to the top-level answer collection that contains the result (for example, `Entities`). The `resultIndex` refers to the result's index within that collection. If `resultIndex` is omitted, the ranking result refers to the entire collection.
-
-> [!NOTE]
-> For more information on this part of the search response, see [Rank Results](rank-results.md).
-
-You may use whichever method of locating the referenced search result item is most convenient for your application. In our tutorial code, we use the `answerType` and `resultIndex` to locate each search result.
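As a sketch of that approach, the following hypothetical helper (not part of the tutorial app) resolves the items a `rankingResponse` section refers to using `answerType` and `resultIndex`; the mock response shows the minimal shape assumed.

```javascript
// Resolve the search result items referenced by one rankingResponse section.
// answerType (e.g. "Entities") names a top-level collection ("entities"),
// and resultIndex picks one item; a missing resultIndex means the whole
// collection. The response shape here is a simplified mock.
function rankedItems(response, sectionName) {
    var section = response.rankingResponse[sectionName];
    if (!section || !section.items) return [];
    var items = [];
    for (var i = 0; i < section.items.length; i++) {
        var ref = section.items[i];
        // "Entities" -> response.entities.value
        var collection = response[ref.answerType.charAt(0).toLowerCase() +
                                  ref.answerType.slice(1)].value;
        if (typeof ref.resultIndex === "undefined") {
            items = items.concat(collection); // entire collection referenced
        } else {
            items.push(collection[ref.resultIndex]);
        }
    }
    return items;
}

// minimal mock response for illustration
var mock = {
    entities: { value: [{ name: "A" }, { name: "B" }] },
    rankingResponse: {
        mainline: { items: [
            { answerType: "Entities", resultIndex: 1 },
            { answerType: "Entities", resultIndex: 0 }
        ] }
    }
};
// rankedItems(mock, "mainline") -> [{ name: "B" }, { name: "A" }]
```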
-
-Finally, it's time to look at our function `renderSearchResults()`. This function iterates over the three `rankingResponse` collections that represent the three sections of the search results. For each section, we call `renderResultsItems()` to render the results for that section.
-
-```javascript
-// render the search results given the parsed JSON response
-function renderSearchResults(results) {
-
- // if spelling was corrected, update search field
- if (results.queryContext.alteredQuery)
- document.forms.bing.query.value = results.queryContext.alteredQuery;
-
- // for each possible section, render the results from that section
- for (var section in {pole: 0, mainline: 0, sidebar: 0}) {
- if (results.rankingResponse[section])
- showDiv(section, renderResultsItems(section, results));
- }
-}
-```
-
-## Rendering result items
-
-In our JavaScript code is an object, `searchItemRenderers`, that contains *renderers:* functions that generate HTML for each kind of search result.
-
-```javascript
-searchItemRenderers = {
- entities: function(item) { ... },
- places: function(item) { ... }
-}
-```
-
-A renderer function may accept the following parameters:
-
-| Parameter | Description |
-|-|-|
-|`item`|The JavaScript object containing the item's properties, such as its URL and its description.|
-|`index`|The index of the result item within its collection.|
-|`count`|The number of items in the search result item's collection.|
-
-The `index` and `count` parameters can be used to number results, to generate special HTML for the beginning or end of a collection, to insert line breaks after a certain number of items, and so on. If a renderer does not need this functionality, it does not need to accept these two parameters. In fact, we do not use them in the renderers for our tutorial app.
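If a renderer did use these parameters, it might look like the following hypothetical example (not part of the tutorial app), which wraps a collection in a numbered list:

```javascript
// Hypothetical renderer: opens an <ol> before the first item and closes it
// after the last, using index and count to detect the collection boundaries.
function numberedRenderer(item, index, count) {
    var html = [];
    if (index === 0) html.push("<ol>");          // first item in the collection
    html.push("<li>" + item.name + "</li>");
    if (index === count - 1) html.push("</ol>"); // last item in the collection
    return html.join("");
}
```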
-
-Let's take a closer look at the `entities` renderer:
-
-```javascript
- entities: function(item) {
- var html = [];
- html.push("<p class='entity'>");
- if (item.image) {
- var img = item.image;
- if (img.hostPageUrl) html.push("<a href='" + img.hostPageUrl + "'>");
- html.push("<img src='" + img.thumbnailUrl + "' title='" + img.name + "' height=" + img.height + " width=" + img.width + ">");
- if (img.hostPageUrl) html.push("</a>");
- if (img.provider) {
- var provider = img.provider[0];
- html.push("<small>Image from ");
- if (provider.url) html.push("<a href='" + provider.url + "'>");
- html.push(provider.name ? provider.name : getHost(provider.url));
- if (provider.url) html.push("</a>");
- html.push("</small>");
- }
- }
- html.push("<p>");
- if (item.entityPresentationInfo) {
- var pi = item.entityPresentationInfo;
- if (pi.entityTypeHints || pi.entityTypeDisplayHint) {
- html.push("<i>");
- if (pi.entityTypeDisplayHint) html.push(pi.entityTypeDisplayHint);
- else if (pi.entityTypeHints) html.push(pi.entityTypeHints.join("/"));
- html.push("</i> - ");
- }
- }
- html.push(item.description);
- if (item.webSearchUrl) html.push("&nbsp;<a href='" + item.webSearchUrl + "'>More</a>")
- if (item.contractualRules) {
- html.push("<p><small>");
- var rules = [];
- for (var i = 0; i < item.contractualRules.length; i++) {
- var rule = item.contractualRules[i];
- var link = [];
- if (rule.license) rule = rule.license;
- if (rule.url) link.push("<a href='" + rule.url + "'>");
- link.push(rule.name || rule.text || rule.targetPropertyName + " source");
- if (rule.url) link.push("</a>");
- rules.push(link.join(""));
- }
- html.push("License: " + rules.join(" - "));
- html.push("</small>");
- }
- return html.join("");
- }, // places renderer omitted
-```
-
-Our entity renderer function:
-
-> [!div class="checklist"]
-> * Builds the HTML `<img>` tag to display the image thumbnail, if any.
-> * Builds the HTML `<a>` tag that links to the page that contains the image.
-> * Builds the description that displays information about the entity.
-> * Incorporates the entity's classification using the display hints, if any.
-> * Includes a link to a Bing search to get more information about the entity.
-> * Displays any licensing or attribution information required by data sources.
-
-## Persisting client ID
-
-Responses from the Bing search APIs may include an `X-MSEdge-ClientID` header that should be sent back to the API with successive requests. If multiple Bing Search APIs are being used, the same client ID should be used with all of them, if possible.
-
-Providing the `X-MSEdge-ClientID` header allows the Bing APIs to associate all of a user's searches, which have two important benefits.
-
-First, it allows the Bing search engine to apply past context to searches to find results that better satisfy the user. If a user has previously searched for terms related to sailing, for example, a later search for "docks" might preferentially return information about places to dock a sailboat.
-
-Second, Bing may randomly select users to experience new features before they are made widely available. Providing the same client ID with each request ensures that users that have been chosen to see a feature always see it. Without the client ID, the user might see a feature appear and disappear, seemingly at random, in their search results.
-
-Browser security policies (CORS) may prevent the `X-MSEdge-ClientID` header from being available to JavaScript. This limitation occurs when the search response has a different origin from the page that requested it. In a production environment, you should address this policy by hosting a server-side script that does the API call on the same domain as the Web page. Since the script has the same origin as the Web page, the `X-MSEdge-ClientID` header is then available to JavaScript.
-
-> [!NOTE]
-> In a production Web application, you should perform the request server-side anyway. Otherwise, your Bing Search API key must be included in the Web page, where it is available to anyone who views source. You are billed for all usage under your API subscription key, even requests made by unauthorized parties, so it is important not to expose your key.
-
-For development purposes, you can make the Bing Entity Search API request through a CORS proxy. The response from such a proxy has an `Access-Control-Expose-Headers` header that allowlists response headers and makes them available to JavaScript.
-
-It's easy to install a CORS proxy to allow our tutorial app to access the client ID header. First, if you don't already have it, [install Node.js](https://nodejs.org/en/download/). Then issue the following command in a command window:
-
-```console
-npm install -g cors-proxy-server
-```
-
-Next, change the Bing Entity Search endpoint in the HTML file to:\
-`http://localhost:9090/https://api.cognitive.microsoft.com/bing/v7.0/entities`
-
-Finally, start the CORS proxy with the following command:
-
-```console
-cors-proxy-server
-```
-
-Leave the command window open while you use the tutorial app; closing the window stops the proxy. In the expandable HTTP Headers section below the search results, you can now see the `X-MSEdge-ClientID` header (among others) and verify that it is the same for each request.
-
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Bing Entity Search API reference](/rest/api/cognitiveservices/bing-entities-api-v7-reference)
-
-> [!div class="nextstepaction"]
-> [Bing Maps API documentation](/bingmaps/)
cognitive-services Bing Image Search Resource Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/bing-image-search-resource-faq.md
- Title: Frequently asked questions (FAQ) - Bing Image Search API
-description: Find answers to commonly asked questions about concepts, code, and scenarios related to the Bing Image Search API.
- Previously updated: 01/05/2022
-# Frequently asked questions (FAQ) about the Bing Image Search API
--
-Find answers to commonly asked questions about concepts, code, and scenarios related to the Bing Image Search API for Azure AI services on Azure.
-
-## Response headers in JavaScript
-
-The following headers may occur in responses from the Bing Image Search API.
-
-| Attribute | Description |
-| - | - |
-| `X-MSEdge-ClientID` |The unique ID that Bing has assigned to the user |
-| `BingAPIs-Market` |The market that was used to fulfill the request |
-| `BingAPIs-TraceId` |The log entry on the Bing API server for this request (for support) |
-
-It is particularly important to persist the client ID and return it with subsequent requests. When you do this, the search will use past context in ranking search results and also provide a consistent user experience.
-
-However, when you call the Bing Image Search API from JavaScript, your browser's built-in security features (CORS) might prevent you from accessing the values of these headers.
-
-To gain access to the headers, you can make the Bing Image Search API request through a CORS proxy. The response from such a proxy has an `Access-Control-Expose-Headers` header that allowlists response headers and makes them available to JavaScript.
-
-It's easy to install a CORS proxy to allow our [tutorial app](tutorial-bing-image-search-single-page-app.md) to access the optional client headers. First, if you don't already have it, [install Node.js](https://nodejs.org/en/download/). Then enter the following command at a command prompt.
-
-```console
-npm install -g cors-proxy-server
-```
-
-Next, change the Bing Image Search API endpoint in the HTML file to:\
-`http://localhost:9090/https://api.cognitive.microsoft.com/bing/v7.0/images/search`
-
-Finally, start the CORS proxy with the following command:
-
-```console
-cors-proxy-server
-```
-
-Leave the command window open while you use the tutorial app; closing the window stops the proxy. In the expandable HTTP Headers section below the search results, you can now see the `X-MSEdge-ClientID` header (among others) and verify that it is the same for each request.
-
-## Response headers in production
-
-The CORS proxy approach described in the previous answer is appropriate for development, testing, and learning.
-
-In a production environment, however, you should host a server-side script on the same domain as the Web page that uses the Bing Web Search API. This script should actually do the API calls upon request from the Web page JavaScript and pass all results, including headers, back to the client. Since the two resources (page and script) share an origin, CORS does not come into play and the special headers are accessible to the JavaScript on the Web page.
-
-This approach also protects your API key from exposure to the public, since only the server-side script needs it. The script can use another method (such as the HTTP referrer) to make sure the request is authorized.
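As a sketch of that division of labor, a server-side relay might build its outgoing Bing request like this. The helper name is hypothetical, and a real route would pass these options to `https.request()` and stream the Bing response body and headers (including `X-MSEdge-ClientID`) back to the page:

```javascript
// Hypothetical server-side helper: build the outgoing Bing Image Search
// request from the client's query. The subscription key stays on the server
// (for example, in an environment variable) and is never sent to the browser.
function buildBingRequest(query, subscriptionKey) {
    return {
        hostname: "api.cognitive.microsoft.com",
        path: "/bing/v7.0/images/search?q=" + encodeURIComponent(query),
        method: "GET",
        headers: { "Ocp-Apim-Subscription-Key": subscriptionKey }
    };
}
```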
-
-## Next steps
-
-Is your question about a missing feature or functionality? Consider requesting or voting for it using the [feedback tool](https://feedback.azure.com/d365community/forum/09041fae-0b25-ec11-b6e6-000d3a4f0858).
-
-## See also
-
- [Stack Overflow: Azure AI services](https://stackoverflow.com/questions/tagged/bing-api)
cognitive-services Bing Image Upgrade Guide V5 To V7 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Bing-Image-Search/bing-image-upgrade-guide-v5-to-v7.md
- Title: Upgrade from Bing Image Search API v5 to v7
-description: This upgrade guide describes changes between version 5 and version 7 of the Bing Image Search API. Use this guide to help you identify the parts of your application that you need to update to use version 7.
- Previously updated: 01/05/2022
-# Bing Image Search API v7 upgrade guide
--
-This upgrade guide identifies the changes between version 5 and version 7 of the Bing Image Search API. Use this guide to help you identify the parts of your application that you need to update to use version 7.
-
-## Breaking changes
-
-### Endpoints
-- The endpoint's version number changed from v5 to v7. For example, https:\//api.cognitive.microsoft.com/bing/**v7.0**/images/search.
-### Error response objects and error codes
-- All failed requests should now include an `ErrorResponse` object in the response body.
-- Added the following fields to the `Error` object:
- - `subCode`&mdash;Partitions the error code into discrete buckets, if possible
- - `moreDetails`&mdash;Additional information about the error described in the `message` field
-- Replaced the v5 error codes with the following possible `code` and `subCode` values.
-|Code|SubCode|Description
-|-|-|-
-|ServerError|UnexpectedError<br/>ResourceError<br/>NotImplemented|Bing returns ServerError whenever any of the sub-code conditions occur. The response includes these errors if the HTTP status code is 500.
|InvalidRequest|ParameterMissing<br/>ParameterInvalidValue<br/>HttpNotAllowed<br/>Blocked|Bing returns InvalidRequest whenever any part of the request is not valid. For example, a required parameter is missing or a parameter value is not valid.<br/><br/>If the error is ParameterMissing or ParameterInvalidValue, the HTTP status code is 400.<br/><br/>If the error is HttpNotAllowed, the HTTP status code is 410.
-|RateLimitExceeded||Bing returns RateLimitExceeded whenever you exceed your queries per second (QPS) or queries per month (QPM) quota.<br/><br/>Bing returns HTTP status code 429 if you exceeded QPS and 403 if you exceeded QPM.
-|InvalidAuthorization|AuthorizationMissing<br/>AuthorizationRedundancy|Bing returns InvalidAuthorization when Bing cannot authenticate the caller. For example, the `Ocp-Apim-Subscription-Key` header is missing or the subscription key is not valid.<br/><br/>Redundancy occurs if you specify more than one authentication method.<br/><br/>If the error is InvalidAuthorization, the HTTP status code is 401.
-|InsufficientAuthorization|AuthorizationDisabled<br/>AuthorizationExpired|Bing returns InsufficientAuthorization when the caller does not have permissions to access the resource. This can occur if the subscription key has been disabled or has expired. <br/><br/>If the error is InsufficientAuthorization, the HTTP status code is 403.
-- The following table maps the previous error codes to the new codes. If you've taken a dependency on v5 error codes, update your code accordingly.
-|Version 5 code|Version 7 code.subCode
-|-|-
-|RequestParameterMissing|InvalidRequest.ParameterMissing
-RequestParameterInvalidValue|InvalidRequest.ParameterInvalidValue
-ResourceAccessDenied|InsufficientAuthorization
-ExceededVolume|RateLimitExceeded
-ExceededQpsLimit|RateLimitExceeded
-Disabled|InsufficientAuthorization.AuthorizationDisabled
-UnexpectedError|ServerError.UnexpectedError
-DataSourceErrors|ServerError.ResourceError
-AuthorizationMissing|InvalidAuthorization.AuthorizationMissing
-HttpNotAllowed|InvalidRequest.HttpNotAllowed
-UserAgentMissing|InvalidRequest.ParameterMissing
-NotImplemented|ServerError.NotImplemented
-InvalidAuthorization|InvalidAuthorization
-InvalidAuthorizationMethod|InvalidAuthorization
-MultipleAuthorizationMethod|InvalidAuthorization.AuthorizationRedundancy
-ExpiredAuthorizationToken|InsufficientAuthorization.AuthorizationExpired
-InsufficientScope|InsufficientAuthorization
-Blocked|InvalidRequest.Blocked
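If your error-handling code branches on v5 codes, the table above can be applied mechanically. The following sketch illustrates one way; the helper name and the pass-through behavior for unknown codes are our assumptions, not part of the API:

```javascript
// Map a v5 error code to its v7 "code.subCode" equivalent, per the table above.
var v5ToV7 = {
    RequestParameterMissing: "InvalidRequest.ParameterMissing",
    RequestParameterInvalidValue: "InvalidRequest.ParameterInvalidValue",
    ResourceAccessDenied: "InsufficientAuthorization",
    ExceededVolume: "RateLimitExceeded",
    ExceededQpsLimit: "RateLimitExceeded",
    Disabled: "InsufficientAuthorization.AuthorizationDisabled",
    UnexpectedError: "ServerError.UnexpectedError",
    DataSourceErrors: "ServerError.ResourceError",
    AuthorizationMissing: "InvalidAuthorization.AuthorizationMissing",
    HttpNotAllowed: "InvalidRequest.HttpNotAllowed",
    UserAgentMissing: "InvalidRequest.ParameterMissing",
    NotImplemented: "ServerError.NotImplemented",
    InvalidAuthorization: "InvalidAuthorization",
    InvalidAuthorizationMethod: "InvalidAuthorization",
    MultipleAuthorizationMethod: "InvalidAuthorization.AuthorizationRedundancy",
    ExpiredAuthorizationToken: "InsufficientAuthorization.AuthorizationExpired",
    InsufficientScope: "InsufficientAuthorization",
    Blocked: "InvalidRequest.Blocked"
};

function migrateErrorCode(v5Code) {
    return v5ToV7[v5Code] || v5Code; // pass unknown codes through unchanged
}
```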
---
-### Query parameters
-- Renamed the `modulesRequested` query parameter to [modules](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference).
-- Renamed the Annotations value of the [modules](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference) query parameter to Tags.
-- Changed the list of supported markets for the ShoppingSources filter value to en-US only. See [imageType](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#imagetype).
-### Image insights changes
-- Renamed the `annotations` field of [ImagesInsights](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#imageinsightsresponse) to `imageTags`.
-- Renamed the `AnnotationModule` object to [ImageTagsModule](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#imagetagsmodule).
-- Renamed the `Annotation` object to [Tag](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#tag), and removed the `confidence` field.
-- Renamed the `insightsSourcesSummary` field of the [Image](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#image) object to `insightsMetadata`.
-- Renamed the `InsightsSourcesSummary` object to [InsightsMetadata](/rest/api/cognitiveservices-bingsearch/bing-images-api-v7-reference#insightsmetadata).