Updates from: 02/14/2024 02:07:50
Service Microsoft Docs article Related commit history on GitHub Change details
ai-services Developer Reference Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/developer-reference-resource.md
The Language Understanding service is accessed from an Azure resource you need t
* Use the **authoring** resource for training to create, edit, train, and publish.
* Use the **prediction** resource for runtime to send the user's text and receive a prediction.
-Learn about the [V3 prediction endpoint](luis-migration-api-v3.md).
- Use [Azure AI services sample code](https://github.com/Azure-Samples/cognitive-services-quickstart-code) to learn and use the most common tasks.

### REST specifications
ai-services How To Application Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/how-to-application-settings-portal.md
You can edit your application name and description. You can copy your App ID. T
1. Sign into the [LUIS portal](https://www.luis.ai).
1. Select an app from the **My apps** list.
-.
+ 1. Select **Manage** from the top navigation bar, then **Settings** from the left navigation bar.
> [!div class="mx-imgBorder"]
ai-services Luis Concept Data Alteration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-alteration.md
When a LUIS app uses the prebuilt [datetimeV2](luis-reference-prebuilt-datetimev
### V3 prediction API to alter timezone
-In V3, the `datetimeReference` determines the timezone offset. Learn more about [V3 predictions](luis-migration-api-v3.md#v3-post-body).
+In V3, the `datetimeReference` determines the timezone offset.
### V2 prediction API to alter timezone

The timezone is corrected by adding the user's timezone to the endpoint using the `timezoneOffset` parameter based on the API version. The value of the parameter should be a positive or negative number, in minutes, to alter the time.
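For example, a V2 request can pass the offset on the query string. The following is a minimal sketch only; the region, app ID, key, and utterance are placeholders, not values from this article:

```nodejs
// Minimal sketch: call the V2 prediction endpoint with a timezone offset of
// UTC-8 (-480 minutes). <APP-ID> and <KEY> are placeholders you must replace.
const https = require("https");

const url =
  "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<APP-ID>" +
  "?q=" + encodeURIComponent("book a flight tomorrow at 9am") +
  "&timezoneOffset=-480" +
  "&subscription-key=<KEY>";

https.get(url, (res) => {
  let body = "";
  res.on("data", (chunk) => (body += chunk));
  // The response JSON contains datetimeV2 resolutions adjusted by the offset.
  res.on("end", () => console.log(body));
});
```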
ai-services Luis Concept Data Extraction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-data-extraction.md
The primary data is the top scoring **intent name**. The endpoint response is:
} ```
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ * * *
Set the querystring parameter, `show-all-intents=true`. The endpoint response is
} ```
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ * * *
If you add prebuilt domains, the intent name indicates the domain, such as `Util
} ```
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ * * *
ai-services Luis Concept Devops Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-concept-devops-automation.md
This workflow should:
* Train and publish the LUIS app version.

> [!NOTE]
- > As explained in [Running tests in an automated build workflow](luis-concept-devops-testing.md#running-tests-in-an-automated-build-workflow) you must publish the LUIS app version under test so that tools such as NLU.DevOps can access it. LUIS only supports two named publication slots, *staging* and *production* for a LUIS app, but you can also [publish a version directly](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisapplicationpublish) and [query by version](./luis-migration-api-v3.md#changes-by-slot-name-and-version-name). Use direct version publishing in your automation workflows to avoid being limited to using the named publishing slots.
+ > As explained in [Running tests in an automated build workflow](luis-concept-devops-testing.md#running-tests-in-an-automated-build-workflow) you must publish the LUIS app version under test so that tools such as NLU.DevOps can access it. LUIS only supports two named publication slots, *staging* and *production* for a LUIS app, but you can also [publish a version directly](https://github.com/microsoft/botframework-cli/blob/master/packages/luis/README.md#bf-luisapplicationpublish) and query by version. Use direct version publishing in your automation workflows to avoid being limited to using the named publishing slots.
* Run all the [unit tests](luis-concept-devops-testing.md).
ai-services Luis Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-container-howto.md
More [examples](luis-container-configuration.md#example-docker-run-commands) of
## Endpoint APIs supported by the container
-Both V2 and [V3](luis-migration-api-v3.md) versions of the API are available with the container.
+Both V2 and V3 versions of the API are available with the container.
## Query the container's prediction endpoint
ai-services Luis Get Started Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-get-started-create-app.md
To create a Prediction resource from the LUIS portal
} ```
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ ## Clean up resources
ai-services Luis Get Started Get Intent From Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-get-started-get-intent-from-browser.md
In order to query a public app, you need:
## Next steps
-* [V3 prediction endpoint](luis-migration-api-v3.md)
* [Custom subdomains](../cognitive-services-custom-subdomains.md)
* [Use the client libraries or REST API](client-libraries-rest-api.md)
ai-services Luis Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-limits.md
If your app exceeds the LUIS model limits, consider using a [LUIS dispatch](how-
| [List entities](concepts/entities.md) | Parent: 50, child: 20,000 items. Canonical name is \*default character max. Synonym values have no length restriction. |
| [machine-learning entities + roles](concepts/entities.md):<br> composite,<br>simple,<br>entity role | A limit of either 100 parent entities or 330 entities, whichever limit the user hits first. A role counts as an entity for the purpose of this limit. For example, a composite with a simple entity that has 2 roles counts as: 1 composite + 1 simple + 2 roles = 4 of the 330 entities.<br>Subentities can be nested up to 5 levels, with a maximum of 20 children per level. |
| Model as a feature | The maximum number of models that can be used as a feature for a specific model is 10 models. The maximum number of phrase lists used as a feature for a specific model is 10 phrase lists. |
-| [Preview - Dynamic list entities](./luis-migration-api-v3.md) | 2 lists of \~1k per query prediction endpoint request |
+| Preview - Dynamic list entities | 2 lists of \~1k per query prediction endpoint request |
| [Patterns](concepts/patterns-features.md) | 500 patterns per application.<br>Maximum length of pattern is 400 characters.<br>3 Pattern.any entities per pattern<br>Maximum of 2 nested optional texts in pattern |
| [Pattern.any](concepts/entities.md) | 100 per application, 3 pattern.any entities per pattern |
| [Phrase list][phrase-list] | 500 phrase lists. 10 global phrase lists due to the model as a feature limit. Non-interchangeable phrase list has max of 5,000 phrases. Interchangeable phrase list has max of 50,000 phrases. Maximum number of total phrases per application of 500,000 phrases. |
ai-services Luis Migration Api V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-api-v3.md
- Title: Prediction endpoint changes in the V3 API
-description: The query prediction endpoint V3 APIs have changed. Use this guide to understand how to migrate to version 3 endpoint APIs.
--
-ms.
--- Previously updated : 01/19/2024----
-# Prediction endpoint changes for V3
---
-The query prediction endpoint V3 APIs have changed. Use this guide to understand how to migrate to version 3 endpoint APIs. There is currently no date by which migration needs to be completed.
-
-**Generally available status** - this V3 API includes significant JSON request and response changes from the V2 API.
-
-The V3 API provides the following new features:
-
-* [External entities](schema-change-prediction-runtime.md#external-entities-passed-in-at-prediction-time)
-* [Dynamic lists](schema-change-prediction-runtime.md#dynamic-lists-passed-in-at-prediction-time)
-* [Prebuilt entity JSON changes](#prebuilt-entity-changes)
-
-The prediction endpoint [request](#request-changes) and [response](#response-changes) have significant changes to support the new features listed above, including the following:
-
-* [Response object changes](#top-level-json-changes)
-* [Entity role name references instead of entity name](#entity-role-name-instead-of-entity-name)
-* [Properties to mark entities in utterances](#marking-placement-of-entities-in-utterances)
-
-[Reference documentation](https://aka.ms/luis-api-v3) is available for V3.
-
-## V3 changes from preview to GA
-
-V3 made the following changes as part of the move to GA:
-
-* The following prebuilt entities have different JSON responses:
- * [OrdinalV1](luis-reference-prebuilt-ordinal.md)
- * [GeographyV2](luis-reference-prebuilt-geographyv2.md)
- * [DatetimeV2](luis-reference-prebuilt-datetimev2.md)
- * Measurable unit key name from `units` to `unit`
-
-* Request body JSON change:
- * from `preferExternalEntities` to `preferExternalEntities`
- * optional `score` parameter for external entities
-
-* Response body JSON changes:
- * `normalizedQuery` removed
-
-## Suggested adoption strategy
-
-If you use Bot Framework, Bing Spell Check V7, or want to migrate your LUIS app authoring only, continue to use the V2 endpoint.
-
-If you know none of your client applications or integrations (Bot Framework and Bing Spell Check V7) are impacted and you are comfortable migrating your LUIS app authoring and your prediction endpoint at the same time, begin using the V3 prediction endpoint. The V2 prediction endpoint will still be available and is a good fall-back strategy.
-
-For information on using the Bing Spell Check API, see [How to correct misspelled words](luis-tutorial-bing-spellcheck.md).
--
-## Not supported
-
-### Bot Framework and Azure AI Bot Service client applications
-
-Continue to use the V2 API prediction endpoint until V4.7 of the Bot Framework is released.
--
-## Endpoint URL changes
-
-### Changes by slot name and version name
-
-The [format of the V3 endpoint HTTP](developer-reference-resource.md#rest-endpoints) call has changed.
-
-If you want to query by version, you first need to [publish via API](https://westus.dev.cognitive.microsoft.com/docs/services/5890b47c39e2bb17b84a55ff/operations/5890b47c39e2bb052c5b9c3b) with `"directVersionPublish":true`. Query the endpoint referencing the version ID instead of the slot name.
-
-|Valid values for `SLOT-NAME`|
-|--|
-|`production`|
-|`staging`|
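As a sketch of the two addressing styles (the endpoint host, app ID, utterance, and version ID below are placeholders, not values from this article):

```nodejs
// Query a published slot (production or staging).
const bySlot =
  "https://<ENDPOINT>.cognitiveservices.azure.com/luis/prediction/v3.0/apps/<APP-ID>" +
  "/slots/production/predict?query=<UTTERANCE>";

// Query a specific version published with "directVersionPublish": true.
const byVersion =
  "https://<ENDPOINT>.cognitiveservices.azure.com/luis/prediction/v3.0/apps/<APP-ID>" +
  "/versions/<VERSION-ID>/predict?query=<UTTERANCE>";
```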
-
-## Request changes
-
-### Query string changes
--
-### V3 POST body
-
-```JSON
-{
- "query":"your utterance here",
- "options":{
- "datetimeReference": "2019-05-05T12:00:00",
- "preferExternalEntities": true
- },
- "externalEntities":[],
- "dynamicLists":[]
-}
-```
-
-|Property|Type|Version|Default|Purpose|
-|--|--|--|--|--|
-|`dynamicLists`|array|V3 only|Not required.|[Dynamic lists](schema-change-prediction-runtime.md#dynamic-lists-passed-in-at-prediction-time) allow you to extend an existing trained and published list entity, already in the LUIS app.|
-|`externalEntities`|array|V3 only|Not required.|[External entities](schema-change-prediction-runtime.md#external-entities-passed-in-at-prediction-time) give your LUIS app the ability to identify and label entities during runtime, which can be used as features to existing entities. |
-|`options.datetimeReference`|string|V3 only|No default|Used to determine [datetimeV2 offset](luis-concept-data-alteration.md#change-time-zone-of-prebuilt-datetimev2-entity). The format for the datetimeReference is [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601).|
-|`options.preferExternalEntities`|boolean|V3 only|false|Specifies if user's [external entity (with same name as existing entity)](schema-change-prediction-runtime.md#override-existing-model-predictions) is used or the existing entity in the model is used for prediction. |
-|`query`|string|V3 only|Required.|**In V2**, the utterance to be predicted is in the `q` parameter. <br><br>**In V3**, the utterance is passed in the `query` parameter.|
-
-## Response changes
-
-The query response JSON changed to allow greater programmatic access to the data used most frequently.
-
-### Top level JSON changes
---
-When `verbose` is set to true, which returns all intents and their scores in the `intents` property, the top JSON properties for V2 are:
-
-```JSON
-{
- "query":"this is your utterance you want predicted",
- "topScoringIntent":{},
- "intents":[],
- "entities":[],
- "compositeEntities":[]
-}
-```
-
-The top JSON properties for V3 are:
-
-```JSON
-{
- "query": "this is your utterance you want predicted",
- "prediction":{
- "topIntent": "intent-name-1",
- "intents": {},
- "entities":{}
- }
-}
-```
-
-The `intents` object is an unordered list. Do not assume the first child in the `intents` object corresponds to the `topIntent`. Instead, use the `topIntent` value to find the score:
-
-```nodejs
-const topIntentName = response.prediction.topIntent;
-const score = response.prediction.intents[topIntentName].score;
-```
-
-The response JSON schema changes allow for:
-
-* Clear distinction between original utterance, `query`, and returned prediction, `prediction`.
-* Easier programmatic access to predicted data. Instead of enumerating through an array in V2, you can access values by **name** for both intents and entities. For predicted entity roles, the role name is returned because it is unique across the entire app.
-* Data types, if determined, are respected. Numerics are no longer returned as strings.
-* Distinction between first priority prediction information and additional metadata, returned in the `$instance` object.
-
-### Entity response changes
-
-#### Marking placement of entities in utterances
-
-**In V2**, an entity was marked in an utterance with the `startIndex` and `endIndex`.
-
-**In V3**, the entity is marked with `startIndex` and `entityLength`.
-
-#### Access `$instance` for entity metadata
-
-If you need entity metadata, the query string needs to use the `verbose=true` flag and the response contains the metadata in the `$instance` object. Examples are shown in the JSON responses in the following sections.
-
-#### Each predicted entity is represented as an array
-
-The `prediction.entities.<entity-name>` object contains an array because each entity can be predicted more than once in the utterance.
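For example, a hypothetical response fragment (the entity name and values are invented for illustration) where the same entity is predicted twice in one utterance:

```JSON
"entities": {
    "Location": [
        "Seattle",
        "Portland"
    ]
}
```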
-
-<a name="prebuilt-entities-with-new-json"></a>
-
-#### Prebuilt entity changes
-
-The V3 response object includes changes to prebuilt entities. Review [specific prebuilt entities](luis-reference-prebuilt-entities.md) to learn more.
-
-#### List entity prediction changes
-
-The JSON for a list entity prediction has changed to be an array of arrays:
-
-```JSON
-"entities":{
- "my_list_entity":[
- ["canonical-form-1","canonical-form-2"],
- ["canonical-form-2"]
- ]
-}
-```
-Each interior array corresponds to text inside the utterance. The interior object is an array because the same text can appear in more than one sublist of a list entity.
-
-When mapping between the `entities` object to the `$instance` object, the order of objects is preserved for the list entity predictions.
-
-```nodejs
-const item = 0; // order preserved, use same enumeration for both
-const predictedCanonicalForm = entities.my_list_entity[item];
-const associatedMetadata = entities.$instance.my_list_entity[item];
-```
-
-#### Entity role name instead of entity name
-
-In V2, the `entities` array returned all the predicted entities with the entity name being the unique identifier. In V3, if the entity uses roles and the prediction is for an entity role, the primary identifier is the role name. This is possible because entity role names must be unique across the entire app including other model (intent, entity) names.
-
-In the following example, consider an utterance that includes the text `Yellow Bird Lane`. This text is predicted as a custom `Location` entity's role of `Destination`.
-
-|Utterance text|Entity name|Role name|
-|--|--|--|
-|`Yellow Bird Lane`|`Location`|`Destination`|
-
-In V2, the entity is identified by the _entity name_ with the role as a property of the object:
-
-```JSON
-"entities":[
- {
- "entity": "Yellow Bird Lane",
- "type": "Location",
- "startIndex": 13,
- "endIndex": 20,
- "score": 0.786378264,
- "role": "Destination"
- }
-]
-```
-
-In V3, the entity is referenced by the _entity role_, if the prediction is for the role:
-
-```JSON
-"entities":{
- "Destination":[
- "Yellow Bird Lane"
- ]
-}
-```
-
-In V3, the same result with the `verbose` flag to return entity metadata:
-
-```JSON
-"entities":{
- "Destination":[
- "Yellow Bird Lane"
- ],
- "$instance":{
- "Destination": [
- {
- "role": "Destination",
- "type": "Location",
- "text": "Yellow Bird Lane",
- "startIndex": 25,
- "length":16,
- "score": 0.9837309,
- "modelTypeId": 1,
- "modelType": "Entity Extractor"
- }
- ]
- }
-}
-```
-
-<a name="external-entities-passed-in-at-prediction-time"></a>
-<a name="override-existing-model-predictions"></a>
-
-## Extend the app at prediction time
-
-Learn [concepts](schema-change-prediction-runtime.md) about how to extend the app at prediction runtime.
--
-## Next steps
-
-Use the V3 API documentation to update existing REST calls to LUIS [endpoint](https://westcentralus.dev.cognitive.microsoft.com/docs/services/luis-endpoint-api-v3-0/operations/5cb0a9459a1fe8fa44c28dd8) APIs.
ai-services Luis Reference Prebuilt Age https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-age.md
The following example shows the resolution of the **builtin.age** entity.
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [currency](luis-reference-prebuilt-currency.md), [datetimeV2](luis-reference-prebuilt-datetimev2.md), and [dimension](luis-reference-prebuilt-dimension.md) entities.
ai-services Luis Reference Prebuilt Currency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-currency.md
The following example shows the resolution of the **builtin.currency** entity.
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [datetimeV2](luis-reference-prebuilt-datetimev2.md), [dimension](luis-reference-prebuilt-dimension.md), and [email](luis-reference-prebuilt-email.md) entities.
ai-services Luis Reference Prebuilt Datetimev2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-datetimev2.md
To replace `datetime` with `datetimeV2` in your LUIS app, complete the following
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [dimension](luis-reference-prebuilt-dimension.md), [email](luis-reference-prebuilt-email.md), and [number](luis-reference-prebuilt-number.md) entities.
ai-services Luis Reference Prebuilt Dimension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-dimension.md
The following example shows the resolution of the **builtin.dimension** entity.
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities.
ai-services Luis Reference Prebuilt Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-email.md
The following example shows the resolution of the **builtin.email** entity.
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [number](luis-reference-prebuilt-number.md), [ordinal](luis-reference-prebuilt-ordinal.md), and [percentage](luis-reference-prebuilt-percentage.md).
ai-services Luis Reference Prebuilt Geographyv2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-geographyV2.md
The following example shows the resolution of the **builtin.geographyV2** entity
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities.
ai-services Luis Reference Prebuilt Keyphrase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-keyphrase.md
The following example shows the resolution of the **builtin.keyPhrase** entity.
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [age](luis-reference-prebuilt-age.md) entities.
ai-services Luis Reference Prebuilt Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-number.md
The following example shows a JSON response from LUIS that includes the resolut
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [currency](luis-reference-prebuilt-currency.md), [ordinal](luis-reference-prebuilt-ordinal.md), and [percentage](luis-reference-prebuilt-percentage.md).
ai-services Luis Reference Prebuilt Ordinal V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-ordinal-v2.md
The following example shows the resolution of the **builtin.ordinalV2** entity.
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [percentage](luis-reference-prebuilt-percentage.md), [phone number](luis-reference-prebuilt-phonenumber.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Luis Reference Prebuilt Ordinal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-ordinal.md
The following example shows the resolution of the **builtin.ordinal** entity.
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [OrdinalV2](luis-reference-prebuilt-ordinal-v2.md), [phone number](luis-reference-prebuilt-phonenumber.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Luis Reference Prebuilt Percentage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-percentage.md
The following example shows the resolution of the **builtin.percentage** entity.
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [ordinal](luis-reference-prebuilt-ordinal.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Luis Reference Prebuilt Person https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-person.md
The following example shows the resolution of the **builtin.personName** entity.
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [email](luis-reference-prebuilt-email.md), [number](luis-reference-prebuilt-number.md), and [ordinal](luis-reference-prebuilt-ordinal.md) entities.
ai-services Luis Reference Prebuilt Phonenumber https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-phonenumber.md
The following example shows the resolution of the **builtin.phonenumber** entity
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Luis Reference Prebuilt Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-sentiment.md
For all other cultures, the response is:
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+
ai-services Luis Reference Prebuilt Temperature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-temperature.md
The following example shows the resolution of the **builtin.temperature** entity
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [percentage](luis-reference-prebuilt-percentage.md), [number](luis-reference-prebuilt-number.md), and [age](luis-reference-prebuilt-age.md) entities.
ai-services Luis Reference Prebuilt Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-reference-prebuilt-url.md
The following example shows the resolution of the https://www.luis.ai is a great
## Next steps
-Learn more about the [V3 prediction endpoint](luis-migration-api-v3.md).
+ Learn about the [ordinal](luis-reference-prebuilt-ordinal.md), [number](luis-reference-prebuilt-number.md), and [temperature](luis-reference-prebuilt-temperature.md) entities.
ai-services Schema Change Prediction Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/schema-change-prediction-runtime.md
External entities are the mechanism for extending any entity type while still be
This is useful for an entity that has data available only at query prediction runtime. Examples of this type of data are data that is constantly changing or data that is specific to each user. You can extend a LUIS contact entity with external information from a user's contact list.
-External entities are part of the V3 authoring API. Learn more about [migrating](luis-migration-api-v3.md) to this version.
+External entities are part of the V3 authoring API.
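As an illustrative sketch only (the entity name, indexes, and resolution below are invented for illustration, not taken from this article), an external entity rides along in the V3 prediction request body:

```JSON
{
    "query": "call engineering on-call",
    "externalEntities": [
        {
            "entityName": "Contacts",
            "startIndex": 5,
            "entityLength": 11,
            "resolution": { "team": "engineering" }
        }
    ]
}
```
The `startIndex` and `entityLength` values mark where the matched text, `engineering`, appears in the query.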
### Entity already exists in app
The prediction response includes that list entity, with all the other predicted
## Next steps
-* [Prediction score](luis-concept-prediction-score.md)
-* [Authoring API V3 changes](luis-migration-api-v3.md)
+* [Prediction score](luis-concept-prediction-score.md)
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/whats-new.md
Learn what's new in the service. These items include release notes, videos, blog
* Video - [Advanced Natural Language Understanding (NLU) models using LUIS and Azure AI services | BRK2188](https://www.youtube.com/watch?v=JdJEV2jV0_Y)
* Improved developer productivity
- * General availability of our [prediction endpoint V3](luis-migration-api-v3.md).
+ * General availability of our prediction endpoint V3.
* Ability to import and export apps with `.lu` ([LUDown](https://github.com/microsoft/botbuilder-tools/tree/master/packages/Ludown)) format. This paves the way for an effective CI/CD process.
* Language expansion
    * [Arabic and Hindi](luis-language-support.md) in public preview.
Learn what's new in the service. These items include release notes, videos, blog
The following features were released at the Build 2019 Conference:
-* [Preview of V3 API migration guide](luis-migration-api-v3.md)
+* Preview of V3 API migration guide
* [Improved analytics dashboard](luis-how-to-use-dashboard.md)
* [Improved prebuilt domains](luis-reference-prebuilt-domains.md)
* [Dynamic list entities](schema-change-prediction-runtime.md#dynamic-lists-passed-in-at-prediction-time)
ai-services Cognitive Services Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/cognitive-services-virtual-networks.md
Previously updated : 10/27/2023 Last updated : 02/13/2024
curl -i -X PATCH https://management.azure.com$rid?api-version=2023-10-01-preview
To revoke the exception, set `networkAcls.bypass` to `None`.
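The request body for that PATCH is shaped roughly as follows (a sketch; verify the property names against the management API version you're using):

```JSON
{
    "properties": {
        "networkAcls": {
            "bypass": "None"
        }
    }
}
```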
+> [!NOTE]
+> The trusted service feature is only available using the command line described above, and cannot be done using the Azure portal.
+ ### Pricing

For pricing details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link).
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/content-safety/whats-new.md
Learn what's new in the service. These items might be release notes, videos, blo
The Azure AI Content Safety service is now generally available through the following client library SDKs:
-- **C#**: [Package](https://www.nuget.org/packages/Azure.AI.ContentSafety) | [API reference](/dotnet/api/overview/azure/ai.contentsafety-readme?view=azure-dotnet) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/dotnet/1.0.0) | Quickstarts: [Text](./quickstart-text.md), [Image](./quickstart-image.md)
-- **Python**: [Package](https://pypi.org/project/azure-ai-contentsafety/) | [API reference](/python/api/overview/azure/ai-contentsafety-readme?view=azure-python) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/python/1.0.0) | Quickstarts: [Text](./quickstart-text.md), [Image](./quickstart-image.md)
-- **Java**: [Package](https://oss.sonatype.org/#nexus-search;quick~contentsafety) | [API reference](/jav)
+- **C#**: [Package](https://www.nuget.org/packages/Azure.AI.ContentSafety) | [API reference](/dotnet/api/overview/azure/ai.contentsafety-readme) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/dotnet/1.0.0) | Quickstarts: [Text](./quickstart-text.md), [Image](./quickstart-image.md)
+- **Python**: [Package](https://pypi.org/project/azure-ai-contentsafety/) | [API reference](/python/api/overview/azure/ai-contentsafety-readme) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/python/1.0.0) | Quickstarts: [Text](./quickstart-text.md), [Image](./quickstart-image.md)
+- **Java**: [Package](https://oss.sonatype.org/#nexus-search;quick~contentsafety) | [API reference](/jav)
- **JavaScript**: [Package](https://www.npmjs.com/package/@azure-rest/ai-content-safety?activeTab=readme) | [API reference](https://www.npmjs.com/package/@azure-rest/ai-content-safety/v/1.0.0) | [Samples](https://github.com/Azure-Samples/AzureAIContentSafety/tree/main/js/1.0.0) | Quickstarts: [Text](./quickstart-text.md), [Image](./quickstart-image.md)

> [!IMPORTANT]
ai-services Api Version Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/api-version-deprecation.md
Title: Azure OpenAI Service API version retirement
-description: Learn more about API version retirement in Azure OpenAI Services
+description: Learn more about API version retirement in Azure OpenAI Services.
Previously updated : 01/08/2024 Last updated : 02/13/2024 recommendations: false
This article is to help you understand the support lifecycle for the Azure OpenA
## Latest preview API release
-Azure OpenAI API version 2023-12-01-preview is currently the latest preview release.
+Azure OpenAI API version [2024-02-15-preview](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
+is currently the latest preview release.
This version contains support for all the latest Azure OpenAI features including:
+- [Assistants API](./assistants-reference.md). [**Added in 2024-02-15-preview**]
+- [DALL-E 3](./dall-e-quickstart.md). [**Added in 2023-12-01-preview**]
- [Text to speech](./text-to-speech-quickstart.md). [**Added in 2024-02-15-preview**]
- [Fine-tuning](./how-to/fine-tuning.md) `gpt-35-turbo`, `babbage-002`, and `davinci-002` models. [**Added in 2023-10-01-preview**]
- [Whisper](./whisper-quickstart.md). [**Added in 2023-09-01-preview**]
On April 2, 2024 the following API preview releases will be retired and will sto
- 2023-07-01-preview
- 2023-08-01-preview
-To avoid service disruptions, you must update to use the latest preview version prior to the retirement date.
+To avoid service disruptions, you must update to use the latest preview version before the retirement date.
## Updating API versions
-We recommend first testing the upgrade to new API versions to confirm there is no impact to your application from the API update prior to making the change globally across your environment.
+We recommend first testing the upgrade to new API versions to confirm there's no impact to your application from the API update before making the change globally across your environment.
-If you are using the OpenAI Python client library or the REST API, you will need to update your code directly to the latest preview API version.
+If you're using the OpenAI Python client library or the REST API, you'll need to update your code directly to the latest preview API version.
-If you are using one of the Azure OpenAI SDKs for C#, Go, Java, or JavaScript you will instead need to update to the latest version of the SDK. Each SDK release is hardcoded to work with specific versions of the Azure OpenAI API.
+If you're using one of the Azure OpenAI SDKs for C#, Go, Java, or JavaScript you'll instead need to update to the latest version of the SDK. Each SDK release is hardcoded to work with specific versions of the Azure OpenAI API.
## Next steps - [Learn more about Azure OpenAI](overview.md)-- [Learn about working with Azure OpenAI models](./how-to/working-with-models.md)
+- [Learn about working with Azure OpenAI models](./how-to/working-with-models.md)
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
When using the API, pass the `filter` parameter in each API request. For example
## Schedule automatic index refreshes

> [!NOTE]
-> Automatic index refreshing is supported for Azure Blob storage only.
+> * Automatic index refreshing is supported for Azure Blob storage only.
+> * If a document is deleted from the input blob container, the corresponding chunk index records won't be removed by the scheduled refresh.
To keep your Azure AI Search index up-to-date with your latest data, you can schedule a refresh for it that runs automatically rather than manually updating it every time your data is updated. Automatic index refresh is only available when you choose **blob storage** as the data source. To enable an automatic index refresh:
To keep your Azure AI Search index up-to-date with your latest data, you can sch
:::image type="content" source="../media/use-your-data/indexer-schedule.png" alt-text="A screenshot of the indexer schedule in Azure OpenAI Studio." lightbox="../media/use-your-data/indexer-schedule.png":::
-After the data ingestion is set to a cadence other than once, Azure AI Search indexers will be created with a schedule equivalent to `0.5 * the cadence specified`. This means that at the specified cadence, the indexers will pull the documents that were added, modified, or deleted from the storage container, reprocess and index them. This ensures that the updated data gets preprocessed and indexed in the final index at the desired cadence automatically. To update your data, you only need to upload the additional documents from the Azure portal. From the portal, select **Storage Account** > **Containers**. Select the name of the original container, then **Upload**. The index will pick up the files automatically after the scheduled refresh period. The intermediate assets created in the Azure AI Search resource will not be cleaned up after ingestion to allow for future runs. These assets are:
+After the data ingestion is set to a cadence other than once, Azure AI Search indexers will be created with a schedule equivalent to `0.5 * the cadence specified`. This means that at the specified cadence, the indexers will pull the documents that were added or modified from the storage container, reprocess and index them. This ensures that the updated data gets preprocessed and indexed in the final index at the desired cadence automatically. To update your data, you only need to upload the additional documents from the Azure portal. From the portal, select **Storage Account** > **Containers**. Select the name of the original container, then **Upload**. The index will pick up the files automatically after the scheduled refresh period. The intermediate assets created in the Azure AI Search resource will not be cleaned up after ingestion to allow for future runs. These assets are:
- `{Index Name}-index`
- `{Index Name}-indexer`
- `{Index Name}-indexer-chunk`
ai-services Provisioned Throughput Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md
Title: Azure OpenAI Service Provisioned Throughput Units (PTU) onboarding
description: Learn about provisioned throughput units onboarding and Azure OpenAI. Previously updated : 01/15/2024 Last updated : 02/13/2024 --++ recommendations: false
This article walks you through the process of onboarding to [Provisioned Through
## Sizing and estimation: provisioned managed only
-Determining the right amount of provisioned throughput, or PTUs, you require for your workload is an essential step to optimizing performance and cost. This section describes how to use the Azure OpenAI capacity planning tool. The tool provides you with an estimate of the required PTU.
+Determining the right amount of provisioned throughput, or PTUs, you require for your workload is an essential step to optimizing performance and cost. This section describes how to use the Azure OpenAI capacity planning tool. The tool provides you with an estimate of the required PTU to meet the needs of your workload.
### Estimate provisioned throughput and cost
-To get a quick estimate for your workload, open the capacity planner in the [Azure OpenAI Studio](https://oai.azure.com). The capacity planner is under **Management** > **Quotas** > **Provisioned**. The **Provisioned** option and the capacity planner are only available in certain regions within the Quota pane, if you don't see this option setting the quota region to *Sweden Central* will make this option available. Enter the following parameters based on your workload.
+To get a quick estimate for your workload, open the capacity planner in the [Azure OpenAI Studio](https://oai.azure.com). The capacity planner is under **Management** > **Quotas** > **Provisioned**.
+
+The **Provisioned** option and the capacity planner are only available in certain regions within the Quota pane. If you don't see this option, setting the quota region to *Sweden Central* will make it available. Enter the following parameters based on your workload.
| Input | Description |
|---|---|
To get a quick estimate for your workload, open the capacity planner in the [Azu
| Generation tokens | Number of tokens generated by the model on each call |
| Peak calls per minute | Peak concurrent load to the endpoint measured in calls per minute|
-After you fill in the required details, select Calculate to view the suggested PTU for your scenario.
+After you fill in the required details, select **Calculate** to view the suggested PTU for your scenario.
:::image type="content" source="../media/how-to/provisioned-onboarding/capacity-calculator.png" alt-text="Screenshot of the Azure OpenAI Studio landing page." lightbox="../media/how-to/provisioned-onboarding/capacity-calculator.png":::
After you fill in the required details, select Calculate to view the suggested P
### Understanding the provisioned throughput purchase model
-Unlike Azure services where you're charged based on usage, the Azure OpenAI Provisioned Throughput feature is purchased as a renewable, monthly commitment. This commitment is charged to your subscription upon creation and at each monthly renewal. When you onboard to Provisioned Throughput, you need to create a commitment on each Azure OpenAI resource where you intend to create a provisioned deployment. The PTUs you purchase in this way are available for use when creating deployments on those resources.
+Unlike Azure services where you're charged based on usage, the Azure OpenAI Provisioned Throughput feature is purchased as a renewable, monthly commitment. This commitment is charged to your subscription upon creation and at each monthly renewal. When you onboard to Provisioned Throughput, you need to create a commitment on each Azure OpenAI resource where you intend to create a provisioned deployment. The PTUs you purchase in this way are available for use when creating deployments on those resources.
The total number of PTUs you can purchase via commitments is limited to the amount of Provisioned Throughput quota that is assigned to your subscription. The following table compares other characteristics of Provisioned Throughput quota (PTUs) and Provisioned Throughput commitments.

|Topic|Quota|Commitments|
|---|---|---|
|Purpose| Grants permission to create provisioned deployments, and provides the upper limit on the capacity that can be used|Purchase vehicle for Provisioned Throughput capacity|
-|Lifetime| Quota might be removed from your subscription if it isn't purchased via a commitment within five days of being granted|The minimum term is one month, with customer-selectable autorenewal behavior. A commitment isn't cancelable, and can't be moved to a new resource while it's active|
-|Scope |Quota is specific to a subscription and region, and is shared across all Azure OpenAI resources | Commitments are an attribute of an Azure OpenAI resource, and are scoped to deployments within that resource. A subscription might contain as many active commitments as there are resources.|
+|Lifetime| Quota might be removed from your subscription if it isn't purchased via a commitment within five days of being granted|The minimum term is one month, with customer-selectable autorenewal behavior. A commitment isn't cancelable, and can't be moved to a new resource while it's active|
+|Scope |Quota is specific to a subscription and region, and is shared across all Azure OpenAI resources | Commitments are an attribute of an Azure OpenAI resource, and are scoped to deployments within that resource. A subscription might contain as many active commitments as there are resources.|
|Granularity| Quota is granted specific to a model family (for example, GPT-4) but is shareable across model versions within the family| Commitments aren't model or version specific. For example, a resource's 1000 PTU commitment can cover deployments of both GPT-4 and GPT-35-Turbo|
|Capacity guarantee| Having quota doesn't guarantee that capacity is available when you create the deployment| Capacity availability to cover committed PTUs is guaranteed as long as the commitment is active.|
|Increases/Decreases| New quota can be requested and approved at any time, independent of your commitment renewal dates | The number of PTUs covered by a commitment can be increased at any time, but can't be decreased except at the time of renewal.|
-Quota and commitments work together to govern the creation of deployments within your subscriptions. To create a provisioned deployment, two criteria must be met:
+Quota and commitments work together to govern the creation of deployments within your subscriptions. To create a provisioned deployment, two criteria must be met:
-- Quota must be available for the desired model within the desired region and subscription. This means you can't exceed your subscription/region-wide limit for the model.
+- Quota must be available for the desired model within the desired region and subscription. This means you can't exceed your subscription/region-wide limit for the model.
- Committed PTUs must be available on the resource where you create the deployment. (The capacity you assign to the deployment is paid-for).

### Commitment properties and charging model
A commitment includes several properties.
Provisioned Throughput Commitments generate charges against your Azure subscription at the following times: -- At commitment creation. The charge is computed according to the current monthly PTU rate and the number of PTUs committed. You will receive a single up-front charge on your invoice.
+- At commitment creation. The charge is computed according to the current monthly PTU rate and the number of PTUs committed. You will receive a single up-front charge on your invoice.
-- At commitment renewal. If the renewal policy is set to autorenew, a new monthly charge is generated based on the PTUs committed in the new term. This charge appears as a single up-front charge on your invoice.
+- At commitment renewal. If the renewal policy is set to autorenew, a new monthly charge is generated based on the PTUs committed in the new term. This charge appears as a single up-front charge on your invoice.
-- When new PTUs are added to an existing commitment. The charge is computed based on the number of PTUs added to the commitment, pro-rated hourly to the end of the existing commitment term. For example, if 300 PTUs are added to an existing commitment of 900 PTUs exactly halfway through its term, there is a charge at the time of the addition for the equivalent of 150 PTUs (300 PTUs pro-rated to the commitment expiration date). If the commitment is renewed, the following month's charge will be for the new PTU total of 1,200 PTUs.
+- When new PTUs are added to an existing commitment. The charge is computed based on the number of PTUs added to the commitment, pro-rated hourly to the end of the existing commitment term. For example, if 300 PTUs are added to an existing commitment of 900 PTUs exactly halfway through its term, there is a charge at the time of the addition for the equivalent of 150 PTUs (300 PTUs pro-rated to the commitment expiration date). If the commitment is renewed, the following month's charge will be for the new PTU total of 1,200 PTUs.
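Expressed as arithmetic, the pro-rated amount in that example is simply the added PTUs multiplied by the fraction of the term remaining (an illustrative sketch of the calculation described above, not a billing API):

```nodejs
// 300 PTUs added exactly halfway through the term: 300 * 0.5 = 150 PTU-months billed.
function proratedPtuMonths(addedPtus, fractionOfTermRemaining) {
  return addedPtus * fractionOfTermRemaining;
}

console.log(proratedPtuMonths(300, 0.5)); // 150
```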
-As long as the number of deployed PTUs in a resource is covered by the resource's commitment, you'll only see the charges. However, if the number of deployed PTUs in a resource becomes greater than the resource's committed PTUs, the excess PTUs will be charged as overage at an hourly rate. Typically, the only way this overage will happen is if a commitment expires or is reduced at its renewal while the resource contains deployments. For example, if a 300 PTU commitment is allowed to expire on a resource that has 300 PTUs deployed, the deployed PTUs are no longer covered by any commitment. Once the expiration date is reached, the subscription is charged an hourly overage fee based on the 300 excess PTUs.
+As long as the number of deployed PTUs in a resource is covered by the resource's commitment, you'll only see the commitment charges. However, if the number of deployed PTUs in a resource becomes greater than the resource's committed PTUs, the excess PTUs will be charged as overage at an hourly rate. Typically, the only way this overage will happen is if a commitment expires or is reduced at its renewal while the resource contains deployments. For example, if a 300 PTU commitment is allowed to expire on a resource that has 300 PTUs deployed, the deployed PTUs are no longer covered by any commitment. Once the expiration date is reached, the subscription is charged an hourly overage fee based on the 300 excess PTUs.
-The hourly rate is higher than the monthly commitment rate and the charges exceed the monthly rate within a few days. There are two ways to end hourly overage charges:
+The hourly rate is higher than the monthly commitment rate and the charges exceed the monthly rate within a few days. There are two ways to end hourly overage charges:
-- Delete or scale-down deployments so that they don't use more PTUs than are committed
+- Delete or scale-down deployments so that they don't use more PTUs than are committed.
- Create a new commitment on the resource to cover the deployed PTUs.

## Purchasing and managing commitments

### Planning your commitments

Upon receiving confirmation that Provisioned Throughput Unit (PTU) quota is assigned to a subscription, you must create commitments on the target resources (or extend existing commitments) to make the quota usable for deployments.
-Prior to creating commitments, plan how the provisioned deployments will be used and which Azure OpenAI resources will host them. Commitments have a one month minimum term and can't be decreased in size until the end of the term. They also can't be moved to new resources once created. Finally, the sum of your committed PTUs can't be greater than your quota – PTUs committed on a resource are no longer available to commit to on a different resource until the commitment expires. Having a clear plan on which resources will be used for provisioned deployments and the capacity you intend to apply to them (for at least a month) will help ensure an optimal experience with your provisioned throughput setup.
+Prior to creating commitments, plan how the provisioned deployments will be used and which Azure OpenAI resources will host them. Commitments have a **one month minimum term and can't be decreased in size until the end of the term**. They also can't be moved to new resources once created. Finally, the sum of your committed PTUs can't be greater than your quota – PTUs committed on a resource are no longer available to commit to on a different resource until the commitment expires. Having a clear plan on which resources will be used for provisioned deployments and the capacity you intend to apply to them (for at least a month) will help ensure an optimal experience with your provisioned throughput setup.
For example:
-- Don't create a commitment and deployment on a "temporary" resource for the purpose of validation. You'll be locked into using that resource for at least a month. Instead, if the plan is to ultimately use the PTUs on a production resource, create the commitment and test deployment on that resource right from the start.
+- Don't create a commitment and deployment on a *temporary* resource for the purpose of validation. You'll be locked into using that resource for at least a month. Instead, if the plan is to ultimately use the PTUs on a production resource, create the commitment and test deployment on that resource right from the start.
-- Calculate the number of PTUs to commit on a resource based on the number, model and size of the deployments you intend to create, keeping in mind the minimum number of PTUs each model requires to create a deployment.
+- Calculate the number of PTUs to commit on a resource based on the number, model, and size of the deployments you intend to create, keeping in mind the minimum number of PTUs each model requires to create a deployment.
 - Example 1: GPT-4-32K requires a minimum of 200 PTUs to deploy. If you create a commitment of only 100 PTUs on a resource, you won't have enough committed PTUs to deploy GPT-4-32K there.
 + - Example 1: GPT-4-32K requires a minimum of 200 PTUs to deploy. If you create a commitment of only 100 PTUs on a resource, you won't have enough committed PTUs to deploy GPT-4-32K there.
- - Example 2: If you need to create multiple deployments on a resource, sum the PTUs required for each deployment. A production resource hosting deployments for 300 PTUs of GPT-4, and 500 PTUs of GPT-4-32K will require a commitment of at least 800 PTUs to cover both deployments.
+ - Example 2: If you need to create multiple deployments on a resource, sum the PTUs required for each deployment. A production resource hosting deployments for 300 PTUs of GPT-4, and 500 PTUs of GPT-4-32K will require a commitment of at least 800 PTUs to cover both deployments.
-- Distribute or consolidate PTUs as needed. For example, total quota of 1000 PTUs can be distributed across resources as needed to support your deployments. It could be committed on a single resource to support one or more deployments adding up to 1000 PTUs, or distributed across multiple resources (for example, a dev and a prod resource) as long as the total number of committed PTUs is less than or equal to the quota of 1000.
+- Distribute or consolidate PTUs as needed. For example, total quota of 1000 PTUs can be distributed across resources as needed to support your deployments. It could be committed on a single resource to support one or more deployments adding up to 1000 PTUs, or distributed across multiple resources (for example, a dev and a prod resource) as long as the total number of committed PTUs is less than or equal to the quota of 1000.
-- Consider operational requirements in your plan. For example:
+- Consider operational requirements in your plan. For example:
- Organizationally required resource naming conventions
- Business continuity policies that require multiple deployments of a model per region, perhaps on different Azure OpenAI resources
-### Creating Provisioned Throughput commitments
+### Managing Provisioned Throughput Commitments
+
+Provisioned throughput commitments are created and managed from the **Manage Commitments** view in Azure OpenAI Studio. You can navigate to this view by selecting **Manage Commitments** from the Quota pane:
++
+From the Manage Commitments view, you can do several things:
+
+- Purchase new commitments or edit existing commitments.
+- Monitor all commitments in your subscription.
+- Identify and take action on commitments that might cause unexpected billing.
-With the plan ready, the next step is to create the commitments. Commitments are created manually via Azure OpenAI Studio and require the user creating the commitment to have either the [Contributor or Cognitive Services Contributor role](./role-based-access-control.md) at the subscription level.
+The sections below will take you through these tasks.
+
+### Purchase a Provisioned Throughput Commitment
+
+With your commitment plan ready, the next step is to create the commitments. Commitments are created manually via Azure OpenAI Studio and require the user creating the commitment to have either the [Contributor or Cognitive Services Contributor role](./role-based-access-control.md) at the subscription level.
For each new commitment you need to create, follow these steps:
-1. Launch the Provisioned Throughput purchase dialog by selecting **Quotas** > **Provisioned** > **Click here to purchase**.
+1. Launch the Provisioned Throughput purchase dialog by selecting **Quotas** > **Provisioned** > **Manage Commitments**.
:::image type="content" source="../media/how-to/provisioned-onboarding/quota.png" alt-text="Screenshot of the purchase dialog." lightbox="../media/how-to/provisioned-onboarding/quota.png":::
-2. Select the Azure OpenAI resource and purchase the commitment.
+2. Select **Purchase commitment**.
+
+3. Select the Azure OpenAI resource and purchase the commitment. You will see your resources divided into resources with existing commitments, which you can edit, and resources that don't currently have a commitment.
| Setting | Notes |
|---|---|
-| **Select a resource** | Choose the resource where you will create the provisioned deployment. Once you have purchased the commitment, you will be unable to use the PTUs on another resource until the current commitment expires. |
-| **Amount to commit (PTU)** | Choose the number of PTUs you're committing to. This number can be increased later, but can't be decreased |
+| **Select a resource** | Choose the resource where you'll create the provisioned deployment. Once you have purchased the commitment, you will be unable to use the PTUs on another resource until the current commitment expires. |
+| **Select a commitment type** | Select Provisioned. (Provisioned is equivalent to Provisioned Managed) |
+| **Current uncommitted provisioned quota** | The number of PTUs currently available for you to commit to this resource. |
+| **Amount to commit (PTU)** | Choose the number of PTUs you're committing to. **This number can be increased during the commitment term, but can't be decreased**. Enter values in increments of 50 for the commitment type Provisioned. |
| **Commitment tier for current period** | The commitment period is set to one month. |
-| **Renewal settings** | Select Purchase. A confirmation dialog will be displayed. After you confirm, your PTUs will be committed, and you can use them to create a provisioned deployment. |
+| **Renewal settings** | Auto-renew at current PTUs <br> Auto-renew at lower PTUs <br> Do not auto-renew |
+
+4. Select **Purchase**. A confirmation dialog will be displayed. After you confirm, your PTUs will be committed, and you can use them to create a provisioned deployment.
+
+> [!IMPORTANT]
+> A new commitment is billed up-front for the entire term. If the renewal settings are set to auto-renew, then you will be billed again on each renewal date based on the renewal settings.
+
+## Edit an existing Provisioned Throughput commitment
+
+From the Manage Commitments view, you can also edit an existing commitment. There are two types of changes you can make to an existing commitment:
+
+- You can add PTUs to the commitment.
+- You can change the renewal settings.
+
+To edit a commitment, select the commitment you want to edit, then select **Edit commitment**.
### Adding Provisioned Throughput Units to existing commitments
-The steps are the same as in the previous example, but you'll increase the **amount to commit (PTU)** value. The value shown here is the total amount of PTUs purchased not incremental. The additional price charge displayed will represent a pro-rated amount to pay for the added PTUs over the remaining time in the time period.
+Adding PTUs to an existing commitment will allow you to create larger or more numerous deployments within the resource. You can do this at any time during the term of your commitment.
:::image type="content" source="../media/how-to/provisioned-onboarding/increase-commitment.png" alt-text="Screenshot of commitment purchase UI with an increase in the amount to commit value." lightbox="../media/how-to/provisioned-onboarding/increase-commitment.png":::
-### Managing commitments
+> [!IMPORTANT]
+> When you add PTUs to a commitment, they will be billed immediately, at a pro-rated amount from the current date to the end of the existing commitment term. Adding PTUs does not reset the commitment term.
+
+### Changing renewal settings
+
+Commitment renewal settings can be changed at any time before the expiration date of your commitment. Reasons you might want to change the renewal settings include ending your use of provisioned throughput by setting the commitment to not auto-renew, or decreasing your usage of provisioned throughput by lowering the number of PTUs that will be committed in the next period.
+
+> [!IMPORTANT]
+> If you allow a commitment to expire or decrease in size such that the deployments under the resource require more PTUs than you have in your resource commitment, you will receive hourly overage charges for any excess PTUs. For example, a resource that has deployments that total 500 PTUs and a commitment for 300 PTUs will generate hourly overage charges for 200 PTUs.
+
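To make the overage described above concrete, here's a minimal arithmetic sketch (a hypothetical helper, not an official billing formula) of how many PTUs would incur hourly overage charges:

```python
# Hypothetical sketch: deployed PTUs beyond the committed PTUs on a resource
# are the ones that incur hourly overage charges.
def overage_ptus(deployed_ptus: int, committed_ptus: int) -> int:
    """Return the number of PTUs that would be billed at the hourly overage rate."""
    return max(0, deployed_ptus - committed_ptus)

# Example from the text: 500 PTUs deployed against a 300 PTU commitment.
print(overage_ptus(deployed_ptus=500, committed_ptus=300))  # 200
```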
+## Monitor commitments and prevent unexpected billings
+
+The Manage Commitments pane provides a subscription-wide overview of all resources with commitments and PTU usage within a given Azure subscription. Of particular interest are:
+
+- **PTUs Committed, Deployed and Usage** - These figures provide the sizes of your commitments, and how much is in use by deployments. Maximize your investment by using all of your committed PTUs.
+- **Expiration policy and date** - The expiration date and policy tell you when a commitment will expire and what will happen when it does. A commitment set to auto-renew will generate a billing event on the renewal date. For commitments that are expiring, be sure you delete deployments from these resources prior to the expiration date to prevent hourly overage billing. The policy shown also reflects the current renewal settings for the commitment.
+- **Notifications** - Alerts regarding important conditions like unused commitments, and configurations that might result in billing overages. Billing overages can be caused by situations such as when a commitment has expired and deployments are still present, but have shifted to hourly billing.
+
+## Common Commitment Management Scenarios
**Discontinue use of provisioned throughput**
-To end use of provisioned throughput, and stop any charges after the current commitments are expired, two steps must be taken:
+To end use of provisioned throughput and prevent hourly overage charges after the current commitments expire, two steps must be taken: 1. Set the renewal policy on all commitments to *Don't autorenew*. 2. Delete the provisioned deployments using the quota. **Move a commitment/deployment to a new resource in the same subscription/region**
1. Set the renewal policy on all commitments to *Don't autorenew*. 2. Delete the provisioned deployments using the quota. **Move a commitment/deployment to a new resource in the same subscription/region**
-It isn't possible in Azure OpenAI Studio to directly *move* a deployment or a commitment to a new resource. Instead, a new deployment needs to be created on the target resource and traffic moved to it. There will need to be a commitment purchased established on the new resource to accomplish this. Because commitments are charged up-front for a 30-day period, it's necessary to time this move with the expiration of the original commitment to minimize overlap with the new commitment and "double-billing" during the overlap.
+It isn't possible in Azure OpenAI Studio to directly *move* a deployment or a commitment to a new resource. Instead, a new deployment needs to be created on the target resource and traffic moved to it. A commitment needs to be purchased on the new resource to accomplish this. Because commitments are charged up-front for a 30-day period, it's necessary to time this move with the expiration of the original commitment to minimize overlap with the new commitment and "double-billing" during the overlap.
-There are two approaches that can be taken to implement this transition.
+There are two approaches that can be taken to implement this transition.
**Option 1: No-Overlap Switchover**
This option requires some downtime, but requires no extra quota and generates no
| Steps | Notes | |-|-| |Set the renewal policy on the existing commitment to expire| This will prevent the commitment from renewing and generating further charges |
-|Before expiration of the existing commitment, delete its deployment | Downtime will start at this point and will last until the new deployment is created and traffic is moved. You'll minimize the duration by timing the deletion to happen as close to the expiration date/time as possible.|
+|Before expiration of the existing commitment, delete its deployment | Downtime will start at this point and will last until the new deployment is created and traffic is moved. You'll minimize the duration by timing the deletion to happen as close to the expiration date/time as possible.|
|After expiration of the existing commitment, create the commitment on the new resource|Minimize downtime by executing this and the next step as soon after expiration as possible.| |Create the deployment on the new resource and move traffic to it|| **Option 2: Overlapped Switchover**
-This option has no downtime by having both existing and new deployments live at the same time. This requires having quota available to create the new deployment, and will generate extra costs for the duration of the overlapped deployments.
+This option has no downtime by having both existing and new deployments live at the same time. This requires having quota available to create the new deployment, and will generate extra costs for the duration of the overlapped deployments.
| Steps | Notes | |-|-|
If the final step takes longer than expected and will finish after the existing
- **Pay overage**: Keep the original deployment and pay hourly until you have moved traffic off and deleted the deployment. - **Reset the original commitment** to renew one more time. This will give you time to complete the move with a known cost.
-Both paying for an overage and resetting the original commitment will generate charges beyond the original expiration date. Paying overage charges might be cheaper than a new one-month commitment if you only need a day or two to complete the move. Compare the costs of both options to find the lowest-cost approach.
+Both paying for an overage and resetting the original commitment will generate charges beyond the original expiration date. Paying overage charges might be cheaper than a new one-month commitment if you only need a day or two to complete the move. Compare the costs of both options to find the lowest-cost approach.
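One way to frame that comparison is a quick back-of-the-envelope calculation. The sketch below uses placeholder rates (`hourly_rate_per_ptu` and `monthly_rate_per_ptu` are hypothetical, not published prices); substitute the rates from your own pricing agreement:

```python
# Hypothetical cost comparison: paying hourly overage for a short move window
# versus renewing the commitment for one more month. Rates are placeholders.
def overage_cost(ptus: int, hourly_rate_per_ptu: float, hours: float) -> float:
    return ptus * hourly_rate_per_ptu * hours

def renewal_cost(ptus: int, monthly_rate_per_ptu: float) -> float:
    return ptus * monthly_rate_per_ptu

ptus = 300
two_day_overage = overage_cost(ptus, hourly_rate_per_ptu=1.0, hours=48)
one_month_renewal = renewal_cost(ptus, monthly_rate_per_ptu=500.0)
cheaper = "pay overage" if two_day_overage < one_month_renewal else "reset the commitment"
print(f"Cheaper option: {cheaper} ({two_day_overage:.2f} vs {one_month_renewal:.2f})")
```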
### Move the deployment to a new region and or subscription
The same approaches apply in moving the commitment and deployment within the reg
### View and edit an existing resource
-In Azure OpenAI Studio, select **Quota** > **Provisioned** > **Manage Commitment Tiers** and select a resource with an existing commitment to view/change it.
+In Azure OpenAI Studio, select **Quota** > **Provisioned** > **Manage commitments** and select a resource with an existing commitment to view/change it.
## Next steps
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
Previously updated : 01/19/2024 Last updated : 02/13/2024 recommendations: false
To allow access to your Azure AI Search resource from your client machines, like
:::image type="content" source="../media/use-your-data/approve-private-endpoint.png" alt-text="A screenshot showing private endpoint approval screen." lightbox="../media/use-your-data/approve-private-endpoint.png":::
+The private endpoint resource is provisioned in a Microsoft-managed tenant, while the linked resource is in your tenant. You can't access the private endpoint resource by just clicking the **private endpoint** link (in blue font) in the **Private access** tab of the **Networking** page. Instead, click elsewhere on the row, and the **Approve** button above should then be clickable.
+ Learn more about the [manual approval workflow](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow).
To allow access to your Storage Account from Azure OpenAI and Azure AI Search, w
In the Azure portal, navigate to your storage account networking tab, choose "Selected networks", and then select **Allow Azure services on the trusted services list to access this storage account** and click Save.
+> [!NOTE]
+> The trusted service feature is only available using the command line described above, and cannot be done using the Azure portal.
+ ### Disable public network access You can disable public network access of your Storage Account in the Azure portal.
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
Title: Azure OpenAI Service REST API reference description: Learn how to use Azure OpenAI's REST API. In this article, you learn about authorization options, how to structure a request and receive a response.
-#
Previously updated : 11/06/2023 Last updated : 02/13/2024 recommendations: false
This article provides details on the inference REST API endpoints for Azure Open
## Authentication
-Azure OpenAI provides two methods for authentication. you can use either API Keys or Microsoft Entra ID.
+Azure OpenAI provides two methods for authentication. You can use either API Keys or Microsoft Entra ID.
- **API Key authentication**: For this type of authentication, all API requests must include the API Key in the ```api-key``` HTTP header. The [Quickstart](./quickstart.md) provides guidance for how to make calls with this type of authentication.
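As a quick illustration of API key authentication, the minimal sketch below sends the key in the `api-key` header (resource name, deployment, API version, and key are placeholders to substitute with your own values):

```python
import requests

# Placeholder values -- replace with your own resource, deployment, version, and key.
resource = "my-resource"
deployment = "my-deployment"
api_version = "2023-05-15"
api_key = "<your-api-key>"

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/completions?api-version={api_version}"
)

# API key authentication: every request carries the key in the 'api-key' HTTP header.
response = requests.post(
    url,
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={"prompt": "Hello", "max_tokens": 16},
)
print(response.status_code, response.json())
```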
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| Parameter | Type | Required? | Description | |--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```deployment-id``` | string | Required | The deployment name you chose when you deployed the model. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+| ```deployment-id``` | string | Required | The deployment name you chose when you deployed the model.|
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format.|
**Supported versions** - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)-- `2023-03-15-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-06-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2024-02-15-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
|--|--|--|--|--| | ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, or array of strings. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model generates as if from the beginning of a new document. | | ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
-| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values means the model takes more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. |
+| ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values mean the model takes more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. |
| ```top_p``` | number | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | | ```logit_bias``` | map | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. | | ```user``` | string | Optional | | A unique identifier representing your end-user, which can help monitoring and detecting abuse | | ```n``` | integer | Optional | 1 | How many completions to generate for each prompt. Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. | | ```stream``` | boolean | Optional | False | Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.|
-| ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. the API will always return the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. |
+| ```logprobs``` | integer | Optional | null | Include the log probabilities on the logprobs most likely tokens, as well the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens. The API will always return the logprob of the sampled token, so there might be up to logprobs+1 elements in the response. This parameter cannot be used with `gpt-35-turbo`. |
| ```suffix```| string | Optional | null | The suffix that comes after a completion of inserted text. | | ```echo``` | boolean | Optional | False | Echo back the prompt in addition to the completion. This parameter cannot be used with `gpt-35-turbo`. | | ```stop``` | string or array | Optional | null | Up to four sequences where the API will stop generating further tokens. The returned text won't contain the stop sequence. For GPT-4 Turbo with Vision, up to two sequences are supported. |
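To illustrate how these parameters combine, here's a hedged example of a completions request body (values are illustrative only; per the guidance above, adjust `temperature` or `top_p`, not both):

```python
# Illustrative completions request body using several of the parameters above.
completions_body = {
    "prompt": "Write a tagline for an ice cream shop.",
    "max_tokens": 50,          # cap on generated tokens
    "temperature": 0.9,        # higher values take more risks / more creative output
    "n": 1,                    # number of completions per prompt
    "stream": False,           # set True to receive server-sent events
    "stop": ["\n\n"],          # stop generating at a blank line
    "user": "pseudonymized-user-id",  # end-user identifier for abuse monitoring
}
```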
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** - `2022-12-01` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2022-12-01/inference.json)-- `2023-03-15-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-06-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
**Request body** | Parameter | Type | Required? | Default | Description | |--|--|--|--|--|
-| ```input```| string or array | Yes | N/A | Input text to get embeddings for, encoded as an array or string. The number of input tokens varies depending on what [model you are using](./concepts/models.md). Only `text-embedding-ada-002 (Version 2)` supports array input.|
+| ```input```| string or array | Yes | N/A | Input text to get embeddings for, encoded as an array or string. The number of input tokens varies depending on what [model you're using](./concepts/models.md). Only `text-embedding-ada-002 (Version 2)` supports array input.|
| ```user``` | string | No | Null | A unique identifier representing your end-user. This will help Azure OpenAI monitor and detect abuse. **Do not pass PII identifiers; instead, use pseudo-anonymized values such as GUIDs.** | #### Example request
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2023-03-15-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
+- `2023-03-15-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-03-15-preview/inference.json)
- `2023-05-15` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/stable/2023-05-15/inference.json)-- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-06-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` (required for Vision scenarios) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
+ **Request body**
The request body consists of a series of messages. The model will generate a res
|--|--|--|--|--| | `messages` | array | Yes | N/A | The series of messages associated with this chat completion request. It should include previous messages in the conversation. Each message has a `role` and `content`. | | `role`| string | Yes | N/A | Indicates who is giving the current message. Can be `system`,`user`,`assistant`,`tool`, or `function`.|
-| `content` | string or array | Yes | N/A | The content of the message. It must be a string, unless in a Vision-enabled scenario. If it's part of the `user` message, using the GPT-4 Turbo with Vision model, with the latest API version, then `content` must be an array of structures, where each item represents either text or an image: <ul><li> `text`: input text is represented as a structure with the following properties: </li> <ul> <li> `type` = "text" </li> <li> `text` = the input text </li> </ul> <li> `images`: an input image is represented as a structure with the following properties: </li><ul> <li> `type` = "image_url" </li> <li> `image_url` = a structure with the following properties: </li> <ul> <li> `url` = the image URL </li> <li>(optional) `detail` = "high", "low", or "auto" </li> </ul> </ul> </ul>|
+| `content` | string or array | Yes | N/A | The content of the message. It must be a string, unless in a Vision-enabled scenario. If it's part of the `user` message, using the GPT-4 Turbo with Vision model, with the latest API version, then `content` must be an array of structures, where each item represents either text or an image: <ul><li> `text`: input text is represented as a structure with the following properties: </li> <ul> <li> `type` = "text" </li> <li> `text` = the input text </li> </ul> <li> `images`: an input image is represented as a structure with the following properties: </li><ul> <li> `type` = "image_url" </li> <li> `image_url` = a structure with the following properties: </li> <ul> <li> `url` = the image URL </li> <li>(optional) `detail` = `high`, `low`, or `auto` </li> </ul> </ul> </ul>|
| `contentPart` | object | No | N/A | Part of a user's multi-modal message. It can be either text type or image type. If text, it will be a text string. If image, it will be a `contentPartImage` object. | | `contentPartImage` | object | No | N/A | Represents a user-uploaded image. It has a `url` property, which is either a URL of the image or the base 64 encoded image data. It also has a `detail` property which can be `auto`, `low`, or `high`.| | `enhancements` | object | No | N/A | Represents the Vision enhancement features requested for the chat. It has a `grounding` and `ocr` property, which each have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service.|
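The structure described above can be put together as follows. This is a hedged sketch of a Vision-enabled chat completions request body, reconstructed from the parameter descriptions (image URL and values are hypothetical):

```python
# Sketch of a Vision-enabled chat completions body: the user message content is
# an array of text and image_url parts, and enhancements request OCR/grounding.
chat_body = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this picture."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/photo.jpg",  # hypothetical image URL
                        "detail": "auto",
                    },
                },
            ],
        },
    ],
    "enhancements": {
        "grounding": {"enabled": True},
        "ocr": {"enabled": True},
    },
    "max_tokens": 300,
}
```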
In the example response, `finish_reason` equals `stop`. If `finish_reason` equal
| ```frequency_penalty``` | number | Optional | 0 | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.| | ```logit_bias``` | object | Optional | null | Modify the likelihood of specified tokens appearing in the completion. Accepts a json object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.| | ```user``` | string | Optional | | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse.|
-|```function_call```| | Optional | | `[Deprecated in 2023-12-01-preview replacement paremeter is tools_choice]`Controls how the model responds to function calls. "none" means the model doesn't call a function, and responds to the end-user. "auto" means the model can pick between an end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. "none" is the default when no functions are present. "auto" is the default if functions are present. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json) |
+|```function_call```| | Optional | | `[Deprecated in 2023-12-01-preview, replacement parameter is tool_choice]` Controls how the model responds to function calls. `none` means the model doesn't call a function, and responds to the end-user. `auto` means the model can pick between responding to the end-user or calling a function. Specifying a particular function via {"name": "my_function"} forces the model to call that function. `none` is the default when no functions are present. `auto` is the default if functions are present. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json) |
|```functions``` | [`FunctionDefinition[]`](#functiondefinition-deprecated) | Optional | | `[Deprecated in 2023-12-01-preview, replacement parameter is tools]` A list of functions the model can generate JSON inputs for. This parameter requires API version [`2023-07-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/generated.json)| |```tools```| string (The type of the tool. Only [`function`](#function) is supported.) | Optional | |A list of tools the model can call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model can generate JSON inputs for. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/generated.json) |
-|```tool_choice```| string or object | Optional | none is the default when no functions are present. auto is the default if functions are present. | Controls which (if any) function is called by the model. none means the model won't call a function and instead generates a message. auto means the model can pick between generating a message or calling a function. Specifying a particular function via {"type: "function", "function": {"name": "my_function"}} forces the model to call that function. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)|
+|```tool_choice```| string or object | Optional | `none` is the default when no functions are present. `auto` is the default if functions are present. | Controls which (if any) function is called by the model. `none` means the model won't call a function and instead generates a message. `auto` means the model can pick between generating a message or calling a function. Specifying a particular function via {"type": "function", "function": {"name": "my_function"}} forces the model to call that function. This parameter requires API version [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) or later.|
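As a rough illustration of the `tools` and `tool_choice` parameters, the sketch below defines one hypothetical function (`get_current_weather` is an example name, not part of the API) and forces the model to call it:

```python
# Sketch of a chat completions body using tools / tool_choice
# (requires API version 2023-12-01-preview or later).
tool_calling_body = {
    "messages": [
        {"role": "user", "content": "What's the weather in Seattle?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string", "description": "City name."}
                    },
                    "required": ["city"],
                },
            },
        }
    ],
    # Force a call to get_current_weather; use "auto" to let the model decide.
    "tool_choice": {"type": "function", "function": {"name": "get_current_weather"}},
}
```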
### ChatMessage
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. | **Supported versions**-- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)-- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-06-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-09-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
+ #### Example request
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** - `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
+ **Request body**
POST https://{your-resource-name}.openai.azure.com/openai/images/generations:sub
**Supported versions** -- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
+ **Request body**
GET https://{your-resource-name}.openai.azure.com/openai/operations/images/{oper
**Supported versions** -- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
+ #### Example request
DELETE https://{your-resource-name}.openai.azure.com/openai/operations/images/{o
**Supported versions** -- `2023-06-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)-- `2023-07-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)-- `2023-08-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
+- `2023-06-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
+- `2023-07-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-07-01-preview/inference.json)
+- `2023-08-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-08-01-preview/inference.json)
#### Example request
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-09-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2023-09-01-preview` (retiring 2024-04-02) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
+- `2023-09-01-preview` (retiring February 4, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-09-01-preview/inference.json)
- `2023-12-01-preview` [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json)
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
**Request body**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
**Supported versions** -- `2024-02-15-preview`
+- `2024-02-15-preview`[Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2024-02-15-preview/inference.json)
**Request body**
Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI servic
## Next steps
-Learn about [Models, and fine-tuning with the REST API](/rest/api/azureopenai/fine-tuning?view=rest-azureopenai-2023-10-01-preview).
+Learn about [Models, and fine-tuning with the REST API](/rest/api/azureopenai/fine-tuning).
Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
ai-services Responsible Use Of Ai Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/responsible-use-of-ai-overview.md
Title: Overview of Responsible use of AI description: Azure AI services provides information and guidelines on how to responsibly use our AI services in applications. Below are the links to articles that provide this guidance for the different services within the Azure AI services suite.
-#
# Responsible use of AI with Azure AI services
-Azure AI services provides information and guidelines on how to responsibly use artificial intelligence in applications. Below are the links to articles that provide this guidance for the different services within the Azure AI services suite.
-
-## Anomaly Detector
-
-* [Transparency note and use cases](/legal/cognitive-services/anomaly-detector/ad-transparency-note?context=/azure/ai-services/anomaly-detector/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/anomaly-detector/data-privacy-security?context=/azure/ai-services/anomaly-detector/context/context)
-
-## Azure AI Vision - OCR
-
-* [Transparency note and use cases](/legal/cognitive-services/computer-vision/ocr-transparency-note?context=/azure/ai-services/computer-vision/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/computer-vision/ocr-characteristics-and-limitations?context=/azure/ai-services/computer-vision/context/context)
-* [Integration and responsible use](/legal/cognitive-services/computer-vision/ocr-guidance-integration-responsible-use?context=/azure/ai-services/computer-vision/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/computer-vision/ocr-data-privacy-security?context=/azure/ai-services/computer-vision/context/context)
-
-## Azure AI Vision - Image Analysis
-
-* [Transparency note](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=/azure/ai-services/computer-vision/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/computer-vision/imageanalysis-characteristics-and-limitations?context=/azure/ai-services/computer-vision/context/context)
-* [Integration and responsible use](/legal/cognitive-services/computer-vision/imageanalysis-guidance-for-integration?context=/azure/ai-services/computer-vision/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/computer-vision/imageanalysis-data-privacy-security?context=/azure/ai-services/computer-vision/context/context)
-* [Limited Access features](/legal/cognitive-services/computer-vision/limited-access?context=/azure/ai-services/computer-vision/context/context)
-
-## Azure AI Vision - Face
-
-* [Transparency note and use cases](/legal/cognitive-services/face/transparency-note?context=/azure/ai-services/computer-vision/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/face/characteristics-and-limitations?context=/azure/ai-services/computer-vision/context/context)
-* [Integration and responsible use](/legal/cognitive-services/face/guidance-integration-responsible-use?context=/azure/ai-services/computer-vision/context/context)
-* [Data privacy and security](/legal/cognitive-services/face/data-privacy-security?context=/azure/ai-services/computer-vision/context/context)
-* [Limited Access features](/legal/cognitive-services/computer-vision/limited-access-identity?context=/azure/ai-services/computer-vision/context/context)
-
-## Azure AI Vision - Spatial Analysis
-
-* [Transparency note and use cases](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=/azure/ai-services/computer-vision/context/context)
-* [Responsible use in AI deployment](/legal/cognitive-services/computer-vision/responsible-use-deployment?context=/azure/ai-services/computer-vision/context/context)
-* [Disclosure design guidelines](/legal/cognitive-services/computer-vision/disclosure-design?context=/azure/ai-services/computer-vision/context/context)
-* [Research insights](/legal/cognitive-services/computer-vision/research-insights?context=/azure/ai-services/computer-vision/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/computer-vision/compliance-privacy-security-2?context=/azure/ai-services/computer-vision/context/context)
-
-## Custom Vision
-
-* [Transparency note and use cases](/legal/cognitive-services/custom-vision/custom-vision-cvs-transparency-note?context=/azure/ai-services/custom-vision-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/custom-vision/custom-vision-cvs-characteristics-and-limitations?context=/azure/ai-services/custom-vision-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/custom-vision/custom-vision-cvs-guidance-integration-responsible-use?context=/azure/ai-services/custom-vision-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/custom-vision/custom-vision-cvs-data-privacy-security?context=/azure/ai-services/custom-vision-service/context/context)
-
-## Language service
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
-
-## Language - Custom text classification
-
-* [Transparency note](/legal/cognitive-services/language-service/ctc-transparency-note?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/ctc-guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/language-service/ctc-characteristics-and-limitations?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/ctc-data-privacy-security?context=/azure/ai-services/language-service/context/context)
-
-## Language - Named entity recognition
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-named-entity-recognition?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
-
-## Language - Custom named entity recognition
-
-* [Transparency note](/legal/cognitive-services/language-service/cner-transparency-note?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/cner-guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/language-service/cner-characteristics-and-limitations?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/cner-data-privacy-security?context=/azure/ai-services/language-service/context/context)
-
-## Language - Entity linking
-
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
-
-## Language - Language detection
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-language-detection?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
-
-## Language - Key phrase extraction
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-key-phrase-extraction?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
-
-## Language - Personally identifiable information detection
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
-
-## Language - Question Answering
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-question-answering?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-question-answering?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy-security-question-answering?context=/azure/ai-services/language-service/context/context)
-
-## Language - Sentiment Analysis and opinion mining
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
-
-## Language - Text Analytics for health
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
-
-## Language - Summarization
-
-* [Transparency note](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/ai-services/language-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/language-service/guidance-integration-responsible-use-summarization?context=/azure/ai-services/language-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/language-service/characteristics-and-limitations-summarization?context=/azure/ai-services/language-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/language-service/data-privacy?context=/azure/ai-services/language-service/context/context)
-
-## Language Understanding
-
-* [Transparency note and use cases](/legal/cognitive-services/luis/luis-transparency-note?context=/azure/ai-services/LUIS/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/luis/characteristics-and-limitations?context=/azure/ai-services/LUIS/context/context)
-* [Integration and responsible use](/legal/cognitive-services/luis/guidance-integration-responsible-use?context=/azure/ai-services/LUIS/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/luis/data-privacy-security?context=/azure/ai-services/LUIS/context/context)
-
-## Azure OpenAI Service
-
-* [Transparency note](/legal/cognitive-services/openai/transparency-note?context=/azure/ai-services/openai/context/context)
-* [Limited access](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context)
-* [Code of conduct](/legal/cognitive-services/openai/code-of-conduct?context=/azure/ai-services/openai/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context)
-
-## Personalizer
-
-* [Transparency note and use cases](./personalizer/responsible-use-cases.md)
-* [Characteristics and limitations](./personalizer/responsible-characteristics-and-limitations.md)
-* [Integration and responsible use](./personalizer/responsible-guidance-integration.md)
-* [Data and privacy](./personalizer/responsible-data-and-privacy.md)
-
-## QnA Maker
-
-* [Transparency note and use cases](/legal/cognitive-services/qnamaker/transparency-note-qnamaker?context=/azure/ai-services/qnamaker/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/qnamaker/characteristics-and-limitations-qnamaker?context=/azure/ai-services/qnamaker/context/context)
-* [Integration and responsible use](/legal/cognitive-services/qnamaker/guidance-integration-responsible-use-qnamaker?context=/azure/ai-services/qnamaker/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/qnamaker/data-privacy-security-qnamaker?context=/azure/ai-services/qnamaker/context/context)
-
-## Speech - Pronunciation Assessment
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/ai-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/pronunciation-assessment/characteristics-and-limitations-pronunciation-assessment?context=/azure/ai-services/speech-service/context/context)
-
-## Speech - Speaker Recognition
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speaker-recognition/characteristics-and-limitations-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/speaker-recognition/limited-access-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
-* [General guidelines](/legal/cognitive-services/speech-service/speaker-recognition/guidance-integration-responsible-use-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speaker-recognition/data-privacy-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
-
-## Speech - Custom Neural Voice
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/custom-neural-voice/characteristics-and-limitations-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
-* [Limited access](/legal/cognitive-services/speech-service/custom-neural-voice/limited-access-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
-* [Responsible deployment of synthetic speech](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-guidelines-responsible-deployment-synthetic?context=/azure/ai-services/speech-service/context/context)
-* [Disclosure of voice talent](/legal/cognitive-services/speech-service/disclosure-voice-talent?context=/azure/ai-services/speech-service/context/context)
-* [Disclosure of design guidelines](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-guidelines?context=/azure/ai-services/speech-service/context/context)
-* [Disclosure of design patterns](/legal/cognitive-services/speech-service/custom-neural-voice/concepts-disclosure-patterns?context=/azure/ai-services/speech-service/context/context)
-* [Code of conduct](/legal/cognitive-services/speech-service/tts-code-of-conduct?context=/azure/ai-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/custom-neural-voice/data-privacy-security-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
-
-## Speech - Text to speech
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/text-to-speech/transparency-note?context=/azure/ai-services/speech-service/context/context)
-
-## Speech - Speech to text
-
-* [Transparency note and use cases](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context)
-* [Characteristics and limitations](/legal/cognitive-services/speech-service/speech-to-text/characteristics-and-limitations?context=/azure/ai-services/speech-service/context/context)
-* [Integration and responsible use](/legal/cognitive-services/speech-service/speech-to-text/guidance-integration-responsible-use?context=/azure/ai-services/speech-service/context/context)
-* [Data, privacy, and security](/legal/cognitive-services/speech-service/speech-to-text/data-privacy-security?context=/azure/ai-services/speech-service/context/context)
+Azure AI services provides information and guidelines on how to responsibly use artificial intelligence in applications. Below are the links to articles that provide this guidance for the different services within the Azure AI services suite.
+
+## Vision
+- [Azure AI Vision - Image Analysis](/legal/cognitive-services/computer-vision/imageanalysis-transparency-note?context=/azure/ai-services/computer-vision/context/context)
+- [Azure AI Vision - OCR](/legal/cognitive-services/computer-vision/ocr-transparency-note?context=/azure/ai-services/computer-vision/context/context)
+- [Azure AI Vision - Face](/legal/cognitive-services/face/transparency-note?context=/azure/ai-services/computer-vision/context/context)
+- [Azure AI Vision - Spatial Analysis](/legal/cognitive-services/computer-vision/transparency-note-spatial-analysis?context=/azure/ai-services/computer-vision/context/context)
+- [Azure Custom Vision](/legal/cognitive-services/custom-vision/custom-vision-cvs-transparency-note?context=/azure/ai-services/custom-vision-service/context/context)
+- [Azure Video Indexer](/legal/azure-video-indexer/transparency-note?context=%2Fazure%2Fazure-video-indexer%2Fcontext%2Fcontext)
+
+## Language
+- [Azure AI Language](/legal/cognitive-services/language-service/transparency-note?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Custom text classification](/legal/cognitive-services/language-service/ctc-transparency-note?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Named entity recognition](/legal/cognitive-services/language-service/transparency-note-named-entity-recognition?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Custom named entity recognition](/legal/cognitive-services/language-service/cner-transparency-note?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Entity linking](/legal/cognitive-services/language-service/guidance-integration-responsible-use?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Language detection](/legal/cognitive-services/language-service/transparency-note-language-detection?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Key phrase extraction](/legal/cognitive-services/language-service/transparency-note-key-phrase-extraction?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Personally identifiable information detection](/legal/cognitive-services/language-service/transparency-note-personally-identifiable-information?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Question Answering](/legal/cognitive-services/language-service/transparency-note-question-answering?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Sentiment Analysis and opinion mining](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Text Analytics for health](/legal/cognitive-services/language-service/transparency-note-health?context=/azure/ai-services/language-service/context/context)
+- [Azure AI Language - Summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/ai-services/language-service/context/context)
+- [Language Understanding](/legal/cognitive-services/luis/luis-transparency-note?context=/azure/ai-services/LUIS/context/context)
+
+## Speech
+- [Azure AI Speech - Pronunciation Assessment](/legal/cognitive-services/speech-service/pronunciation-assessment/transparency-note-pronunciation-assessment?context=/azure/ai-services/speech-service/context/context)
+- [Azure AI Speech - Speaker Recognition](/legal/cognitive-services/speech-service/speaker-recognition/transparency-note-speaker-recognition?context=/azure/ai-services/speech-service/context/context)
+- [Azure AI Speech - Text to speech](/legal/cognitive-services/speech-service/custom-neural-voice/transparency-note-custom-neural-voice?context=/azure/ai-services/speech-service/context/context)
+- [Azure AI Speech - Speech to text](/legal/cognitive-services/speech-service/speech-to-text/transparency-note?context=/azure/ai-services/speech-service/context/context)
+
+## Search
+- [Azure AI Search](/legal/search/transparency-note?context=%2Fazure%2Fsearch%2Fcontext%2Fcontext&tabs=enrichment)
+
+## Other
+- [Azure OpenAI](/legal/cognitive-services/openai/transparency-note?context=/azure/ai-services/openai/context/context)
+- [Azure AI Content Safety](/legal/cognitive-services/content-safety/transparency-note?context=%2Fazure%2Fai-services%2Fcontent-safety%2Fcontext%2Fcontext)
+- [Azure AI Document Intelligence](/legal/cognitive-services/document-intelligence/transparency-note?toc=%2Fazure%2Fai-services%2Fdocument-intelligence%2Ftoc.json&bc=%2Fazure%2Fai-services%2Fdocument-intelligence%2Fbreadcrumb%2Ftoc.json)
+- [Anomaly Detector](/legal/cognitive-services/anomaly-detector/ad-transparency-note?context=/azure/ai-services/anomaly-detector/context/context)
+- [Personalizer](./personalizer/responsible-use-cases.md)
+- [QnA Maker](/legal/cognitive-services/qnamaker/transparency-note-qnamaker?context=/azure/ai-services/qnamaker/context/context)
ai-studio Configure Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-managed-network.md
- ignite-2023 Previously updated : 11/15/2023 Last updated : 02/13/2024
We have two network isolation aspects. One is network isolation for access to an Azure AI resource. The other is network isolation of the computing resources in your Azure AI resource and Azure AI projects, such as compute instances, serverless compute, and managed online endpoints. This document explains the latter, highlighted in the diagram. You can use the Azure AI built-in network isolation to protect your computing resources. You need to configure the following network isolation settings.
The managed VNet is preconfigured with [required default rules](#list-of-require
The following diagram shows a managed VNet configured to __allow internet outbound__: The following diagram shows a managed VNet configured to __allow only approved outbound__: > [!NOTE] > In this configuration, the storage, key vault, and container registry used by the Azure AI are flagged as private. Since they are flagged as private, a private endpoint is used to communicate with them. ## Configure a managed virtual network to allow internet outbound
ai-studio Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-private-link.md
Previously updated : 11/15/2023 Last updated : 02/13/2024
We have two network isolation aspects. One is network isolation for access to an Azure AI resource. The other is network isolation of the computing resources in your Azure AI resource and Azure AI projects, such as compute instances, serverless compute, and managed online endpoints. This document explains the former, highlighted in the diagram. You can use private link to establish a private connection to your Azure AI resource and its default resources. You get several Azure AI default resources in your resource group. You need to configure the following network isolation settings.
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
mountOptions:
### Create NFS file share storage class
-Create a file named `nfs-sc.yaml` and copy the manifest below. For a list of supported `mountOptions`, see [NFS mount options][nfs-file-share-mount-options]
+Create a file named `nfs-sc.yaml` and copy the manifest below. For a list of supported `mountOptions`, see [NFS mount options][nfs-file-share-mount-options].
+
+> [!NOTE]
+> The `vers`, `minorversion`, and `sec` options are configured by the Azure File CSI driver. Specifying a value for these properties in your manifest isn't supported.
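For reference, a minimal sketch of such a storage class follows; the class name and `mountOptions` shown are illustrative values rather than requirements, and the `vers`, `minorversion`, and `sec` options are intentionally omitted because the driver sets them:

```bash
# Illustrative nfs-sc.yaml; metadata.name and mountOptions are examples only.
# The Azure File CSI driver manages vers, minorversion, and sec itself.
cat <<EOF > nfs-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-nfs
provisioner: file.csi.azure.com
allowVolumeExpansion: true
parameters:
  protocol: nfs
mountOptions:
  - nconnect=4
EOF
kubectl apply -f nfs-sc.yaml
```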
```yml apiVersion: storage.k8s.io/v1
The output of the commands resembles the following example:
[statically-provision-a-volume]: azure-csi-files-storage-provision.md#statically-provision-a-volume [azure-private-endpoint-dns]: ../private-link/private-endpoint-dns.md#azure-services-dns-zone-configuration [azure-netapp-files-mount-options-best-practices]: ../azure-netapp-files/performance-linux-mount-options.md#rsize-and-wsize
-[nfs-file-share-mount-options]: ../storage/files/storage-files-how-to-mount-nfs-shares.md#mount-options
+[nfs-file-share-mount-options]: ../storage/files/storage-files-how-to-mount-nfs-shares.md#mount-options
aks Configure Kubenet Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-kubenet-dual-stack.md
Once the cluster has been created, you can deploy your workloads. This article w
## Expose the workload via a `LoadBalancer` type service > [!IMPORTANT]
-> There are currently **two limitations** pertaining to IPv6 services in AKS.
->
-> 1. Azure Load Balancer sends health probes to IPv6 destinations from a link-local address. In Azure Linux node pools, this traffic can't be routed to a pod, so traffic flowing to IPv6 services deployed with `externalTrafficPolicy: Cluster` fail. IPv6 services must be deployed with `externalTrafficPolicy: Local`, which causes `kube-proxy` to respond to the probe on the node.
-> 2. Starting from AKS v1.27, you can directly create a dualstack service. However, for older versions, only the first IP address for a service will be provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment, please create two services targeting the same selector, one for IPv4 and one for IPv6.
+> Starting in AKS v1.27, you can create a dual-stack `LoadBalancer` service that is provisioned with one IPv4 public IP and one IPv6 public IP. However, in older versions, only the first IP address for a service is provisioned to the load balancer, so a dual-stack service only receives a public IP for its first-listed IP family. To provide a dual-stack service for a single deployment on those versions, create two services targeting the same selector, one for IPv4 and one for IPv6.
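For illustration, a minimal sketch of such a dual-stack `LoadBalancer` service (AKS 1.27 or later) is shown below; the service name, selector, and port are assumptions, not values from this article:

```bash
# Hypothetical dual-stack Service applied with kubectl; adjust the name,
# selector, and port for your own workload.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-dualstack
spec:
  type: LoadBalancer
  ipFamilyPolicy: RequireDualStack   # request both IPv4 and IPv6 addresses
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
EOF
```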
# [kubectl](#tab/kubectl)
aks Developer Best Practices Pod Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/developer-best-practices-pod-security.md
Title: Developer best practices - Pod security in Azure Kubernetes Services (AKS
description: Learn the developer best practices for how to secure pods in Azure Kubernetes Service (AKS) Last updated 01/12/2024-+ # Best practices for pod security in Azure Kubernetes Service (AKS)
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
These values will be used later in this article. Note that several other useful
## Create an Azure SQL Database
-The following steps guide you through creating an Azure SQL Database single database for use with your app.
-
-1. Create a single database in Azure SQL Database by following the steps in [Quickstart: Create an Azure SQL Database single database](/azure/azure-sql/database/single-database-create-quickstart), carefully noting the differences in the box below. Return to this article after creating and configuring the database server.
-
- > [!NOTE]
- > At the **Basics** step, write down **Resource group**, **Database name**, **_\<server-name>_.database.windows.net**, **Server admin login**, and **Password**. The database **Resource group** will be referred to as `<db-resource-group>` later in this article.
- >
- > At the **Networking** step, set **Connectivity method** to **Public endpoint**, **Allow Azure services and resources to access this server** to **Yes**, and **Add current client IP address** to **Yes**.
- >
- > :::image type="content" source="media/howto-deploy-java-liberty-app/create-sql-database-networking.png" alt-text="Screenshot of the Azure portal that shows the Networking tab of the Create SQL Database page with the Connectivity method and Firewall rules settings highlighted." lightbox="media/howto-deploy-java-liberty-app/create-sql-database-networking.png":::
Now that the database and AKS cluster have been created, we can proceed to preparing AKS to host your Open Liberty application.
aks Howto Deploy Java Wls App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-wls-app.md
description: Shows how to quickly stand up WebLogic Server on Azure Kubernetes S
Previously updated : 06/22/2023 Last updated : 02/09/2024 # Deploy a Java application with WebLogic Server on an Azure Kubernetes Service (AKS) cluster
-This article shows you how to quickly deploy WebLogic Application Server (WLS) on Azure Kubernetes Service (AKS) with the simplest possible set of configuration choices using the Azure portal. For a more full featured tutorial, including the use of Azure Application Gateway to make WLS on AKS securely visible on the public Internet, see [Tutorial: Migrate a WebLogic Server cluster to Azure with Azure Application Gateway as a load balancer](/azure/developer/java/migration/migrate-weblogic-with-app-gateway).
+This article demonstrates how to:
+
+- Run your Java, Java EE, or Jakarta EE application on Oracle WebLogic Server (WLS).
+- Stand up a WLS cluster using the Azure Marketplace offer.
+- Build an application Docker image to serve as an auxiliary image that provides WebLogic Deploy Tooling (WDT) models and the application.
+- Deploy the containerized application to the existing WLS cluster on AKS with connection to Microsoft Azure SQL.
+
+This article uses the Azure Marketplace offer for WLS to accelerate your journey to AKS. The offer automatically provisions several Azure resources, including the following resources:
+
+- An Azure Container Registry instance
+- An AKS cluster
+- An Azure App Gateway Ingress Controller (AGIC) instance
+- The WebLogic Operator
+- A container image including the WebLogic runtime
+- A WLS cluster without an application
+
+Then, this article walks through building an auxiliary image step by step to update an existing WLS cluster. The auxiliary image provides the application and WDT models.
+
+For full automation, you can select your application and configure the datasource connection from the Azure portal before deploying the offer. To see the offer, visit the [Azure portal](https://aka.ms/wlsaks).
For step-by-step guidance in setting up WebLogic Server on Azure Kubernetes Service, see the official documentation from Oracle at [Azure Kubernetes Service](https://oracle.github.io/weblogic-kubernetes-operator/samples/azure-kubernetes-service/).
For step-by-step guidance in setting up WebLogic Server on Azure Kubernetes Serv
> [!NOTE] > Get a support entitlement from Oracle before going to production. Failure to do so results in running insecure images that are not patched for critical security flaws. For more information on Oracle's critical patch updates, see [Critical Patch Updates, Security Alerts and Bulletins](https://www.oracle.com/security-alerts/) from Oracle. - Accept the license agreement.-
-## Create a storage account and storage container to hold the sample application
-
-Use the following steps to create a storage account and container. Some of these steps direct you to other guides. After completing the steps, you can upload a sample application to run on WLS on AKS.
-
-1. Download a sample application as a *.war* or *.ear* file. The sample app should be self-contained and not have any database, messaging, or other external connection requirements. The sample app from the WLS Kubernetes Operator documentation is a good choice. You can download [testwebapp.war](https://aka.ms/wls-aks-testwebapp) from Oracle. Save the file to your local filesystem.
-1. Sign in to the [Azure portal](https://aka.ms/publicportal).
-1. Create a storage account by following the steps in [Create a storage account](/azure/storage/common/storage-account-create). You don't need to perform all the steps in the article. Just fill out the fields as shown on the **Basics** pane, then select **Review + create** to accept the default options. Proceed to validate and create the account, then return to this article.
-1. Create a storage container within the account. Then, upload the sample application you downloaded in step 1 by following the steps in [Quickstart: Upload, download, and list blobs with the Azure portal](/azure/storage/blobs/storage-quickstart-blobs-portal). Upload the sample application as the blob, then return to this article.
+- Prepare a local machine with a Unix-like operating system installed (for example, Ubuntu, Azure Linux, macOS, or Windows Subsystem for Linux).
+ - [Azure CLI](/cli/azure). Use `az --version` to test whether az works. This document was tested with version 2.55.1.
+ - [Docker](https://docs.docker.com/get-docker). This document was tested with Docker version 20.10.7. Use `docker info` to test whether Docker Daemon is running.
+ - [kubectl](https://kubernetes-io-vnext-staging.netlify.com/docs/tasks/tools/install-kubectl/). Use `kubectl version` to test whether kubectl works. This document was tested with version v1.21.2.
+ - A Java JDK compatible with the version of WLS you intend to run. The article directs you to install a version of WLS that uses JDK 11. Azure recommends [Microsoft Build of OpenJDK](/java/openjdk/download). Ensure that your `JAVA_HOME` environment variable is set correctly in the shells in which you run the commands.
+ - [Maven](https://maven.apache.org/download.cgi) 3.5.0 or higher.
+ - Ensure that you have the zip/unzip utility installed. Use `zip/unzip -v` to test whether `zip/unzip` works.
+- All of the steps in this article, except for those involving Docker, can also be executed in the Azure Cloud Shell. To learn more about Azure Cloud Shell, see [What is Azure Cloud Shell?](/azure/cloud-shell/overview)
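As a quick sanity check before you start, you can verify the tools listed above from your shell. This is an optional sketch; the version numbers mentioned earlier are only the versions this article was tested with, not hard requirements:

```bash
# Optional check that the prerequisite tooling is installed and on the PATH.
az --version | head -n 1          # Azure CLI
docker info > /dev/null && echo "Docker daemon is running"
kubectl version --client          # kubectl client
java -version                     # JDK reachable via PATH/JAVA_HOME
mvn --version | head -n 1         # Maven 3.5.0 or higher
zip -v | head -n 1
unzip -v | head -n 1
```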
## Deploy WLS on AKS
The steps in this section direct you to deploy WLS on AKS in the simplest possib
The following steps show you how to find the WLS on AKS offer and fill out the **Basics** pane.
-1. In the search bar at the top of the Azure portal, enter *weblogic*. In the auto-suggested search results, in the **Marketplace** section, select **Oracle WebLogic Server on Azure Kubernetes Service**.
+1. In the search bar at the top of the Azure portal, enter *weblogic*. In the auto-suggested search results, in the **Marketplace** section, select **WebLogic Server on AKS**.
- :::image type="content" source="media/howto-deploy-java-wls-app/marketplace-search-results.png" alt-text="Screenshot of the Azure portal showing WLS in search results." lightbox="media/howto-deploy-java-wls-app/marketplace-search-results.png":::
+ :::image type="content" source="media/howto-deploy-java-wls-app/marketplace-search-results.png" alt-text="Screenshot of the Azure portal that shows WLS in the search results." lightbox="media/howto-deploy-java-wls-app/marketplace-search-results.png":::
- You can also go directly to the [Oracle WebLogic Server on Azure Kubernetes Service](https://aka.ms/wlsaks) offer.
+ You can also go directly to the [WebLogic Server on AKS](https://aka.ms/wlsaks) offer.
1. On the offer page, select **Create**.
-1. On the **Basics** pane, ensure the value shown in the **Subscription** field is the same one that has the roles listed in the prerequisites section.
+1. On the **Basics** pane, ensure that the value shown in the **Subscription** field is the same subscription you signed in to Azure with. Make sure you have the roles listed in the prerequisites section.
- :::image type="content" source="media/howto-deploy-java-wls-app/portal-start-experience.png" alt-text="Screenshot of the Azure portal showing WebLogic Server on AKS." lightbox="media/howto-deploy-java-wls-app/portal-start-experience.png":::
+ :::image type="content" source="media/howto-deploy-java-wls-app/portal-start-experience.png" alt-text="Screenshot of the Azure portal that shows WebLogic Server on AKS." lightbox="media/howto-deploy-java-wls-app/portal-start-experience.png":::
-1. You must deploy the offer in an empty resource group. In the **Resource group** field, select **Create new** and then fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier - for example, `ejb0723wls``.
+1. You must deploy the offer in an empty resource group. In the **Resource group** field, select **Create new** and then fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier - for example, `ejb0723wls`.
1. Under **Instance details**, select the region for the deployment. For a list of Azure regions where AKS is available, see [AKS region availability](https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service). 1. Under **Credentials for WebLogic**, leave the default value for **Username for WebLogic Administrator**. 1. Fill in `wlsAksCluster2022` for the **Password for WebLogic Administrator**. Use the same value for the confirmation and **Password for WebLogic Model encryption** fields. 1. Scroll to the bottom of the **Basics** pane and notice the helpful links for documentation, community support, and how to report problems.
-1. Select **Next: AKS**.
+1. Select **Next**.
The following steps show you how to start the deployment process. 1. Scroll to the section labeled **Provide an Oracle Single Sign-On (SSO) account**. Fill in your Oracle SSO credentials from the preconditions.
- :::image type="content" source="media/howto-deploy-java-wls-app/configure-single-sign-on.png" alt-text="Screenshot of the Azure portal showing the configure sso pane." lightbox="media/howto-deploy-java-wls-app/configure-single-sign-on.png":::
+ :::image type="content" source="media/howto-deploy-java-wls-app/configure-single-sign-on.png" alt-text="Screenshot of the Azure portal that shows the configured SSO pane." lightbox="media/howto-deploy-java-wls-app/configure-single-sign-on.png":::
-1. In the **Application** section, next to **Deploy an application?**, select **Yes**.
+1. Follow the steps in the info box starting with **Before moving forward, you must accept the Oracle Standard Terms and Restrictions.**
- :::image type="content" source="media/howto-deploy-java-wls-app/configure-application.png" alt-text="Screenshot of the Azure portal showing the configure applications pane." lightbox="media/howto-deploy-java-wls-app/configure-application.png":::
+1. Depending on whether the Oracle SSO account has an Oracle support entitlement, select the appropriate option for **Select the type of WebLogic Server Images**. If the account has a support entitlement, select **Patched WebLogic Server Images**. Otherwise, select **General WebLogic Server Images**.
-1. Next to **Application package (.war,.ear,.jar)**, select **Browse**.
-1. Start typing the name of the storage account from the preceding section. When the desired storage account appears, select it.
-1. Select the storage container from the preceding section.
-1. Select the checkbox next to the sample app uploaded from the preceding section. Select **Select**.
+1. Leave the value in **Select desired combination of WebLogic Server...** at its default value. You have a broad range of choices for WLS, JDK, and OS version.
-The following steps make it so the WLS admin console and the sample app are exposed to the public Internet with a built-in Kubernetes `LoadBalancer` service. For a more secure and scalable way to expose functionality to the public Internet, see [Tutorial: Migrate a WebLogic Server cluster to Azure with Azure Application Gateway as a load balancer](/azure/developer/java/migration/migrate-weblogic-with-app-gateway).
+1. In the **Application** section, next to **Deploy an application?**, select **No**.
+The following steps expose the WLS admin console and the sample app to the public Internet with a built-in Application Gateway ingress add-on. For more information, see [What is Application Gateway Ingress Controller?](/azure/application-gateway/ingress-controller-overview)
-1. Select the **Load balancing** pane.
-1. Next to **Load Balancing Options**, select **Standard Load Balancer Service**.
-1. In the table that appears, under **Service name prefix**, fill in the values as shown in the following table. The port values of *7001* for the admin server and *8001* for the cluster must be filled in exactly as shown.
- | Service name prefix | Target | Port |
- ||--||
- | console | admin-server | 7001 |
- | app | cluster-1 | 8001 |
+1. Select **Next** to see the **TLS/SSL** pane.
+1. Select **Next** to see the **Load balancing** pane.
+1. Next to **Load Balancing Options**, select **Application Gateway Ingress Controller**.
+1. Under the **Application Gateway Ingress Controller**, you should see all fields prepopulated with the defaults for **Virtual network** and **Subnet**. Leave the default values.
+1. For **Create ingress for Administration Console**, select **Yes**.
-1. Select **Review + create**. Ensure the green **Validation Passed** message appears at the top. If it doesn't, fix any validation problems, then select **Review + create** again.
+ :::image type="content" source="media/howto-deploy-java-wls-app/configure-appgateway-ingress-admin-console.png" alt-text="Screenshot of the Azure portal that shows the Application Gateway Ingress Controller configuration on the Create Oracle WebLogic Server on Azure Kubernetes Service page." lightbox="media/howto-deploy-java-wls-app/configure-appgateway-ingress-admin-console.png":::
+
+1. Leave the default values for other fields.
+1. Select **Review + create**. Ensure the validation doesn't fail. If it fails, fix any validation problems, then select **Review + create** again.
1. Select **Create**. 1. Track the progress of the deployment on the **Deployment is in progress** page.
-Depending on network conditions and other activity in your selected region, the deployment may take up to 30 minutes to complete.
+Depending on network conditions and other activity in your selected region, the deployment might take up to 50 minutes to complete.
+
+You can perform the steps in the section [Create an Azure SQL Database](#create-an-azure-sql-database) while you wait. Return to this section when you finish creating the database.
## Examine the deployment output
-The steps in this section show you how to verify that the deployment has successfully completed.
+Use the steps in this section to verify that the deployment was successful.
-If you navigated away from the **Deployment is in progress** page, the following steps will show you how to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to the steps after the image below.
+If you navigated away from the **Deployment is in progress** page, the following steps show you how to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to step 5 after the next screenshot.
-1. In the upper left of any portal page, select the hamburger menu and select **Resource groups**.
+1. In the corner of any Azure portal page, select the hamburger menu and select **Resource groups**.
1. In the box with the text **Filter for any field**, enter the first few characters of the resource group you created previously. If you followed the recommended convention, enter your initials, then select the appropriate resource group.
-1. In the left navigation pane, in the **Settings** section, select **Deployments**. You'll see an ordered list of the deployments to this resource group, with the most recent one first.
+1. In the navigation pane, in the **Settings** section, select **Deployments**. You see an ordered list of the deployments to this resource group, with the most recent one first.
1. Scroll to the oldest entry in this list. This entry corresponds to the deployment you started in the preceding section. Select the oldest deployment, as shown in the following screenshot.
- :::image type="content" source="media/howto-deploy-java-wls-app/resource-group-deployments.png" alt-text="Screenshot of the Azure portal showing the resource group deployments list." lightbox="media/howto-deploy-java-wls-app/resource-group-deployments.png":::
+ :::image type="content" source="media/howto-deploy-java-wls-app/resource-group-deployments.png" alt-text="Screenshot of the Azure portal that shows the resource group deployments list." lightbox="media/howto-deploy-java-wls-app/resource-group-deployments.png":::
-1. In the left panel, select **Outputs**. This list shows the output values from the deployment. Useful information is included in the outputs.
+1. In the navigation pane, select **Outputs**. This list shows the output values from the deployment. Useful information is included in the outputs.
1. The **adminConsoleExternalUrl** value is the fully qualified, public Internet visible link to the WLS admin console for this AKS cluster. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
-1. The **clusterExternalUrl** value is the fully qualified, public Internet visible link to the sample app deployed in WLS on this AKS cluster. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
+1. The **clusterExternalUrl** value is the fully qualified, public Internet visible link to the sample app deployed in WLS on this AKS cluster. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later.
+1. The **shellCmdtoOutputWlsImageModelYaml** value is the base64-encoded string of the WDT model that is built into the container image. Save this value aside for later.
+1. The **shellCmdtoOutputWlsImageProperties** value is the base64-encoded string of the WDT model properties that are built into the container image. Save this value aside for later.
+1. The **shellCmdtoConnectAks** value is the Azure CLI command to connect to this specific AKS cluster. This lets you use `kubectl` to administer the cluster.
The other values in the outputs are beyond the scope of this article, but are explained in detail in the [WebLogic on AKS user guide](https://aka.ms/wls-aks-docs).
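If you prefer the command line, you can also read the same outputs with the Azure CLI; the resource group and deployment name below are placeholders that you must replace with your own values:

```bash
# Optional: list the deployment outputs (adminConsoleExternalUrl,
# shellCmdtoConnectAks, and so on) without using the portal.
az deployment group show \
    --resource-group <resource-group> \
    --name <deployment-name> \
    --query properties.outputs
```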
+## Create an Azure SQL Database
+
+2. Create a schema for the sample application. Follow [Query the database](/azure/azure-sql/database/single-database-create-quickstart#query-the-database) to open the **Query editor** pane. Enter and run the following query:
+
+ ```sql
+ CREATE TABLE COFFEE (ID NUMERIC(19) NOT NULL, NAME VARCHAR(255) NULL, PRICE FLOAT(32) NULL, PRIMARY KEY (ID));
+ CREATE TABLE SEQUENCE (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT NUMERIC(28) NULL, PRIMARY KEY (SEQ_NAME));
+ ```
+
+ After a successful run, you should see the message **Query succeeded: Affected rows: 0**. If you don't see this message, troubleshoot and resolve the problem before proceeding.
+
+The database, tables, AKS cluster, and WLS cluster are created. If you want, you can explore the admin console by opening a browser and navigating to the address of **adminConsoleExternalUrl**. Sign in with the values you entered during the WLS on AKS deployment.
+
+You can proceed to preparing AKS to host your WebLogic application.
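Optionally, you can confirm the two tables from a terminal instead of the query editor. This sketch assumes you have the `sqlcmd` utility installed and a database firewall rule that allows your client IP; replace the placeholders with your own server, database, and credentials:

```bash
# Optional check that the COFFEE and SEQUENCE tables were created.
sqlcmd -S <server-name>.database.windows.net -d <database-name> \
    -U <server-admin-login> -P '<password>' \
    -Q "SELECT name FROM sys.tables"
```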
+
+## Configure and deploy the sample application
+
+The offer provisions the WLS cluster via [model in image](https://oracle.github.io/weblogic-kubernetes-operator/samples/domains/model-in-image/). Currently, the WLS cluster has no application deployed.
+
+This section updates the WLS cluster by deploying a sample application using [auxiliary image](https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/model-in-image/auxiliary-images/#using-docker-to-create-an-auxiliary-image).
+
+### Check out the application
+
+In this section, you clone the sample code for this guide. The sample is on GitHub in the [weblogic-on-azure](https://github.com/microsoft/weblogic-on-azure) repository in the *javaee/weblogic-cafe/* folder. Here's the file structure of the application.
+
+```text
+weblogic-cafe
+├── pom.xml
+└── src
+    └── main
+        ├── java
+        │   └── cafe
+        │       ├── model
+        │       │   ├── CafeRepository.java
+        │       │   └── entity
+        │       │       └── Coffee.java
+        │       └── web
+        │           ├── rest
+        │           │   └── CafeResource.java
+        │           └── view
+        │               └── Cafe.java
+        ├── resources
+        │   ├── META-INF
+        │   │   └── persistence.xml
+        │   └── cafe
+        │       └── web
+        │           ├── messages.properties
+        │           └── messages_es.properties
+        └── webapp
+            ├── WEB-INF
+            │   ├── beans.xml
+            │   ├── faces-config.xml
+            │   └── web.xml
+            ├── index.xhtml
+            └── resources
+                └── components
+                    └── inputPrice.xhtml
+```
+
+Use the following commands to clone the repository:
+
+```bash
+cd <parent-directory-to-check-out-sample-code>
+export BASE_DIR=$PWD
+git clone --single-branch https://github.com/microsoft/weblogic-on-azure.git --branch 20240201 $BASE_DIR/weblogic-on-azure
+```
+
+If you see a message about being in "detached HEAD" state, this message is safe to ignore. It just means you checked out a tag.
+
+Use the following command to build *javaee/weblogic-cafe/*:
+
+```bash
+mvn clean package --file $BASE_DIR/weblogic-on-azure/javaee/weblogic-cafe/pom.xml
+```
+
+The package should be successfully generated and located at *$BASE_DIR/weblogic-on-azure/javaee/weblogic-cafe/target/weblogic-cafe.war*. If you don't see the package, you must troubleshoot and resolve the issue before you continue.
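As a quick confirmation, you can list the generated package; the path is the one described above:

```bash
# Confirm the sample application WAR was produced by the Maven build.
ls -l $BASE_DIR/weblogic-on-azure/javaee/weblogic-cafe/target/weblogic-cafe.war
```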
+
+### Use Docker to create an auxiliary image
+
+The steps in this section show you how to build an auxiliary image. This image includes the following components:
+
+- The *Model in Image* model files
+- Your application
+- The JDBC driver archive file
+- The WebLogic Deploy Tooling installation
+
+An *auxiliary image* is a Docker container image containing your app and configuration. The WebLogic Kubernetes Operator combines your auxiliary image with the `domain.spec.image` in the AKS cluster that contains the WebLogic Server, JDK, and operating system. For more information about auxiliary images, see [Auxiliary images](https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/model-in-image/auxiliary-images/) in the Oracle documentation.
+
+This section requires a Linux terminal with Azure CLI and kubectl installed.
+
+Use the following steps to build the image:
+
+1. Use the following commands to create a directory to stage the models and application:
+
+ ```bash
+ mkdir -p ${BASE_DIR}/mystaging/models
+ cd ${BASE_DIR}/mystaging/models
+ ```
+
+1. Copy the **shellCmdtoOutputWlsImageModelYaml** value that you saved from the deployment outputs, paste it into the Bash window, and run the command. The command should look similar to the following example:
+
+ ```bash
+ echo -e IyBDb3B5cmlna...Cgo= | base64 -d > model.yaml
+ ```
+
+ This command produces a *${BASE_DIR}/mystaging/models/model.yaml* file with contents similar to the following example:
+
+ ```yaml
+ # Copyright (c) 2020, 2021, Oracle and/or its affiliates.
+ # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+
+ # Based on ./kubernetes/samples/scripts/create-weblogic-domain/model-in-image/model-images/model-in-image__WLS-v1/model.10.yaml
+ # in https://github.com/oracle/weblogic-kubernetes-operator.
+
+ domainInfo:
+ AdminUserName: "@@SECRET:__weblogic-credentials__:username@@"
+ AdminPassword: "@@SECRET:__weblogic-credentials__:password@@"
+ ServerStartMode: "prod"
+
+ topology:
+ Name: "@@ENV:CUSTOM_DOMAIN_NAME@@"
+ ProductionModeEnabled: true
+ AdminServerName: "admin-server"
+ Cluster:
+ "cluster-1":
+ DynamicServers:
+ ServerTemplate: "cluster-1-template"
+ ServerNamePrefix: "@@ENV:MANAGED_SERVER_PREFIX@@"
+ DynamicClusterSize: "@@PROP:CLUSTER_SIZE@@"
+ MaxDynamicClusterSize: "@@PROP:CLUSTER_SIZE@@"
+ MinDynamicClusterSize: "0"
+ CalculatedListenPorts: false
+ Server:
+ "admin-server":
+ ListenPort: 7001
+ ServerTemplate:
+ "cluster-1-template":
+ Cluster: "cluster-1"
+ ListenPort: 8001
+ SecurityConfiguration:
+ NodeManagerUsername: "@@SECRET:__weblogic-credentials__:username@@"
+ NodeManagerPasswordEncrypted: "@@SECRET:__weblogic-credentials__:password@@"
+
+ resources:
+ SelfTuning:
+ MinThreadsConstraint:
+ SampleMinThreads:
+ Target: "cluster-1"
+ Count: 1
+ MaxThreadsConstraint:
+ SampleMaxThreads:
+ Target: "cluster-1"
+ Count: 10
+ WorkManager:
+ SampleWM:
+ Target: "cluster-1"
+ MinThreadsConstraint: "SampleMinThreads"
+ MaxThreadsConstraint: "SampleMaxThreads"
+ ```
+
+1. In a similar way, copy the **shellCmdtoOutputWlsImageProperties** value, paste it into the Bash window, and run the command. The command should look similar to the following example:
+
+ ```bash
+ echo -e IyBDb3B5cml...pFPTUK | base64 -d > model.properties
+ ```
+
+ This command produces a *${BASE_DIR}/mystaging/models/model.properties* file with contents similar to the following example:
+
+ ```properties
+ # Copyright (c) 2021, Oracle Corporation and/or its affiliates.
+ # Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
+
+ # Based on ./kubernetes/samples/scripts/create-weblogic-domain/model-in-image/model-images/model-in-image__WLS-v1/model.10.properties
+ # in https://github.com/oracle/weblogic-kubernetes-operator.
+
+ CLUSTER_SIZE=5
+ ```
+
+1. Use the following steps to create the application model file.
+
+ 1. Use the following commands to copy *weblogic-cafe.war* and save it to *wlsdeploy/applications*:
+
+ ```bash
+ mkdir -p ${BASE_DIR}/mystaging/models/wlsdeploy/applications
+ cp $BASE_DIR/weblogic-on-azure/javaee/weblogic-cafe/target/weblogic-cafe.war ${BASE_DIR}/mystaging/models/wlsdeploy/applications/weblogic-cafe.war
+ ```
+
+ 1. Use the following commands to create the application model file with the contents shown. Save the model file to *${BASE_DIR}/mystaging/models/appmodel.yaml*.
+
+ ```bash
+ cat <<EOF >appmodel.yaml
+ appDeployments:
+ Application:
+ weblogic-cafe:
+ SourcePath: 'wlsdeploy/applications/weblogic-cafe.war'
+ ModuleType: ear
+ Target: 'cluster-1'
+ EOF
+ ```
+
+1. Use the following commands to download and install Microsoft SQL Server JDBC driver to *wlsdeploy/externalJDBCLibraries*:
+
+ ```bash
+ export DRIVER_VERSION="10.2.1.jre8"
+ export MSSQL_DRIVER_URL="https://repo.maven.apache.org/maven2/com/microsoft/sqlserver/mssql-jdbc/${DRIVER_VERSION}/mssql-jdbc-${DRIVER_VERSION}.jar"
+
+ mkdir ${BASE_DIR}/mystaging/models/wlsdeploy/externalJDBCLibraries
+ curl -m 120 -fL ${MSSQL_DRIVER_URL} -o ${BASE_DIR}/mystaging/models/wlsdeploy/externalJDBCLibraries/mssql-jdbc-${DRIVER_VERSION}.jar
+ ```
+
+1. Next, use the following commands to create the database connection model file with the contents shown. Save the model file to *${BASE_DIR}/mystaging/models/dbmodel.yaml*. The model uses placeholders (the secret `sqlserver-secret`) for the database username, password, and URL. Make sure the following fields are set correctly. The model names the resource `jdbc/WebLogicCafeDB`.
+
+ | Item Name | Field | Value |
+ |-|-||
+ | JNDI name | `resources.JDBCSystemResource.<resource-name>.JdbcResource.JDBCDataSourceParams.JNDIName` | `jdbc/WebLogicCafeDB` |
+ | Driver name | `resources.JDBCSystemResource.<resource-name>.JDBCDriverParams.DriverName` | `com.microsoft.sqlserver.jdbc.SQLServerDriver` |
+ | Database Url | `resources.JDBCSystemResource.<resource-name>.JDBCDriverParams.URL` | `@@SECRET:sqlserver-secret:url@@` |
+ | Database password | `resources.JDBCSystemResource.<resource-name>.JDBCDriverParams.PasswordEncrypted` | `@@SECRET:sqlserver-secret:password@@` |
+ | Database username | `resources.JDBCSystemResource.<resource-name>.JDBCDriverParams.Properties.user.Value` | `'@@SECRET:sqlserver-secret:user@@'` |
+
+ ```bash
+ cat <<EOF >dbmodel.yaml
+ resources:
+ JDBCSystemResource:
+ jdbc/WebLogicCafeDB:
+ Target: 'cluster-1'
+ JdbcResource:
+ JDBCDataSourceParams:
+ JNDIName: [
+ jdbc/WebLogicCafeDB
+ ]
+ GlobalTransactionsProtocol: None
+ JDBCDriverParams:
+ DriverName: com.microsoft.sqlserver.jdbc.SQLServerDriver
+ URL: '@@SECRET:sqlserver-secret:url@@'
+ PasswordEncrypted: '@@SECRET:sqlserver-secret:password@@'
+ Properties:
+ user:
+ Value: '@@SECRET:sqlserver-secret:user@@'
+ JDBCConnectionPoolParams:
+ TestTableName: SQL SELECT 1
+ TestConnectionsOnReserve: true
+ EOF
+ ```
+
+1. Use the following commands to create an application archive file and then remove the *wlsdeploy* folder, which you don't need anymore:
+
+ ```bash
+ cd ${BASE_DIR}/mystaging/models
+ zip -r archive.zip wlsdeploy
+
+ rm -f -r wlsdeploy
+ ```
+
+1. Use the following commands to download and install [WebLogic Deploy Tooling](https://oracle.github.io/weblogic-deploy-tooling/) (WDT) in the staging directory and remove its *weblogic-deploy/bin/\*.cmd* files, which aren't used in UNIX environments:
+
+ ```bash
+ cd ${BASE_DIR}/mystaging
+ curl -m 120 -fL https://github.com/oracle/weblogic-deploy-tooling/releases/latest/download/weblogic-deploy.zip -o weblogic-deploy.zip
+
+ unzip weblogic-deploy.zip -d .
+ rm ./weblogic-deploy/bin/*.cmd
+ ```
+
+1. Use the following command to remove the WDT installer:
+
+ ```bash
+ rm weblogic-deploy.zip
+ ```
+
+1. Use the following commands to build an auxiliary image using docker:
+
+ ```bash
+ cd ${BASE_DIR}/mystaging
+ cat <<EOF >Dockerfile
+ FROM busybox
+ ARG AUXILIARY_IMAGE_PATH=/auxiliary
+ ARG USER=oracle
+ ARG USERID=1000
+ ARG GROUP=root
+ ENV AUXILIARY_IMAGE_PATH=\${AUXILIARY_IMAGE_PATH}
+ RUN adduser -D -u \${USERID} -G \$GROUP \$USER
+ # ARG expansion in COPY command's --chown is available in docker version 19.03.1+.
+ # For older docker versions, change the Dockerfile to use separate COPY and 'RUN chown' commands.
+ COPY --chown=\$USER:\$GROUP ./ \${AUXILIARY_IMAGE_PATH}/
+ USER \$USER
+ EOF
+ ```
+
+1. Run the `docker buildx build` command using *${BASE_DIR}/mystaging/Dockerfile*, as shown in the following example:
+
+ ```bash
+ cd ${BASE_DIR}/mystaging
+ docker buildx build --platform linux/amd64 --build-arg AUXILIARY_IMAGE_PATH=/auxiliary --tag model-in-image:WLS-v1 .
+ ```
+
+ When you build the image successfully, the output looks similar to the following example:
+
+ ```output
+ [+] Building 12.0s (8/8) FINISHED docker:default
+ => [internal] load build definition from Dockerfile 0.8s
+ => => transferring dockerfile: 473B 0.0s
+ => [internal] load .dockerignore 1.1s
+ => => transferring context: 2B 0.0s
+ => [internal] load metadata for docker.io/library/busybox:latest 5.0s
+ => [1/3] FROM docker.io/library/busybox@sha256:6d9ac9237a84afe1516540f40a0f 0.0s
+ => [internal] load build context 0.3s
+ => => transferring context: 21.89kB 0.0s
+ => CACHED [2/3] RUN adduser -D -u 1000 -G root oracle 0.0s
+ => [3/3] COPY --chown=oracle:root ./ /auxiliary/ 1.5s
+ => exporting to image 1.3s
+ => => exporting layers 1.0s
+ => => writing image sha256:2477d502a19dcc0e841630ea567f50d7084782499fe3032a 0.1s
+ => => naming to docker.io/library/model-in-image:WLS-v1 0.2s
+ ```
+
+1. If you have successfully created the image, then it should now be in your local machine's Docker repository. You can verify the image creation by using the following command:
+
+ ```text
+ docker images model-in-image:WLS-v1
+ ```
+
+ This command should produce output similar to the following example:
+
+ ```output
+ REPOSITORY TAG IMAGE ID CREATED SIZE
+ model-in-image WLS-v1 76abc1afdcc6 2 hours ago 8.61MB
+ ```
+
+ After the image is created, it should have the WDT executables in */auxiliary/weblogic-deploy*, and WDT model, property, and archive files in */auxiliary/models*. Use the following command on the Docker image to verify this result:
+
+ ```bash
+ docker run -it --rm model-in-image:WLS-v1 find /auxiliary -maxdepth 2 -type f -print
+ ```
+
+ This command should produce output similar to the following example:
+
+ ```output
+ /auxiliary/models/dbmodel.yaml
+ /auxiliary/models/archive.zip
+ /auxiliary/models/model.properties
+ /auxiliary/models/model.yaml
+ /auxiliary/weblogic-deploy/VERSION.txt
+ /auxiliary/weblogic-deploy/LICENSE.txt
+ /auxiliary/Dockerfile
+ ```
+
+1. Use the following steps to push the auxiliary image to Azure Container Registry:
+
+ 1. Open the Azure portal and go to the resource group that you provisioned in the [Deploy WLS on AKS](#deploy-wls-on-aks) section.
+ 1. Select the resource of type **Container registry** from the resource list.
+ 1. Hover the mouse over the value next to **Login server** and select the copy icon next to the text.
+ 1. Save the value in the `ACR_LOGIN_SERVER` environment variable by using the following command:
+
+ ```bash
+ export ACR_LOGIN_SERVER=<value-from-clipboard>
+ ```
+
+ 1. Run the following commands to tag and push the image. Make sure Docker is running before executing these commands.
+
+ ```bash
+ export ACR_NAME=$(echo ${ACR_LOGIN_SERVER} | cut -d '.' -f 1)
+ az acr login -n $ACR_NAME
+ docker tag model-in-image:WLS-v1 $ACR_LOGIN_SERVER/wlsaks-auxiliary-image:1.0
+ docker push $ACR_LOGIN_SERVER/wlsaks-auxiliary-image:1.0
+ ```
+
+ 1. You can run `az acr repository show` to test whether the image is pushed to the remote repository successfully, as shown in the following example:
+
+ ```bash
+ az acr repository show --name ${ACR_NAME} --image wlsaks-auxiliary-image:1.0
+ ```
+
+ This command should produce output similar to the following example:
+
+ ```output
+ {
+ "changeableAttributes": {
+ "deleteEnabled": true,
+ "listEnabled": true,
+ "readEnabled": true,
+ "writeEnabled": true
+ },
+ "createdTime": "2024-01-24T06:14:19.4546321Z",
+ "digest": "sha256:a1befbefd0181a06c6fe00848e76f1743c1fecba2b42a975e9504ba2aaae51ea",
+ "lastUpdateTime": "2024-01-24T06:14:19.4546321Z",
+ "name": "1.0",
+ "quarantineState": "Passed",
+ "signed": false
+ }
+ ```
+
+### Apply the auxiliary image
+
+In the previous steps, you created the auxiliary image including models and WDT. Before you apply the auxiliary image to the WLS cluster, use the following steps to create the secret for the datasource URL, username, and password. The secret is referenced by the placeholders in *dbmodel.yaml*.
+
+1. Connect to the AKS cluster by copying the **shellCmdtoConnectAks** value that you saved aside previously, pasting it into the Bash window, then running the command. The command should look similar to the following example:
+
+ ```bash
+ az account set --subscription <subscription>;
+ az aks get-credentials \
+ --resource-group <resource-group> \
+ --name <name>
+ ```
+
+ You should see output similar to the following example. If you don't see this output, troubleshoot and resolve the problem before continuing.
+
+ ```output
+ Merged "<name>" as current context in /Users/<username>/.kube/config
+ ```
+
+1. Use the following steps to get values for the variables shown in the following table. You use these values later to create the secret for the datasource connection.
+
+ | Variable | Description | Example |
+ ||--|--|
+ | `DB_CONNECTION_STRING` | The connection string of SQL server. | `jdbc:sqlserver://sqlserverforwlsaks.database.windows.net:1433;database=wlsaksquickstart0125` |
+ | `DB_USER` | The username to sign in to the SQL server. | `welogic@sqlserverforwlsaks` |
+ | `DB_PASSWORD` | The password to sign in to the SQL server. | `Secret123456` |
+
+ 1. Visit the SQL database resource in the Azure portal.
+
+ 1. In the navigation pane, under **Settings**, select **Connection strings**.
+
+ 1. Select the **JDBC** tab.
+
+ 1. Select the copy icon to copy the connection string to the clipboard.
+
+ 1. For `DB_CONNECTION_STRING`, use the entire connection string, but replace the placeholder `{your_password_here}` with your database password.
+
+ 1. For `DB_USER`, use the portion of the connection string from `azureuser` up to but not including `;password={your_password_here}`.
+
+ 1. For `DB_PASSWORD`, use the value you entered when you created the database.
+
+1. Use the following commands to create the [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/). This article uses the secret name `sqlserver-secret` for the secret of the datasource connection. If you use a different name, make sure the value is the same as the one in *dbmodel.yaml*.
+
+ In the following commands, be sure to set the variables `DB_CONNECTION_STRING`, `DB_USER`, and `DB_PASSWORD` correctly by replacing the placeholder examples with the values described in the previous steps. Be sure to enclose the value of the `DB_` variables in single quotes to prevent the shell from interfering with the values.
+
+ ```bash
+ export DB_CONNECTION_STRING='<example-jdbc:sqlserver://sqlserverforwlsaks.database.windows.net:1433;database=wlsaksquickstart0125>'
+ export DB_USER='<example-welogic@sqlserverforwlsaks>'
+ export DB_PASSWORD='<example-Secret123456>'
+ export WLS_DOMAIN_NS=sample-domain1-ns
+ export WLS_DOMAIN_UID=sample-domain1
+ export SECRET_NAME=sqlserver-secret
+
+ kubectl -n ${WLS_DOMAIN_NS} create secret generic \
+ ${SECRET_NAME} \
+ --from-literal=password="${DB_PASSWORD}" \
+ --from-literal=url="${DB_CONNECTION_STRING}" \
+ --from-literal=user="${DB_USER}"
+
+ kubectl -n ${WLS_DOMAIN_NS} label secret \
+ ${SECRET_NAME} \
+ weblogic.domainUID=${WLS_DOMAIN_UID}
+ ```
+
+ You must see the following output before you continue. If you don't see this output, troubleshoot and resolve the problem before you continue.
+
+ ```output
+ secret/sqlserver-secret created
+ secret/sqlserver-secret labeled
+ ```
+
+1. Apply the auxiliary image by patching the domain custom resource definition (CRD) using the `kubectl patch` command.
+
+ The auxiliary image is defined in `spec.configuration.model.auxiliaryImages`, as shown in the following example. For more information, see [auxiliary images](https://oracle.github.io/weblogic-kubernetes-operator/managing-domains/model-in-image/auxiliary-images/).
+
+ ```yaml
+ spec:
+ clusters:
+ - name: sample-domain1-cluster-1
+ configuration:
+ model:
+ auxiliaryImages:
+ - image: wlsaksacrafvzeyyswhxek.azurecr.io/wlsaks-auxiliary-image:1.0
+ imagePullPolicy: IfNotPresent
+ sourceModelHome: /auxiliary/models
+ sourceWDTInstallHome: /auxiliary/weblogic-deploy
+ ```
+
+   Use the following commands to increment the `restartVersion` value and apply the auxiliary image to the domain CRD with `kubectl patch`, using the definition shown in the previous example:
+
+ ```bash
+ export VERSION=$(kubectl -n ${WLS_DOMAIN_NS} get domain ${WLS_DOMAIN_UID} -o=jsonpath='{.spec.restartVersion}' | tr -d "\"")
+ export VERSION=$((VERSION+1))
+
+ cat <<EOF >patch-file.json
+ [
+ {
+ "op": "replace",
+ "path": "/spec/restartVersion",
+ "value": "${VERSION}"
+ },
+ {
+ "op": "add",
+ "path": "/spec/configuration/model/auxiliaryImages",
+ "value": [{"image": "$ACR_LOGIN_SERVER/wlsaks-auxiliary-image:1.0", "imagePullPolicy": "IfNotPresent", "sourceModelHome": "/auxiliary/models", "sourceWDTInstallHome": "/auxiliary/weblogic-deploy"}]
+ },
+ {
+ "op": "add",
+ "path": "/spec/configuration/secrets",
+ "value": ["${SECRET_NAME}"]
+ }
+ ]
+ EOF
+
+ kubectl -n ${WLS_DOMAIN_NS} patch domain ${WLS_DOMAIN_UID} \
+ --type=json \
+ --patch-file patch-file.json
+
+ kubectl get pod -n ${WLS_DOMAIN_NS} -w
+ ```
+
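   Optionally, you can confirm that the patch took effect by reading the auxiliary image list back from the domain resource. This is a quick check, assuming the variables set earlier in this section:

   ```bash
   # Print the auxiliary images now configured on the domain resource.
   kubectl -n ${WLS_DOMAIN_NS} get domain ${WLS_DOMAIN_UID} \
       -o jsonpath='{.spec.configuration.model.auxiliaryImages[*].image}{"\n"}'
   ```
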
+1. Wait until the admin server and managed servers show the values in the following output block before you proceed:
+
+ ```output
+ NAME READY STATUS RESTARTS AGE
+ sample-domain1-admin-server 1/1 Running 0 20m
+ sample-domain1-managed-server1 1/1 Running 0 19m
+ sample-domain1-managed-server2 1/1 Running 0 18m
+ ```
+
+ It might take 5-10 minutes for the system to reach this state. The following list provides an overview of what's happening while you wait:
+
+ - You should see the `sample-domain1-introspector` running first. This software looks for changes to the domain custom resource so it can take the necessary actions on the Kubernetes cluster.
+   - When changes are detected, the domain introspector terminates the existing pods and starts new ones to roll out the changes.
+ - Next, you should see the `sample-domain1-admin-server` pod terminate and restart.
+ - Then, you should see the two managed servers terminate and restart.
+   - Proceed only when all three pods show the `1/1 Running` state. As an optional check, you can wait on pod readiness with `kubectl wait`, as shown in the following example.
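   As an optional alternative to watching the pod list, you can block until all WebLogic Server pods report ready. This is a sketch; the label selector and timeout are example values based on the domain UID used in this article:

   ```bash
   # Wait up to 10 minutes for every pod in the domain to become Ready.
   kubectl -n ${WLS_DOMAIN_NS} wait pod \
       -l weblogic.domainUID=${WLS_DOMAIN_UID} \
       --for=condition=Ready \
       --timeout=600s
   ```
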
+ ## Verify the functionality of the deployment
-The following steps show you how to verify the functionality of the deployment by viewing the WLS admin console and the sample app.
+Use the following steps to verify the functionality of the deployment by viewing the WLS admin console and the sample app:
+
+1. Paste the **adminConsoleExternalUrl** value into the address bar of an Internet-connected web browser. You should see the familiar WLS admin console login screen.
+
+1. Sign in with the username `weblogic` and the password you entered when deploying WLS from the Azure portal. Recall that this value is `wlsAksCluster2022`.
-1. Paste the value for **adminConsoleExternalUrl** in an Internet-connected web browser. You should see the familiar WLS admin console login screen as shown in the following screenshot.
+1. In the **Domain Structure** box, select **Deployments**.
- :::image type="content" source="media/howto-deploy-java-wls-app/wls-admin-login.png" alt-text="Screenshot of WLS admin login screen." border="false":::
+1. In the **Deployments** table, there should be one row. The name should match the `Application` value in your *appmodel.yaml* file. Select the name.
+
+1. In the **Settings** panel, select the **Testing** tab.
+
+1. Select **weblogic-cafe**.
+
+1. In the **Settings for weblogic-cafe** panel, select the **Testing** tab.
+
+1. Expand the **+** icon next to **weblogic-cafe**. Your screen should look similar to the following example. In particular, you should see values similar to `http://sample-domain1-managed-server1:8001/weblogic-cafe/index.xhtml` in the **Test Point** column.
+
+ :::image type="content" source="media/howto-deploy-java-wls-app/weblogic-cafe-deployment.png" alt-text="Screenshot of weblogic-cafe test points." border="false":::
> [!NOTE]
- > This article shows the WLS admin console merely by way of demonstration. Don't use the WLS admin console for any durable configuration changes when running WLS on AKS. The cloud-native design of WLS on AKS requires that any durable configuration must be represented in the initial docker images or applied to the running AKS cluster using CI/CD techniques such as updating the model, as described in the [Oracle documentation](https://aka.ms/wls-aks-docs-update-model).
+   > The hyperlinks in the **Test Point** column are not selectable because we did not configure the admin console with the external URL on which it is running. This article shows the WLS admin console merely by way of demonstration. Don't use the WLS admin console for any durable configuration changes when running WLS on AKS. The cloud-native design of WLS on AKS requires that any durable configuration must be represented in the initial docker images or applied to the running AKS cluster using CI/CD techniques such as updating the model, as described in the [Oracle documentation](https://aka.ms/wls-aks-docs-update-model).
-1. Understand the `context-path` of the sample app you deployed. If you deployed the recommended sample app, the `context-path` is `testwebapp`.
-1. Construct a fully qualified URL for the sample app by appending the `context-path` to the value of **clusterExternalUrl**. If you deployed the recommended sample app, the fully qualified URL will be something like `http://123.456.789.012:8001/testwebapp/`.
-1. Paste the fully qualified URL in an Internet-connected web browser. If you deployed the recommended sample app, you should see results similar to the following screenshot.
+1. Understand the `context-path` value of the sample app you deployed. If you deployed the recommended sample app, the `context-path` is `weblogic-cafe`.
+1. Construct a fully qualified URL for the sample app by appending the `context-path` to the **clusterExternalUrl** value. If you deployed the recommended sample app, the fully qualified URL should be something like `http://wlsgw202401-wls-aks-domain1.eastus.cloudapp.azure.com/weblogic-cafe/`.
+1. Paste the fully qualified URL in an Internet-connected web browser. If you deployed the recommended sample app, you should see results similar to the following screenshot. An optional command-line check appears after the screenshot:
- :::image type="content" source="media/howto-deploy-java-wls-app/test-web-app.png" alt-text="Screenshot of test web app." border="false":::
+ :::image type="content" source="media/howto-deploy-java-wls-app/weblogic-cafe-app.png" alt-text="Screenshot of test web app." border="false":::
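   As an optional command-line check, you can request the app with `curl` and confirm an HTTP 200 response. This is a sketch; the host name is an example, so substitute your own **clusterExternalUrl** value:

   ```bash
   # Print only the HTTP status code returned by the sample app.
   curl -sS -o /dev/null -w "%{http_code}\n" \
       "http://wlsgw202401-wls-aks-domain1.eastus.cloudapp.azure.com/weblogic-cafe/"
   ```
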
## Clean up resources
-To avoid Azure charges, you should clean up unnecessary resources. When you no longer need the cluster, use the [az group delete](/cli/azure/group#az-group-delete) command. The following command will remove the resource group, container service, container registry, and all related resources.
+To avoid Azure charges, you should clean up unnecessary resources. When you no longer need the cluster, use the [az group delete](/cli/azure/group#az-group-delete) command. The following command removes the resource group, container service, container registry, and all related resources:
```azurecli
az group delete --name <resource-group-name> --yes --no-wait
+az group delete --name <db-resource-group-name> --yes --no-wait
```

## Next steps
Learn more about running WLS on AKS or virtual machines by following these links:
> [!div class="nextstepaction"]
> [WLS on AKS](/azure/virtual-machines/workloads/oracle/weblogic-aks)
+> [!div class="nextstepaction"]
+> [Migrate WebLogic Server applications to Azure Kubernetes Service](/azure/developer/java/migration/migrate-weblogic-to-azure-kubernetes-service)
> [!div class="nextstepaction"]
> [WLS on virtual machines](/azure/virtual-machines/workloads/oracle/oracle-weblogic)
aks Kubernetes Helm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubernetes-helm.md
description: Learn how to use the Helm packaging tool to deploy containers in an
Last updated 05/09/2023-+ #Customer intent: As a cluster operator or developer, I want to learn how to deploy Helm into an AKS cluster and then install and manage applications using Helm charts.
aks Manage Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/manage-node-pools.md
AKS offers a separate feature to automatically scale node pools with a feature c
For more information, see [use the cluster autoscaler](cluster-autoscaler.md#use-the-cluster-autoscaler-on-multiple-node-pools).
-## Associate capacity reservation groups to node pools (preview)
+## Associate capacity reservation groups to node pools
As your workload demands change, you can associate existing capacity reservation groups to node pools to guarantee allocated capacity for your node pools.
-For more information, see [capacity reservation groups][capacity-reservation-groups].
+## Prerequisites to use capacity reservation groups with AKS
-### Register preview feature
--
-1. Install the `aks-preview` extension using the [`az extension add`][az-extension-add] command.
-
- ```azurecli-interactive
- az extension add --name aks-preview
- ```
-
-2. Update to the latest version of the extension using the [`az extension update`][az-extension-update] command.
-
- ```azurecli-interactive
- az extension update --name aks-preview
- ```
-
-3. Register the `CapacityReservationGroupPreview` feature flag using the [`az feature register`][az-feature-register] command.
-
- ```azurecli-interactive
- az feature register --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview"
- ```
-
- It takes a few minutes for the status to show *Registered*.
-
-4. Verify the registration status using the [`az feature show][az-feature-show`] command.
+- Use CLI version 2.56 or above and API version 2023-10-01 or higher.
+- The capacity reservation group should already exist and contain at least one capacity reservation; otherwise, the node pool is added to the cluster with a warning and no capacity reservation group gets associated. For more information, see [capacity reservation groups][capacity-reservation-groups].
+- You need to create a user-assigned managed identity for the resource group that contains the capacity reservation group (CRG). System-assigned managed identities won't work for this feature. In the following example, replace the environment variables with your own values.
```azurecli-interactive
- az feature show --namespace "Microsoft.ContainerService" --name "CapacityReservationGroupPreview"
+ IDENTITY_NAME=myID
+ RG_NAME=myResourceGroup
+ CLUSTER_NAME=myAKSCluster
+ VM_SKU=Standard_D4s_v3
+ NODE_COUNT=2
+ LOCATION=westus2
+ az identity create --name $IDENTITY_NAME --resource-group $RG_NAME
+    IDENTITY_ID=$(az identity show --name $IDENTITY_NAME --resource-group $RG_NAME --query id -o tsv)
```-
-5. When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider using the [`az provider register`][az-provider-register] command.
-
+- You need to assign the `Contributor` role to the user-assigned identity created above. For more information, see [Steps to assign an Azure role](/azure/role-based-access-control/role-assignments-steps#privileged-administrator-roles). A sketch of the role assignment follows.
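  The following is a sketch of that role assignment, assuming the `IDENTITY_NAME` and `RG_NAME` variables from the previous step. The `<crg-resource-group-name>` value is a placeholder for the resource group that contains your capacity reservation group (it can be the same group as `RG_NAME`).

    ```azurecli-interactive
    # Example only: <crg-resource-group-name> is the resource group that contains the capacity reservation group.
    PRINCIPAL_ID=$(az identity show --name $IDENTITY_NAME --resource-group $RG_NAME --query principalId -o tsv)
    CRG_RG_ID=$(az group show --name <crg-resource-group-name> --query id -o tsv)
    az role assignment create --assignee-object-id $PRINCIPAL_ID \
        --assignee-principal-type ServicePrincipal \
        --role "Contributor" --scope $CRG_RG_ID
    ```
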
+- Create a new cluster and assign the newly created identity.
```azurecli-interactive
- az provider register --namespace Microsoft.ContainerService
+ az aks create --resource-group $RG_NAME --name $CLUSTER_NAME --location $LOCATION \
+ --node-vm-size $VM_SKU --node-count $NODE_COUNT \
+ --assign-identity $IDENTITY_ID --enable-managed-identity
```-
-### Manage capacity reservations
+- You can also assign the user-assigned managed identity to an existing managed cluster with the `az aks update` command.
+
+ ```azurecli-interactive
+    az aks update --resource-group $RG_NAME --name $CLUSTER_NAME \
+        --assign-identity $IDENTITY_ID --enable-managed-identity
+ ```
+
+### Associate an existing capacity reservation group with a node pool
+
+Associate an existing capacity reservation group with a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command and specify a capacity reservation group with the `--crg-id` flag. The following example assumes you have a CRG named "myCRG".
+
+```azurecli-interactive
+RG_NAME=myResourceGroup
+CLUSTER_NAME=myAKSCluster
+NODEPOOL_NAME=myNodepool
+CRG_NAME=myCRG
+CRG_ID=$(az capacity reservation group show --capacity-reservation-group $CRG_NAME --resource-group $RG_NAME --query id -o tsv)
+az aks nodepool add --resource-group $RG_NAME --cluster-name $CLUSTER_NAME --name $NODEPOOL_NAME --crg-id $CRG_ID
+```
+
+### Associate an existing capacity reservation group with a system node pool
+
+To associate an existing capacity reservation group with a system node pool, associate the cluster with both the user-assigned identity that has the `Contributor` role on your CRG and the CRG itself during cluster creation. Use the [`az aks create`][az-aks-create] command with the `--assign-identity` and `--crg-id` flags.
+
+```azurecli-interactive
+IDENTITY_NAME=myID
+RG_NAME=myResourceGroup
+CLUSTER_NAME=myAKSCluster
+NODEPOOL_NAME=myNodepool
+CRG_NAME=myCRG
+CRG_ID=$(az capacity reservation group show --capacity-reservation-group $CRG_NAME --resource-group $RG_NAME --query id -o tsv)
+IDENTITY_ID=$(az identity show --name $IDENTITY_NAME --resource-group $RG_NAME --query id -o tsv)
+az aks create --resource-group $RG_NAME --name $CLUSTER_NAME --crg-id $CRG_ID --assign-identity $IDENTITY_ID --enable-managed-identity
+```
> [!NOTE]
-> The capacity reservation group should already exist, otherwise the node pool is added to the cluster with a warning and no capacity reservation group gets associated.
-
-#### Associate an existing capacity reservation group to a node pool
-
-* Associate an existing capacity reservation group to a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command and specify a capacity reservation group with the `--capacityReservationGroup` flag.
-
- ```azurecli-interactive
- az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG
- ```
-
-#### Associate an existing capacity reservation group to a system node pool
-
-* Associate an existing capacity reservation group to a system node pool using the [`az aks create`][az-aks-create] command.
-
- ```azurecli-interactive
- az aks create -g MyRG --cluster-name MyMC --capacityReservationGroup myCRG
- ```
+> Deleting a node pool implicitly dissociates that node pool from any associated capacity reservation group before the node pool is deleted. Deleting a cluster implicitly dissociates all node pools in that cluster from their associated capacity reservation groups.
> [!NOTE]
-> Deleting a node pool implicitly dissociates that node pool from any associated capacity reservation group before the node pool is deleted. Deleting a cluster implicitly dissociates all node pools in that cluster from their associated capacity reservation groups.
+> You can't update an existing node pool with a capacity reservation group. The recommended approach is to associate a capacity reservation group during node pool creation.
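Because the dissociation happens implicitly on delete, no separate cleanup step is needed for the capacity reservation group itself. The following is a sketch, reusing the variable names from the earlier examples, of removing a node pool and releasing its association:

```azurecli-interactive
# Deleting the node pool implicitly dissociates it from the capacity reservation group.
az aks nodepool delete --resource-group $RG_NAME --cluster-name $CLUSTER_NAME --name $NODEPOOL_NAME
```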
## Specify a VM size for a node pool
aks Network Observability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/network-observability-overview.md
When the Network Observability add-on is enabled, it allows for the collection a
| Metric Name | Description | Labels | Linux | Windows |
|--|--|--|--|--|
-| **kappie_forward_count** | Total forwarded packet count | Direction, NodeName, Cluster | Yes | Yes |
-| **kappie_forward_bytes** | Total forwarded byte count | Direction, NodeName, Cluster | Yes | Yes |
-| **kappie_drop_count** | Total dropped packet count | Reason, Direction, NodeName, Cluster | Yes | Yes |
-| **kappie_drop_bytes** | Total dropped byte count | Reason, Direction, NodeName, Cluster | Yes | Yes |
-| **kappie_tcp_state** | TCP active socket count by TCP state. | State, NodeName, Cluster | Yes | Yes |
-| **kappie_tcp_connection_remote** | TCP active socket count by remote address. | Address, Port, NodeName, Cluster | Yes | No |
-| **kappie_tcp_connection_stats** | TCP connection statistics. (ex: Delayed ACKs, TCPKeepAlive, TCPSackFailures) | Statistic, NodeName, Cluster | Yes | Yes |
-| **kappie_tcp_flag_counters** | TCP packets count by flag. | Flag, NodeName, Cluster | Yes | Yes |
-| **kappie_ip_connection_stats** | IP connection statistics. | Statistic, NodeName, Cluster | Yes | No |
-| **kappie_udp_connection_stats** | UDP connection statistics. | Statistic, NodeName, Cluster | Yes | No |
-| **kappie_udp_active_sockets** | UDP active socket count | NodeName, Cluster | Yes | No |
-| **kappie_interface_stats** | Interface statistics. | InterfaceName, Statistic, NodeName, Cluster | Yes | Yes |
+| **networkobservability_forward_count** | Total forwarded packet count | Direction, NodeName, Cluster | Yes | Yes |
+| **networkobservability_forward_bytes** | Total forwarded byte count | Direction, NodeName, Cluster | Yes | Yes |
+| **networkobservability_drop_count** | Total dropped packet count | Reason, Direction, NodeName, Cluster | Yes | Yes |
+| **networkobservability_drop_bytes** | Total dropped byte count | Reason, Direction, NodeName, Cluster | Yes | Yes |
+| **networkobservability_tcp_state** | TCP active socket count by TCP state. | State, NodeName, Cluster | Yes | Yes |
+| **networkobservability_tcp_connection_remote** | TCP active socket count by remote address. | Address, Port, NodeName, Cluster | Yes | No |
+| **networkobservability_tcp_connection_stats** | TCP connection statistics. (ex: Delayed ACKs, TCPKeepAlive, TCPSackFailures) | Statistic, NodeName, Cluster | Yes | Yes |
+| **networkobservability_tcp_flag_counters** | TCP packets count by flag. | Flag, NodeName, Cluster | Yes | Yes |
+| **networkobservability_ip_connection_stats** | IP connection statistics. | Statistic, NodeName, Cluster | Yes | No |
+| **networkobservability_udp_connection_stats** | UDP connection statistics. | Statistic, NodeName, Cluster | Yes | No |
+| **networkobservability_udp_active_sockets** | UDP active socket count | NodeName, Cluster | Yes | No |
+| **networkobservability_interface_stats** | Interface statistics. | InterfaceName, Statistic, NodeName, Cluster | Yes | Yes |
## Limitations
Certain scale limitations apply when you use Azure managed Prometheus and Grafan
- To create an AKS cluster with Network Observability and Azure managed Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) Azure managed Prometheus and Grafana](network-observability-managed-cli.md). - To create an AKS cluster with Network Observability and BYO Prometheus and Grafana, see [Setup Network Observability for Azure Kubernetes Service (AKS) BYO Prometheus and Grafana](network-observability-byo-cli.md).--
api-management Backends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/backends.md
resource symbolicname 'Microsoft.ApiManagement/service/backends@2023-05-01-previ
properties: { description: 'Load balancer for multiple backends' type: 'Pool'
- protocol: 'https'
+ protocol: 'http'
url: 'https://example.com' pool: {
Include a JSON snippet similar to the following in your ARM template for a backe
"properties": { "description": "Load balancer for multiple backends", "type": "Pool",
- "protocol": "https",
+ "protocol": "http",
"url": "https://example.com", "pool": { "services": [
app-service App Service Web Tutorial Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-custom-domain.md
ms.assetid: dc446e0e-0958-48ea-8d99-441d2b947a7c
Last updated 01/31/2023 -+
app-service Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-ssl-certificate.md
The following table lists the options for you to add certificates in App Service
| Upload a private certificate | If you already have a private certificate from a third-party provider, you can upload it. See [Private certificate requirements](#private-certificate-requirements). | | Upload a public certificate | Public certificates aren't used to secure custom domains, but you can load them into your code if you need them to access remote resources. |
-> [!NOTE]
-> After you upload a certificate to an app, the certificate is stored in a deployment unit that's bound to the App Service plan's resource group, region, and operating system combination, internally called a *webspace*. That way, the certificate is accessible to other apps in the same resource group and region combination. Certificates uploaded or imported to App Service are shared with App Services in the same deployment unit.
- ## Prerequisites - [Create an App Service app](./index.yml). The app's [App Service plan](overview-hosting-plans.md) must be in the **Basic**, **Standard**, **Premium**, or **Isolated** tier. See [Scale up an app](manage-scale-up.md#scale-up-your-pricing-tier) to update the tier.
To secure a custom domain in a TLS binding, the certificate has more requirement
> [!NOTE] > **Elliptic Curve Cryptography (ECC) certificates** work with App Service but aren't covered by this article. For the exact steps to create ECC certificates, work with your certificate authority.
+> [!NOTE]
+> After you upload a private certificate to an app, the certificate is stored in a deployment unit that's bound to the App Service plan's resource group, region, and operating system combination, internally called a *webspace*. That way, the certificate is accessible to other apps in the same resource group and region combination. Private certificates uploaded or imported to App Service are shared with App Services in the same deployment unit.
+ ## Create a free managed certificate The free App Service managed certificate is a turn-key solution for securing your custom DNS name in App Service. Without any action from you, this TLS/SSL server certificate is fully managed by App Service and is automatically renewed continuously in six-month increments, 45 days before expiration, as long as the prerequisites that you set up stay the same. All the associated bindings are updated with the renewed certificate. You create and bind the certificate to a custom domain, and let App Service do the rest.
You're now ready to upload the certificate to App Service.
Public certificates are supported in the *.cer* format.
+> [!NOTE]
+> After you upload a public certificate to an app, it's only accessible to the app it's uploaded to. Public certificates must be uploaded to each individual web app that needs access. For App Service Environment-specific scenarios, see [the documentation for certificates and the App Service Environment](../app-service/environment/overview-certificates.md).
+>
+> You can upload up to 1000 public certificates per App Service Plan.
+ 1. In the [Azure portal](https://portal.azure.com), from the left menu, select **App Services** > **\<app-name>**. 1. From your app's navigation menu, select **Certificates** > **Public key certificates (.cer)** > **Add certificate**.
app-service Overview Access Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-access-restrictions.md
Title: App Service Access restrictions
-description: This article provides an overview of the access restriction features in App Service
+description: This article provides an overview of the access restriction features in App Service.
Previously updated : 01/25/2024 Last updated : 02/13/2024
You have the option of configuring a set of access restriction rules for each si
## App access
-App access allows you to configure if access is available through the default (public) endpoint. You configure this behavior to either be `Disabled` or `Enabled`. When access is enabled, you can add [Site access](#site-access) restriction rules to control access from select virtual networks and IP addresses. If the setting isn't configured, the default behavior is to enable access unless a private endpoint exists which changes the behavior to disable access.
+App access allows you to configure if access is available through the default (public) endpoint. You configure this behavior to either be `Disabled` or `Enabled`. When access is enabled, you can add [Site access](#site-access) restriction rules to control access from select virtual networks and IP addresses.
+
+If the setting isn't set (the property is `null`), the default behavior is to enable access unless a private endpoint exists, which changes the behavior to disable access. In the Azure portal, when the property isn't set, the radio button is also not set, and the default behavior applies.
:::image type="content" source="media/overview-access-restrictions/app-access-portal.png" alt-text="Screenshot of app access option in Azure portal.":::
-In the Azure Resource Manager API, app access is called `publicNetworkAccess`. For ILB App Service Environment, the default entry point for apps is always internal to the virtual network. Enabling app access (`publicNetworkAccess`) doesn't grant direct public access to the apps; instead, it allows access from the default entry point, which corresponds to the internal IP address of the App Service Environment. If you disable app access on an ILB App Service Environment, you can only access the apps through private endpoints added to the individual apps.
+In the Azure Resource Manager API, the property controlling app access is called `publicNetworkAccess`. For internal load balancer (ILB) App Service Environment, the default entry point for apps is always internal to the virtual network. Enabling app access (`publicNetworkAccess`) doesn't grant direct public access to the apps; instead, it allows access from the default entry point, which corresponds to the internal IP address of the App Service Environment. If you disable app access on an ILB App Service Environment, you can only access the apps through private endpoints added to the individual apps.
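If you manage this property outside the portal, one option is the generic `az resource` command. The following is a sketch rather than the only supported approach; the resource group and app names are placeholders:

```azurecli
# Example only: disable access through the default (public) endpoint for an app.
az resource update --resource-group <resource-group-name> --name <app-name> \
    --resource-type "Microsoft.Web/sites" --set properties.publicNetworkAccess=Disabled
```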
## Site access
You can't create these rules in the portal, but you can modify an existing servi
For any rule, regardless of type, you can add http header filtering. Http header filters allow you to further inspect the incoming request and filter based on specific http header values. Each header can have up to eight values per rule. The following lists the supported http headers:
-* **X-Forwarded-For**. [Standard header](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-For) for identifying the originating IP address of a client connecting through a proxy server. Accepts valid CIDR values.
+* **X-Forwarded-For**. [Standard header](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-For) for identifying the originating IP address of a client connecting through a proxy server. Accepts valid IP addresses.
* **X-Forwarded-Host**. [Standard header](https://developer.mozilla.org/docs/Web/HTTP/Headers/X-Forwarded-Host) for identifying the original host requested by the client. Accepts any string up to 64 characters in length. * **X-Azure-FDID**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) for identifying the reverse proxy instance. Azure Front Door sends a guid identifying the instance, but it can also be used for non-Microsoft proxies to identify the specific instance. Accepts any string up to 64 characters in length. * **X-FD-HealthProbe**. [Custom header](../frontdoor/front-door-http-headers-protocol.md#from-the-front-door-to-the-backend) for identifying the health probe of the reverse proxy. Azure Front Door sends "1" to uniquely identify a health probe request. The header can also be used for non-Microsoft proxies to identify health probes. Accepts any string up to 64 characters in length.
app-service Quickstart Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arm-template.md
ms.assetid: 582bb3c2-164b-42f5-b081-95bfcb7a502a Previously updated : 12/20/2023 Last updated : 02/06/2024 zone_pivot_groups: app-service-platform-windows-linux-windows-container adobe-target: true
adobe-target-content: ./quickstart-arm-template-uiex
# Quickstart: Create App Service app using an ARM template
-Get started with [Azure App Service](overview.md) by deploying an app to the cloud using an Azure Resource Manager template (ARM template) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. Because you use a free App Service tier, you incur no costs to complete this quickstart.
+Get started with [Azure App Service](overview.md) by deploying an app to the cloud using an Azure Resource Manager template (ARM template) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. A Resource Manager template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. You incur no costs to complete this quickstart because you use a free App Service tier.
- [!INCLUDE [About Azure Resource Manager](../../includes/resource-manager-quickstart-introduction.md)]
+To complete this quickstart, you'll need an Azure account with an active subscription. If you don't have an Azure account, you can [create one for free](https://azure.microsoft.com/free/).
-If your environment meets the prerequisites and you're familiar with using ARM templates, select the **Deploy to Azure** button. The template will open in the Azure portal.
+## Skip to the end
-Use the following button to deploy on **Windows**:
+If you're familiar with using ARM templates, you can skip to the end by selecting this [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-windows%2Fazuredeploy.json) button. This button opens the ARM template in the Azure portal.
++
+In the Azure portal, select **Create new** to create a new Resource Group and then select the **Review + create** button to deploy the app.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-windows%2Fazuredeploy.json)
::: zone-end ::: zone pivot="platform-linux"
-Use the following button to deploy on **Linux**:
+Get started with [Azure App Service](overview.md) by deploying an app to the cloud using an Azure Resource Manager template (ARM template) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. A Resource Manager template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. You incur no costs to complete this quickstart because you use a free App Service tier.
+
+To complete this quickstart, you'll need an Azure account with an active subscription. If you don't have an Azure account, you can [create one for free](https://azure.microsoft.com/free/).
+
+## Skip to the end
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-linux%2Fazuredeploy.json)
+If you're familiar with using ARM templates, you can skip to the end by selecting this [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-linux%2Fazuredeploy.json) button. This button opens the ARM template in the Azure portal.
++
+In the Azure portal, select **Create new** to create a new Resource Group and then select the **Review + create** button to deploy the app.
::: zone-end ::: zone pivot="platform-windows-container"
-Use the following button to deploy on **Windows container**:
+Get started with [Azure App Service](overview.md) by deploying an app to the cloud using an Azure Resource Manager template (ARM template) and [Azure CLI](/cli/azure/get-started-with-azure-cli) in Cloud Shell. A Resource Manager template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. A premium plan is needed to deploy a Windows container app. See the [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/#pricing) for pricing details.
-[![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-windows-container%2Fazuredeploy.json)
+## Skip to the end
-## Prerequisites
+If you're familiar with using ARM templates, you can skip to the end by selecting this [![Deploy to Azure](../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.web%2Fapp-service-docs-windows-container%2Fazuredeploy.json) button. This button opens the ARM template in the Azure portal.
+
+In the Azure portal, select **Create new** to create a new Resource Group and then select the **Review + create** button to deploy the app.
## Review the template
Two Azure resources are defined in the template:
* [**Microsoft.Web/serverfarms**](/azure/templates/microsoft.web/serverfarms): create an App Service plan. * [**Microsoft.Web/sites**](/azure/templates/microsoft.web/sites): create an App Service app.
-This template contains several parameters that are predefined for your convenience. See the table below for parameter defaults and their descriptions:
+This template contains several parameters that are predefined for your convenience. See the table for parameter defaults and their descriptions:
| Parameters | Type | Default value | Description | ||||-|
-| webAppName | string | "webApp-**[`<uniqueString>`](../azure-resource-manager/templates/template-functions-string.md#uniquestring)**" | App name |
-| location | string | "[[resourceGroup().location](../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)]" | App region |
-| sku | string | "F1" | Instance size (F1 = Free Tier) |
-| language | string | ".net" | Programming language stack (.NET, php, node, html) |
-| helloWorld | boolean | False | True = Deploy "Hello World" app |
-| repoUrl | string | " " | External Git repo (optional) |
+| webAppName | string | `webApp-<uniqueString>` | App name based on a [unique string value](../azure-resource-manager/templates/template-functions-string.md#uniquestring) |
+| appServicePlanName | string | `webAppPlan-<uniqueString>` | App Service Plan name based on a [unique string value](../azure-resource-manager/templates/template-functions-string.md#uniquestring) |
+| location | string | `[resourceGroup().location]` | [App region](../azure-resource-manager/templates/template-functions-resource.md#resourcegroup) |
+| sku | string | `F1` | Instance size (F1 = Free Tier) |
+| language | string | `.NET` | Programming language stack (.NET, php, node, html) |
+| helloWorld | boolean | `False` | True = Deploy "Hello World" app |
+| repoUrl | string | ` ` | External Git repo (optional) |
::: zone-end ::: zone pivot="platform-linux" The template used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/app-service-docs-linux). It deploys an App Service plan and an App Service app on Linux. It's compatible with all supported programming languages on App Service.
Two Azure resources are defined in the template:
* [**Microsoft.Web/serverfarms**](/azure/templates/microsoft.web/serverfarms): create an App Service plan. * [**Microsoft.Web/sites**](/azure/templates/microsoft.web/sites): create an App Service app.
-This template contains several parameters that are predefined for your convenience. See the table below for parameter defaults and their descriptions:
+This template contains several parameters that are predefined for your convenience. See the table for parameter defaults and their descriptions:
| Parameters | Type | Default value | Description | ||||-|
-| webAppName | string | "webApp-**[`<uniqueString>`](../azure-resource-manager/templates/template-functions-string.md#uniquestring)**" | App name |
-| location | string | "[[resourceGroup().location](../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)]" | App region |
-| sku | string | "F1" | Instance size (F1 = Free Tier) |
-| linuxFxVersion | string | "DOTNETCORE&#124;3.0 | "Programming language stack &#124; Version" |
-| repoUrl | string | " " | External Git repo (optional) |
+| webAppName | string | `webApp-<uniqueString>` | App name based on a [unique string value](../azure-resource-manager/templates/template-functions-string.md#uniquestring) |
+| appServicePlanName | string | `webAppPlan-<uniqueString>` | App Service Plan name based on a [unique string value](../azure-resource-manager/templates/template-functions-string.md#uniquestring) |
+| location | string | `[resourceGroup().location]` | [App region](../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)
+| sku | string | `F1` | Instance size (F1 = Free Tier) |
+| linuxFxVersion | string | `DOTNETCORE\|3.0` | "Programming language stack &#124; Version" |
+| repoUrl | string | ` ` | External Git repo (optional) |
::: zone-end
Two Azure resources are defined in the template:
* [**Microsoft.Web/serverfarms**](/azure/templates/microsoft.web/serverfarms): create an App Service plan. * [**Microsoft.Web/sites**](/azure/templates/microsoft.web/sites): create an App Service app.
-This template contains several parameters that are predefined for your convenience. See the table below for parameter defaults and their descriptions:
+This template contains several parameters that are predefined for your convenience. See the table for parameter defaults and their descriptions:
| Parameters | Type | Default value | Description | ||||-|
-| webAppName | string | "webApp-**[`<uniqueString>`](../azure-resource-manager/templates/template-functions-string.md#uniquestring)**" | App name |
-| appServicePlanName | string | "webAppPlan-**[`<uniqueString>`](../azure-resource-manager/templates/template-functions-string.md#uniquestring)**" | App Service Plan name |
-| location | string | "[[resourceGroup().location](../azure-resource-manager/templates/template-functions-resource.md#resourcegroup)]" | App region |
-| skuTier | string | "P1v3" | Instance size ([View available SKUs](configure-custom-container.md?tabs=debian&pivots=container-windows#customize-container-memory)) |
-| appSettings | string | "[{"name": "PORT","value": "8080"}]" | App Service listening port. Needs to be 8080. |
-| kind | string | "windows" | External Git repo (optional) |
-| hyperv | string | "true" | External Git repo (optional) |
-| windowsFxVersion | string | "DOCKER&#124;mcr.microsoft.com/dotnet/samples:aspnetapp" | External Git repo (optional) |
+| webAppName | string | `webApp-<uniqueString>` | App name based on a [unique string value](../azure-resource-manager/templates/template-functions-string.md#uniquestring) |
+| appServicePlanName | string | `webAppPlan-<uniqueString>` | App Service Plan name based on a [unique string value](../azure-resource-manager/templates/template-functions-string.md#uniquestring) |
+| location | string | `[resourceGroup().location]`| [App region](../azure-resource-manager/templates/template-functions-resource.md#resourcegroup) |
+| skuTier | string | `P1v3` | Instance size ([View available SKUs](configure-custom-container.md?tabs=debian&pivots=container-windows#customize-container-memory)) |
+| appSettings | string | `[{"name": "PORT","value": "8080"}]`| App Service listening port. Needs to be 8080. |
+| kind | string | `windows` | Operating System |
+| hyperv | string | `true` | Isolation mode |
+| windowsFxVersion | string | `DOCKER\|mcr.microsoft.com/dotnet/samples:aspnetapp` | Container image |
++ ::: zone-end ## Deploy the template
Azure CLI is used here to deploy the template. You can also use the Azure portal
The following code creates a resource group, an App Service plan, and a web app. A default resource group, App Service plan, and location have been set for you. Replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). ::: zone pivot="platform-windows"
-Run the code below to deploy a .NET framework app on Windows.
+Run the following commands to deploy a .NET framework app on Windows.
```azurecli-interactive
-az group create --name myResourceGroup --location "southcentralus" &&
+az group create --name myResourceGroup --location "southcentralus"
+ az deployment group create --resource-group myResourceGroup \parameters language=".net" helloWorld="true" webAppName="<app-name>" \
+--parameters language=".NET" helloWorld="true" webAppName="<app-name>" \
--template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/app-service-docs-windows/azuredeploy.json" ::: zone-end ::: zone pivot="platform-linux"
-Run the code below to create a Python app on Linux.
+Run the following commands to create a Python app on Linux:
```azurecli-interactive
-az group create --name myResourceGroup --location "southcentralus" &&
+az group create --name myResourceGroup --location "southcentralus"
+ az deployment group create --resource-group myResourceGroup --parameters webAppName="<app-name>" linuxFxVersion="PYTHON|3.9" \ --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/app-service-docs-linux/azuredeploy.json" ```
-To deploy a different language stack, update `linuxFxVersion` with appropriate values. Samples are shown below. To show current versions, run the following command in the Cloud Shell: `az webapp config show --resource-group myResourceGroup --name <app-name> --query linuxFxVersion`
+To deploy a different language stack, update `linuxFxVersion` with appropriate values. Samples are shown in the table. To show current versions, run the following command in the Cloud Shell: `az webapp config show --resource-group myResourceGroup --name <app-name> --query linuxFxVersion`
| Language | Example |
|--|--|
To deploy a different language stack, update `linuxFxVersion` with appropriate v
::: zone-end ::: zone pivot="platform-windows-container"
-Run the code below to deploy a [.NET app](https://mcr.microsoft.com/product/dotnet/samples/tags) on a Windows container.
+Run the following commands to deploy a [.NET app](https://mcr.microsoft.com/product/dotnet/samples/tags) on a Windows container.
```azurecli-interactive
-az group create --name myResourceGroup --location "southcentralus" &&
+az group create --name myResourceGroup --location "southcentralus"
+ az deployment group create --resource-group myResourceGroup \ --parameters webAppName="<app-name>" \ --template-uri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.web/app-service-docs-windows-container/azuredeploy.json"
az deployment group create --resource-group myResourceGroup \
## Validate the deployment

Browse to `http://<app_name>.azurewebsites.net/` and verify it's been created.

## Clean up resources
application-gateway Private Link Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/private-link-configure.md
A private endpoint is a network interface that uses a private IP address from th
> [!Note] > If you're provisioning a **Private Endpoint** from within another tenant, you will need to utilize the Azure Application Gateway Resource ID and the _Name_ of the Frontend IP configuration as the target sub-resource. For example, if I had a private IP associated to the Application Gateway and the Name listed in Frontend IP configuration of the portal for the private IP is _PrivateFrontendIp_, the target sub-resource value would be: _PrivateFrontendIp_.
+> [!Note]
+> If you have to move a **Private Endpoint** to another subscription, you must first delete the existing **Private Endpoint** connection between the **Private Link** and the **Private Endpoint**. After that completes, you must re-create the **Private Endpoint** connection in the new subscription to establish the connection between the **Private Link** and the **Private Endpoint**.
+++ # [Azure PowerShell](#tab/powershell) To configure Private link on an existing Application Gateway via Azure PowerShell, use following commands:
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## February 12, 2024
+
+**Image tag**: `v1.27.0_2024-02-13`
+
+For complete release version information, review [Version log](version-log.md#february-12-2024).
+ ## December 12, 2023 **Image tag**: `v1.26.0_2023-12-12`
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
- ignite-2023 Previously updated : 10/10/2023 Last updated : 02/12/2024 #Customer intent: As a data professional, I want to understand what versions of components align with specific releases.
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## February 12, 2024
+
+|Component|Value|
+|--|--|
+|Container images tag |`v1.27.0_2024-02-13`|
+|**CRD names and version:**| |
+|`activedirectoryconnectors.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2|
+|`datacontrollers.arcdata.microsoft.com`| v1beta1, v1 through v5|
+|`exporttasks.tasks.arcdata.microsoft.com`| v1beta1, v1, v2|
+|`failovergroups.sql.arcdata.microsoft.com`| v1beta1, v1beta2, v1, v2|
+|`kafkas.arcdata.microsoft.com`| v1beta1 through v1beta4|
+|`monitors.arcdata.microsoft.com`| v1beta1, v1, v3|
+|`postgresqls.arcdata.microsoft.com`| v1beta1 through v1beta6|
+|`postgresqlrestoretasks.tasks.postgresql.arcdata.microsoft.com`| v1beta1|
+|`sqlmanagedinstances.sql.arcdata.microsoft.com`| v1beta1, v1 through v13|
+|`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`| v1beta1, v1beta2|
+|`sqlmanagedinstancereprovisionreplicatasks.tasks.sql.arcdata.microsoft.com`| v1beta1|
+|`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`| v1beta1, v1|
+|`telemetrycollectors.arcdata.microsoft.com`| v1beta1 through v1beta5|
+|`telemetryrouters.arcdata.microsoft.com`| v1beta1 through v1beta5|
+|Azure Resource Manager (ARM) API version|2023-11-01-preview|
+|`arcdata` Azure CLI extension version|1.6.0 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|Arc-enabled Kubernetes helm chart extension version|1.27.0|
+|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.8.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.8.0 ([Download](https://aka.ms/ads-azcli-ext))|
+|SQL Database version | 957 |
++ ## December 12, 2023 |Component|Value|
azure-arc Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/upgrade.md
az arcappliance show --resource-group [REQUIRED] --name [REQUIRED]
## Manual upgrade
-Arc resource bridge can be manually upgraded from the management machine. You must meet all upgrade prerequisites before attempting to upgrade. The management machine must have the kubeconfig and appliance configuration files stored locally.
+Arc resource bridge can be manually upgraded from the management machine. You must meet all upgrade prerequisites before attempting to upgrade. The management machine must have the kubeconfig and [appliance configuration files](system-requirements.md#configuration-files) stored locally, or you can't run the upgrade.
Manual upgrade generally takes between 30-90 minutes, depending on network speeds. The upgrade command takes your Arc resource bridge to the next appliance version, which might not be the latest available appliance version. Multiple upgrades could be needed to reach a [supported version](#supported-versions). You can check your appliance version by checking the Azure resource of your Arc resource bridge.
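As a sketch of what a manual upgrade command can look like, the following assumes the VMware fabric and that the appliance configuration file is available locally; other fabrics use their own subcommand, so check `az arcappliance upgrade --help` for your environment:

```azurecli
# Example only: the path to the appliance configuration file is a placeholder.
az arcappliance upgrade vmware --config-file <path-to-appliance-yaml>
```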
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Connected Machine agent description: This article has release notes for Azure Connected Machine agent. For many of the summarized issues, there are links to more details. Previously updated : 12/06/2023 Last updated : 02/07/2024
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Connected Machine agent](agent-release-notes-archive.md).
+## Version 1.38 - February 2024
+
+Download for [Windows](https://download.microsoft.com/download/e/#installing-a-specific-version-of-the-agent)
+
+### New features
+
+- AlmaLinux 9 is now a [supported operating system](prerequisites.md#supported-operating-systems)
+
+### Fixed
+
+- The hybrid instance metadata service (HIMDS) now listens on the IPv6 local loopback address (::1)
+- Improved logging in the extension manager and policy engine
+- Improved reliability when fetching the latest operating system metadata
+- Reduced extension manager CPU usage
+ ## Version 1.37 - December 2023 Download for [Windows](https://download.microsoft.com/download/f/6/4/f64c574f-d3d5-4128-8308-ed6a7097a93d/AzureConnectedMachineAgent.msi) or [Linux](manage-agent.md#installing-a-specific-version-of-the-agent)
Download for [Windows](https://download.microsoft.com/download/f/6/4/f64c574f-d3
- The installation script for Linux now automatically approves the request to import the packages.microsoft.com signing key to ensure a silent installation experience - Agent installation and upgrades apply more restrictive permissions to the agent's data directories on Windows - Improved reliability when detecting Azure Stack HCI as a cloud provider-- Removed the log zipping feature introduced in version 1.37 for extension manager and machine configuration agent logs. Log files will still be rotated automatically.-- Removed the scheduled tasks for automatic agent upgrades (introduced in agent version 1.30). We will reintroduce this functionality when the automatic upgrade mechanism is available.
+- Removed the log zipping feature introduced in version 1.37 for extension manager and machine configuration agent logs. Log files are still rotated automatically.
+- Removed the scheduled tasks for automatic agent upgrades (introduced in agent version 1.30). We'll reintroduce this functionality when the automatic upgrade mechanism is available.
- Resolved [Azure Connected Machine Agent Elevation of Privilege Vulnerability](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2023-35624) ## Version 1.36 - November 2023
The Windows Admin Center in Azure feature is incompatible with Azure Connected M
### New features - [azcmagent show](azcmagent-show.md) now reports extended security license status on Windows Server 2012 server machines.-- Introduced a new [proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) option, `ArcData`, that covers the SQL Server enabled by Azure Arc endpoints. This will enable you to use a private endpoint with Azure Arc-enabled servers with the public endpoints for SQL Server enabled by Azure Arc.-- The [CPU limit for extension operations](agent-overview.md#agent-resource-governance) on Linux is now 30%. This increase will help improve reliability of extension install, upgrade and uninstall operations.
+- Introduced a new [proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) option, `ArcData`, that covers the SQL Server enabled by Azure Arc endpoints. This enables you to use a private endpoint with Azure Arc-enabled servers with the public endpoints for SQL Server enabled by Azure Arc.
+- The [CPU limit for extension operations](agent-overview.md#agent-resource-governance) on Linux is now 30%. This increase helps improve reliability of extension install, upgrade, and uninstall operations.
- Older extension manager and machine configuration agent logs are automatically zipped to reduce disk space requirements. - New executable names for the extension manager (`gc_extension_service`) and machine configuration (`gc_arc_service`) agents on Windows to help you distinguish the two services. For more information, see [Windows agent installation details](./agent-overview.md#windows-agent-installation-details).
The Windows Admin Center in Azure feature is incompatible with Azure Connected M
- [azcmagent connect](azcmagent-connect.md) now uses the latest API version when creating the Azure Arc-enabled server resource to ensure Azure policies targeting new properties can take effect. - Upgraded the OpenSSL library and PowerShell runtime shipped with the agent to include the latest security fixes. - Fixed an issue that could prevent the agent from reporting the correct product type on Windows machines.-- Improved handling of upgrades when the previously installed extension version was not in a successful state.
+- Improved handling of upgrades when the previously installed extension version wasn't in a successful state.
## Version 1.35 - October 2023
Download for [Windows](https://download.microsoft.com/download/b/3/2/b3220316-13
### New features - [Extended Security Updates for Windows Server 2012 and 2012 R2](prepare-extended-security-updates.md) can be purchased and enabled through Azure Arc. If your server is already running the Azure Connected Machine agent, [upgrade to agent version 1.34](manage-agent.md#upgrade-the-agent) or later to take advantage of this new capability.-- Additional system metadata is collected to enhance your device inventory in Azure:
+- New system metadata is collected to enhance your device inventory in Azure:
- Total physical memory
- - Additional processor information
+ - More processor information
- Serial number - SMBIOS asset tag - Network requests to Microsoft Entra ID (formerly Azure Active Directory) now use `login.microsoftonline.com` instead of `login.windows.net`
azure-arc Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/prerequisites.md
Title: Connected Machine agent prerequisites description: Learn about the prerequisites for installing the Connected Machine agent for Azure Arc-enabled servers. Previously updated : 12/06/2023 Last updated : 02/07/2024
If two agents use the same configuration, you will encounter inconsistent behavi
Azure Arc supports the following Windows and Linux operating systems. Only x86-64 (64-bit) architectures are supported. The Azure Connected Machine agent does not run on x86 (32-bit) or ARM-based architectures.
+* AlmaLinux 9
* Amazon Linux 2 and 2023 * Azure Linux (CBL-Mariner) 1.0, 2.0 * Azure Stack HCI
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
Title: 'Tutorial: Use Java functions with Azure Cosmos DB and Event Hubs'
description: This tutorial shows you how to consume events from Event Hubs to make updates in Azure Cosmos DB using a function written in Java. Previously updated : 11/04/2019 Last updated : 02/13/2024 ms.devlang: java
public class Function {
@CosmosDBOutput( name = "databaseOutput", databaseName = "TelemetryDb",
- collectionName = "TelemetryInfo",
- connectionStringSetting = "CosmosDBConnectionString")
+ containerName = "TelemetryInfo",
+ connection = "CosmosDBConnectionSetting")
OutputBinding<TelemetryItem> document, final ExecutionContext context) {
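The renamed `connection` attribute references an application setting that holds the Azure Cosmos DB connection string. The following is a minimal sketch of creating that setting with the Azure CLI; the function app name, resource group, and connection string are placeholders, and for local runs the same key goes in *local.settings.json* instead:

```azurecli
# Example only: replace the placeholders with your function app, resource group, and Cosmos DB connection string.
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group-name> \
    --settings "CosmosDBConnectionSetting=<cosmos-db-connection-string>"
```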
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Stay up to date on Azure Maps:
[How to use the Get Map Attribution API]: how-to-show-attribution.md [Quickstart: Create a web app]: quick-demo-map-app.md [What is Azure Maps Creator?]: about-creator.md
-[v1]: /rest/api/maps/data?view=rest-maps-1.0
+[v1]: /rest/api/maps/data?view=rest-maps-1.0&preserve-view=true
[v2]: /rest/api/maps/data [How to create data registry]: how-to-create-data-registries.md <! REST API Links >
Stay up to date on Azure Maps:
[Render]: /rest/api/maps/render [REST APIs]: /rest/api/maps/ [Route]: /rest/api/maps/route
-[Search]: /rest/api/maps/search?view=rest-maps-1.0
+[Search]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[Spatial]: /rest/api/maps/spatial [TilesetID]: /rest/api/maps/render/get-map-tile#tilesetid [Timezone]: /rest/api/maps/timezone
azure-maps About Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-creator.md
This section provides a high-level overview of the indoor map creation workflow.
[Dynamic maps StylesObject]: schema-stateset-stylesobject.md [Edit indoor maps using the QGIS plugin]: creator-qgis-plugin.md [Facility Ontology]: creator-facility-ontology.md
-[Features API]: /rest/api/maps-creator/features?view=rest-maps-creator-2023-03-01-preview
+[Features API]: /rest/api/maps-creator/features?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[features]: glossary.md#feature [How to create data registry]: how-to-create-data-registries.md [Implement Dynamic styling for indoor maps]: indoor-map-dynamic-styling.md
This section provides a high-level overview of the indoor map creation workflow.
[manifest]: drawing-requirements.md#manifest-file-requirements [onboarding tool]: https://azure.github.io/azure-maps-creator-onboarding-tool [Query datasets with WFS API]: how-to-creator-wfs.md
-[Routeset]: /rest/api/maps-creator/routeset/create?view=rest-maps-creator-2023-03-01-preview
+[Routeset]: /rest/api/maps-creator/routeset/create?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[tileset]: creator-indoor-maps.md#tilesets [Tilesets]: creator-indoor-maps.md#tilesets [Use Azure Maps Creator to create indoor maps]: tutorial-creator-indoor-maps.md [Use the Azure Maps Indoor Maps module]: how-to-use-indoor-module.md [visual style editor]: https://azure.github.io/Azure-Maps-Style-Editor
-[Wayfinding service]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview
+[Wayfinding service]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[Wayfinding]: creator-indoor-maps.md#wayfinding-preview [Work with datasets using the QGIS plugin]: creator-qgis-plugin.md
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
To learn more about authenticating the Azure Maps Control with Microsoft Entra I
[Data]: /rest/api/maps/data [Creator]: /rest/api/maps-creator/ [Spatial]: /rest/api/maps/spatial
-[Search]: /rest/api/maps/search?view=rest-maps-1.0
+[Search]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[Route]: /rest/api/maps/route [How to configure Azure RBAC for Azure Maps]: how-to-manage-authentication.md
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
When QPS limits are reached, an HTTP 429 error is returned. If you're using the
[Azure portal]: https://portal.azure.com/ [Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md
-[v1]: /rest/api/maps/data?view=rest-maps-1.0
+[v1]: /rest/api/maps/data?view=rest-maps-1.0&preserve-view=true
[v2]: /rest/api/maps/data [Data Registry]: /rest/api/maps/data-registry [How to create data registry]: how-to-create-data-registries.md
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
Learn more about Creator for indoor maps by reading:
[structures]: #structure <! REST API Links > [conversion service]: /rest/api/maps-creator/conversion
-[dataset]: /rest/api/maps-creator/dataset?view=rest-maps-creator-2023-03-01-preview
+[dataset]: /rest/api/maps-creator/dataset?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[GeoJSON Point geometry]: /rest/api/maps-creator/wfs/get-features#geojsonpoint [MultiPolygon]: /rest/api/maps-creator/wfs/get-features?tabs=HTTP#geojsonmultipolygon [Point]: /rest/api/maps-creator/wfs/get-features#geojsonpoint
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
The following example shows how to update a dataset, create a new tileset, and d
[Upload a drawing package]: #upload-a-drawing-package <!-- REST API Links ->
-[Creator - map configuration Rest API]: /rest/api/maps-creator/map-configuration?view=rest-maps-creator-2023-03-01-preview
-[routeset]: /rest/api/maps-creator/routeset?view=rest-maps-creator-2023-03-01-preview
-[Style - Create]: /rest/api/maps-creator/style/create?view=rest-maps-creator-2023-03-01-preview
-[style]: /rest/api/maps-creator/style?view=rest-maps-creator-2023-03-01-preview
-[tileset]: /rest/api/maps-creator/tileset?view=rest-maps-creator-2023-03-01-preview
-[wayfinding path]: /rest/api/maps-creator/wayfinding/get-path?view=rest-maps-creator-2023-03-01-preview
-[wayfinding service]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview
-[wayfinding]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview
+[Creator - map configuration Rest API]: /rest/api/maps-creator/map-configuration?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[routeset]: /rest/api/maps-creator/routeset?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[Style - Create]: /rest/api/maps-creator/style/create?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[style]: /rest/api/maps-creator/style?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[tileset]: /rest/api/maps-creator/tileset?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[wayfinding path]: /rest/api/maps-creator/wayfinding/get-path?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[wayfinding service]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[wayfinding]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[Alias API]: /rest/api/maps-creator/alias [Conversion service]: /rest/api/maps-creator/conversion [Dataset Create]: /rest/api/maps-creator/dataset/create
azure-maps Creator Qgis Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-qgis-plugin.md
If you have questions related to Azure Maps, see [MICROSOFT Q&A]. Be sure to tag
[Download QGIS]: https://qgis.org/en/site/forusers/download.html [geographic information system (GIS)]: https://www.usgs.gov/faqs/what-geographic-information-system-gis [Installing New Plugins]: https://docs.qgis.org/3.28/en/docs/training_manual/qgis_plugins/fetching_plugins.html#basic-fa-installing-new-plugins
-[layer definition]: /rest/api/maps-creator/features/get-collection-definition?view=rest-maps-creator-2023-03-01-preview?tabs=HTTP
+[layer definition]: /rest/api/maps-creator/features/get-collection-definition?view=rest-maps-creator-2023-03-01-preview&preserve-view=true&tabs=HTTP
[MICROSOFT Q&A]: /answers/questions/ask [QGIS]: https://qgis.org/en/site/ [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Extend Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/extend-geojson.md
Review the glossary of common technical terms associated with Azure Maps and loc
> [Azure Maps glossary] [GeoJSON spec]: https://tools.ietf.org/html/rfc7946
-[Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0
+[Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0&preserve-view=true
[Geofence GeoJSON format]: geofence-geojson.md [Azure Maps glossary]: glossary.md
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md
Learn more about Azure Maps geocoding:
> [!div class="nextstepaction"] > [Azure Maps Search service]
-[Search service]: /rest/api/maps/search?view=rest-maps-1.0
-[Azure Maps Search service]: /rest/api/maps/search?view=rest-maps-1.0
-[Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0
+[Search service]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
+[Azure Maps Search service]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
+[Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0&preserve-view=true
azure-maps Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-scope.md
For information on limiting what regions a SAS token can use in, see [Authentica
[Authentication with Azure Maps]: azure-maps-authentication.md#create-sas-tokens [Azure geographies]: https://azure.microsoft.com/global-infrastructure/geographies [Azure Government cloud support]: how-to-use-map-control.md#azure-government-cloud-support
-[Search - Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0
+[Search - Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0&preserve-view=true
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
Now when you select that unit in the map, the pop-up menu has the new layer ID,
[map configuration]: creator-indoor-maps.md#map-configuration [style editor]: https://azure.github.io/Azure-Maps-Style-Editor [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[tileset get]: /rest/api/maps-creator/tileset/get?view=rest-maps-creator-2023-03-01-preview
-[tileset]: /rest/api/maps-creator/tileset?view=rest-maps-creator-2023-03-01-preview
+[tileset get]: /rest/api/maps-creator/tileset/get?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[tileset]: /rest/api/maps-creator/tileset?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[unitProperties]: drawing-requirements.md#unitproperties [Use Creator to create indoor maps]: tutorial-creator-indoor-maps.md [Use the Azure Maps Indoor Maps module]: how-to-use-indoor-module.md
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
The wayfinding service calculates the path through specific intervening points.
[wayfinding service]: creator-indoor-maps.md#wayfinding-preview [wayfinding]: creator-indoor-maps.md#wayfinding-preview <! REST API Links >
-[routeset]: /rest/api/maps-creator/routeset?view=rest-maps-creator-2023-03-01-preview
-[wayfinding API]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview
+[routeset]: /rest/api/maps-creator/routeset?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[wayfinding API]: /rest/api/maps-creator/wayfinding?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Feature IDs can only contain alpha-numeric (a-z, A-Z, 0-9), hyphen (-), dot (.)
[Create a tileset]: tutorial-creator-indoor-maps.md#create-a-tileset [Creator for indoor maps]: creator-indoor-maps.md [Creator resource]: how-to-manage-creator.md
-[Dataset Create API]: /rest/api/maps-creator/dataset/create?view=rest-maps-creator-2023-03-01-preview
+[Dataset Create API]: /rest/api/maps-creator/dataset/create?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[Dataset Create]: /rest/api/maps-creator/dataset/create [dataset]: creator-indoor-maps.md#datasets [Facility Ontology 2.0]: creator-facility-ontology.md?pivots=facility-ontology-v2
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md
This example demonstrates how to search for a cross street based on the coordina
> [Best practices for Azure Maps Search service] [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account
-[Azure Maps Search service]: /rest/api/maps/search?view=rest-maps-1.0
+[Azure Maps Search service]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md [Best Practices for Search]: how-to-use-best-practices-for-search.md#geobiased-search-results
-[Entity Types]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0#entitytype
-[Fuzzy Search URI Parameters]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0#uri-parameters
-[Fuzzy Search]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0
-[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0
-[Get Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0
-[point of interest result]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0#searchpoiresponse
+[Entity Types]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#entitytype
+[Fuzzy Search URI Parameters]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true#uri-parameters
+[Fuzzy Search]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true
+[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true
+[Get Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true
+[point of interest result]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0&preserve-view=true#searchpoiresponse
[Post Search Address Batch]: /rest/api/maps/search/postsearchaddressbatch
-[Post Search Address Reverse Batch]: /rest/api/maps/search/postsearchaddressreversebatch?view=rest-maps-1.0
+[Post Search Address Reverse Batch]: /rest/api/maps/search/postsearchaddressreversebatch?view=rest-maps-1.0&preserve-view=true
[Postman]: https://www.postman.com/
-[Reverse Address Search Results]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0#searchaddressreverseresult
-[Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0
-[Reverse Search Parameters]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0#uri-parameters
-[Road Use Types]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0#uri-parameters
+[Reverse Address Search Results]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#searchaddressreverseresult
+[Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true
+[Reverse Search Parameters]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#uri-parameters
+[Road Use Types]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#uri-parameters
[Route]: /rest/api/maps/route
-[Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet?view=rest-maps-1.0
-[Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0
+[Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet?view=rest-maps-1.0&preserve-view=true
+[Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true
[Search Coverage]: geocoding-coverage.md
-[Search Polygon API]: /rest/api/maps/search/getsearchpolygon?view=rest-maps-1.0
-[Search]: /rest/api/maps/search?view=rest-maps-1.0
+[Search Polygon API]: /rest/api/maps/search/getsearchpolygon?view=rest-maps-1.0&preserve-view=true
+[Search]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
-[URI Parameter reference]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0#uri-parameters
+[URI Parameter reference]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true#uri-parameters
[Weather]: /rest/api/maps/weather
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
To learn more, please see:
[Azure Maps npm Package]: https://www.npmjs.com/package/azure-maps-rest [Azure Maps Route service]: /rest/api/maps/route [How to use the Service module]: how-to-use-services-module.md
-[Point of Interest]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0
+[Point of Interest]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0&preserve-view=true
[Post Route Directions API documentation]: /rest/api/maps/route/postroutedirections#supportingpoints [Post Route Directions]: /rest/api/maps/route/postroutedirections [Postman]: https://www.postman.com/downloads/
azure-maps How To Use Best Practices For Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-search.md
To learn more, please see:
> [How to build Azure Maps Search service requests](./how-to-search-for-address.md) > [!div class="nextstepaction"]
-> [Search service API documentation](/rest/api/maps/search?view=rest-maps-1.0)
+> [Search service API documentation](/rest/api/maps/search?view=rest-maps-1.0&preserve-view=true)
-[Search service]: /rest/api/maps/search?view=rest-maps-1.0
-[Search Fuzzy]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0
+[Search service]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
+[Search Fuzzy]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Postman]: https://www.postman.com/downloads/ [Geocoding coverage]: geocoding-coverage.md
-[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0
-[POI category search]: /rest/api/maps/search/getsearchpoicategory?view=rest-maps-1.0
-[Search Nearby]: /rest/api/maps/search/getsearchnearby?view=rest-maps-1.0
-[Get Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0
+[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true
+[POI category search]: /rest/api/maps/search/getsearchpoicategory?view=rest-maps-1.0&preserve-view=true
+[Search Nearby]: /rest/api/maps/search/getsearchnearby?view=rest-maps-1.0&preserve-view=true
+[Get Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true
[Azure Maps supported languages]: supported-languages.md
-[Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0
-[Search Polygon service]: /rest/api/maps/search/getsearchpolygon?view=rest-maps-1.0
+[Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true
+[Search Polygon service]: /rest/api/maps/search/getsearchpolygon?view=rest-maps-1.0&preserve-view=true
[Set up a geofence]: tutorial-geofence.md
-[Search POIs inside the geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0
+[Search POIs inside the geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0&preserve-view=true
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Learn more about how to add more data to your map:
[Drawing package requirements]: drawing-requirements.md [dynamic map styling]: indoor-map-dynamic-styling.md [Indoor Maps dynamic styling]: indoor-map-dynamic-styling.md
-[map configuration API]: /rest/api/maps-creator/map-configuration?view=rest-maps-creator-2023-03-01-preview
+[map configuration API]: /rest/api/maps-creator/map-configuration?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[map configuration]: creator-indoor-maps.md#map-configuration
-[Style Rest API]: /rest/api/maps-creator/style?view=rest-maps-creator-2023-03-01-preview
+[Style Rest API]: /rest/api/maps-creator/style?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[style-loader]: https://webpack.js.org/loaders/style-loader [Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Tileset List API]: /rest/api/maps-creator/tileset/list
azure-maps Map Get Information From Coordinate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-information-from-coordinate.md
See the following articles for full code examples:
> [!div class="nextstepaction"] > [Show traffic](./map-show-traffic.md)
-[Reverse Address Search API]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0
+[Reverse Address Search API]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true
[Fetch API]: https://fetch.spec.whatwg.org/ [Create a map]: map-create.md [popup]: /javascript/api/azure-maps-control/atlas.popup#open [Add a popup on the map]: map-add-popup.md [event listener]: /javascript/api/azure-maps-control/atlas.map#events
-[Get Search Address Reverse API]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0
+[Get Search Address Reverse API]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true
[load event listener]: /javascript/api/azure-maps-control/atlas.map#events [setOptions]: /javascript/api/azure-maps-control/atlas.popup#setoptions-popupoptions- [@azure-rest/maps-search]: https://www.npmjs.com/package/@azure-rest/maps-search
azure-maps Map Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-search-location.md
The following image is a screenshot showing the results of the two code samples.
Learn more about **Fuzzy Search**: > [!div class="nextstepaction"]
-> [Azure Maps Fuzzy Search API](/rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0)
+> [Azure Maps Fuzzy Search API](/rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true)
Learn more about the classes and methods used in this article:
See the following articles for full code examples:
> [!div class="nextstepaction"] > [Show directions from A to B](map-route.md)
-[Fuzzy search API]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0
+[Fuzzy search API]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true
[Fetch API]: https://fetch.spec.whatwg.org/ [DataSource]: /javascript/api/azure-maps-control/atlas.source.datasource [symbol layer]: /javascript/api/azure-maps-control/atlas.layer.symbollayer [Create a map]: map-create.md
-[Get Search Fuzzy rest API]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0
+[Get Search Fuzzy rest API]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true
[setCamera]: /javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions- [event listener]: /javascript/api/azure-maps-control/atlas.map#events [BoundingBox]: /javascript/api/azure-maps-control/atlas.data.boundingbox
azure-maps Migrate From Bing Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md
Learn more about the Azure Maps REST services.
[Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md [Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md [free account]: https://azure.microsoft.com/free/
-[fuzzy search]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0
+[fuzzy search]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0&preserve-view=true
[Geolocation API]: /rest/api/maps/geolocation/get-ip-to-location [Get Map Static Image]: /rest/api/maps/render/get-map-static-image [Get Map Tile]: /rest/api/maps/render/get-map-tile [Get Route Directions]: /rest/api/maps/route/get-route-directions [Get Route Range]: /rest/api/maps/route/get-route-range
-[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street?view=rest-maps-1.0
-[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse?view=rest-maps-1.0
-[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured?view=rest-maps-1.0
-[Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0
-[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0
-[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category?view=rest-maps-1.0
-[Get Search POI]: /rest/api/maps/search/get-search-poi?view=rest-maps-1.0
-[Get Search Polygon]: /rest/api/maps/search/get-search-polygon?view=rest-maps-1.0
+[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street?view=rest-maps-1.0&preserve-view=true
+[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse?view=rest-maps-1.0&preserve-view=true
+[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured?view=rest-maps-1.0&preserve-view=true
+[Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0&preserve-view=true
+[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0&preserve-view=true
+[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category?view=rest-maps-1.0&preserve-view=true
+[Get Search POI]: /rest/api/maps/search/get-search-poi?view=rest-maps-1.0&preserve-view=true
+[Get Search Polygon]: /rest/api/maps/search/get-search-polygon?view=rest-maps-1.0&preserve-view=true
[Get Timezone By Coordinates]: /rest/api/maps/timezone/get-timezone-by-coordinates [Get Timezone By ID]: /rest/api/maps/timezone/get-timezone-by-id [Get Timezone Enum IANA]: /rest/api/maps/timezone/get-timezone-enum-iana
Learn more about the Azure Maps REST services.
[Localization support in Azure Maps]: supported-languages.md [Manage authentication in Azure Maps]: how-to-manage-authentication.md [Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md
-[nearby search]: /rest/api/maps/search/getsearchnearby?view=rest-maps-1.0
+[nearby search]: /rest/api/maps/search/getsearchnearby?view=rest-maps-1.0&preserve-view=true
[NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite [Post Route Directions Batch]: /rest/api/maps/route/post-route-directions-batch [Post Route Directions]: /rest/api/maps/route/post-route-directions [Post Route Matrix]: /rest/api/maps/route/post-route-matrix
-[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch?view=rest-maps-1.0
-[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch?view=rest-maps-1.0
-[Post Search Along Route]: /rest/api/maps/search/post-search-along-route?view=rest-maps-1.0
-[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch?view=rest-maps-1.0
-[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry?view=rest-maps-1.0
+[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch?view=rest-maps-1.0&preserve-view=true
+[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch?view=rest-maps-1.0&preserve-view=true
+[Post Search Along Route]: /rest/api/maps/search/post-search-along-route?view=rest-maps-1.0&preserve-view=true
+[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch?view=rest-maps-1.0&preserve-view=true
+[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry?view=rest-maps-1.0&preserve-view=true
[quadtree tile pyramid math]: zoom-levels-and-tile-grid.md [Render custom data on a raster map]: how-to-render-custom-data.md [Route]: /rest/api/maps/route [Search for a location using Azure Maps Search services]: how-to-search-for-address.md
-[Search within geometry]: /rest/api/maps/search/post-search-inside-geometry?view=rest-maps-1.0
-[Search]: /rest/api/maps/search?view=rest-maps-1.0
+[Search within geometry]: /rest/api/maps/search/post-search-inside-geometry?view=rest-maps-1.0&preserve-view=true
+[Search]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path [Spatial operations]: /rest/api/maps/spatial [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Learn more about Azure Maps REST
[Get Map Tile]: /rest/api/maps/render/get-map-tile [Get Route Directions]: /rest/api/maps/route/get-route-directions [Get Route Range]: /rest/api/maps/route/get-route-range
-[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street?view=rest-maps-1.0
-[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse?view=rest-maps-1.0
-[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured?view=rest-maps-1.0
-[Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0
-[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0
-[Get Search Nearby]: /rest/api/maps/search/get-search-nearby?view=rest-maps-1.0
-[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category?view=rest-maps-1.0
-[Get Search POI]: /rest/api/maps/search/get-search-poi?view=rest-maps-1.0
+[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street?view=rest-maps-1.0&preserve-view=true
+[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse?view=rest-maps-1.0&preserve-view=true
+[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured?view=rest-maps-1.0&preserve-view=true
+[Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0&preserve-view=true
+[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0&preserve-view=true
+[Get Search Nearby]: /rest/api/maps/search/get-search-nearby?view=rest-maps-1.0&preserve-view=true
+[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category?view=rest-maps-1.0&preserve-view=true
+[Get Search POI]: /rest/api/maps/search/get-search-poi?view=rest-maps-1.0&preserve-view=true
[Get Timezone By Coordinates]: /rest/api/maps/timezone/get-timezone-by-coordinates [Get Timezone By ID]: /rest/api/maps/timezone/get-timezone-by-id [Get Timezone Enum IANA]: /rest/api/maps/timezone/get-timezone-enum-iana
Learn more about Azure Maps REST
[NuGet package]: https://www.nuget.org/packages/AzureMapsRestToolkit [Post Route Directions Batch]: /rest/api/maps/route/post-route-directions-batch [Post Route Matrix]: /rest/api/maps/route/post-route-matrix
-[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch?view=rest-maps-1.0
-[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch?view=rest-maps-1.0
-[Post Search Along Route]: /rest/api/maps/search/post-search-along-route?view=rest-maps-1.0
-[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch?view=rest-maps-1.0
-[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry?view=rest-maps-1.0
+[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch?view=rest-maps-1.0&preserve-view=true
+[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch?view=rest-maps-1.0&preserve-view=true
+[Post Search Along Route]: /rest/api/maps/search/post-search-along-route?view=rest-maps-1.0&preserve-view=true
+[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch?view=rest-maps-1.0&preserve-view=true
+[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry?view=rest-maps-1.0&preserve-view=true
[Render custom data on a raster map]: how-to-render-custom-data.md [Render]: /rest/api/maps/render/get-map-static-image [Reverse geocode a coordinate]: #reverse-geocode-a-coordinate [Route]: /rest/api/maps/route [Search for a location using Azure Maps Search services]: how-to-search-for-address.md
-[Search]: /rest/api/maps/search?view=rest-maps-1.0
+[Search]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[Spatial operations]: /rest/api/maps/spatial [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Supported map styles]: supported-map-styles.md
azure-maps Open Source Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md
Find more open-source Azure Maps projects.
[BotBuilder Location]: https://github.com/Microsoft/BotBuilder-Location [Cesium JS]: https://cesium.com/cesiumjs/ [Code samples]: /samples/browse/?products=azure-maps
-[geocoding services]: /rest/api/maps/search?view=rest-maps-1.0
+[geocoding services]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[Implement IoT spatial analytics using Azure Maps]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing [leaflet]: https://leafletjs.com [LiveMaps]: https://github.com/Azure-Samples/LiveMaps
azure-maps Supported Search Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-search-categories.md
When doing a [category search] for points of interest, there are over a hundred
| WINERY | winery | | ZOOS\_ARBORETA\_BOTANICAL\_GARDEN | wildlife park, aquatic zoo marine park, arboreta botanical gardens, zoo, zoos, arboreta botanical garden |
-[category search]: /rest/api/maps/search/getsearchpoicategory?view=rest-maps-1.0
+[category search]: /rest/api/maps/search/getsearchpoicategory?view=rest-maps-1.0&preserve-view=true
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
To see more code examples and an interactive coding experience:
[Simple Store Locator.html]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/Samples/Tutorials/Simple%20Store%20Locator/Simple%20Store%20Locator.html [data]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/data
-[Search service]: /rest/api/maps/search?view=rest-maps-1.0
+[Search service]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[Spherical Mercator projection]: glossary.md#spherical-mercator-projection [EPSG:3857]: https://epsg.io/3857 [EPSG:4326]: https://epsg.io/4326
azure-maps Tutorial Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-creator-indoor-maps.md
For more information, see [Map configuration] in the article about indoor map co
[Drawing conversion errors and warnings]: drawing-conversion-error-codes.md [Dataset Create API]: /rest/api/maps-creator/dataset/create [Dataset service]: /rest/api/maps-creator/dataset
-[Tileset service]: /rest/api/maps-creator/tileset?view=rest-maps-creator-2023-03-01-preview
-[tileset get]: /rest/api/maps-creator/tileset/get?view=rest-maps-creator-2023-03-01-preview
+[Tileset service]: /rest/api/maps-creator/tileset?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
+[tileset get]: /rest/api/maps-creator/tileset/get?view=rest-maps-creator-2023-03-01-preview&preserve-view=true
[Map configuration]: creator-indoor-maps.md#map-configuration [What is Azure Maps Creator?]: about-creator.md [Creator for indoor maps]: creator-indoor-maps.md
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
To learn more about Azure Notebooks, see
[manage authentication in Azure Maps]: how-to-manage-authentication.md [Matrix Routing API]: /rest/api/maps/route/postroutematrix [Post Route Matrix]: /rest/api/maps/route/postroutematrix
-[Post Search Inside Geometry API]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0
-[Post Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0
+[Post Search Inside Geometry API]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0&preserve-view=true
+[Post Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0&preserve-view=true
[Quickstart: Sign in and set a user ID]: https://notebooks.azure.com [Render - Get Map Image]: /rest/api/maps/render/get-map-static-image [*requirements.txt*]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/requirements.txt
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md
To learn more about how to send device-to-cloud telemetry, and the other way aro
[free account]: https://azure.microsoft.com/free/ [general-purpose v2 storage account]: ../storage/common/storage-account-overview.md [Get Geofence]: /rest/api/maps/spatial/getgeofence
-[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0
+[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true
[How to create data registry]: how-to-create-data-registries.md [IoT Hub message routing]: ../iot-hub/iot-hub-devguide-routing-query-syntax.md [IoT Plug and Play]: ../iot-develop/index.yml
To learn more about how to send device-to-cloud telemetry, and the other way aro
[rentalCarSimulation]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation [resource group]: ../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups [the root of the sample]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing
-[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0
+[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true
[Send telemetry from a device]: ../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp [Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
The next tutorial demonstrates how to display a route between two locations.
[Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [free account]: https://azure.microsoft.com/free/
-[Fuzzy Search service]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0
+[Fuzzy Search service]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0&preserve-view=true
[manage authentication in Azure Maps]: how-to-manage-authentication.md [MapControlCredential]: /javascript/api/azure-maps-rest/atlas.service.mapcontrolcredential [pipeline]: /javascript/api/azure-maps-rest/atlas.service.pipeline [Route to a destination]: tutorial-route-location.md
-[Search API]: /rest/api/maps/search?view=rest-maps-1.0
+[Search API]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[Search for points of interest]: https://samples.azuremaps.com/?sample=search-for-points-of-interest [search tutorial]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Search [searchURL]: /javascript/api/azure-maps-rest/atlas.service.searchurl
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
The following table summarizes the Azure Maps services that generate transaction
[Conversion]: /rest/api/maps-creator/conversion [Creator table]: #azure-maps-creator [Data registry]: /rest/api/maps/data-registry
-[v1]: /rest/api/maps/data?view=rest-maps-1.0
+[v1]: /rest/api/maps/data?view=rest-maps-1.0&preserve-view=true
[v2]: /rest/api/maps/data [How to create data registry]: how-to-create-data-registries.md [Dataset]: /rest/api/maps-creator/dataset
The following table summarizes the Azure Maps services that generate transaction
[Pricing calculator]: https://azure.microsoft.com/pricing/calculator/ [Render]: /rest/api/maps/render [Route]: /rest/api/maps/route
-[Search v1]: /rest/api/maps/search?view=rest-maps-1.0
+[Search v1]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true
[Search v2]: /rest/api/maps/search [Spatial]: /rest/api/maps/spatial [Tileset]: /rest/api/maps-creator/tileset
azure-monitor Log Alert Rule Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/log-alert-rule-health.md
+
+ Title: Monitor the health of log search alert rules
+description: This article explains how to monitor the health of a log search alert rule.
++++ Last updated : 02/08/2024+
+#Customer-intent: As an alerts administrator, I want to know when there are issues with an alert rule, so I can act to resolve the issue or know when to contact Microsoft for support.
++
+# Monitor the health of log search alert rules
+
+[Azure Service Health](../../service-health/overview.md) monitors the health of your cloud resources, including log search alert rules. When a log search alert rule is healthy, the rule runs and the query executes successfully. This article explains how to view the health status of your log search alert rule, and tells you what to do if there are issues affecting your log search alert rules.
+
+Azure Service Health monitors:
+- [Resource health](../../service-health/resource-health-overview.md): information about the health of your individual cloud resources, such as a specific log search alert rule.
+- [Service health](../../service-health/service-health-overview.md): information about the health of the Azure services and regions you're using, which might affect your log search alert rule, including communications about outages, planned maintenance activities, and other health advisories.
+
+## Permissions required
+
+- To view the health of a log search alert rule, you need `read` permissions to the log search alert rule.
+- To set up health status alerts, you need `write` permissions to the log search alert rule, as provided by the [Monitoring Contributor built-in role](../roles-permissions-security.md#monitoring-contributor), for example.
+
+## View health and set up health status alerts for log search alert rules
+
+To view the health of your log search alert rule and set up health status alerts:
+
+1. In the [portal](https://portal.azure.com/), select **Monitor**, then **Alerts**.
+1. From the top command bar, select **Alert rules**. The page shows all your alert rules on all subscriptions.
+1. Select the log search alert rule that you want to monitor.
+1. From the left pane, under **Help**, select **Resource health**.
+
+ :::image type="content" source="media/log-search-alert-health/log-search-alert-resource-health.png" alt-text="Screenshot of the Resource health section in a log search alert rule.":::
+
+1. The **Resource health** screen shows:
+
+ - **Health history**: Indicates whether Azure Service Health detected query execution issues in the specific log search alert rule. Select the health event to view details about the event.
+ - **Azure service issues**: Displayed when a known issue with an Azure service might affect execution of the log search alert query. Select the message to view details about the service issue in Azure Service Health.
+
+ > [!NOTE]
+ > - A service health notification doesn't necessarily mean that your log search alert rule is affected by the known service issue. If your log search alert rule health status is **Available**, Azure Service Health didn't detect issues in your alert rule.
+
+ :::image type="content" source="media/log-search-alert-health/log-search-alert-resource-health-page.png" alt-text="Screenshot of the Resource health page for a log search alert rule.":::
+
+This table describes the possible resource health status values for a log search alert rule:
+
+| Resource health status | Description |Recommended steps|
+|||
+|Available|There are no known issues affecting this log search alert rule.| |
|Unknown|This log search alert rule is currently disabled or in an unknown state.|See [Log alert was disabled](alerts-troubleshoot-log.md#log-alert-was-disabled).|
+|Unknown reason|This log search alert rule is currently unavailable due to an unknown reason.|Check if the alert rule was recently created. Health status is updated after the rule completes its first evaluation.|
+|Degraded due to unknown reason|This log search alert rule is currently degraded due to an unknown reason.| |
+|Setting up resource health|Setting up Resource health for this resource.|Check if the alert rule was recently created. Health status is updated after the rule completes its first evaluation.|
+|Semantic error |The query is failing because of a semantic error. |Review the query and try again.|
+|Syntax error |The query is failing because of a syntax error.| Review the query and try again.|
+|The response size is too large|The query is failing because its response size is too large.|Review your query and the [log queries limits](../service-limits.md#log-queries-and-language).|
+|Query consuming too many resources |The query is failing because it's consuming too many resources.|Review your query. View our [best practices for optimizing log queries](../logs/query-optimization.md).|
|Query validation error|The query is failing because of a validation error. |Check whether the table referenced in your query uses the [Basic log data plan](../logs/basic-logs-configure.md#compare-the-basic-and-analytics-log-data-plans), which doesn't support alerts. |
+|Workspace not found |The target Log Analytics workspace for this alert rule couldn't be found. |The target specified in the scope of the alert rule was moved, renamed, or deleted. Recreate your alert rule with a valid Log Analytics workspace target.|
|Application Insights resource not found|The target Application Insights resource for this alert rule couldn't be found. |The target specified in the scope of the alert rule was moved, renamed, or deleted. Recreate your alert rule with a valid Application Insights resource target. |
|Query is throttled|The query for this rule is failing because of throttling (error 429). |Review your query and the [log queries limits](../service-limits.md#user-query-throttling). |
|Unauthorized to run query |The query is failing because the query doesn't have the correct permissions. | Permissions are based on the permissions of the last user who edited the rule. If you suspect that the query doesn't have access, any user with the required permissions can edit or update the rule. Once the rule is saved, the new permissions take effect.<br>If you're using managed identities, check that the identity has permissions on the target resource. See [managed identities](alerts-create-log-alert-rule.md#managed-id).|
|NSP validation failed |The query is failing because of network security perimeter (NSP) validation issues.| Review your network security perimeter rules to ensure your alert rule is correctly configured.|
|Active alerts limit exceeded |Alert evaluation failed because the limit of fired (unresolved) alerts per day was exceeded. |See [Azure Monitor service limits](../service-limits.md). |
|Dimension combinations limit exceeded | Alert evaluation failed because the allowed limit of dimension combination values meeting the threshold was exceeded.|See [Azure Monitor service limits](../service-limits.md). |
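
Several statuses in this table recommend reviewing the alert rule's query. One way to reproduce a query failure outside the alert rule is to run the same KQL directly against the workspace, for example with the Azure Monitor Query client library for Java (`azure-monitor-query`). The following is a minimal sketch under assumptions: the workspace ID, query text, and error handling are placeholders for illustration and aren't part of this article's procedure.

```java
import com.azure.core.exception.HttpResponseException;
import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.monitor.query.LogsQueryClient;
import com.azure.monitor.query.LogsQueryClientBuilder;
import com.azure.monitor.query.models.LogsQueryResult;
import com.azure.monitor.query.models.LogsTableRow;
import com.azure.monitor.query.models.QueryTimeInterval;

import java.time.Duration;

public class AlertQueryCheck {
    public static void main(String[] args) {
        // Placeholders: your Log Analytics workspace ID and the alert rule's query.
        String workspaceId = "<workspace-id>";
        String query = "Heartbeat | summarize count() by Computer";

        LogsQueryClient client = new LogsQueryClientBuilder()
                .credential(new DefaultAzureCredentialBuilder().build())
                .buildClient();

        try {
            // Run the query over the last hour, roughly matching an alert evaluation window.
            LogsQueryResult result = client.queryWorkspace(
                    workspaceId, query, new QueryTimeInterval(Duration.ofHours(1)));
            for (LogsTableRow row : result.getTable().getRows()) {
                System.out.println(row.getRow());
            }
        } catch (HttpResponseException e) {
            // Syntax, semantic, throttling (429), and authorization failures surface here.
            System.err.println("Query failed: " + e.getMessage());
        }
    }
}
```

If the query fails here with a syntax, semantic, throttling, or authorization error, the alert rule's health status typically reflects the same underlying problem.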
++
+## Add a new resource health alert
+
+1. Select **Add resource health alert**.
+
+1. The **Create alert rule** wizard opens, with the **Scope** and **Condition** panes prepopulated. If necessary, you can modify the scope and condition of the alert rule at this stage.
+
+1. Follow the rest of the steps in [Create or edit an activity log, service health, or resource health alert rule](../alerts/alerts-create-activity-log-alert-rule.md).
+
+## Next steps
+
+Learn more about:
+- [Querying log data in Azure Monitor Logs](../logs/get-started-queries.md).
+- [Create or edit a log alert rule](alerts-create-log-alert-rule.md)
+
azure-monitor Opentelemetry Add Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-add-modify.md
span_id = trace.get_current_span().get_span_context().span_id
-- ## Next steps ### [ASP.NET Core](#tab/aspnetcore-15)
span_id = trace.get_current_span().get_span_context().span_id
- To learn more about OpenTelemetry and its community, see the [OpenTelemetry Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python). - To see available OpenTelemetry instrumentations and components, see the [OpenTelemetry Contributor Python GitHub repository](https://github.com/open-telemetry/opentelemetry-python-contrib). - To enable usage experiences, [enable web or browser user monitoring](javascript.md).++++
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Follow the steps in this section to instrument your application with OpenTelemet
-> [!TIP]
-> We don't recommend using the OTel Community SDK/API with the Azure Monitor OTel Distro since it automatically loads them as dependencies.
- ### Install the client library #### [ASP.NET Core](#tab/aspnetcore)
Application Insights is now enabled for your application. All the following step
> [!IMPORTANT] > If you have two or more services that emit telemetry to the same Application Insights resource, you're required to [set Cloud Role Names](opentelemetry-configuration.md#set-the-cloud-role-name-and-the-cloud-role-instance) to represent them properly on the Application Map.
-> [!TIP]
-> Sampling is enabled by default at a rate of 5 requests per second, aiding in cost management. Telemetry data may be missing in scenarios exceeding this rate. For more information on modifying sampling configuration, see [sampling overrides](./java-standalone-sampling-overrides.md).
- As part of using Application Insights instrumentation, we collect and send diagnostic data to Microsoft. This data helps us run and improve Application Insights. To learn more, see [Statsbeat in Azure Application Insights](./statsbeat.md). -- ## Next steps ### [ASP.NET Core](#tab/aspnetcore)
As part of using Application Insights instrumentation, we collect and send diagn
- To enable usage experiences, [enable web or browser user monitoring](javascript.md). ++
azure-monitor Release And Work Item Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/release-and-work-item-insights.md
You can use the `CreateReleaseAnnotation` PowerShell script to create annotation
``` > [!NOTE]
- > Your annotations must have **Category** set to **Deployment** to appear in the Azure portal.
+ > - Your annotations must have **Category** set to **Deployment** to appear in the Azure portal.
+ > - If you receive the error "The request contains an entity body but no Content-Type header", try removing the `-replace` parameters so that the line that builds the request body reads as follows.
+ >
+ > `$body = (ConvertTo-Json $annotation -Compress)`
1. Call the PowerShell script with the following code. Replace the angle-bracketed placeholders with your values. The `-releaseProperties` are optional.
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
As of version 6.4.0-main-02-22-2023-3ee44b9e of the Managed Prometheus addon con
* `memory` * `process` * `cpu_info`
+
+ For more collectors, see [Prometheus exporter for Windows metrics](https://github.com/prometheus-community/windows_exporter#windows_exporter).
Deploy the [windows-exporter-daemonset YAML](https://github.com/prometheus-community/windows_exporter/blob/master/kubernetes/windows-exporter-daemonset.yaml) file:
azure-monitor Scom Managed Instance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/scom-managed-instance-overview.md
The documentation for SCOM Managed Instance is maintained with the [other docume
| Section | Articles | |:|:| | Overview | - [About Azure Monitor SCOM Managed Instance](/system-center/scom/operations-manager-managed-instance-overview)<br>- [What's new in Azure Monitor SCOM Managed Instance](/system-center/scom/whats-new-scom-managed-instance) |
-| QuickStarts | [Quickstart: Migrate from Operations Manager on-premises to Azure Monitor SCOM Managed Instance](/system-center/scom/migrate-to-operations-manager-managed-instance?view=sc-om-2022&tabs=mp-overrides) |
+| QuickStarts | [Quickstart: Migrate from Operations Manager on-premises to Azure Monitor SCOM Managed Instance](/system-center/scom/migrate-to-operations-manager-managed-instance?tabs=mp-overrides) |
| Tutorials | [Tutorial: Create an instance of Azure Monitor SCOM Managed Instance](/system-center/scom/tutorial-create-scom-managed-instance) | | Concepts | - [Azure Monitor SCOM Managed Instance Service Health Dashboard](/system-center/scom/monitor-health-scom-managed-instance)<br>- [Customizations on Azure Monitor SCOM Managed Instance management servers](/system-center/scom/customizations-on-scom-managed-instance-management-servers) | | How-to guides | - [Register the Azure Monitor SCOM Managed Instance resource provider](/system-center/scom/register-scom-managed-instance-resource-provider)<br>- [Create a separate subnet in a virtual network for Azure Monitor SCOM Managed Instance](/system-center/scom/create-separate-subnet-in-vnet)<br> - [Create an Azure SQL managed instance](/system-center/scom/create-sql-managed-instance)<br> - [Create an Azure key vault](/system-center/scom/create-key-vault)<br>- [Create a user-assigned identity for Azure Monitor SCOM Managed Instance](/system-center/scom/create-user-assigned-identity)<br>- [Create a computer group and gMSA account for Azure Monitor SCOM Managed Instance](/system-center/scom/create-gmsa-account)<br>- [Store domain credentials in Azure Key Vault](/system-center/scom/store-domain-credentials-in-key-vault)<br>- [Create a static IP for Azure Monitor SCOM Managed Instance](/system-center/scom/create-static-ip)<br>- [Configure the network firewall for Azure Monitor SCOM Managed Instance](/system-center/scom/configure-network-firewall)<br>- [Verify Azure and internal GPO policies for Azure Monitor SCOM Managed Instance](/system-center/scom/verify-azure-and-internal-gpo-policies)<br>- [Azure Monitor SCOM Managed Instance self-verification of steps](/system-center/scom/scom-managed-instance-self-verification-of-steps)<br>- [Create an Azure Monitor SCOM Managed Instance](/system-center/scom/create-operations-manager-managed-instance)<br>- [Connect the Azure Monitor SCOM Managed Instance to Ops console](/system-center/scom/connect-managed-instance-ops-console)<br>- [Scale Azure Monitor SCOM Managed Instance](/system-center/scom/scale-scom-managed-instance)<br>- [Patch Azure Monitor SCOM Managed Instance](/system-center/scom/patch-scom-managed-instance)<br>- [Create reports on Power BI](/system-center/scom/operations-manager-managed-instance-create-reports-on-power-bi)<br>- [Dashboards on Azure Managed Grafana](/system-center/scom/dashboards-on-azure-managed-grafana)<br>- [View System Center Operations Manager's alerts in Azure Monitor](/system-center/scom/view-operations-manager-alerts-azure-monitor)<br>- [Monitor Azure and Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance](/system-center/scom/monitor-off-azure-vm-with-scom-managed-instance)<br>- [Monitor Azure and Off-Azure Virtual machines with Azure Monitor SCOM Managed Instance (preview)](/system-center/scom/monitor-arc-enabled-vm-with-scom-managed-instance)<br>- [Azure Monitor SCOM Managed Instance activity log](/system-center/scom/scom-mi-activity-log)<br>- [Configure Log Analytics for Azure Monitor SCOM Managed Instance](/system-center/scom/configure-log-analytics-for-scom-managed-instance)
azure-netapp-files Azure Netapp Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-metrics.md
Azure NetApp Files metrics are natively integrated into Azure monitor. From with
- *Percentage Volume Consumed Size* The percentage of the volume consumed, including snapshots.
- Aggregation metrics (for example, min, max) are not supported for percentage volume consumed size.
+ Aggregation metrics (for example, min, max) aren't supported for percentage volume consumed size.
- *Volume Allocated Size* The provisioned size of a volume - *Volume Quota Size*
Azure NetApp Files metrics are natively integrated into Azure monitor. From with
The size of all snapshots in a volume. - *Throughput limit reached*
- Throughput limit reached is a boolean metric that denotes the volume is hitting its QoS limits. The value 1 means that the volume has reached its maximum throughput, and throughput for this volume will be throttled. The value 0 means this limit has not yet been reached.
+ Throughput limit reached is a boolean metric that denotes the volume is hitting its QoS limits. The value 1 means that the volume has reached its maximum throughput, and throughput for this volume will be throttled. The value 0 means this limit hasn't yet been reached.
+
+ > [!NOTE]
+ > The Throughput limit reached metric is collected every 5 minutes and is displayed as a hit if it was collected in the last 5 minutes.
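You can check how often a volume is hitting this limit by querying the metric with the Azure CLI. The following is a minimal sketch; the metric name `ThroughputLimitReached` and the resource ID format are assumptions, so list the volume's metric definitions first if you're unsure:

```azurecli
# Sketch: list available metric definitions for the volume, then query the assumed "ThroughputLimitReached" metric.
az monitor metrics list-definitions \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<pool>/volumes/<volume>"

az monitor metrics list \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.NetApp/netAppAccounts/<account>/capacityPools/<pool>/volumes/<volume>" \
  --metric "ThroughputLimitReached" \
  --interval PT5M \
  --offset 1h
```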
If the volume is hitting the throughput limit, it's not sized appropriately for the application's demands. To resolve throughput issues:
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
You can deploy new volumes in the logical availability zone of your choice. You
## Populate availability zone for Terraform-managed volumes
-The populate availability zone features requires a `zone` property on the volume. You can set the zone property only when you create the Terraform-managed volume, but you cannot modify it. Adding the `zone` property after the volume has been created can cause data loss or loss of the volume if the specified zone value does not match the availability zone.
+The populate availability zone feature requires a `zone` property on the volume. You can set the zone property only when you create the Terraform-managed volume, but you can't modify it after the volume has been created. Adding the `zone` property after the volume has been created can cause data loss or loss of the volume if the specified zone value does not match the availability zone.
>[!IMPORTANT] >To prevent data loss on any Azure resource that includes volatile resources, you should use the [`prevent_destroy` lifecycle argument](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#prevent_destroy).
-1. Navigate to the Terraform module `terraform.tfstate`. The `"zone"` property should be an empty string.
-1. In the Terraform-managed volume's configuration file (`main.tf`), locate the lifecycle configuration block. Modify the block with `ignore_changes = [zone]`. If no lifecycle configuration block exists, add it:
+1. Navigate to the Terraform module `terraform.tfstate` file. The `"zone"` property should be an empty string.
+1. In the Terraform-managed volume's configuration file (`main.tf`), locate the lifecycle configuration block for the volume resource. Modify the block with `ignore_changes = [zone]`. If no lifecycle configuration block exists, add it:
```
lifecycle {
  ignore_changes = [zone]
}
```
-1. In the Azure portal, locate the Terraform module. In the volume **Overview**, select **Populate availability zone** and make note of the availability zone. Do _not_ select save.
+1. In the Azure portal, locate the Terraform-managed volume. In the volume **Overview**, select **Populate availability zone** and make note of the availability zone. Do _not_ select save.
:::image type="content" source="./media/manage-availability-zone-volume-placement/populate-availability-zone.png" alt-text="Screenshot of the Populate Availability Zone menu." lightbox="./media/manage-availability-zone-volume-placement/populate-availability-zone.png":::
-1. In the volume's configuration file (`main.tf`), add a value for `zone`, entering the numerical value you retrieved in the previous step. For example, if the volume's availability zone is 2, enter `zone = 2`. Save the file.
+1. In the volume's configuration file (`main.tf`), add a value for `zone`, entering the numerical value you retrieved in the previous step. For example, if the volume's availability zone is 1, enter `zone = 1`.
+1. Save the file.
1. Return to the Azure portal. Select **Save** to populate the availability zone. 1. Run `terraform plan` to confirm that no changes will be made to your volume. The CLI output should display: `No changes. Your infrastructure matches the configuration.` 1. Run `terraform apply` to apply the changes. You should see the same CLI output as in the previous step.
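For reference, a minimal sketch of how the volume's `main.tf` might look after these steps. The resource type and argument names here (for example, `azurerm_netapp_volume` and its attributes) are assumptions based on the azurerm provider and are only illustrative; keep your existing configuration and add only the `zone` value and the lifecycle block:

```
# Illustrative sketch only (assumed azurerm provider resource and argument names); adapt to your existing configuration.
resource "azurerm_netapp_volume" "example" {
  name                = "example-volume"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  account_name        = azurerm_netapp_account.example.name
  pool_name           = azurerm_netapp_pool.example.name
  volume_path         = "example-volume-path"
  service_level       = "Premium"
  subnet_id           = azurerm_subnet.example.id
  storage_quota_in_gb = 100

  # Numerical value retrieved from Populate availability zone in the Azure portal.
  zone = 1

  lifecycle {
    ignore_changes  = [zone]
    prevent_destroy = true
  }
}
```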
-If you need to delete and recreate the volume in a different availability zone, remove the `ignore_changes = [zone]` line in the configuration file then run `terraform plan`. If the output indicates that no changes will be made to the volume, you can successfully populate the availability zone.
+If you need to delete and recreate the volume in a different availability zone, remove the `ignore_changes = [zone]` line in the configuration file, then run `terraform plan` followed by `terraform apply`.
## Configure custom RBAC roles
azure-resource-manager Bicep Extensibility Kubernetes Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-extensibility-kubernetes-provider.md
The Kubernetes provider allows you to create Kubernetes resources directly with
> [!NOTE] > Kubernetes provider is not currently supported for private clusters: >
-> ```json
+> ```bicep
> resource AKS 'Microsoft.ContainerService/managedClusters@2023-01-02-preview' = {
+> ...
> properties: {
-> "apiServerAccessProfile": {
-> "enablePrivateCluster": "true"
-> }
+> apiServerAccessProfile: {
+> enablePrivateCluster: true
> }
+> }
> } > > ```
azure-vmware Configure Azure Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md
Last updated 12/22/2023
-# Use Azure VMware Solution with Azure Elastic SAN Preview
+# Use Azure VMware Solution with Azure Elastic SAN (Integration in Preview)
This article explains how to use Azure Elastic SAN Preview as backing storage for Azure VMware Solution. [Azure VMware Solution](introduction.md) supports attaching iSCSI datastores as a persistent storage option. You can create Virtual Machine File System (VMFS) datastores with Azure Elastic SAN volumes and attach them to clusters of your choice. By using VMFS datastores backed by Azure Elastic SAN, you can expand your storage instead of scaling the clusters.
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
There are two ways to refresh the integration between the Arc-enabled VMs and Az
1. In the Azure VMware Solution private cloud, navigate to the vCenter Server inventory and Virtual Machines section within the portal. Locate the virtual machine that requires updating and follow the process to 'Enable in Azure'. If the option is grayed out, you must first **Remove from Azure** and then proceed to **Enable in Azure**
-2. Run the [az connectedvmware vm create ](/cli/azure/connectedvmware/vm?view=azure-cli-latest%22%20\l%20%22az-connectedvmware-vm-create)Azure CLI command on the VM in Azure VMware Solution to update the machine type. 
+2. Run the [az connectedvmware vm create](/cli/azure/connectedvmware/vm#az-connectedvmware-vm-create) Azure CLI command on the VM in Azure VMware Solution to update the machine type.
```azurecli
azure-vmware Enable Sql Azure Hybrid Benefit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-sql-azure-hybrid-benefit.md
For example, if each host in Azure VMware Solution has 36 cores and you intend t
Users can configure the VM-Host placement policy to enable Azure Hybrid Benefit for SQL Server through the Azure Portal or Azure CLI.
-To enable through the Azure CLI, reference [az vmware placement-policy vm-host](/cli/azure/vmware/placement-policy/vm-host?view=azure-cli-latest).
+To enable through the Azure CLI, reference [az vmware placement-policy vm-host](/cli/azure/vmware/placement-policy/vm-host).
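A hedged sketch of what enabling the benefit on a VM-Host affinity policy might look like from the CLI follows; the parameter names, in particular the Azure Hybrid Benefit flag, are assumptions, so confirm them with `az vmware placement-policy vm-host create --help` before running:

```azurecli
# Sketch only; verify parameter names with --help before use.
az vmware placement-policy vm-host create \
  --resource-group <resource-group> \
  --private-cloud <private-cloud-name> \
  --cluster-name <cluster-name> \
  --placement-policy-name <policy-name> \
  --vm-members <vm-resource-ids> \
  --host-members <host-names> \
  --affinity-type Affinity \
  --azure-hybrid-benefit SqlHost
```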
For the Azure portal step-by-step instructions, see below:
By checking the Azure Hybrid Benefit checkbox in the configuration setting, you
[Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/)
-[az vmware placement-policy vm-host](/cli/azure/vmware/placement-policy/vm-host?view=azure-cli-latest)
+[az vmware placement-policy vm-host](/cli/azure/vmware/placement-policy/vm-host)
azure-web-pubsub Policy Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/policy-definitions.md
Title: Built-in policy definitions for Azure Web PubSub description: Lists Azure Policy built-in policy definitions for Azure Web PubSub. These built-in policy definitions provide common approaches to managing your Azure resources. -+ Last updated 01/03/2022
cdn Cdn Create A Storage Account With Cdn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-create-a-storage-account-with-cdn.md
In the preceding steps, you created a CDN profile and an endpoint in a resource
## Next steps > [!div class="nextstepaction"]
-> [Tutorial: Use CDN to serve static content from a web app.](cdn-add-to-web-app.md)
+> [Tutorial: Add a custom domain to your Azure CDN endpoint](cdn-map-content-to-custom-domain.md)
+
chaos-studio Chaos Studio Limitations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-limitations.md
The following are known limitations in Chaos Studio.
## Known issues - When selecting target resources for an agent-based fault in the experiment designer, it's possible to select virtual machines or virtual machine scale sets with an operating system not supported by the fault selected. - When running in a Linux environment, the agent-based network latency fault (NetworkLatency-1.1) can only affect **outbound** traffic, not inbound traffic. The fault can affect **both inbound and outbound** traffic on Windows environments (via the `inboundDestinationFilters` and `destinationFilters` parameters).
+- When filtering by Azure subscriptions from the Targets and/or Experiments page, you may experience long load times if you have many subscriptions with large numbers of Azure resources. As a workaround, filter down to the single specific subscription in question to quickly find your desired Targets and/or Experiments.
## Next steps Get started creating and running chaos experiments to improve application resilience with Chaos Studio by using the following links:
communication-services Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/best-practices.md
Title: Azure Communication Services - best practices
-description: Learn more about Azure Communication Service best practices
+description: Learn more about Azure Communication Service best practices.
Last updated 06/30/2021
+zone_pivot_groups: acs-plat-web-native
# Best practices: Azure Communication Services calling SDKs This article provides information about best practices related to the Azure Communication Services calling SDKs.
-## Azure Communication Services web JavaScript SDK best practices
-This section provides information about best practices associated with the Azure Communication Services JavaScript voice and video calling SDK.
-## JavaScript voice and video calling SDK
-
-### Plug-in microphone or enable microphone from device manager when Azure Communication Services call in progress
-When there is no microphone available at the beginning of a call, and then a microphone becomes available, the "noMicrophoneDevicesEnumerated" call diagnostic event will be raised.
-When this happens, your application should invoke `askDevicePermission` to obtain user consent to enumerate devices. Then user will then be able to mute/unmute the microphone.
-
-### Dispose video stream renderer view
-Communication Services applications should dispose `VideoStreamRendererView`, or its parent `VideoStreamRenderer` instance, when it is no longer needed.
-
-### Hang up the call on onbeforeunload event
-Your application should invoke `call.hangup` when the `onbeforeunload` event is emitted.
-
-### Handling multiple calls on multiple Tabs on mobile
-Your application should not connect to calls from multiple browser tabs simultaneously as this can cause undefined behavior due to resource allocation for microphone and camera on the device. Developers are encouraged to always hang up calls when completed in the background before starting a new one.
-
-### Handle OS muting call when phone call comes in.
-While on an Azure Communication Services call (for both iOS and Android) if a phone call comes in or Voice assistant is activated, the OS will automatically mute the user's microphone and camera. On Android, the call automatically unmutes and video restarts after the phone call ends. On iOS, it requires user action to "unmute" and "start video" again. You can listen for the notification that the microphone was muted unexpectedly with the quality event of `microphoneMuteUnexpectedly`. Do note in order to be able to rejoin a call properly you will need to use SDK 1.2.3-beta.1 or higher.
-
-```javascript
-const latestMediaDiagnostic = call.api(SDK.Features.Diagnostics).media.getLatest();
-const isIosSafari = (getOS() === OSName.ios) && (getPlatformName() === BrowserName.safari);
-if (isIosSafari && latestMediaDiagnostic.microphoneMuteUnexpectedly && latestMediaDiagnostic.microphoneMuteUnexpectedly.value) {
- // received a QualityEvent on iOS that the microphone was unexpectedly muted - notify user to unmute their microphone and to start their video stream
-}
-```
-
-Your application should invoke `call.startVideo(localVideoStream);` to start a video stream and should use `this.currentCall.unmute();` to unmute the audio.
-
-### Device management
-You can use the Azure Communication Services SDK to manage your devices and media operations.
-- Your application shouldn't use native browser APIs like `getUserMedia` or `getDisplayMedia` to acquire streams outside of the SDK. If you do, you'll have to manually dispose your media stream(s) before using `DeviceManager` or other device management APIs via the Communication Services SDK.-
-#### Request device permissions
-You can request device permissions using the SDK:
-- Your application should use `DeviceManager.askDevicePermission` to request access to audio and/or video devices.-- If the user denies access, `DeviceManager.askDevicePermission` will return 'false' for a given device type (audio or video) on subsequent calls, even after the page is refreshed. In this scenario, your application must detect that the user previously denied access and instruct the user to manually reset or explicitly grant access to a given device type.--
-#### Camera being used by another process
-- On Windows Chrome and Windows Edge, if you start/join/accept a call with video on and the camera device is being used by another process other than the browser that the web sdk is running on, then the call will be started with audio only and no video. A cameraStartFailed UFD will be raised because the camera failed to start since it was being used by another process. Same applies to turning video on mid-call. You can turn off the camera in the other process so that that process releases the camera device, and then start video again from the call and video will now turn on for the call and remote participants will start seeing your video. -- This is not an issue in macOS Chrome nor macOS Safari because the OS will let processes/threads share the camera device.-- On mobile devices, if a ProcessA requests the camera device and it is being used by ProcessB, then ProcessA will overtake the camera device and ProcessB will stop using the camera device-- On iOS safari, you cannot have the camera on for multiple call clients within the same tab nor across tabs. When any call client uses the camera, it will overtake the camera from any previous call client that was using it. Previous call client will get a cameraStoppedUnexpectedly UFD.-
-### Screen sharing
-#### Closing out of application does not stop it from being shared
-For example, lets say that from Chromium, you screen share the Microsoft Teams application. You then click on the "X" button on the Teams application to close it. The Teams application will not be closed and it will still be running in the background. You will even still see the icon in the bottom right of your desktop bar. Since the Teams application is still running, that means that it is still being screen shared and the remote participant in the call can still see your Teams application being screen shared. In order to stop the application from being screen shared, you will have to right click its icon on the desktop bar and then click on quit. Or you will have to click on "Stop sharing" button on the browser. Or call the sdk's Call.stopScreenSharing() API.
-
-#### Safari can only do full screen sharing
-Safari only allows to screen share the entire screen. Unlike Chromium, which lets you screen share full screen, specific desktop app, or specific browser tab.
-
-#### Screen sharing permissions on macOS
-In order to do screen sharing in macOS Safari or macOs Chrome, screen recording permissions must be granted to the browsers in the OS menu: "Systems Preferences" -> "Security & Privacy" -> "Screen Recording".
## Next steps For more information, see the following articles: -- [Add chat to your app](../quickstarts/chat/get-started.md)
+- [Improve and manage call quality](./voice-video-calling/manage-call-quality.md)
+- [Call Diagnostics](./voice-video-calling/call-diagnostics.md)
- [Add voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)-- [Reference documentation](reference.md)
+- [Use the UI Library for enhance calling experiences](./ui-library/ui-library-overview.md)
communication-services Toll Free Verification Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/toll-free-verification-guidelines.md
This is the most crucial step in the verification application and providing the
The general rules of thumb for opt-in are: - Making sure the opt-in flow is thoroughly detailed.  -- Consumer consent must be collected by the direct (first) party sending the messages. If you're a third party helping the direct party sending messages
+- Consumer consent must be collected by the direct (first) party sending the messages. If you're a third party helping the direct party send messages, the opt-in flow must disclose the third party's name.
- Ensure there's explicitly stated consent disclaimer language at the time of collection. (that is, when the phone number is collected there must be a disclosure about opting-in to messaging). - If your message has Marketing/Promotional content, then it must be optional for customers to opt-in
The general rules of thumb for opt-in are:
|Verbal/IVR opt-in|Provide a screenshot record of opt-in via verbal in your database/ CRM to show how the opt-in data is stored. (that is, a check box on their CRM saying that the customer opted in and the date) OR an audio recording of the IVR flow.| |Point of Sale | For POS opt-ins on a screen/tablet, provide screenshot of the form. For verbal POS opt-ins of informational traffic, provide a screenshot of the database or a record of the entry. | |2FA/OTP| Provide a screenshot of the process to receive the initial text.|
-|Paper form | Upload the form and make sure it includes XXXX. |
+|Paper form | Upload the form and make sure it includes the message frequency, campaign information, and the consent process used to obtain consent from the consumer. |
## Volume
communication-services Manage Call Quality https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/manage-call-quality.md
Title: Azure Communication Services Manage Calling Quality
-description: Learn how to improve and manage calling quality with Azure Communication Services
+description: Learn how to improve and manage calling quality with Azure Communication Services.
As your users start using Azure Communication Services for calls and meetings, t
[network recommendations](network-requirements.md). With QoS, you prioritize delay-sensitive network traffic (for example, voice or video streams), allowing it to "cut in line" in front of
-traffic that is less sensitive (like downloading a new app, where an extra second to download isn't a big deal). QoS identifies and marks all packets in real-time streams using Windows Group Policy Objects and a routing feature called Port-based Access Control Lists, which instructs your network to give voice, video, and screen sharing their own dedicated network bandwidth.
+traffic that is less sensitive (like downloading a new app, where an extra second to download isn't significant). QoS identifies and marks all packets in real-time streams using Windows Group Policy Objects and a routing feature called Port-based Access Control Lists, which instructs your network to give voice, video, and screen sharing their own dedicated network bandwidth.
Ideally, you implement QoS on your internal network while getting ready to roll out your Azure Communication Services solution, but you can do it anytime. If you're small enough, you might not need QoS.
who needs to monitor dozens of security camera feeds simultaneously may
not need the maximum resolution and frame rate that each video stream can provide. In this scenario, you could utilize our [Video constraints](video-constraints.md) capability to limit the amount of bandwidth used by each video stream.
+## Logs on native platforms
+
+Implementing **logging** as described in the [log file retrieval tutorial](../../tutorials/log-file-retrieval-tutorial.md) is critical to gathering details for native development. Detailed logs help diagnose issues specific to device models or OS versions. We encourage developers to configure the Logs API from the start so that you capture details across the entire call lifetime.
+ ## Implement existing quality and reliability capabilities before deployment > [!Note] > We recommend you use our easy to implement samples since they are already optimized to give your users the best call quality. Please see: [Samples](../../overview.md#samples)
-If our calling samples don't meet your needs or you decide to customize your solution please ensure you understand and implement the following capabilities in your custom calling scenarios.
+If our calling samples don't meet your needs, or you decide to customize your solution, please ensure you understand and implement the following capabilities in your custom calling scenarios.
Before you launch and scale your customized Azure Communication Services calling solution, implement the following capabilities to support a high quality calling experience. These tools help prevent common quality and reliability calling issues from happening and diagnose issues if they occur. Keep in mind, some of these call data aren't created or stored unless you implement them.
-The following sections detail the tools to implement at different phases of a call:
+The following sections detail the tools to implement at different phases of a call:
- **Before a call** - **During a call** - **After a call** ## Before a call+ **Pre-call readiness** – By using the pre-call checks Azure Communication Services provides, you can learn a user's connection status before the call and take proactive action on their behalf. For example, if you learn a user's
When users use unsupported browsers, it can be difficult to diagnose call issues
### Conflicting call clients
-Because Azure Communication Services Voice and Video calls run on web and mobile browsers your users may have multiple browser tabs running separate instances of the Azure
+Because Azure Communication Services Voice and Video calls run on web and mobile browsers, your users may have multiple browser tabs running separate instances of the Azure
Communication Services calling SDK. This can happen for various reasons. Maybe the user forgot to close their previous tab. Maybe the user couldn't join a call without a meeting organizer present and they re-attempt to open the meeting join URL, which opens a separate mobile browser tab. No matter how a user ends up with multiple call browser tabs at the same time, it causes disruptions to audio and video behavior on the call they're trying to participate in, referred to as the target call. You should make sure there aren't multiple browser tabs open before a call starts, and also monitor during the whole call lifecycle. You can proactively notify customers to close their excess tabs, or help them join a call correctly with useful messaging if they're unable to join a call initially.
Video streams consume large amounts of network bandwidth. If you know your users
### Volume indicator
-Sometimes users can't hear each other, maybe the speaker is too quiet, the listener's device doesn't receive the audio packets, or there's an audio device issue blocking the sound. Users don't know when they're speaking too quietly, or when the other person can't hear them. You can use the input and output indicator to indicate if a user's volume is low or absent and prompt a user to speak louder or investigate an audio device issue through your user interface.
+Sometimes users can't hear each other; maybe the speaker is too quiet, the listener's device doesn't receive the audio packets, or there's an audio device issue blocking the sound. Users don't know when they're speaking too quietly, or when the other person can't hear them. You can use the input and output indicator to indicate if a user's volume is low or absent and prompt a user to speak louder or investigate an audio device issue through your user interface.
- For more information, please see: [Add volume indicator to your web calling](../../quickstarts/voice-video-calling/get-started-volume-indicator.md)
Since network conditions can change during a call, users can report poor audio a
### Optimal video count
-During a group call with 2 or more participants a user's video quality can fluctuate due to changes in network conditions and their specific hardware limitations. By using the Optimal Video Count API, you can improve user call quality by understanding how many video streams their local endpoint can render at a time without worsening quality. By implementing this feature, you can preserve the call quality and bandwidth of local endpoints that would otherwise attempt to render video poorly. The API exposes the property, optimalVideoCount, which dynamically changes in response to the network and hardware capabilities of a local endpoint. This information is available at runtime and updates throughout the call letting you adjust a user's visual experience as network and hardware conditions change.
+During a group call with 2 or more participants a user's video quality can fluctuate due to changes in network conditions and their specific hardware limitations. By using the Optimal Video Count API, you can improve user call quality by understanding how many video streams their local endpoint can render at a time without worsening quality. By implementing this feature, you can preserve the call quality and bandwidth of local endpoints that would otherwise attempt to render video poorly. The API exposes the property, optimalVideoCount, which dynamically changes in response to the network and hardware capabilities of a local endpoint. This information is available at runtime and updates throughout the call letting you adjust a user's visual experience as network and hardware conditions change.
- To implement, visit web platform guidance [Manage Video](/azure/communication-services/how-tos/calling-sdk/manage-video?pivots=platform-web) and review the section titled Remote Video Quality.
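As a rough illustration, a minimal web SDK sketch that reacts to optimal video count changes follows; it assumes the `OptimalVideoCount` call feature and the `optimalVideoCountChanged` event from `@azure/communication-calling`, so verify the names against the current SDK reference:

```javascript
// Sketch: limit how many remote video streams you render as conditions change.
// Assumes `call` is an active Call object and `Features` is imported from @azure/communication-calling.
const ovcFeature = call.feature(Features.OptimalVideoCount);

const applyOptimalVideoCount = () => {
    const optimalVideoCount = ovcFeature.optimalVideoCount;
    // Render at most `optimalVideoCount` remote video streams in your UI layout.
    console.log(`Optimal video count is now ${optimalVideoCount}`);
};

applyOptimalVideoCount();
ovcFeature.on('optimalVideoCountChanged', applyOptimalVideoCount);
```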
to ensure you're collecting available logs and metrics. These call data aren't stor
### Start collecting call logs
-Review this documentation to start collecting call logs: [Enable logs via Diagnostic Settings in Azure Monitor](../analytics/enable-logging.md)
+Review this documentation to start collecting call logs: [Enable logs via Diagnostic Settings in Azure Monitor](../analytics/enable-logging.md).
- We recommend you choose the category group "allLogs" and choose the destination detail of "Send to Log Analytics workspace" in order to view and analyze the data in Azure Monitor.-- If you don't have a Log Analytics workspace to send your data to, you will need to [create one.](../../../azure-monitor/logs/quick-create-workspace.md)
+- If you don't have a Log Analytics workspace to send your data to, you'll need to [create one.](../../../azure-monitor/logs/quick-create-workspace.md)
- We recommend you monitor your data usage and retention policies for cost considerations as needed. See: [Controlling costs.](../../../azure-monitor/essentials/diagnostic-settings.md#controlling-costs) ### Diagnose calls with Call Diagnostics Call Diagnostics is an Azure Monitor experience that delivers tailored insight through specialized telemetry and diagnostic pages in the Azure portal.
-Once you begin storing log data in your log analytics workspace you can visualize your search for individual calls and visualize the data in Call Diagnostics. Within your Azure Monitor account you simply need to navigate to your Azure Communication Services resource and locate the Call Diagnostics blade in your side pane.
+Once you begin storing log data in your Log Analytics workspace, you can search for individual calls and visualize the data in Call Diagnostics. Within your Azure Monitor account you simply need to navigate to your Azure Communication Services resource and locate the Call Diagnostics blade in your side pane.
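You can also query the collected logs directly in the Log Analytics workspace. A minimal sketch, assuming the standard `ACSCallSummary` table is being populated (verify the table and column names in your own workspace):

```kusto
// Sketch: list call summary records from the last day.
ACSCallSummary
| where TimeGenerated > ago(1d)
| sort by TimeGenerated desc
| take 50
```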
- See [Call Diagnostics](call-diagnostics.md) to learn how to best use this capability. <!-- #### sdkVersion
The call may have fired a User Facing Diagnostic indicating a severe problem wit
### Request support
-If you encounter quality or reliability issues you are unable to resolve and need support, you can submit a request for technical support. The more information you can provide in your request the better, however you can still submit requests with partial information to start your inquiry. See: [How to create azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request)
+If you encounter quality or reliability issues you're unable to resolve and need support, you can submit a request for technical support. The more information you can provide in your request the better (native logs are crucial to optimizing the response time); however, you can still submit requests with partial information to start your inquiry. See: [How to create Azure support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request).
-- If you are notified of license requirements while attempting to request technical support, you may need to choose a paid Azure support plan that best aligns to your needs. See: [Compare Support Plans](https://azure.microsoft.com/support/plans).
+- If you're notified of license requirements while attempting to request technical support, you may need to choose a paid Azure support plan that best aligns to your needs. See: [Compare Support Plans](https://azure.microsoft.com/support/plans).
- If you prefer not to purchase support you can leverage community support. See: [Community Support](https://azure.microsoft.com/support/community/). <!-- Free Public support options
New Issue · Azure/Communication (github.com) or New Issue · Azure/azure-sdk-fo
## Next steps -- Continue to learn other best practices, see: [Best practices: Azure Communication Services calling SDKs](../best-practices.md)--- Explore known issues, see: [Known issues in the SDKs and APIs](../known-issues.md)--- Learn how to debug calls, see: [Call Diagnostics](call-diagnostics.md)--- Learn how to use the Log Analytics workspace, see: [Log Analytics Tutorial](../../../../articles/azure-monitor/logs/log-analytics-tutorial.md)--- Create your own queries in Log Analytics, see: [Get Started Queries](../../../../articles/azure-monitor/logs/get-started-queries.md)
+- Continue to learn other best practices: [Best practices: Azure Communication Services calling SDKs](../best-practices.md)
+- Explore known issues: [Known issues in the SDKs and APIs](../known-issues.md)
+- Learn how to debug calls: [Call Diagnostics](call-diagnostics.md)
+- Learn how to use the Log Analytics workspace: [Log Analytics Tutorial](../../../../articles/azure-monitor/logs/log-analytics-tutorial.md)
+- Create your own queries in Log Analytics: [Get Started Queries](../../../../articles/azure-monitor/logs/get-started-queries.md)
<!-- Comment this out - add to the toc.yml file at row 583.
communication-services Calling Hero Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/calling-hero-sample.md
-+ Last updated 06/30/2021
connectors Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/built-in.md
Title: Built-in connector overview
-description: Learn about built-in connectors that run natively in Azure Logic Apps.
+description: Learn about connectors that run natively with the runtime in Azure Logic Apps.
ms.suite: integration Previously updated : 01/04/2024 Last updated : 02/12/2024 # Built-in connectors in Azure Logic Apps Built-in connectors provide ways for you to control your workflow's schedule and structure, run your own code, manage or manipulate data, and complete other tasks in your workflows. Different from managed connectors, some built-in connectors aren't tied to a specific service, system, or protocol. For example, you can start almost any workflow on a schedule by using the Recurrence trigger. Or, you can have your workflow wait until called by using the Request trigger. All built-in connectors run natively on the Azure Logic Apps runtime. Some don't require that you create a connection before you use them.
-For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multi-tenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type and not the other.
+For a smaller number of services, systems, and protocols, Azure Logic Apps provides a built-in version alongside the managed version. The number and range of built-in connectors vary based on whether you create a Consumption logic app workflow that runs in multitenant Azure Logic Apps or a Standard logic app workflow that runs in single-tenant Azure Logic Apps. In most cases, the built-in version provides better performance, capabilities, pricing, and so on. In a few cases, some built-in connectors are available only in one logic app workflow type and not the other.
-For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server. A Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors.
+For example, a Standard workflow can use both managed connectors and built-in connectors for Azure Blob Storage, Azure Cosmos DB, Azure Event Hubs, Azure Service Bus, DB2, FTP, MQ, SFTP, and SQL Server. A Consumption workflow doesn't have the built-in versions. A Consumption workflow can use built-in connectors for Azure API Management, Azure App Services, and Batch, while a Standard workflow doesn't have these built-in connectors.
-Also, in Standard workflows, some [built-in connectors with specific attributes are informally known as *service providers*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Microsoft Entra ID, or a managed identity. All built-in connectors run in the same process as the Azure Logic Apps runtime. For more information, review [Single-tenant versus multi-tenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
+Also, in Standard workflows, some [built-in connectors with specific attributes are informally known as *service providers*](../logic-apps/custom-connector-overview.md#service-provider-interface-implementation). Some built-in connectors support only a single way to authenticate a connection to the underlying service. Other built-in connectors can offer a choice, such as using a connection string, Microsoft Entra ID, or a managed identity. All built-in connectors run in the same process as the Azure Logic Apps runtime. For more information, review [Single-tenant versus multitenant and integration service environment (ISE)](../logic-apps/single-tenant-overview-compare.md).
This article provides a general overview about built-in connectors in Consumption workflows versus Standard workflows.
The following table lists the current and expanding galleries of built-in connec
| Consumption | Standard | |-|-|
-| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | AS2 (v2) <br>Azure Automation* <br>Azure Blob* <br>Azure Cosmos DB* <br>Azure File Storage* <br>Azure Functions <br>Azure Queue* <br>Azure Table Storage* <br>Control <br>Data Operations <br>Date Time <br>DB2* <br>Event Hubs* <br>Flat File <br>FTP* <br>HTTP <br>IBM Host File* <br>Inline Code <br>Key Vault* <br>Liquid operations <br>MQ* <br>Request <br>Schedule <br>Service Bus* <br>SFTP* <br>SMTP* <br>SQL Server* <br>Variables <br>Workflow operations <br>XML operations |
-|||
+| Azure API Management<br>Azure App Services <br>Azure Functions <br>Azure Logic Apps <br>Batch <br>Control <br>Data Operations <br>Date Time <br>Flat File <br>HTTP <br>Inline Code <br>Integration Account <br>Liquid <br>Request <br>Schedule <br>Variables <br>XML | AS2 (v2) <br>Azure Automation* <br>Azure Blob Storage* <br>Azure Cosmos DB* <br>Azure File Storage* <br>Azure Functions <br>Azure Queue Storage* <br>Azure Table Storage* <br>Control <br>Data Operations <br>Date Time <br>DB2* <br>Event Grid Publisher* <br>Event Hubs* <br>File System* <br>Flat File <br>FTP* <br>HTTP <br>IBM Host File* <br>Inline Code <br>JDBC* <br>Key Vault* <br>Liquid operations <br>MQ* <br>Request <br>SAP* <br>Schedule <br>Service Bus* <br>SFTP* <br>SMTP* <br>SQL Server* <br>Variables <br>Workflow operations <br>XML operations |
<a name="service-provider-interface-implementation"></a>
In contrast, a built-in connector that's *not a service provider* has the follow
## Custom built-in connectors
-For Standard workflows, you can create your own built-in connector with the same [built-in connector extensibility model](../logic-apps/custom-connector-overview.md#built-in-connector-extensibility-model) that's used by service provider-based built-in connectors, such as Azure Blob, Azure Event Hubs, Azure Service Bus, SQL Server, and more. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and provides the capability for you to create custom built-in connectors that anyone can use in Standard workflows.
+For Standard workflows, you can create your own built-in connector with the same [built-in connector extensibility model](../logic-apps/custom-connector-overview.md#built-in-connector-extensibility-model) that's used by service provider-based built-in connectors, such as Azure Blob Storage, Azure Event Hubs, Azure Service Bus, SQL Server, and more. This interface implementation is based on the [Azure Functions extensibility model](../azure-functions/functions-bindings-register.md) and provides the capability for you to create custom built-in connectors that anyone can use in Standard workflows.
For Consumption workflows, you can't create your own built-in connectors, but you create your own managed connectors.
You can use the following built-in connectors to perform general tasks, for exam
:::row-end::: :::row::: :::column:::
- ![FTP icon][ftp-icon]
+ [![File System icon][file-system-icon]][file-system-doc]
\ \
- **FTP**<br>(*Standard workflow only*)
+ [**File System**][file-system-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Connect to a file system on your network machine to create and manage files.
+ :::column-end:::
+ :::column:::
+ [![FTP icon][ftp-icon]][ftp-doc]
+ \
+ \
+ [**FTP**][ftp-doc]<br>(*Standard workflow only*)
\ \ Connect to FTP or FTPS servers that you can access from the internet so that you can work with your files and folders. :::column-end::: :::column:::
- ![SFTP-SSH icon][sftp-ssh-icon]
+ [![SFTP-SSH icon][sftp-ssh-icon]][sftp-doc]
\ \
- **SFTP**<br>(*Standard workflow only*)
+ [**SFTP**][sftp-doc]<br>(*Standard workflow only*)
\ \ Connect to SFTP servers that you can access from the internet by using SSH so that you can work with your files and folders. :::column-end::: :::column:::
- ![SMTP icon][smtp-icon]
+ [![SMTP icon][smtp-icon]][smtp-doc]
\ \
- **SMTP**<br>(*Standard workflow only*)
+ [**SMTP**][smtp-doc]<br>(*Standard workflow only*)
\ \ Connect to SMTP servers so that you can send email.
You can use the following built-in connectors to access specific services and sy
When Swagger is included, the triggers and actions defined by these apps appear like any other first-class triggers and actions in Azure Logic Apps. :::column-end::: :::column:::
- ![Azure Blob icon][azure-blob-storage-icon]
+ [![Azure Automation icon][azure-automation-icon]][azure-automation-doc]
\ \
- **Azure Blob**<br>(*Standard workflow only*)
+ [**Azure Automation**][azure-automation-doc]<br>(*Standard workflow only*)
\ \
- Connect to your Azure Blob Storage account so you can create and manage blob content.
+ Connect to your Azure Automation accounts so you can create and manage Azure Automation jobs.
:::column-end::: :::column:::
- ![Azure Cosmos DB icon][azure-cosmos-db-icon]
+ [![Azure Blob Storage icon][azure-blob-storage-icon]][azure-blob-storage-doc]
\ \
- **Azure Cosmos DB**<br>(*Standard workflow only*)
+ [**Azure Blob Storage**][azure-blob-storage-doc]<br>(*Standard workflow only*)
\ \
- Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents.
+ Connect to your Azure Blob Storage account so you can create and manage blob content.
:::column-end::: :::row-end::: :::row::: :::column:::
- ![Azure Event Hubs icon][azure-event-hubs-icon]
+ [![Azure Cosmos DB icon][azure-cosmos-db-icon]][azure-cosmos-db-doc]
+ \
+ \
+ [**Azure Cosmos DB**][azure-cosmos-db-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents.
+ :::column-end:::
+ :::column:::
+ [![Azure Event Hubs icon][azure-event-hubs-icon]][azure-event-hubs-doc]
\ \
- **Azure Event Hubs**<br>(*Standard workflow only*)
+ [**Azure Event Hubs**][azure-event-hubs-doc]<br>(*Standard workflow only*)
\ \ Consume and publish events through an event hub. For example, get output from your workflow with Event Hubs, and then send that output to a real-time analytics provider. :::column-end::: :::column:::
- ![Azure File Storage icon][azure-file-storage-icon]
+ [![Azure File Storage icon][azure-file-storage-icon]][azure-file-storage-doc]
\ \
- **Azure File Storage**<br>(*Standard workflow only*)
+ [**Azure File Storage**][azure-file-storage-doc]<br>(*Standard workflow only*)
\ \ Connect to your Azure Storage account so that you can create, update, and manage files.
You can use the following built-in connectors to access specific services and sy
\ Call [Azure-hosted functions](../azure-functions/functions-overview.md) to run your own *code snippets* (C# or Node.js) within your workflow. :::column-end::: :::column:::
- ![Azure Key Vault icon][azure-key-vault-icon]
+ [![Azure Key Vault icon][azure-key-vault-icon]][azure-key-vault-doc]
\ \
- **Azure Key Vault**<br>(*Standard workflow only*)
+ [**Azure Key Vault**][azure-key-vault-doc]<br>(*Standard workflow only*)
\ \ Connect to Azure Key Vault to store, access, and manage secrets. :::column-end::: :::column::: [![Azure Logic Apps icon][azure-logic-apps-icon]][nested-logic-app-doc] \
You can use the following built-in connectors to access specific services and sy
Manage asynchronous messages, queues, sessions, topics, and topic subscriptions. :::column-end::: :::column:::
- ![Azure Table Storage icon][azure-table-storage-icon]
+ [![Azure Table Storage icon][azure-table-storage-icon]][azure-table-storage-doc]
\ \
- **Azure Table Storage**<br>(*Standard workflow only*)
+ [**Azure Table Storage**][azure-table-storage-doc]<br>(*Standard workflow only*)
\ \ Connect to your Azure Storage account so that you can create, update, query, and manage tables. :::column-end::: :::column:::
- ![Azure Queue Storage][azure-queue-storage-icon]
+ [![Azure Queue Storage][azure-queue-storage-icon]][azure-queue-storage-doc]
\ \
- **Azure Queue Storage**<br>(*Standard workflow only*)
+ [**Azure Queue Storage**][azure-queue-storage-doc]<br>(*Standard workflow only*)
\ \ Connect to your Azure Storage account so that you can create, update, and manage queues. :::column-end::: :::column:::
- ![IBM DB2 icon][ibm-db2-icon]
+ [![IBM DB2 icon][ibm-db2-icon]][ibm-db2-doc]
\ \
- **IBM DB2**<br>(*Standard workflow only*)
+ [**IBM DB2**][ibm-db2-doc]<br>(*Standard workflow only*)
\ \ Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more. :::column-end::: :::column:::
- ![IBM Host File icon][ibm-host-file-icon]
+ [![IBM Host File icon][ibm-host-file-icon]][ibm-host-file-doc]
\ \
- **IBM Host File**<br>(*Standard workflow only*)
+ [**IBM Host File**][ibm-host-file-doc]<br>(*Standard workflow only*)
\ \ Connect to IBM Host File and generate or parse contents. :::column-end::: :::column:::
- ![IBM MQ icon][ibm-mq-icon]
+ [![IBM MQ icon][ibm-mq-icon]][ibm-mq-doc]
\ \
- **IBM MQ**<br>(*Standard workflow only*)
+ [**IBM MQ**][ibm-mq-doc]<br>(*Standard workflow only*)
\ \ Connect to IBM MQ on-premises or in Azure to send and receive messages. :::column-end:::
+ :::column:::
+ [![JDBC icon][jdbc-icon]][jdbc-doc]
+ \
+ \
+ [**JDBC**][jdbc-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Connect to a relational database using JDBC drivers.
+ :::column-end:::
+ :::column:::
+ [![SAP icon][sap-icon]][sap-doc]
+ \
+ \
+ [**SAP**][sap-doc]<br>(*Standard workflow only*)
+ \
+ \
+ Connect to SAP so you can send or receive messages and invoke actions.
+ :::column-end:::
:::column::: [![SQL Server icon][sql-server-icon]][sql-server-doc] \
You can use the following built-in connectors to access specific services and sy
\ Connect to your SQL Server on premises or an Azure SQL Database in the cloud so that you can manage records, run stored procedures, or perform queries. :::column-end:::
+ :::column:::
+ :::column-end:::
:::row-end::: ## Run code from workflows
For more information, review the following documentation:
<!-- Built-in icons --> [azure-api-management-icon]: ./media/apis-list/azure-api-management.png [azure-app-services-icon]: ./media/apis-list/azure-app-services.png
+[azure-automation-icon]: ./media/apis-list/azure-automation.png
[azure-blob-storage-icon]: ./media/apis-list/azure-blob-storage.png [azure-cosmos-db-icon]: ./media/apis-list/azure-cosmos-db.png [azure-event-hubs-icon]: ./media/apis-list/azure-event-hubs.png
For more information, review the following documentation:
[data-operations-icon]: ./media/apis-list/data-operations.png [date-time-icon]: ./media/apis-list/date-time.png [for-each-icon]: ./media/apis-list/for-each-loop.png
+[file-system-icon]: ./media/apis-list/file-system.png
[ftp-icon]: ./media/apis-list/ftp.png [http-icon]: ./media/apis-list/http.png [http-request-icon]: ./media/apis-list/request.png
For more information, review the following documentation:
[ibm-host-file-icon]: ./media/apis-list/ibm-host-file.png [ibm-mq-icon]: ./media/apis-list/ibm-mq.png [inline-code-icon]: ./media/apis-list/inline-code.png
+[jdbc-icon]: ./media/apis-list/jdbc.png
+[sap-icon]: ./media/apis-list/sap.png
[schedule-icon]: ./media/apis-list/recurrence.png [scope-icon]: ./media/apis-list/scope.png [sftp-ssh-icon]: ./media/apis-list/sftp.png
For more information, review the following documentation:
<!--Built-in doc links--> [azure-api-management-doc]: ../api-management/get-started-create-service-instance.md "Create an Azure API Management service instance for managing and publishing your APIs" [azure-app-services-doc]: ../logic-apps/logic-apps-custom-api-host-deploy-call.md "Integrate logic app workflows with App Service API Apps"
-[azure-blob-storage-doc]: ./connectors-create-api-azureblobstorage.md "Manage files in your blob container with Azure Blob storage connector"
-[azure-cosmos-db-doc]: ./connectors-create-api-cosmos-db.md "Connect to Azure Cosmos DB so that you can access and manage Azure Cosmos DB documents"
-[azure-event-hubs-doc]: ./connectors-create-api-azure-event-hubs.md "Connect to Azure Event Hubs so that you can receive and send events between logic app workflows and Event Hubs"
+[azure-automation-doc]: /azure/logic-apps/connectors/built-in/reference/azureautomation/ "Connect to your Azure Automation accounts so you can create and manage Azure Automation jobs"
+[azure-blob-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azureblob/ "Manage files in your blob container with Azure Blob storage"
+[azure-cosmos-db-doc]: /azure/logic-apps/connectors/built-in/reference/azurecosmosdb/ "Connect to Azure Cosmos DB so you can access and manage Azure Cosmos DB documents"
+[azure-event-hubs-doc]: /azure/logic-apps/connectors/built-in/reference/eventhub/ "Connect to Azure Event Hubs so that you can receive and send events between logic app workflows and Event Hubs"
+[azure-file-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azurefile/ "Connect to Azure File Storage so you can create and manage files in your Azure storage account"
[azure-functions-doc]: ../logic-apps/logic-apps-azure-functions.md "Integrate logic app workflows with Azure Functions"
-[azure-service-bus-doc]: ./connectors-create-api-servicebus.md "Manage messages from Service Bus queues, topics, and topic subscriptions"
-[azure-table-storage-doc]: /connectors/azuretables/ "Connect to your Azure Storage account so that you can create, update, and query tables and more"
+[azure-key-vault-doc]: /azure/logic-apps/connectors/built-in/reference/keyvault/ "Connect to Azure Key Vault to securely store, access, and manage secrets"
+[azure-queue-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azurequeues/ "Connect to Azure Storage so you can create and manage queue entries and queues"
+[azure-service-bus-doc]: /azure/logic-apps/connectors/built-in/reference/servicebus/ "Manage messages from Service Bus queues, topics, and topic subscriptions"
+[azure-table-storage-doc]: /azure/logic-apps/connectors/built-in/reference/azuretables/ "Connect to Azure Storage so you can create, update, and query tables and more"
[batch-doc]: ../logic-apps/logic-apps-batch-process-send-receive-messages.md "Process messages in groups, or as batches" [condition-doc]: ../logic-apps/logic-apps-control-flow-conditional-statement.md "Evaluate a condition and run different actions based on whether the condition is true or false" [data-operations-doc]: ../logic-apps/logic-apps-perform-data-operations.md "Perform data operations such as filtering arrays or creating CSV and HTML tables"
+[event-grid-publisher-doc]: /azure/logic-apps/connectors/built-in/reference/eventgridpublisher/ "Connect to Azure Event Grid for event-based programming using pub-sub semantics"
+[file-system-doc]: /azure/logic-apps/connectors/built-in/reference/filesystem/ "Connect to a file system on your network machine to create and manage files"
[for-each-doc]: ../logic-apps/logic-apps-control-flow-loops.md#foreach-loop "Perform the same actions on every item in an array"
-[ftp-doc]: ./connectors-create-api-ftp.md "Connect to an FTP or FTPS server for FTP tasks, like uploading, getting, deleting files, and more"
+[ftp-doc]: /azure/logic-apps/connectors/built-in/reference/ftp/ "Connect to an FTP or FTPS server for FTP tasks, like uploading, getting, deleting files, and more"
[http-doc]: ./connectors-native-http.md "Call HTTP or HTTPS endpoints from your logic app workflows" [http-request-doc]: ./connectors-native-reqres.md "Receive HTTP requests in your logic app workflows" [http-response-doc]: ./connectors-native-reqres.md "Respond to HTTP requests from your logic app workflows" [http-swagger-doc]: ./connectors-native-http-swagger.md "Call REST endpoints from your logic app workflows" [http-webhook-doc]: ./connectors-native-webhook.md "Wait for specific events from HTTP or HTTPS endpoints"
-[ibm-db2-doc]: ./connectors-create-api-db2.md "Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more"
-[ibm-mq-doc]: ./connectors-create-api-mq.md "Connect to IBM MQ on-premises or in Azure to send and receive messages"
+[ibm-db2-doc]: /azure/logic-apps/connectors/built-in/reference/db2/ "Connect to IBM DB2 in the cloud or on-premises. Update a row, get a table, and more"
+[ibm-host-file-doc]: /azure/logic-apps/connectors/built-in/reference/hostfile/ "Connect to your IBM host to work with offline files"
+[ibm-mq-doc]: /azure/logic-apps/connectors/built-in/reference/mq/ "Connect to IBM MQ on-premises or in Azure to send and receive messages"
[inline-code-doc]: ../logic-apps/logic-apps-add-run-inline-code.md "Add and run JavaScript code snippets from your logic app workflows"
+[jdbc-doc]: /azure/logic-apps/connectors/built-in/reference/jdbc/ "Connect to a relational database using JDBC drivers"
[nested-logic-app-doc]: ../logic-apps/logic-apps-http-endpoint.md "Integrate logic app workflows with nested workflows" [query-doc]: ../logic-apps/logic-apps-perform-data-operations.md#filter-array-action "Select and filter arrays with the Query action"
+[sap-doc]: /azure/logic-apps/connectors/built-in/reference/sap/ "Connect to SAP so you can send or receive messages and invoke actions"
[schedule-doc]: ../logic-apps/concepts-schedule-automated-recurring-tasks-workflows.md "Run logic app workflows based a schedule" [schedule-delay-doc]: ./connectors-native-delay.md "Delay running the next action" [schedule-delay-until-doc]: ./connectors-native-delay.md "Delay running the next action" [schedule-recurrence-doc]: ./connectors-native-recurrence.md "Run logic app workflows on a recurring schedule" [schedule-sliding-window-doc]: ./connectors-native-sliding-window.md "Run logic app workflows that need to handle data in contiguous chunks" [scope-doc]: ../logic-apps/logic-apps-control-flow-run-steps-group-scopes.md "Organize actions into groups, which get their own status after the actions in group finish running"
-[sftp-ssh-doc]: ./connectors-sftp-ssh.md "Connect to your SFTP account by using SSH. Upload, get, delete files, and more"
-[sql-server-doc]: ./connectors-create-api-sqlazure.md "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in an SQL database table"
+[sftp-doc]: /azure/logic-apps/connectors/built-in/reference/sftp/ "Connect to your SFTP account by using SSH. Upload, get, delete files, and more"
+[smtp-doc]: /azure/logic-apps/connectors/built-in/reference/smtp/ "Connect to your SMTP server so you can send email"
+[sql-server-doc]: /azure/logic-apps/connectors/built-in/reference/sql/ "Connect to Azure SQL Database or SQL Server. Create, update, get, and delete entries in an SQL database table"
[switch-doc]: ../logic-apps/logic-apps-control-flow-switch-statement.md "Organize actions into cases, which are assigned unique values. Run only the case whose value matches the result from an expression, object, or token. If no matches exist, run the default case" [terminate-doc]: ../logic-apps/logic-apps-workflow-actions-triggers.md#terminate-action "Stop or cancel an actively running workflow for your logic app workflow" [until-doc]: ../logic-apps/logic-apps-control-flow-loops.md#until-loop "Repeat actions until the specified condition is true or some state has changed"
container-apps Ingress How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-how-to.md
You can expose additional TCP ports from your application. To learn more, see th
Adding additional TCP ports can be done through the CLI by referencing a YAML file with your TCP port configurations.

```azurecli
-az containerapp create
+az containerapp create \
    --name <app-name> \
    --resource-group <resource-group> \
    --yaml <your-yaml-file>
```
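To confirm that the extra TCP ports were applied, you can inspect the app's ingress configuration. The following is a minimal sketch, assuming the same `<app-name>` and `<resource-group>` placeholders used above; the exact shape of the returned ingress object depends on your configuration.

```azurecli
# Inspect the ingress configuration, where any additional TCP ports should be listed.
az containerapp show \
    --name <app-name> \
    --resource-group <resource-group> \
    --query properties.configuration.ingress
```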
cosmos-db How To Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-private-link.md
Title: Use Azure Private Link description: Use Azure Private Link to connect to Azure Cosmos DB for MongoDB vCore over a private endpoint in a virtual network.--++
Last updated 11/01/2023
# CustomerIntent: As a security administrator, I want to use Azure Private Link so that I can ensure that database connections occur over privately-managed virtual network endpoints.
-# Enable Private access in Azure Cosmos DB for MongoDB vCore
+# Use Azure Private Link in Azure Cosmos DB for MongoDB vCore
[!INCLUDE[MongoDB vCore](../../includes/appliesto-mongodb-vcore.md)]
To establish a connection, Azure Cosmos DB for MongoDB vCore with Private Link s
- An existing Azure Cosmos DB for MongoDB vCore cluster. - If you don't have an Azure subscription, [create an account for free](https://azure.microsoft.com/free). - If you have an existing Azure subscription, [create a new Azure Cosmos DB for MongoDB vCore cluster](quickstart-portal.md).
+- Access to an active virtual network and subnet.
+ - If you don't have a virtual network, [create a virtual network using the Azure portal](../../../virtual-network/quick-create-portal.md).
+- Verify your access to Azure Cosmos DB for MongoDB vCore private endpoints.
+ - If you don't have access, you can request it by following the steps in the next section.
-## Create a cluster with a private endpoint by using the Azure portal
+## Request access to Azure Cosmos DB for MongoDB vCore private endpoints by using the Azure portal
-Follow these steps to create a new Azure Cosmos DB for MongoDB vCore cluster with a private endpoint by using the Azure portal:
+To request access for a private endpoint for an existing Azure Cosmos DB for MongoDB vCore cluster, follow these steps using the Azure portal:
-1. Sign in to the [Azure portal](https://portal.azure.com), then select **Create a resource** in the upper left-hand corner of the Azure portal.
+1. Sign in to the [Azure portal](https://portal.azure.com), and search for **Preview Features** in the search bar.
-1. On the **Create a resource** page, select **Databases** and then select **Azure Cosmos DB**.
+1. Choose **Azure Cosmos DB for MongoDB vCore Private Endpoint** from the list of available options, and then select **Register**.
-1. On the Select API option page, on the **MongoDB** tile, select Create.
+1. You'll receive a notification when access to the private endpoint is granted.
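If you prefer to register the preview feature from the command line, here's a hedged sketch at the subscription level. The `Microsoft.DocumentDB` namespace matches the resource type used later in this article, but the exact feature name isn't documented here, so `<preview-feature-name>` is a placeholder for the name shown under **Preview Features** in the portal.

```azurecli
# Register the preview feature (the feature name is a placeholder, not documented here).
az feature register \
    --namespace Microsoft.DocumentDB \
    --name <preview-feature-name>

# Check the registration state until it reports "Registered".
az feature show \
    --namespace Microsoft.DocumentDB \
    --name <preview-feature-name> \
    --query properties.state
```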
-1. Choose the **vCore cluster** resource type.
-1. On the **Create an Azure Cosmos DB for MongoDB vCore** cluster page, select or create a **Resource group**, enter a **Cluster name** and Location, and enter and confirm the administrator Password.
+## Create a private endpoint by using the Azure portal
-1. Select Next: **Networking**.
+Follow these steps to create a private endpoint for an existing Azure Cosmos DB for MongoDB vCore cluster by using the Azure portal:
-1. Select **Networking** tab, for Connectivity method, select **Private access**.
+1. Sign in to the [Azure portal](https://portal.azure.com), then select an Azure Cosmos DB for MongoDB vCore cluster.
-1. On the Create private endpoint screen, enter or select appropriate values for:
+1. Select **Networking** from the list of settings, and then select **Visit Link Center** under the **Private Endpoints** section:
+
+1. In the **Create a private endpoint - Basics** pane, enter or select the following details:
| Setting | Value |
| - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
| Resource group | Select a resource group.|
+ | **Instance details** | |
| Name | Enter any name for your private endpoint. If this name is taken, create a unique one. |
| Network Interface name | Enter any name for your Network Interface. If this name is taken, create a unique one. |
- | Location | Select the region where you want to deploy Private Link. Create the private endpoint in the same location where your virtual network exists.|
- | Target subresource | Select the type of subresource for the resource selected previously that your private endpoint should have the ability to access. |
- | Virtual network | Select your virtual network. |
- | Subnet | Select your subnet. |
- | Integrate with private DNS zone | Select **Yes**. To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines. When you select yes for this option, a private DNS zone group is also created. DNS zone group is a link between the private DNS zone and the private endpoint. This link helps you to auto update the private DNS zone when there's an update to the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated. |
- | Configuration name |Select your subscription and resource group. The private DNS zone is determined automatically. You can't change it by using the Azure portal.|
-
-1. Select **Ok**.
+ | Region | Select the region where you want to deploy Private Link. Create the private endpoint in the same location where your virtual network exists.|
-1. Select **Next: Tags** > **Review + create**. On the **Review + create** page, then select **Create**.
+1. Select **Next: Resource**.
-## Enable private access on an existing cluster
+1. In the **Create a private endpoint - Resource** pane, enter or select the following details:
-To create a private endpoint in an existing cluster, open the
-**Networking** page for the cluster.
-
-1. Select **Add private endpoint**.
-
- :::image type="content" source="media/howto-private-access/networking.jpg" alt-text="Screenshot of selecting Add private endpoint on the Networking screen." lightbox="media/howto-private-access/networking.jpg":::
+ | Setting | Value |
+ | - | -- |
+ | Connection Method | Choose one of your resources or connect to someone else's resource with a resource ID or alias that is shared with you. |
+ | Subscription | Select the subscription containing the resource you're connecting to.|
+ | Resource Type | Select the resource type you're connecting to. |
+ | Resource | Select the resource you're connecting to. |
+ | Target subresource | Select the type of subresource for the resource selected previously that your private endpoint should have the ability to access. |
-2. On the **Basics** tab of the **Create a private endpoint** screen, confirm the **Subscription**, **Resource group**, and
- **Region**. Enter a **Name** for the endpoint, such as *my-cluster-1*, and a **Network interface name**, such as *my-cluster-1-nic*.
+1. Select **Next: Virtual Network**.
- > [!NOTE]
- >
- > Unless you have a good reason to choose otherwise, we recommend picking a
- > subscription and region that match those of your cluster. The
- > default values for the form fields might not be correct. Check them and
- > update if necessary.
+1. In the **Create a private endpoint - Virtual Network** pane, enter or select this information:
-3. Select **Next: Resource**. For **Target sub-resource**, choose the target
- node of the cluster. Usually **coordinator** is the desired node.
+ | Setting | Value |
+ | - | -- |
+ | Virtual network| Select your virtual network. |
+ | Subnet | Select your subnet. |
-4. Select **Next: Virtual Network**. Choose the desired **Virtual network** and
- **Subnet**. Under **Private IP configuration**, select **Statically allocate IP address** or keep the default, **Dynamically allocate IP address**.
+1. Select **Next: DNS**.
-5. Select **Next: DNS**.
+1. In the **Create a private endpoint - DNS** pane, enter or select this information:
-6. Under **Private DNS integration**, for **Integrate with private DNS zone**, keep the default **Yes** or select **No**.
+ | Setting | Value |
+ | - | -- |
+ | Integrate with private DNS zone | Select **Yes**. To connect privately with your private endpoint, you need a DNS record. We recommend that you integrate your private endpoint with a private DNS zone. You can also use your own DNS servers or create DNS records by using the host files on your virtual machines. When you select yes for this option, a private DNS zone group is also created. DNS zone group is a link between the private DNS zone and the private endpoint. This link helps you to auto update the private DNS zone when there's an update to the private endpoint. For example, when you add or remove regions, the private DNS zone is automatically updated. |
+ | Configuration name |Select your subscription and resource group. The private DNS zone is determined automatically. You can't change it by using the Azure portal.|
-7. Select **Next: Tags**, and add any desired tags.
+1. Select **Next: Tags** > **Review + create**. On the **Review + create** page, Azure validates your configuration.
-8. Select **Review + create**. Review the settings, and select
- **Create** when satisfied.
+1. When you see the **Validation passed** message, select **Create**.
+After a private endpoint is approved for an Azure Cosmos DB account, the **All networks** option in the **Firewall and virtual networks** pane of the Azure portal is unavailable.
## Create a private endpoint by using Azure CLI
az network private-link-resource list \
--type Microsoft.DocumentDB/mongoClusters ```
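For completeness, here's a hedged sketch of creating the private endpoint itself with the CLI once you know the group ID returned by the previous command. The placeholders, and the assumption that the connection targets a `Microsoft.DocumentDB/mongoClusters` resource, are illustrative rather than a documented recipe.

```azurecli
# Create a private endpoint that connects the subnet to the cluster.
# <group-id> is one of the values returned by 'az network private-link-resource list'.
az network private-endpoint create \
    --name <private-endpoint-name> \
    --resource-group <resource-group> \
    --vnet-name <vnet-name> \
    --subnet <subnet-name> \
    --private-connection-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.DocumentDB/mongoClusters/<cluster-name>" \
    --group-id <group-id> \
    --connection-name <connection-name>
```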
+## View private endpoints by using the Azure portal
+
+Follow these steps to view a private endpoint for an existing Azure Cosmos DB account by using the Azure portal:
+
+1. Sign in to the [Azure portal](https://portal.azure.com), then select **Private Link** under **Azure services**.
+
+1. Select **Private endpoints** from the list of settings to view all private endpoints.
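The portal isn't the only way to list endpoints; a minimal CLI sketch, assuming the endpoints live in the same resource group as the cluster, is:

```azurecli
# List private endpoints in the resource group in a readable table.
az network private-endpoint list \
    --resource-group <resource-group> \
    --output table
```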
+ ## Next step > [!div class="nextstepaction"]
cosmos-db Sdk Java V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/sdk-java-v2.md
+
+ Title: Java SDK (legacy) - Release notes and resources
+
+description: Review the Java API and SDK including release dates, retirement dates, and changes made between each version of this SDK for Azure Cosmos DB for NoSQL.
+++++
+ms.devlang: java
Last updated : 02/12/2024++
+# Azure Cosmos DB for NoSQL Java SDK (legacy): Release notes and resources
+++
+This article covers the Azure Cosmos DB Sync Java SDK v2 for the API for NoSQL. This SDK only supports synchronous operations.
+
+> [!IMPORTANT]
+> This is *not* the latest Java SDK for Azure Cosmos DB! We **strongly recommend** using [Azure Cosmos DB Java SDK v4](sdk-java-v4.md) for your project. To upgrade, follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide and the [Reactor vs RxJava](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/reactor-rxjava-guide.md) guide.
+
+> [!WARNING]
+> On February 29, 2024, the Azure Cosmos DB Sync Java SDK v2.x will be retired. Azure Cosmos DB will cease to provide further maintenance and support for this SDK after retirement. Follow the instructions in the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide to move to a supported SDK.
+
+| | Links |
+|||
+|**SDK Download**|[Maven](https://search.maven.org/#search%7Cgav%7C1%7Cg%3A%22com.microsoft.azure%22%20AND%20a%3A%22azure-documentdb%22)|
+|**API documentation**|[Java API reference documentation](/java/api/com.microsoft.azure.documentdb)|
+|**Contribute to SDK**|[GitHub](https://github.com/Azure/azure-documentdb-java/)|
+|**Get started**|[Get started with the Java SDK](./quickstart-java.md)|
+|**Web app tutorial**|[Web application development with Azure Cosmos DB](tutorial-java-web-app.md)|
+|**Minimum supported runtime**|[Java Development Kit (JDK) 7+](/java/azure/jdk/)|
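If you're not sure whether a Maven-based project still pulls this legacy artifact, here's a minimal sketch using the Maven dependency plugin. The artifact coordinates match the Maven link above; that your project uses Maven is an assumption.

```bash
# Print the dependency tree filtered to the legacy Sync Java SDK v2 artifact, if present.
mvn dependency:tree -Dincludes=com.microsoft.azure:azure-documentdb
```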
+
+## Release notes
+
+Here are the release notes for each version of the SDK.
+
+### 2.6.5
+
+- Removed test dependency `com.google.guava/guava` due to security vulnerabilities
+- Upgraded dependency `com.fasterxml.jackson.core/jackson-databind` to 2.14.0
+- Upgraded dependency `commons-codec/commons-codec` to 1.15
+- Upgraded dependency `org.json/json` to 20180130
+
+### 2.6.4
+
+- Fixed the retry policy for read timeouts
+
+### 2.6.3
+
+- Fixed a retry policy when `GoneException` is wrapped in `IllegalStateException`
+
+### 2.6.2
+
+- Added a new retry policy to retry on Read Timeouts
+- Upgraded dependency `com.fasterxml.jackson.core/jackson-databind` to 2.9.10.8
+- Upgraded dependency `org.apache.httpcomponents/httpclient` to 4.5.13
+
+### 2.6.1
+
+- Fixed a bug in handling a query through service interop.
+
+### 2.6.0
+
+- Added support for querying change feed from point in time.
+
+### 2.5.1
+
+- Fixes primary partition cache issue on documentCollection query.
+
+### 2.5.0
+
+- Added support for 449 retry custom configuration.
+
+### 2.4.7
+
+- Fixes connection pool timeout issue.
+- Fixes auth token refresh on internal retries.
+
+### 2.4.6
+
+- Updated correct client side replica policy tag on databaseAccount and made databaseAccount configuration reads from cache.
+
+### 2.4.5
+
+- If the user provides pkRangeId, this version avoids retry on invalid partition key range error
+
+### 2.4.4
+
+- Optimized partition key range cache refreshes.
+- Fixes the scenario where the SDK doesn't entertain partition split hint from server and results in incorrect client side routing caches refresh.
+
+### 2.4.2
+
+- Optimized collection cache refreshes.
+
+### 2.4.1
+
+- Added support to retrieve inner exception message from request diagnostic string.
+
+### 2.4.0
+
+- Introduced version API on PartitionKeyDefinition.
+
+### 2.3.0
+
+- Added separate timeout support for direct mode.
+
+### 2.2.3
+
+- Consuming null error message from service and producing document client exception.
+
+### 2.2.2
+
+- Socket connection improvement, adding SoKeepAlive default true.
+
+### 2.2.0
+
+- Added request diagnostics string support.
+
+### 2.1.3
+
+- Fixed bug in PartitionKey for Hash V2.
+
+### 2.1.2
+
+- Added support for composite indexes.
+- Fixed bug in global endpoint manager to force refresh.
+- Fixed bug for upsert operations with preconditions in direct mode.
+
+### 2.1.1
+
+- Fixed bug in gateway address cache.
+
+### 2.1.0
+
+- Multi-region writes support added for direct mode.
+- Added support for handling `IOExceptions` thrown as `ServiceUnavailable` exceptions, from a proxy.
+- Fixed a bug in endpoint discovery retry policy.
+- Fixed a bug to ensure null pointer exceptions aren't thrown in BaseDatabaseAccountConfigurationProvider.
+- Fixed a bug to ensure QueryIterator doesn't return nulls.
+- Fixed a bug to ensure large PartitionKey is allowed.
+
+### 2.0.0
+
+- Multi-region writes support added for gateway mode.
+
+### 1.16.4
+
+- Fixed a bug in Read partition Key ranges for a query.
+
+### 1.16.3
+
+- Fixed a bug in setting continuation token header size in DirectHttps mode.
+
+### 1.16.2
+
+- Added streaming failover support.
+- Added support for custom metadata.
+- Improved session handling logic.
+- Fixed a bug in partition key range cache.
+- Fixed a `NullPointerException` (NPE) bug in direct mode.
+
+### 1.16.1
+
+- Added support for Unique Index.
+- Added support for limiting continuation token size in feed-options.
+- Fixed a bug in Json Serialization (timestamp).
+- Fixed a bug in Json Serialization (enum).
+- Dependency on com.fasterxml.jackson.core:jackson-databind upgraded to 2.9.5.
+
+### 1.16.0
+
+- Improved Connection Pooling for Direct Mode.
+- Improved prefetch for non-ORDER BY cross-partition queries.
+- Improved UUID generation.
+- Improved Session consistency logic.
+- Added support for multipolygon.
+- Added support for Partition Key Range Statistics for Collection.
+- Fixed a bug in Multi-region support.
+
+### 1.15.0
+
+- Improved Json Serialization performance.
+- This SDK version requires the latest version of [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator).
+
+### 1.14.0
+
+- Internal changes for Microsoft friends libraries.
+
+### 1.13.0
+
+- Fixed an issue in reading single partition key ranges.
+- Fixed an issue in ResourceID parsing that affects databases with short names.
+- Fixed an issue caused by partition key encoding.
+
+### 1.12.0
+
+- Critical bug fixes to request processing during partition splits.
+- Fixed an issue with the Strong and BoundedStaleness consistency levels.
+
+### 1.11.0
+
+- Added support for a new consistency level called ConsistentPrefix.
+- Fixed a bug in reading collection in session mode.
+
+### 1.10.0
+
+- Enabled support for partitioned collection with as low as 2,500 RU/sec and scale in increments of 100 RU/sec.
+- Fixed a bug in the native assembly that can cause a NullRef exception in some queries.
+
+### 1.9.6
+
+- Fixed a bug in the query engine configuration that might cause exceptions for queries in Gateway mode.
+- Fixed a few bugs in the session container that might cause an "Owner resource not found" exception for requests immediately after collection creation.
+
+### 1.9.5
+
+- Added support for aggregation queries (COUNT, MIN, MAX, SUM, and AVG).
+- Added support for change feed.
+- Added support for collection quota information through RequestOptions.setPopulateQuotaInfo.
+- Added support for stored procedure script logging through RequestOptions.setScriptLoggingEnabled.
+- Fixed a bug where query in DirectHttps mode might stop responding when encountering throttle failures.
+- Fixed a bug in session consistency mode.
+- Fixed a bug that might cause a NullReferenceException in HttpContext when the request rate is high.
+- Improved performance of DirectHttps mode.
+
+### 1.9.4
+
+- Added simple client instance-based proxy support with ConnectionPolicy.setProxy() API.
+- Added DocumentClient.close() API to properly close down a DocumentClient instance.
+- Improved query performance in direct connectivity mode by deriving the query plan from the native assembly instead of the Gateway.
+- Set FAIL_ON_UNKNOWN_PROPERTIES = false so users don't need to define JsonIgnoreProperties in their Plain Old Java Object (POJO).
+- Refactored logging to use SLF4J.
+- Fixed a few other bugs in consistency reader.
+
+### 1.9.3
+
+- Fixed a bug in the connection management to prevent connection leaks in direct connectivity mode.
+- Fixed a bug in the TOP query where it might throw NullReference exception.
+- Improved performance by reducing the number of network calls for the internal caches.
+- Added status code, ActivityID, and Request URI in DocumentClientException for better troubleshooting.
+
+### 1.9.2
+
+- Fixed an issue in the connection management for stability.
+
+### 1.9.1
+
+- Added support for BoundedStaleness consistency level.
+- Added support for direct connectivity for CRUD operations for partitioned collections.
+- Fixed a bug in querying a database with SQL.
+- Fixed a bug in the session cache where session token might be set incorrectly.
+
+### 1.9.0
+
+- Added support for cross partition parallel queries.
+- Added support for TOP/ORDER BY queries for partitioned collections.
+- Added support for strong consistency.
+- Added support for name based requests when using direct connectivity.
+- Fixed to make ActivityId stay consistent across all request retries.
+- Fixed a bug related to the session cache when recreating a collection with the same name.
+- Added Polygon and LineString DataTypes while specifying collection indexing policy for geo-fencing spatial queries.
+- Fixed issues with Java Doc for Java 1.8.
+
+### 1.8.1
+
+- Fixed a bug in PartitionKeyDefinitionMap to cache single partition collections and not make extra fetch partition key requests.
+- Fixed a bug to not retry when an incorrect partition key value is provided.
+
+### 1.8.0
+
+- Added the support for multi-region database accounts.
+- Added support for automatic retry on throttled requests with options to customize the max retry attempts and max retry wait time. For more information, see RetryOptions and ConnectionPolicy.getRetryOptions().
+- Deprecated IPartitionResolver based custom partitioning code. Use partitioned collections for higher storage and throughput.
+
+### 1.7.1
+
+- Added retry policy support for rate limiting.
+
+### 1.7.0
+
+- Added time to live (TTL) support for documents.
+
+### 1.6.0
+
+- Implemented [partitioned collections](../partitioning-overview.md) and [user-defined performance levels](../performance-levels.md).
+
+### 1.5.1
+
+- Fixed a bug in HashPartitionResolver to generate hash values in little-endian to be consistent with other software development kits (SDKs).
+
+### 1.5.0
+
+- Add Hash & Range partition resolvers to assist with sharding applications across multiple partitions.
+
+### 1.4.0
+
+- Implement Upsert. New upsertXXX methods added to support Upsert feature.
+- Implement ID Based Routing. No public API changes, all changes internal.
+
+### 1.3.0
+
+- Release skipped to bring version number in alignment with other SDKs
+
+### 1.2.0
+
+- Supports GeoSpatial Index.
+- Validates ID property for all resources. IDs for resources can't contain the characters `?`, `/`, `#`, or `\`, or end with a space.
+- Adds new header "index transformation progress" to ResourceResponse.
+
+### 1.1.0
+
+- Implements V2 indexing policy
+
+### 1.0.0
+
+- GA SDK
+
+## Release and retirement dates
+
+Microsoft provides notification at least **12 months** in advance of retiring an SDK in order to smooth the transition to a newer, supported version. New features, functionality, and optimizations are added only to the current SDK. We recommend that you always upgrade to the latest SDK version as early as possible.
+
+> [!WARNING]
+> After May 30, 2020, Azure Cosmos DB will no longer make bug fixes, add new features, or provide support for versions 1.x of the Azure Cosmos DB Java SDK for API for NoSQL. If you prefer not to upgrade, requests sent from version 1.x of the SDK will continue to be served by the Azure Cosmos DB service.
+>
+> After February 29, 2016, Azure Cosmos DB will no longer make bug fixes, add new features, or provide support for versions 0.x of the Azure Cosmos DB Java SDK for API for NoSQL. If you prefer not to upgrade, requests sent from version 0.x of the SDK will continue to be served by the Azure Cosmos DB service.
+
+| Version | Release Date | Retirement Date |
+| | | |
+| [2.6.1](#261) |Dec 17, 2020 |Feb 29, 2024|
+| [2.6.0](#260) |July 16, 2020 |Feb 29, 2024|
+| [2.5.1](#251) |June 03, 2020 |Feb 29, 2024|
+| [2.5.0](#250) |May 12, 2020 |Feb 29, 2024|
+| [2.4.7](#247) |Feb 20, 2020 |Feb 29, 2024|
+| [2.4.6](#246) |Jan 24, 2020 |Feb 29, 2024|
+| [2.4.5](#245) |Nov 10, 2019 |Feb 29, 2024|
+| [2.4.4](#244) |Oct 24, 2019 |Feb 29, 2024|
+| [2.4.2](#242) |Sep 26, 2019 |Feb 29, 2024|
+| [2.4.1](#241) |Jul 18, 2019 |Feb 29, 2024|
+| [2.4.0](#240) |May 04, 2019 |Feb 29, 2024|
+| [2.3.0](#230) |Apr 24, 2019 |Feb 29, 2024|
+| [2.2.3](#223) |Apr 16, 2019 |Feb 29, 2024|
+| [2.2.2](#222) |Apr 05, 2019 |Feb 29, 2024|
+| [2.2.0](#220) |Mar 27, 2019 |Feb 29, 2024|
+| [2.1.3](#213) |Mar 13, 2019 |Feb 29, 2024|
+| [2.1.2](#212) |Mar 09, 2019 |Feb 29, 2024|
+| [2.1.1](#211) |Dec 13, 2018 |Feb 29, 2024|
+| [2.1.0](#210) |Nov 20, 2018 |Feb 29, 2024|
+| [2.0.0](#200) |Sept 21, 2018 |Feb 29, 2024|
+| [1.16.4](#1164) |Sept 10, 2018 |May 30, 2020 |
+| [1.16.3](#1163) |Sept 09, 2018 |May 30, 2020 |
+| [1.16.2](#1162) |June 29, 2018 |May 30, 2020 |
+| [1.16.1](#1161) |May 16, 2018 |May 30, 2020 |
+| [1.16.0](#1160) |March 15, 2018 |May 30, 2020 |
+| [1.15.0](#1150) |Nov 14, 2017 |May 30, 2020 |
+| [1.14.0](#1140) |Oct 28, 2017 |May 30, 2020 |
+| [1.13.0](#1130) |August 25, 2017 |May 30, 2020 |
+| [1.12.0](#1120) |July 11, 2017 |May 30, 2020 |
+| [1.11.0](#1110) |May 10, 2017 |May 30, 2020 |
+| [1.10.0](#1100) |March 11, 2017 |May 30, 2020 |
+| [1.9.6](#196) |February 21, 2017 |May 30, 2020 |
+| [1.9.5](#195) |January 31, 2017 |May 30, 2020 |
+| [1.9.4](#194) |November 24, 2016 |May 30, 2020 |
+| [1.9.3](#193) |October 30, 2016 |May 30, 2020 |
+| [1.9.2](#192) |October 28, 2016 |May 30, 2020 |
+| [1.9.1](#191) |October 26, 2016 |May 30, 2020 |
+| [1.9.0](#190) |October 03, 2016 |May 30, 2020 |
+| [1.8.1](#181) |June 30, 2016 |May 30, 2020 |
+| [1.8.0](#180) |June 14, 2016 |May 30, 2020 |
+| [1.7.1](#171) |April 30, 2016 |May 30, 2020 |
+| [1.7.0](#170) |April 27, 2016 |May 30, 2020 |
+| [1.6.0](#160) |March 29, 2016 |May 30, 2020 |
+| [1.5.1](#151) |December 31, 2015 |May 30, 2020 |
+| [1.5.0](#150) |December 04, 2015 |May 30, 2020 |
+| [1.4.0](#140) |October 05, 2015 |May 30, 2020 |
+| [1.3.0](#130) |October 05, 2015 |May 30, 2020 |
+| [1.2.0](#120) |August 05, 2015 |May 30, 2020 |
+| [1.1.0](#110) |July 09, 2015 |May 30, 2020 |
+| 1.0.1 |May 12, 2015 |May 30, 2020 |
+| [1.0.0](#100) |April 07, 2015 |May 30, 2020 |
+| 0.9.5-prerelease |Mar 09, 2015 |February 29, 2016 |
+| 0.9.4-prerelease |February 17, 2015 |February 29, 2016 |
+| 0.9.3-prerelease |January 13, 2015 |February 29, 2016 |
+| 0.9.2-prerelease |December 19, 2014 |February 29, 2016 |
+| 0.9.1-prerelease |December 19, 2014 |February 29, 2016 |
+| 0.9.0-prerelease |December 10, 2014 |February 29, 2016 |
+
+## Frequently asked questions
+
cosmos-db Product Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md
Updates that don't directly affect the internals of a cluster are rolled out g
Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### February 2024
+* General availability: [The latest minor PostgreSQL version updates](reference-versions.md#postgresql-versions) (12.18, 13.14, 14.11, 15.6, and 16.2) are now available.
+ * [The last update for PostgreSQL 11](./reference-versions.md#postgresql-version-11-and-older) was released by the community in November 2023.
* General availability: [Microsoft Entra authentication](./concepts-authentication.md#microsoft-entra-id-authentication-preview) is now supported in addition to Postgres roles in [all supported regions](./resources-regions.md). ### January 2024
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Previously updated : 01/22/2024 Last updated : 02/11/2024 # PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
The versions of each extension installed in a cluster sometimes differ based on
> | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 | 1.5 | > | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | > | [orafce](https://github.com/orafce/orafce) | Functions and operators that emulate a subset of functions and packages from the Oracle RDBMS. | | | | 4.9 | 4.9 | 4.9 |
-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.7.4 | 4.7.4 | 4.7.4 | 5.0.0 | 5.0.0 | 5.0.0 |
+> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.7.4 | 4.7.4 | 4.7.4 | 5.0.1 | 5.0.1 | 5.0.1 |
> | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 | 1.0 | > | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 | 1.6 | > | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 |
The versions of each extension installed in a cluster sometimes differ based on
> | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.11 | 1.12 | > | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.3 | 1.3 | 1.3 | 1.3 | > | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.4 |
-> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 | 1.5 |
+> | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.5 | 1.6 | 1.6 | 1.6 | 1.6 | 1.6 |
> | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | > | [pg\_prewarm](https://www.postgresql.org/docs/current/static/pgprewarm.html) | Provides a way to load relation data into the buffer cache. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | > | [pg\_stat\_statements](https://www.postgresql.org/docs/current/static/pgstatstatements.html) | Provides a means for tracking execution statistics of all SQL statements executed by a server. See the "pg_stat_statements" section for information about this extension. | 1.6 | 1.7 | 1.8 | 1.9 | 1.10 | 1.10 |
The versions of each extension installed in a cluster sometimes differ based on
> [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** | > |||||||
-> [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres | 0.5.1 | 0.5.1 | 0.5.1 | 0.5.1 | 0.5.1 | 0.5.1 |
+> [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres | 0.5.1 | 0.6.0 | 0.6.0 | 0.6.0 | 0.6.0 | 0.6.0 |
### PostGIS extensions > [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** | > |||||||
-> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-> | postgis\_sfcgal | PostGIS SFCGAL functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-> | postgis\_topology | PostGIS topology spatial types and functions. | 3.3.4 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 | 3.4.0 |
-
+> | [PostGIS](https://www.postgis.net/) | Spatial and geographic objects for PostgreSQL. | 3.3.4 | 3.4.1 | 3.4.1 | 3.4.1 | 3.4.1 | 3.4.1 |
+> | address\_standardizer | Used to parse an address into constituent elements. Used to support geocoding address normalization step. | 3.3.4 | 3.4.1 | 3.4.1 | 3.4.1 | 3.4.1 | 3.4.1 |
+> | postgis\_sfcgal | PostGIS SFCGAL functions. | 3.3.4 | 3.4.1 | 3.4.1 | 3.4.1 | 3.4.1 | 3.4.1 |
+> | postgis\_topology | PostGIS topology spatial types and functions. | 3.3.4 | 3.4.1 | 3.4.1 | 3.4.1 | 3.4.1 | 3.4.1 |
## pg_stat_statements

The [pg\_stat\_statements extension](https://www.postgresql.org/docs/current/pgstatstatements.html) is preloaded on every Azure Cosmos DB for PostgreSQL cluster to provide you with a means of tracking execution statistics of SQL statements.
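As a quick illustration of what the extension surfaces, here's a hedged sketch of querying the view from `psql`, assuming PostgreSQL 13 or later (older versions name the timing column `total_time` instead of `total_exec_time`) and a `PGCONNSTRING` environment variable holding your cluster connection string.

```bash
# Show the five statements that have consumed the most execution time.
psql "$PGCONNSTRING" -c "SELECT query, calls, total_exec_time FROM pg_stat_statements ORDER BY total_exec_time DESC LIMIT 5;"
```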
utl_file functions are disabled in orafce extension.
## Next steps
-* Learn about [supported PostgreSQL versions](./reference-versions.md#postgresql-versions).
+* Learn about [supported PostgreSQL versions](./reference-versions.md#postgresql-versions).
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
Previously updated : 11/20/2023 Last updated : 02/11/2024 # Supported database versions in Azure Cosmos DB for PostgreSQL
customizable during creation and can be upgraded in-place once the cluster is cr
### PostgreSQL version 16
-The current minor release is 16.1. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/16.1/) to
+The current minor release is 16.2. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/16.2/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 15
-The current minor release is 15.5. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/15.5/) to
+The current minor release is 15.6. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/15.6/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 14
-The current minor release is 14.10. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/14.10/) to
+The current minor release is 14.11. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/14.11/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 13
-The current minor release is 13.13. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/13.13/) to
+The current minor release is 13.14. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/13.14/) to
learn more about improvements and fixes in this minor release. ### PostgreSQL version 12
-The current minor release is 12.17. Refer to the [PostgreSQL
-documentation](https://www.postgresql.org/docs/release/12.17/) to
+The current minor release is 12.18. Refer to the [PostgreSQL
+documentation](https://www.postgresql.org/docs/release/12.18/) to
learn more about improvements and fixes in this minor release.
-### PostgreSQL version 11
+### PostgreSQL version 11 and older
+
+We don't support PostgreSQL version 11 and older for Azure Cosmos DB for PostgreSQL.
> [!CAUTION] > PostgreSQL community ended support for PostgreSQL 11 on November 9, 2023. See [restrictions](./reference-versions.md#retired-postgresql-engine-versions-not-supported-in-azure-cosmos-db-for-postgresql) that apply to the retired PostgreSQL major versions in Azure Cosmos DB for PostgreSQL. Learn about [in-place upgrades for major PostgreSQL versions](./concepts-upgrade.md) in Azure Cosmos DB for PostgreSQL. The *final* minor release is 11.22. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/release/11.22/) to
-learn more about improvements and fixes in this minor release.
-
-### PostgreSQL version 10 and older
-
-We don't support PostgreSQL version 10 and older for Azure Cosmos DB for PostgreSQL.
+learn more about improvements and fixes in this last minor release.
## PostgreSQL version syntax
You may continue to run the retired version in Azure Cosmos DB for PostgreSQL.
However, note the following restrictions after the retirement date for each PostgreSQL database version:
-- As the community will not be releasing any further bug fixes or security fixes, Azure Cosmos DB for PostgreSQL will not patch the retired database engine for any bugs or security issues, or otherwise take security measures with regard to the retired database engine. You may experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
-- If any support issue you may experience relates to the PostgreSQL engine itself, as the community no longer provides the patches, we may not be able to provide you with support. In such cases, you will have to upgrade your database to one of the supported versions.
-- You will not be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
-- New service capabilities developed by Azure Cosmos DB for PostgreSQL may only be available to supported database server versions.
+- As the community won't be releasing any further bug fixes or security fixes, Azure Cosmos DB for PostgreSQL won't patch the retired database engine for any bugs or security issues, or otherwise take security measures with regard to the retired database engine. You might experience security vulnerabilities or other issues as a result. However, Azure will continue to perform periodic maintenance and patching for the host, OS, containers, and any other service-related components.
+- If any support issue you might experience relates to the PostgreSQL engine itself, as the community no longer provides the patches, we might not be able to provide you with support. In such cases, you will have to upgrade your database to one of the supported versions.
+- You won't be able to create new database servers for the retired version. However, you will be able to perform point-in-time recoveries and create read replicas for your existing servers.
+- New service capabilities developed by Azure Cosmos DB for PostgreSQL might only be available to supported database server versions.
- Uptime SLAs will apply solely to Azure Cosmos DB for PostgreSQL service-related issues and not to any downtime caused by database engine-related bugs.
-- In the extreme event of a serious threat to the service caused by the PostgreSQL database engine vulnerability identified in the retired database version, Azure may choose to stop your database server to secure the service. In such case, you will be notified [to upgrade the server](./howto-upgrade.md) before bringing the server online.
+- In the extreme event of a serious threat to the service caused by the PostgreSQL database engine vulnerability identified in the retired database version, Azure might choose to stop your database server to secure the service. In such case, you will be notified [to upgrade the server](./howto-upgrade.md) before bringing the server online.
## Citus and other extension versions
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Defender for Cloud also offers vulnerability assessment for your:
- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports) - Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)-- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
+- [Vulnerability assessments for AWS with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md)
defender-for-cloud Concept Aws Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-aws-connector.md
- Title: Defender for Cloud's AWS connector
-description: Conceptual information pulled from AWS connector article.
- Previously updated : 06/29/2023--
-# Defender for Cloud's AWS connector
-
-To protect your AWS-based resources, you must [connect your AWS account](quickstart-onboard-aws.md) using the built-in connector. The connector provides an agentless connection to your AWS account that you can extend with Defender for Cloud's Defender plans to secure your AWS resources:
--- [**Cloud Security Posture Management (CSPM)**](overview-page.md) assesses your AWS resources according to AWS-specific security recommendations and reflects your security posture in your secure score. The [asset inventory](asset-inventory.md) gives you one place to see all of your protected AWS resources. The [regulatory compliance dashboard](regulatory-compliance-dashboard.md) shows your compliance with built-in standards specific to AWS, including AWS CIS, AWS PCI DSS, and AWS Foundational Security Best Practices.--- [**Microsoft Defender for Servers**](defender-for-servers-introduction.md) brings threat detection and advanced defenses to [supported Windows and Linux EC2 instances](supported-machines-endpoint-solutions-clouds-servers.md?tabs=tab/features-multicloud).--- [**Microsoft Defender for Containers**](defender-for-containers-introduction.md) brings threat detection and advanced defenses to [supported Amazon EKS clusters](supported-machines-endpoint-solutions-clouds-containers.md).--- [**Microsoft Defender for SQL**](defender-for-sql-introduction.md) brings threat detection and advanced defenses to your SQL Servers running on AWS EC2, AWS RDS Custom for SQL Server.-
-## AWS authentication process
-
-Federated authentication is used between Microsoft Defender for Cloud and AWS. All of the resources related to the authentication are created as a part of the CloudFormation template deployment, including:
--- An identity provider (OpenID connect)-- Identity and Access Management (IAM) roles with a federated principal (connected to the identity providers).-
-The architecture of the authentication process across clouds is as follows:
--
-1. Microsoft Defender for Cloud CSPM service acquires a Microsoft Entra token with a validity life time of 1 hour that is signed by the Microsoft Entra ID using the RS256 algorithm.
-
-1. The Microsoft Entra token is exchanged with AWS short living credentials and Defender for Cloud's CSPM service assumes the CSPM IAM role (assumed with web identity).
-
-1. Since the principal of the role is a federated identity as defined in a trust relationship policy, the AWS identity provider validates the Microsoft Entra token against the Microsoft Entra ID through a process that includes:
- - audience validation
- - token digital signature validation
- - certificate thumbprint
-
-1. The Microsoft Defender for Cloud CSPM role is assumed only after the validation conditions defined at the trust relationship have been met. The conditions defined for the role level are used for validation within AWS and allows only the Microsoft Defender for Cloud CSPM application (validated audience) access to the specific role (and not any other Microsoft token).
-
-1. After the Microsoft Entra token is validated by the AWS identity provider, the AWS STS exchanges the token with AWS short-living credentials which the CSPM service uses to scan the AWS account.
-
-## Native connector plan requirements
-
-Each plan has its own requirements for the native connector.
-
-### Defender for Containers plan
--- At least one Amazon EKS cluster with permission to access to the EKS K8s API server. If you need to create a new EKS cluster, follow the instructions in [Getting started with Amazon EKS ΓÇô eksctl](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html).--- The resource capacity to create a new SQS queue, Kinesis Fire Hose delivery stream, and S3 bucket in the cluster's region.-
-### Defender for SQL plan
--- Microsoft Defender for SQL enabled on your subscription. Learn how to [enable protection on all of your databases](quickstart-enable-database-protections.md).--- An active AWS account, with EC2 instances running SQL server or RDS Custom for SQL Server.--- Azure Arc for servers installed on your EC2 instances/RDS Custom for SQL Server.
- - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances.
-
- Auto provisioning managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent preinstalled. If you already have the SSM agent preinstalled, the AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you need to install it using either of the following relevant instructions from Amazon:
-
- - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
-
- > [!NOTE]
- > To enable the Azure Arc auto-provisioning, you'll need **Owner** permission on the relevant Azure subscription.
--- Other extensions should be enabled on the Arc-connected machines:
- - Microsoft Defender for Endpoint
- - VA solution (TVM/Qualys)
- - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA)
-
- Make sure the selected LA workspace has security solution installed. The LA agent and AMA are currently configured in the subscription level. All of your AWS accounts and GCP projects under the same subscription inherit the subscription settings for the LA agent and AMA.
-
- Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
-
-### Defender for Servers plan
--- Microsoft Defender for Servers enabled on your subscription. Learn how to [enable plans](enable-all-plans.md).--- An active AWS account, with EC2 instances.--- Azure Arc for servers installed on your EC2 instances.
- - (Recommended) Use the auto provisioning process to install Azure Arc on all of your existing and future EC2 instances.
-
- Auto provisioning managed by AWS Systems Manager (SSM) using the SSM agent. Some Amazon Machine Images (AMIs) already have the SSM agent preinstalled. If that is the case, their AMIs are listed in [AMIs with SSM Agent preinstalled](https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-technical-details.html#ami-preinstalled-agent). If your EC2 instances don't have the SSM Agent, you need to install it using either of the following relevant instructions from Amazon:
-
- - [Install SSM Agent for a hybrid environment (Windows)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-win.html)
-
- - [Install SSM Agent for a hybrid environment (Linux)](https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-install-managed-linux.html)
-
- > [!NOTE]
- > To enable the Azure Arc auto-provisioning, you'll need an **Owner** permission on the relevant Azure subscription.
-
- - If you want to manually install Azure Arc on your existing and future EC2 instances, use the [EC2 instances should be connected to Azure Arc](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/231dee23-84db-44d2-bd9d-c32fbcfb42a3) recommendation to identify instances that don't have Azure Arc installed.
--- Other extensions should be enabled on the Arc-connected machines:
- - Microsoft Defender for Endpoint
- - VA solution (TVM/Qualys)
- - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA)
-
- Make sure the selected LA workspace has security solution installed. The LA agent and AMA are currently configured in the subscription level. All of your AWS accounts and GCP projects under the same subscription inherit the subscription settings for the LA agent and AMA.
-
- Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
-
- > [!NOTE]
- > Defender for Servers assigns tags to your AWS resources to manage the auto-provisioning process. You must have these tags properly assigned to your resources so that Defender for Cloud can manage your resources:
- **AccountId**, **Cloud**, **InstanceId**, **MDFCSecurityConnector**
-
-## Learn more
-
-You can check out the following blogs:
--- [Ignite 2021: Microsoft Defender for Cloud news](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/ignite-2021-microsoft-defender-for-cloud-news/ba-p/2882807).-- [Security posture management and server protection for AWS and GCP](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)-
-## Next steps
-
-Connecting your AWS account is part of the multicloud experience available in Microsoft Defender for Cloud.
--- [Protect all of your resources with Defender for Cloud](enable-all-plans.md)
defender-for-cloud Concept Gcp Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-gcp-connector.md
- Title: Defender for Cloud's GCP connector
-description: Learn how the GCP connector works on Microsoft Defender for Cloud.
-- Previously updated : 06/29/2023--
-# Defender for Cloud's GCP connector
-
-The Microsoft Defender for Cloud GCP (Google Cloud Platform) connector is a feature that allows an organization to extend its cloud security posture management to their Google Cloud environments.
-
-The GCP connector allows organizations to use Microsoft Defender for Cloud to monitor and assess the security state of their Google Cloud resources. The connector allows organizations to use Microsoft Defender for Cloud to apply security policies and receive security recommendations for their Google Cloud resources.
-
-The GCP connector allows for continuous monitoring of Google Cloud resources for security risks, vulnerabilities, and misconfigurations. It also provides automated remediation capabilities to address identified risks and compliance issues. Additionally, it allows organizations to use the Microsoft Defender for Cloud's integrated threat protection capabilities to protect their Google Cloud resources from threats.
-
-## GCP authorization design
-
-The authentication process between Microsoft Defender for Cloud and GCP is a federated authentication process.
-
-When you onboard to Defender for Cloud, the GCloud template is used to create the following resources as part of the authentication process:
--- Workload identity pool and providers--- Service accounts and policy bindings-
-The authentication process works as follows:
--
-(1) - Microsoft Defender for Cloud's CSPM service acquires a Microsoft Entra token. The token is signed by Microsoft Entra ID using the RS256 algorithm and is valid for 1 hour.
-
-(2) - The Microsoft Entra token is exchanged with Google's STS token.
-
-(3) - Google STS validates the token with the workload identity provider. The Microsoft Entra token is sent to Google's STS that validates the token with the workload identity provider. Audience validation then occurs and the token is signed. A Google STS token is then returned to Defender for Cloud's CSPM service.
-
-(4) - Defender for Cloud's CSPM service uses the Google STS token to impersonate the service account. Defender for Cloud's CSPM receives service account credentials that are used to scan the project.
-
-## What happens when you onboard a single project
-
-There are four parts to the onboarding process that take place when you create the security connection between your GCP project and Microsoft Defender for Cloud.
-
-### Organization details
-
-In the first section, you need to add the basic properties of the connection between your GCP project and Defender for Cloud.
--
-Here you name your connector, select a subscription and resource group, which is used to create an ARM template resource that is called security connector. The security connector represents a configuration resource that holds the projects settings.
-
-You can also select a location and add the organization ID for your project.
-
-### Select plans
-
-After entering your organization's details, you'll then be able to select which plans to enable.
--
-From here, you can decide which resources you want to protect based on the security value you want to receive.
-
-### Configure access
-
-Once you selected the plans, you want to enable and the resources you want to protect you have to configure access between Defender for Cloud and your GCP project.
--
-In this step, you can find the GCloud script that needs to be run on the GCP project that is going to onboarded. The GCloud script is generated based on the plans you selected to onboard.
-
-The GCloud script creates all of the required resources on your GCP environment so that Defender for Cloud can operate and provide the following security values:
--- Workload identity pool-- Workload identity provider (per plan)-- Service accounts-- Project level policy bindings (service account has access only to the specific project)-
-### Review and generate
-
-The final step for onboarding is to review all of your selections and to create the connector.
--
-## What happens when you onboard an organization
-
-Similar to onboarding a single project, When onboarding a GCP organization, Defender for Cloud creates a security connector for each project under the organization (unless specific projects were excluded).
-
-### Organization details
-
-In the first section, you need to add the basic properties of the connection between your GCP organization and Defender for Cloud.
--
-Here you name your connector, select a subscription and resource group that is used to create an ARM template resource that is called security connector. The security connector represents a configuration resource that holds the projects settings.
-
-You also select a location and add the organization ID for your project.
-
-When you onboard an organization, you can also choose to exclude project numbers and folder IDs.
-
-### Select plans
-
-After entering your organization's details, you'll then be able to select which plans to enable.
--
-From here, you can decide which resources you want to protect based on the security value you want to receive.
-
-### Configure access
-
-Once you selected the plans, you want to enable and the resources you want to protect you have to configure access between Defender for Cloud and your GCP project.
--
-When you onboard an organization, there's a section that includes management project details. Similar to other GCP projects, the organization is also considered a project and is utilized by Defender for Cloud to create all of the required resources needed to connect the organization to Defender for Cloud.
-
-In the management project details section, you have the choice of:
--- Dedicating a management project for Defender for Cloud to include in the GCloud script.-- Provide the details of an already existing project to be used as the management project with Defender for Cloud. -
-You need to decide what is your best option for your organization's architecture. We recommend creating a dedicated project for Defender for Cloud.
-
-The GCloud script is generated based on the plans you selected to onboard. The script creates all of the required resources on your GCP environment so that Defender for Cloud can operate and provide the following security benefits:
--- Workload identity pool-- Workload identity provider for each plan-- Custom role to grant Defender for Cloud access to discover and get the project under the onboarded organization-- A service account for each plan-- A service account for the autoprovisioning service-- Organization level policy bindings for each service account-- API enablement(s) at the management project level. -
-Some of the APIs aren't in direct use with the management project. Instead the APIs authenticate through this project and use one of the API(s) from another project. The API must be enabled on the management project.
-
-### Review and generate
-
-The final step for onboarding is to review all of your selections and to create the connector.
--
-## Next steps
-
-[Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md)
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Learn more about using these scanners:
- [Find vulnerabilities with Microsoft Defender Vulnerability Management](deploy-vulnerability-assessment-defender-vulnerability-management.md) - [Find vulnerabilities with the integrated Qualys scanner](deploy-vulnerability-assessment-vm.md) - [Scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)-- [Scan your ECR images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md) - [Scan your SQL resources for vulnerabilities](defender-for-sql-on-machines-vulnerability-assessment.md) Findings for each resource type are reported in separate recommendations:
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
You can check out the following blogs:
Now that you enabled Defender for Containers, you can: - [Scan your ACR images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)-- [Scan your Amazon AWS ECR images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
+- [Scan your AWS images for vulnerabilities with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md)
+- [Scan your GCP images for vulnerabilities with Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-gcp.md)
- Check out [common questions](faq-defender-for-containers.yml) about Defender for Containers.
defender-for-cloud Defender For Containers Vulnerability Assessment Elastic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-vulnerability-assessment-elastic.md
- Title: Use Defender for Containers to scan your AWS ECR images for vulnerabilities powered by Trivy (Deprecated)
-description: Learn how to use Defender for Containers to scan images in your Amazon AWS Elastic Container Registry (ECR) to find vulnerabilities.
-- Previously updated : 06/14/2023---
-# Use Defender for Containers to scan your AWS ECR images for vulnerabilities powered by Trivy (Deprecated)
-
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is nearing End Of Life (EOL) status. Please consider your use and planning accordingly.
-
-Defender for Containers lets you scan the container images stored in your Amazon AWS Elastic Container Registry (ECR) as part of the protections provided within Microsoft Defender for Cloud.
-
-To enable scanning of vulnerabilities in containers, you have to [connect your AWS account to Defender for Cloud](quickstart-onboard-aws.md) and [enable Defender for Containers](defender-for-containers-enable.md). The agentless scanner, powered by the open-source scanner Trivy, scans your ECR repositories and reports vulnerabilities.
-
-Defender for Containers creates resources in your AWS account to build an inventory of the software in your images. The scan then sends only the software inventory to Defender for Cloud. This architecture protects your information privacy and intellectual property, and also keeps the outbound network traffic to a minimum.
-
-These resources are created under us-east-1 and eu-central-1 in each AWS account where container vulnerability assessment is enabled:
--- **S3 bucket** with the prefix `defender-for-containers-va`-- **ECS cluster** with the name `defender-for-containers-va`-- **VPC**
- - Tag `name` with the value `defender-for-containers-va`
- - IP subnet CIDR 10.0.0.0/16
- - Associated with **default security group** with the tag `name` and the value `defender-for-containers-va` that has one rule of all incoming traffic.
- - **Subnet** with the tag `name` and the value `defender-for-containers-va` in the `defender-for-containers-va` VPC with the CIDR 10.0.1.0/24 IP subnet used by the ECS cluster `defender-for-containers-va`
- - **Internet Gateway** with the tag `name` and the value `defender-for-containers-va`
- - **Route table** - Route table with the tag `name` and value `defender-for-containers-va`, and with these routes:
- - Destination: `0.0.0.0/0`; Target: Internet Gateway with the tag `name` and the value `defender-for-containers-va`
- - Destination: `10.0.0.0/16`; Target: `local`
-
-Defender for Cloud filters and classifies findings from the software inventory that the scanner creates. Images without vulnerabilities are marked as healthy and Defender for Cloud doesn't send notifications about healthy images to keep you from getting unwanted informational alerts.
-
-The triggers for an image scan are:
--- **On push** - Whenever an image is pushed to your registry, Defender for Containers automatically scans that image within 2 hours.--- **Continuous scan** - Defender for Containers reassesses the images based on the latest database of vulnerabilities of Trivy. This reassessment is performed twice a day for 90 days after an image is pushed to the registry.-
-## Prerequisites
-
-Before you can scan your ECR images:
--- [Connect your AWS account to Defender for Cloud and enable Defender for Containers](quickstart-onboard-aws.md)-- You must have at least one free VPC in the `us-east-1` and `eu-central-1` regions to host the AWS resources that build the software inventory.-
-For a list of the types of images not supported by Microsoft Defender for Containers, see [Availability](supported-machines-endpoint-solutions-clouds-containers.md?tabs=aws-eks#images).
-
-## Enable vulnerability assessment
-
-To enable vulnerability assessment:
-
-1. From Defender for Cloud's menu, open **Environment settings**.
-1. Select the AWS connector that connects to your AWS account.
-
- :::image type="content" source="media/defender-for-kubernetes-intro/select-aws-connector.png" alt-text="Screenshot of Defender for Cloud's environment settings page showing an AWS connector.":::
-
-1. In the Monitoring Coverage section of the Containers plan, select **Settings**.
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/aws-containers-settings.png" alt-text="Screenshot of Containers settings for the AWS connector." lightbox="media/defender-for-containers-vulnerability-assessment-elastic/aws-containers-settings.png":::
-
-1. Turn on **Vulnerability assessment**.
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/aws-containers-enable-va.png" alt-text="Screenshot of the toggle to turn on vulnerability assessment for ECR images.":::
-
-1. Select **Save** > **Next: Configure access**.
-
-1. Download the CloudFormation template.
-
-1. Using the downloaded CloudFormation template, create the stack in AWS as instructed on screen. If you're onboarding a management account, you need to run the CloudFormation template both as Stack and as StackSet. It takes up to 30 minutes for the AWS resources to be created. The resources have the prefix `defender-for-containers-va`.
-
-1. Select **Next: Review and generate**.
-
-1. Select **Update**.
-
-Findings are available as Defender for Cloud recommendations from 2 hours after vulnerability assessment is turned on. The recommendation also shows any reason that a repository is identified as not scannable ("Not applicable"), such as images pushed more than three months before you enabled vulnerability assessment.
-
-## View and remediate findings
-
-Vulnerability assessment lists the repositories with vulnerable images as the results of the [AWS registry container images should have vulnerabilities resolved - (powered by Trivy)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03587042-5d4b-44ff-af42-ae99e3c71c87) recommendation. From the recommendation, you can identify vulnerable images and get details about the vulnerabilities.
-
-Vulnerability findings for an image are still shown in the recommendation for 48 hours after an image is deleted.
-
-1. To view the findings, open the **Recommendations** page. If the scan found issues, you'll see the recommendation [AWS registry container images should have vulnerabilities resolved - (powered by Trivy)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03587042-5d4b-44ff-af42-ae99e3c71c87).
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-recommendation.png" alt-text="Screenshot of the Recommendation to remediate findings in ECR images.":::
-
-1. Select the recommendation.
-
- The recommendation details page opens with additional information. This information includes the list of repositories with vulnerable images ("Affected resources") and the remediation steps.
-
-1. Select specific repositories to the vulnerabilities found in images in those repositories.
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-unhealthy-repositories.png" alt-text="Screenshot of ECR repositories that have vulnerabilities." lightbox="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-unhealthy-repositories.png":::
-
- The vulnerabilities section shows the identified vulnerabilities.
-
-1. To learn more about a vulnerability, select the vulnerability.
-
- The vulnerability details pane opens.
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-vulnerability.png" alt-text="Screenshot of vulnerability details in ECR repositories." lightbox="media/defender-for-containers-vulnerability-assessment-elastic/elastic-container-registry-vulnerability.png":::
-
- This pane includes a detailed description of the issue and links to external resources to help mitigate the threats.
-
-1. Follow the steps in the remediation section of the recommendation.
-
-1. When you've taken the steps required to remediate the security issue, replace the image in your registry:
-
- 1. Push the updated image to trigger a scan.
-
- 1. Check the recommendations page for the recommendation [AWS registry container images should have vulnerabilities resolved - (powered by Trivy)](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/03587042-5d4b-44ff-af42-ae99e3c71c8).
-
- If the recommendation still appears and the image you've handled still appears in the list of vulnerable images, check the remediation steps again.
-
- 1. When you're sure the updated image has been pushed, scanned, and is no longer appearing in the recommendation, delete the "old" vulnerable image from your registry.
-
-<!--
-## Disable specific findings
-
-> [!NOTE]
-> [!INCLUDE [Legalese](../../includes/defender-for-cloud-preview-legal-text.md)]
-
-If you have an organizational need to ignore a finding, rather than remediate it, you can optionally disable it. Disabled findings don't affect your secure score or generate unwanted noise.
-
-When a finding matches the criteria you've defined in your disable rules, it won't appear in the list of findings. Typical scenarios include:
--- Disable findings with severity below medium-- Disable findings that are non-patchable-- Disable findings with CVSS score below 6.5-- Disable findings with specific text in the security check or category (for example, ΓÇ£RedHatΓÇ¥, ΓÇ£CentOS Security Update for sudoΓÇ¥)-
-> [!IMPORTANT]
-> To create a rule, you need permissions to edit a policy in Azure Policy.
->
-> Learn more in [Azure RBAC permissions in Azure Policy](../governance/policy/overview.md#azure-rbac-permissions-in-azure-policy).
-
-You can use any of the following criteria:
--- Finding ID-- Category-- Security check-- CVSS v3 scores-- Severity-- Patchable status-
-To create a rule:
-
-1. From the recommendations detail page for [Container registry images should have vulnerability findings resolved](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/dbd0cb49-b563-45e7-9724-889e799fa648), select **Disable rule**.
-1. Select the relevant scope.
-1. Define your criteria.
-1. Select **Apply rule**.
-
- :::image type="content" source="media/defender-for-containers-vulnerability-assessment-azure/new-disable-rule-for-registry-finding.png" alt-text="Screenshot of how to create a disable rule for VA findings on registry.":::
-
-1. To view, override, or delete a rule:
- 1. Select **Disable rule**.
- 1. From the scope list, subscriptions with active rules show as **Rule applied**.
- :::image type="content" source="./media/remediate-vulnerability-findings-vm/modify-rule.png" alt-text="Screenshot of how to modify or delete an existing rule.":::
- 1. To view or delete the rule, select the ellipsis menu ("..."). -->
-
-## Next steps
-
-Learn more about:
--- Defender for Cloud [Defender plans](defender-for-cloud-introduction.md#protect-cloud-workloads)-- [Multicloud protections](multicloud.yml) for your AWS account-- Check out [common questions](faq-defender-for-containers.yml) about Defender for Containers.
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
Learn more about obtaining the [Qualys Virtual Scanner Appliance](https://azurem
- [Remediate the findings from your vulnerability assessment solution](remediate-vulnerability-findings-vm.md) - Check out these [common questions](faq-vulnerability-assessments.yml) about vulnerability assessment.-
-Defender for Cloud also offers vulnerability analysis for your:
--- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)-- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
defender-for-cloud Deploy Vulnerability Assessment Defender Vulnerability Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-defender-vulnerability-management.md
# Enable vulnerability scanning with Microsoft Defender Vulnerability Management > [!IMPORTANT]
-> Defender for Server's vulnerability assessment solution powered by Qualys, is on a retirement path that set to complete on **May 1st, 2024**. If you are a currently using the built-in vulnerability assessment powered by Qualys, you should plan to [transition to the Microsoft Defender Vulnerability Management vulnerability scanning solution](how-to-transition-to-built-in.md).
+> Defender for Servers' vulnerability assessment solution, powered by Qualys, is on a retirement path that is set to complete on **May 1st, 2024**. If you are currently using the built-in vulnerability assessment powered by Qualys, you should plan to [transition to the Microsoft Defender Vulnerability Management vulnerability scanning solution](how-to-transition-to-built-in.md).
> > For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, see [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112). >
The integration between Microsoft Defender for Endpoint and Microsoft Defender f
:::image type="content" source="medivm-small.png" alt-text="Screenshot of the window that shows the options for selecting a vulnerability assessment solution from the recommendation."::: - - **To automatically find and view the vulnerabilities** on existing and new machines without the need to manually remediate the preceding recommendation, see [Automatically configure vulnerability assessment for your machines](auto-deploy-vulnerability-assessment.md). - **To onboard via the REST API**, run PUT/DELETE using this URL: `https://management.azure.com/subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/virtualMachines/.../providers/Microsoft.Security/serverVulnerabilityAssessments/mdetvm?api-version=2015-06-01-preview`
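The REST onboarding call above can be scripted. The following Python sketch is a hypothetical illustration of the PUT request: the subscription, resource group, and VM names are placeholders, and the empty request body is an assumption rather than a documented contract.

```python
# Hypothetical sketch: onboard one VM to the Defender Vulnerability Management (mdetvm)
# assessment through the Security REST API mentioned above. Names are placeholders and
# the empty request body is an assumption.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"   # placeholder
resource_group = "<resource-group>"     # placeholder
vm_name = "<vm-name>"                   # placeholder

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Compute/virtualMachines/{vm_name}"
    "/providers/Microsoft.Security/serverVulnerabilityAssessments/mdetvm"
    "?api-version=2015-06-01-preview"
)

# PUT onboards the VM; a DELETE against the same URL removes the assessment.
response = requests.put(url, headers={"Authorization": f"Bearer {token}"}, json={})
response.raise_for_status()
print(response.status_code)
```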
You can check out the following blogs:
> [!div class="nextstepaction"] > [Remediate the findings from your vulnerability assessment solution](remediate-vulnerability-findings-vm.md)-
-Defender for Cloud also offers vulnerability analysis for your:
--- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)-- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)
defender-for-cloud How To Transition To Built In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/how-to-transition-to-built-in.md
Using this REST API, you can easily migrate your subscription, at scale, from an
After migrating to the built-in Defender Vulnerability Management solution in Defender for Cloud, you need to offboard each VM from their old vulnerability assessment solution using either of the following methods: -- [Delete the VM extension with PowerShell](/powershell/module/az.compute/remove-azvmextension?view=azps-11.0.0).-- [REST API DELETE request](/rest/api/compute/virtual-machine-extensions/delete?view=rest-compute-2023-07-01&tabs=HTTP).
+- [Delete the VM extension with PowerShell](/powershell/module/az.compute/remove-azvmextension).
+- [REST API DELETE request](/rest/api/compute/virtual-machine-extensions/delete?tabs=HTTP).
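As a rough illustration of the REST offboarding option, this Python sketch issues the DELETE request against the Compute extensions API linked above; every name is a placeholder and the api-version is assumed from the linked reference.

```python
# Hypothetical sketch: remove the old vulnerability assessment extension from one VM via
# the Compute REST API DELETE linked above. All names are placeholders; the api-version
# is assumed from the linked reference.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"    # placeholder
resource_group = "<resource-group>"      # placeholder
vm_name = "<vm-name>"                    # placeholder
extension_name = "<old-va-extension>"    # placeholder: name of the previous VA extension

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    f"/providers/Microsoft.Compute/virtualMachines/{vm_name}"
    f"/extensions/{extension_name}?api-version=2023-07-01"
)

response = requests.delete(url, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()  # 200/202/204 indicates the delete was accepted
```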
## Next steps
defender-for-cloud Partner Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/partner-integration.md
Last updated 01/10/2023
This document helps you to manage security solutions already connected to Microsoft Defender for Cloud and add new ones. ## Integrated Azure security solutions+ Defender for Cloud makes it easy to enable integrated security solutions in Azure. Benefits include: - **Simplified deployment**: Defender for Cloud offers streamlined provisioning of integrated partner solutions. For solutions like antimalware and vulnerability assessment, Defender for Cloud can provision the agent on your virtual machines. For firewall appliances, Defender for Cloud can take care of much of the network configuration required.
Currently, integrated security solutions include vulnerability assessment by [Qu
Learn more about the integration of [vulnerability scanning tools from Qualys](deploy-vulnerability-assessment-vm.md), including a built-in scanner available to customers that enable Microsoft Defender for Servers.
-Defender for Cloud also offers vulnerability analysis for your:
--- SQL databases - [Explore vulnerability assessment reports in the vulnerability assessment dashboard](defender-for-sql-on-machines-vulnerability-assessment.md#explore-vulnerability-assessment-reports)-- Azure Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-azure.md)-- Amazon AWS Elastic Container Registry images - [Use Microsoft Defender for container registries to scan your images for vulnerabilities](defender-for-containers-vulnerability-assessment-elastic.md)- ## How security solutions are integrated+ Azure security solutions that are deployed from Defender for Cloud are automatically connected. You can also connect other security data sources, including computers running on-premises or in other clouds. :::image type="content" source="./media/partner-integration/security-solutions-page-01-2023.png" alt-text="Screenshot showing security Solutions page." lightbox="./media/partner-integration/security-solutions-page-01-2023.png":::
The **Connected solutions** section includes security solutions that are current
The status of a security solution can be:
-* **Healthy** (green) - no health issues.
-* **Unhealthy** (red) - there's a health issue that requires immediate attention.
-* **Stopped reporting** (orange) - the solution has stopped reporting its health.
-* **Not reported** (gray) - the solution hasn't reported anything yet and no health data is available. A solution's status might be unreported if it was connected recently and is still deploying.
+- **Healthy** (green) - no health issues.
+- **Unhealthy** (red) - there's a health issue that requires immediate attention.
+- **Stopped reporting** (orange) - the solution has stopped reporting its health.
+- **Not reported** (gray) - the solution hasn't reported anything yet and no health data is available. A solution's status might be unreported if it was connected recently and is still deploying.
> [!NOTE] > If health status data is not available, Defender for Cloud shows the date and time of the last event received to indicate whether the solution is reporting or not. If no health data is available and no alerts were received within the last 14 days, Defender for Cloud indicates that the solution is unhealthy or not reporting.
The status of a security solution can be:
Select **VIEW** for additional information and options such as:
- - **Solution console** - Opens the management experience for this solution.
- - **Link VM** - Opens the Link Applications page. Here you can connect resources to the partner solution.
- - **Delete solution**
- - **Configure**
+- **Solution console** - Opens the management experience for this solution.
+- **Link VM** - Opens the Link Applications page. Here you can connect resources to the partner solution.
+- **Delete solution**
+- **Configure**
![Partner solution detail.](./media/partner-integration/partner-solutions-detail.png) - ### Discovered solutions Defender for Cloud automatically discovers security solutions running in Azure but not connected to Defender for Cloud and displays the solutions in the **Discovered solutions** section. These solutions include Azure solutions, like [Microsoft Entra ID Protection](../active-directory/identity-protection/overview-identity-protection.md), and partner solutions.
The **Add data sources** section includes other available data sources that can
![Data sources.](./media/partner-integration/add-data-sources.png) -- ## Next steps In this article, you learned how to integrate partner solutions in Defender for Cloud. To learn how to set up an integration with Microsoft Sentinel, or any other SIEM, see [Continuously export Defender for Cloud data](continuous-export.md).
defender-for-cloud Prepare Deprecation Log Analytics Mma Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/prepare-deprecation-log-analytics-mma-agent.md
Title: Prepare for retirement of the Log Analytics agent
-description: Learn how to prepare for the deprecation of the Log Analytics (MMA) agent in Microsoft Defender for Cloud
+description: Learn how to prepare for the deprecation of the Log Analytics (MMA) agent in Microsoft Defender for Cloud.
Previously updated : 02/08/2024 Last updated : 02/13/2024 # Prepare for retirement of the Log Analytics agent
-The Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA), [will retire in August 2024](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341). As a result, the Defender for Servers and Defender for SQL on machines plans in Microsoft Defender for Cloud will be updated, and features that rely on the Log Analytics agent will be redesigned.
+The Log Analytics agent, also known as the Microsoft Monitoring Agent (MMA), [is retiring in August 2024](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-strategy-and-plan-towards-log/ba-p/3883341). As a result, the Defender for Servers and Defender for SQL on machines plans in Microsoft Defender for Cloud will be updated, and features that rely on the Log Analytics agent will be redesigned.
This article summarizes plans for agent retirement.
This article summarizes plans for agent retirement.
The Defender for Servers plan uses the Log Analytics agent in general availability (GA) and in AMA for [some features](plan-defender-for-servers-agents.md) (in preview). Here's what's happening with these features going forward:
-To simplify onboarding, all Defender for Servers security features and capabilities will be provided with a single agent ([Microsoft Defender for Endpoint (MDE))](integration-defender-for-endpoint.md), complemented by [agentless machine scanning](concept-agentless-data-collection.md), without any dependency on Log Analytics agent or AMA. Note that: 
+To simplify onboarding, all Defender for Servers security features and capabilities will be provided with a single agent ([Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)), complemented by [agentless machine scanning](concept-agentless-data-collection.md), without any dependency on the Log Analytics agent or AMA. Note that:
- Defender for Servers features, which are based on AMA, are currently in preview and won't be released in GA.
-- Features in preview that rely on AMA will remain supported until an alternative version of the feature is provided, based on Defender for Endpoint integration or agentless machine scanning.
-- By enabling Defender for Endpoint integration and agentless machine scanning early, your Defender for Servers deployment stays up to date and supported.
+- Features in preview that rely on AMA remain supported until an alternate version of the feature is provided, which will rely on the Defender for Endpoint integration or the agentless machine scanning feature.
+- By enabling the Defender for Endpoint integration and agentless machine scanning feature before the deprecation takes place, your Defender for Servers deployment will be up to date and supported.
### Feature functionality
The following table summarizes how Defender for Servers features will be provide
| Feature | Current support | New support | New experience status | |-|-|-|-|
-| Microsoft Defender for Endpoint (MDE) integration for down-level Windows machines (Windows Server 2016/2012 R2) | Legacy Defender for Endpoint sensor, based on the Log Analytics agent | [Unified agent integration](/microsoft-365/security/defender-endpoint/configure-server-endpoints) | - Functionality with the unified agent is GA.<br/>- Functionality with the legacy Defender for Endpoint sensor using the Log Analytics agent will be deprecated in August 2024. |
+| Defender for Endpoint integration for down-level Windows machines (Windows Server 2016/2012 R2) | Legacy Defender for Endpoint sensor, based on the Log Analytics agent | [Unified agent integration](/microsoft-365/security/defender-endpoint/configure-server-endpoints) | - Functionality with the unified agent is GA.<br/>- Functionality with the legacy Defender for Endpoint sensor using the Log Analytics agent will be deprecated in August 2024. |
| OS-level threat detection | Log Analytics agent | Defender for Endpoint agent integration | Functionality with the Defender for Endpoint agent is GA. |
-| Adaptive application controls | Log Analytics agent (GA), AMA (Preview) | | The adaptive application control feature will be deprecated in August 2024. |
-| Endpoint protection discovery recommendations | Recommendations available in foundational CSPM and Defender for Servers, using the Log Analytics agent (GA), AMA (Preview)ΓÇ»| Agentless machine scanning | - Functionality with agentless machine scanning will be released to preview in February 2024 as part of Defender for Servers Plan 2 and the Defender CSPM plan.<br/>- Azure VMs, GCP instances, and AWS instances will be supported. On-premises machines wonΓÇÖt be supported. |
-| Missing OS update recommendation | Recommendations available in foundational CSPM and Defender for Servers using the Log Analytics agent. | Integration with Update Manager, Microsoft | New recommendations based on Azure Update Manager integration [are GA](release-notes-archive.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga), with no agent dependencies. |
-| OS misconfigurations (Microsoft Cloud Security Benchmark) | Recommendations available in foundational CSPM and Defender for Servers using the Log Analytics agent, Guest Configuration agent (Preview). | Microsoft Defender Vulnerability Management premium, as part of Defender for Servers Plan 2. | - Functionality based on integration with Microsoft Defender Vulnerability Management premium will be available in preview around April 2024.<br/>- Functionality with the Log Analytics agent will be deprecated in August 2024<br/>- Functionality with Guest Configuration agent (Preview) will deprecate when the Microsoft Defender Vulnerability Management is available.<br/>- Support of this feature for Docker-hub and VMMS will be deprecated in Aug 2024. |
+| Adaptive application controls | Log Analytics agent (GA), AMA (Preview) | | The adaptive application control feature is set to be deprecated in August 2024. |
+| Endpoint protection discovery recommendations | Recommendations that are available through the Foundational Cloud Security Posture Management (CSPM) plan and Defender for Servers, using the Log Analytics agent (GA), AMA (Preview) | Agentless machine scanning | - Functionality with agentless machine scanning will be released to preview in February 2024 as part of Defender for Servers Plan 2 and the Defender CSPM plan.<br/>- Azure VMs, Google Cloud Platform (GCP) instances, and Amazon Web Services (AWS) instances will be supported. On-premises machines won't be supported. |
+| Missing OS update recommendation | Recommendations available in the Foundational CSPM and Defender for Servers plans using the Log Analytics agent. | Integration with Update Manager, Microsoft | New recommendations based on Azure Update Manager integration [are GA](release-notes-archive.md#two-recommendations-related-to-missing-operating-system-os-updates-were-released-to-ga), with no agent dependencies. |
+| OS misconfigurations (Microsoft Cloud Security Benchmark) | Recommendations that are available through the Foundational CSPM and Defender for Servers plans using the Log Analytics agent, Guest Configuration agent (Preview). | Microsoft Defender Vulnerability Management premium, as part of Defender for Servers Plan 2. | - Functionality based on integration with Microsoft Defender Vulnerability Management premium will be available in preview around April 2024.<br/>- Functionality with the Log Analytics agent will be deprecated in August 2024.<br/>- Functionality with the Guest Configuration agent (Preview) will be deprecated when the Microsoft Defender Vulnerability Management integration is available.<br/>- Support of this feature for Docker Hub and Azure Virtual Machine Scale Sets will be deprecated in August 2024. |
| File integrity monitoring | Log Analytics agent, AMA (Preview) | Defender for Endpoint agent integration | Functionality with the Defender for Endpoint agent will be available around April 2024.<br/>- Functionality with the Log Analytics agent will be deprecated in August 2024.<br/>- Functionality with AMA will deprecate when the Defender for Endpoint integration is released. |
-The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion over the defined tables will remain supported via the AMA agent for the machines under subscriptions covered by Defender for Servers Plan 2. Every machine is eligible for the benefit only once, even if both Log Analytics agent and Azure Monitor agent are installed on it.
+The [500-MB benefit](faq-defender-for-servers.yml#is-the-500-mb-of-free-data-ingestion-allowance-applied-per-workspace-or-per-machine-) for data ingestion over the defined tables remains supported via the AMA agent for the machines under subscriptions covered by Defender for Servers Plan 2. Every machine is eligible for the benefit only once, even if both Log Analytics agent and Azure Monitor agent are installed on it.
Learn more about how to [deploy AMA](/azure/azure-monitor/vm/monitor-virtual-machine-agent#agent-deployment-options). For SQL servers on machines, we recommend to [migrate to SQL server-targeted Azure Monitoring Agent's (AMA) autoprovisioning process](defender-for-sql-autoprovisioning.md). ### Endpoint protection recommendations experience
-Endpoint discovery and recommendations are currently provided by Defender for Cloud foundational CSPM and the Defender for Servers plan using the Log Analytics agent in GA, or in preview via the AMA. This experience will be replaced by security recommendations that are gathered using agentless machine scanning.ΓÇ»
+Endpoint discovery and recommendations are currently provided by the Defender for Cloud Foundational CSPM and the Defender for Servers plans using the Log Analytics agent in GA, or in preview via the AMA. This experience will be replaced by security recommendations that are gathered using agentless machine scanning.
-Endpoint protection recommendations are constructed in two stages. The first stage is [EDR discovery](#edr-discovery) of an endpoint detection and response (EDR) solution. The second isΓÇ»[assessment](#edr-configuration-assessment) of the solutionΓÇÖs configuration. The following tables provide details of the current and new experiences for each stage.
+Endpoint protection recommendations are constructed in two stages. The first stage is [discovery](#endpoint-detection-and-response-solutiondiscovery) of an endpoint detection and response solution. The second is [assessment](#endpoint-detection-and-response-solutionconfiguration-assessment) of the solution's configuration. The following tables provide details of the current and new experiences for each stage.
-#### EDR discovery
+#### Endpoint detection and response solution - discovery
| Area | Current experience (based on AMA/MMA)| New experience (based on agentless machine scanning) | |-|-|-|
Endpoint protection recommendations are constructed in two stages. The first sta
| **What plans are supported?** | - Foundational CSPM (free)<br/>- Defender for Servers Plan 1 and Plan 2 |- Defender CSPM<br/>- Defender for Servers Plan 2 | |**What fix is available?** | Install Microsoft anti-malware. | Install Defender for Endpoint on selected machines/subscriptions. |
-#### EDR configuration assessment
+#### Endpoint detection and response solution - configuration assessment
| Area | Current experience (based on AMA/MMA)| New experience (based on agentless machine scanning) | |-|-|-|
-| Resources are classified as unhealthy if one or more of the security checks arenΓÇÖt healthy. | Three security checks:<br/>- Real time protection is off<br/>- Signatures are out of date.<br/>- Both quick scan and full scan haven't run for seven days. | Three security checks:<br/>- Anti-virus is off or partially configured<br/>- Signatures are out of date<br/>- Both quick scan and full scan haven't run for seven days. |
-| Prerequisites to get the recommendation | An anti-malware solution in place | An endpoint detection and response (EDR) solution in place. |
+| Resources are classified as unhealthy if one or more of the security checks aren't healthy. | Three security checks:<br/>- Real time protection is off<br/>- Signatures are out of date.<br/>- Both quick scan and full scan haven't run for seven days. | Three security checks:<br/>- Anti-virus is off or partially configured<br/>- Signatures are out of date<br/>- Both quick scan and full scan haven't run for seven days. |
+| Prerequisites to get the recommendation | An anti-malware solution in place | An endpoint detection and response solution in place. |
#### Which recommendations are being deprecated?
The following table summarizes the timetable for recommendations being deprecate
|-|-|-|-|-| | [Endpoint protection should be installed on your machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/4fb67663-9ab9-475d-b026-8c544cced439) (public) | MM#changes-in-endpoint-protection-recommendations) | | [Endpoint protection health issues should be resolved on your machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/37a3689a-818e-4a0e-82ac-b1392b9bb000) (public)| MM#changes-in-endpoint-protection-recommendations) |
-| [Endpoint protection health failures on virtual machine scale sets should be resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/e71020c2-860c-3235-cd39-04f3f8c936d2) | MMA | VMSS | August 2024 | No replacement |
-| [Endpoint protection solution should be installed on virtual machine scale sets](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/21300918-b2e3-0346-785f-c77ff57d243b) | MMA | VMSS | August 2024 | No replacement |
+| [Endpoint protection health failures on virtual machine scale sets should be resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/e71020c2-860c-3235-cd39-04f3f8c936d2) | MMA | Azure Virtual Machine Scale Sets | August 2024 | No replacement |
+| [Endpoint protection solution should be installed on virtual machine scale sets](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/21300918-b2e3-0346-785f-c77ff57d243b) | MMA | Azure Virtual Machine Scale Sets | August 2024 | No replacement |
| [Endpoint protection solution should be on machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/383cf3bc-fdf9-4a02-120a-3e7e36c6bfee) | MMA | Non-Azure resources (Windows)| August 2024 | No replacement | | [Install endpoint protection solution on your machines](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/83f577bd-a1b6-b7e1-0891-12ca19d1e6df) | MMA | Azure and non-Azure (Windows) | August 2024 | [New agentless recommendation](upcoming-changes.md#changes-in-endpoint-protection-recommendations) | | [Endpoint protection health issues on machines should be resolved](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/3bcd234d-c9c7-c2a2-89e0-c01f419c1a8a) | MMA | Azure and non-Azure (Windows and Linux) | August 2024 | [New agentless recommendation](upcoming-changes.md#changes-in-endpoint-protection-recommendations). |
-The [new recommendations](upcoming-changes.md#changes-in-endpoint-protection-recommendations) experience based on agentless machine scanning will support both Windows and Linux OS across multicloud machines.
+The [new recommendations](upcoming-changes.md#changes-in-endpoint-protection-recommendations) experience based on agentless machine scanning supports both Windows and Linux OS across multicloud machines.
#### How will the replacement work? - Current recommendations provided by the Log Analytics Agent or the AMA will be deprecated over time. - Some of these existing recommendations will be replaced by new recommendations based on agentless machine scanning.-- Recommendations currently in GA will remain in place until the Log Analytics agent retires.
+- Recommendations currently in GA remain in place until the Log Analytics agent retires.
- Recommendations that are currently in preview will be replaced when the new recommendation is available in preview. #### What's happening with secure score? -- Recommendations that are currently in GA will continue to impact secure score.ΓÇ» -- Current and upcoming new recommendations are located under the same Microsoft Cloud Security Benchmark control. This ensures that thereΓÇÖs no duplicate impact on secure score.
+- Recommendations that are currently in GA will continue to affect secure score.ΓÇ»
+- Current and upcoming new recommendations are located under the same Microsoft Cloud Security Benchmark control, ensuring that there's no duplicate impact on secure score.
#### How do I prepare for the new recommendations?
If you're using the current Log Analytics agent/Azure Monitor agent autoprovisio
1. Select **Save**.
-Once the SQL server-targeted AMA autoprovisioning process has been enabled, you should disable the Log Analytics agent/Azure Monitor agent autoprovisioning process and uninstall the MMA on all SQL servers:
+Once the SQL server-targeted AMA autoprovisioning process is enabled, you should disable the Log Analytics agent/Azure Monitor agent autoprovisioning process and uninstall the MMA on all SQL servers:
To disable the Log Analytics agent:
We recommend you plan agent migration in accordance with your business requireme
| **Are you using Defender for Servers?** | **Are these Defender for Servers features required in GA: file integrity monitoring, endpoint protection recommendations, security baseline recommendations?** | **Are you using Defender for SQL servers on machines or AMA log collection?** | **Migration plan** | |-|-|-|-|
-| Yes | Yes | No | 1. Enable [Defender for Endpoint (MDE) integration](enable-defender-for-endpoint.md) and [agentless machine scanning](enable-agentless-scanning-vms.md).<br/>2. Wait for GA of all features with the alternative's platform (you can use preview version earlier).<br/>3. Once features are GA, disable the [Log Analytics agent](defender-for-sql-autoprovisioning.md#disable-the-log-analytics-agentazure-monitor-agent).
+| Yes | Yes | No | 1. Enable [Defender for Endpoint integration](enable-defender-for-endpoint.md) and [agentless machine scanning](enable-agentless-scanning-vms.md).<br/>2. Wait for GA of all features with the alternative's platform (you can use preview version earlier).<br/>3. Once features are GA, disable the [Log Analytics agent](defender-for-sql-autoprovisioning.md#disable-the-log-analytics-agentazure-monitor-agent).
| No | | No | You can remove the Log Analytics agent now. | | No | | Yes | 1. You can [migrate to SQL autoprovisioning for AMA](defender-for-sql-autoprovisioning.md) now.<br/>2. [Disable](defender-for-sql-autoprovisioning.md#disable-the-log-analytics-agentazure-monitor-agent) Log Analytics/Azure Monitor Agent. | | Yes | Yes | Yes | 1. Enable [Defender for Endpoint integration](enable-defender-for-endpoint.md) and [agentless machine scanning](enable-agentless-scanning-vms.md).<br/>2. You can use the Log Analytics agent and AMA side-by-side to get all features in GA. [Learn more](auto-deploy-azure-monitoring-agent.md#impact-of-running-with-both-the-log-analytics-and-azure-monitor-agents) about running agents side-by-side.<br>3. Migrate to [SQL autoprovisioning for AMA](defender-for-sql-autoprovisioning.md) in Defender for SQL on machines. Alternatively, start the migration from Log Analytics agent to AMA in April 2024.<br/>4. Once the migration is finished, [disable](defender-for-sql-autoprovisioning.md#disable-the-log-analytics-agentazure-monitor-agent) the Log Analytics agent. |
-| Yes | No | Yes | 1. Enable [Defender for Endpoint (MDE) integration](enable-defender-for-endpoint.md) and [agentless machine scanning](enable-agentless-scanning-vms.md).<br/>2. You can migrate to [SQL autoprovisioning for AMA](defender-for-sql-autoprovisioning.md) in Defender for SQL on machines now.<br/>3. [Disable](defender-for-sql-autoprovisioning.md#disable-the-log-analytics-agentazure-monitor-agent) the Log Analytics agent. |
+| Yes | No | Yes | 1. Enable [Defender for Endpoint integration](enable-defender-for-endpoint.md) and [agentless machine scanning](enable-agentless-scanning-vms.md).<br/>2. You can migrate to [SQL autoprovisioning for AMA](defender-for-sql-autoprovisioning.md) in Defender for SQL on machines now.<br/>3. [Disable](defender-for-sql-autoprovisioning.md#disable-the-log-analytics-agentazure-monitor-agent) the Log Analytics agent. |
## Next steps
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
In this section of the wizard, you select the Defender for Cloud plans that you
1. Select **Next: Select plans**.
- The **Select plans** tab is where you choose which Defender for Cloud capabilities to enable for this AWS account. Each plan has its own [requirements for permissions](concept-aws-connector.md#native-connector-plan-requirements) and might incur [charges](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h).
+ The **Select plans** tab is where you choose which Defender for Cloud capabilities to enable for this AWS account. Each plan has its own [requirements for permissions](#native-connector-plan-requirements) and might incur [charges](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h).
:::image type="content" source="media/quickstart-onboard-aws/add-aws-account-plans-selection.png" alt-text="Screenshot that shows the tab for selecting plans for an AWS account." lightbox="media/quickstart-onboard-aws/add-aws-account-plans-selection.png":::
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
This screenshot shows GCP accounts displayed in the Defender for Cloud [overview
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot that shows GCP projects listed on the overview dashboard in Defender for Cloud." lightbox="media/quickstart-onboard-gcp/gcp-account-in-overview.png":::
+## GCP authorization design
+
+The authentication process between Microsoft Defender for Cloud and GCP is a federated authentication process.
+
+When you onboard to Defender for Cloud, the GCloud template is used to create the following resources as part of the authentication process:
+
+- Workload identity pool and providers
+
+- Service accounts and policy bindings
+
+The authentication process works as follows:
++
+1. Microsoft Defender for Cloud's CSPM service acquires a Microsoft Entra token. The token is signed by Microsoft Entra ID using the RS256 algorithm and is valid for 1 hour.
+
+1. The Microsoft Entra token is exchanged for a Google STS token.
+
+1. The Microsoft Entra token is sent to Google's STS, which validates it with the workload identity provider, performs audience validation, and signs the token. A Google STS token is then returned to Defender for Cloud's CSPM service.
+
+1. Defender for Cloud's CSPM service uses the Google STS token to impersonate the service account. Defender for Cloud's CSPM receives service account credentials that are used to scan the project.
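The following Python sketch is a conceptual illustration of these steps, not Defender for Cloud's own implementation: it exchanges an externally issued token for a Google STS token through a workload identity provider and then impersonates a service account. The pool, provider, project number, and service account values are placeholders.

```python
# Conceptual sketch of the federation flow above (not Defender for Cloud's own code).
# Placeholders: <project-number>, <pool-id>, <provider-id>, service account email.
import requests

entra_token = "<microsoft-entra-jwt>"   # step 1: token acquired from Microsoft Entra ID
audience = (
    "//iam.googleapis.com/projects/<project-number>/locations/global"
    "/workloadIdentityPools/<pool-id>/providers/<provider-id>"
)

# Steps 2-3: exchange the Microsoft Entra token for a Google STS token.
sts = requests.post(
    "https://sts.googleapis.com/v1/token",
    json={
        "grantType": "urn:ietf:params:oauth:grant-type:token-exchange",
        "audience": audience,
        "scope": "https://www.googleapis.com/auth/cloud-platform",
        "requestedTokenType": "urn:ietf:params:oauth:token-type:access_token",
        "subjectToken": entra_token,
        "subjectTokenType": "urn:ietf:params:oauth:token-type:jwt",
    },
).json()

# Step 4: impersonate the service account to obtain short-lived credentials for scanning.
sa_email = "<service-account>@<project-id>.iam.gserviceaccount.com"
creds = requests.post(
    "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/"
    f"{sa_email}:generateAccessToken",
    headers={"Authorization": f"Bearer {sts['access_token']}"},
    json={"scope": ["https://www.googleapis.com/auth/cloud-platform"]},
).json()

print(creds["accessToken"])  # credential used to scan the project
```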
+ ## Prerequisites To complete the procedures in this article, you need:
When you're connecting GCP projects to specific Azure subscriptions, consider th
## Connect your GCP project
-To connect your GCP project to Defender for Cloud by using a native connector:
+There are four parts to the onboarding process that take place when you create the security connector between your GCP project and Microsoft Defender for Cloud.
+
+### Project details
-1. Sign in to the [Azure portal](https://portal.azure.com).
+In the first section, you need to add the basic properties of the connection between your GCP project and Defender for Cloud.
-1. Go to **Defender for Cloud** > **Environment settings**.
-1. Select **Add environment** > **Google Cloud Platform**.
+Here you name your connector and select a subscription and resource group, which are used to create an ARM template resource called a security connector. The security connector represents a configuration resource that holds the project's settings.
- :::image type="content" source="media/quickstart-onboard-gcp/add-gcp-project-environment-settings.png" alt-text="Screenshot that shows selections for adding Google Cloud Platform as a connector." lightbox="media/quickstart-onboard-gcp/add-gcp-project-environment-settings.png":::
+### Select plans for your project
-1. Enter all relevant information.
+After entering your project's details, you can select which plans to enable.
- :::image type="content" source="media/quickstart-onboard-gcp/add-gcp-project-details.png" alt-text="Screenshot of the pane for creating a GCP connector." lightbox="media/quickstart-onboard-gcp/add-gcp-project-details.png":::
- Optionally, if you select **Organization**, a management project and an organization custom role are created on your GCP project for the onboarding process. Autoprovisioning is enabled for the onboarding of new projects.
+From here, you can decide which resources you want to protect based on the security value you want to receive.
-## Select Defender plans
+### Configure access for your project
-In this section of the wizard, you select the Defender for Cloud plans that you want to enable.
+Once you've selected the plans you want to enable and the resources you want to protect, you have to configure access between Defender for Cloud and your GCP project.
-1. Select **Next: Select plans**.
+
+In this step, you can find the GCloud script that needs to be run on the GCP project that is going to be onboarded. The GCloud script is generated based on the plans you selected to onboard.
+
+The GCloud script creates the following required resources in your GCP environment so that Defender for Cloud can operate and provide security value (a conceptual sketch of equivalent API calls follows the list):
+
+- Workload identity pool
+- Workload identity provider (per plan)
+- Service accounts
+- Project level policy bindings (service account has access only to the specific project)
+
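For orientation only, the sketch below shows roughly equivalent calls to the public IAM REST API for the first two resources in the list; the endpoint shapes, query parameters, and field names are assumptions, and in practice you should run the GCloud script that Defender for Cloud generates.

```python
# Rough, hypothetical equivalent of part of the generated GCloud script: create a workload
# identity pool and one OIDC provider. Field names and query parameters are assumptions
# based on the public IAM REST API; all values are placeholders.
import requests

project_id = "<gcp-project-id>"            # placeholder
access_token = "<gcp-admin-access-token>"  # placeholder: token with IAM admin permissions
headers = {"Authorization": f"Bearer {access_token}"}
base = f"https://iam.googleapis.com/v1/projects/{project_id}/locations/global"

# Workload identity pool
requests.post(
    f"{base}/workloadIdentityPools",
    params={"workloadIdentityPoolId": "<pool-id>"},
    headers=headers,
    json={"displayName": "Defender for Cloud pool"},
)

# One OIDC provider per selected plan, trusting the external token issuer
requests.post(
    f"{base}/workloadIdentityPools/<pool-id>/providers",
    params={"workloadIdentityPoolProviderId": "<provider-id>"},
    headers=headers,
    json={
        "oidc": {"issuerUri": "<token-issuer-uri>"},              # placeholder issuer
        "attributeMapping": {"google.subject": "assertion.sub"},
    },
)
```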
+### Review and generate the connector for your project
+
+The final step for onboarding is to review all of your selections and to create the connector.
++
+> [!NOTE]
+> The following APIs must be enabled in order to discover your GCP resources and allow the authentication process to occur:
+>
+> - `iam.googleapis.com`
+> - `sts.googleapis.com`
+> - `cloudresourcemanager.googleapis.com`
+> - `iamcredentials.googleapis.com`
+> - `compute.googleapis.com`
+> If you don't enable these APIs at this time, you can enable them during the onboarding process by running the GCloud script.
+
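If you prefer to enable the listed APIs yourself rather than through the GCloud script, the hedged sketch below loops over them with the Service Usage API; the project ID and access token are placeholders, and the endpoint shape is an assumption based on the public API.

```python
# Hedged sketch: enable the APIs listed in the note through the Service Usage API.
# Placeholders: project ID and access token. Enabling is asynchronous; each call
# returns a long-running operation.
import requests

project_id = "<gcp-project-id>"            # placeholder
access_token = "<gcp-admin-access-token>"  # placeholder

required_apis = [
    "iam.googleapis.com",
    "sts.googleapis.com",
    "cloudresourcemanager.googleapis.com",
    "iamcredentials.googleapis.com",
    "compute.googleapis.com",
]

for api in required_apis:
    resp = requests.post(
        f"https://serviceusage.googleapis.com/v1/projects/{project_id}/services/{api}:enable",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    resp.raise_for_status()
```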
+After you create the connector, a scan starts on your GCP environment. New recommendations appear in Defender for Cloud after up to 6 hours. If you enabled autoprovisioning, Azure Arc and any enabled extensions are installed automatically for each newly detected resource.
-1. For the plans that you want to connect, turn the toggle to **On**. By default, all necessary prerequisites and components are provisioned. [Learn how to configure each plan](#optional-configure-selected-plans).
+## Connect your GCP organization
- :::image type="content" source="media/quickstart-onboard-gcp/add-gcp-project-plans-selection.png" alt-text="Screenshot that shows the tab for selecting plans for a GCP project." lightbox="media/quickstart-onboard-gcp/add-gcp-project-plans-selection.png":::
+Similar to onboarding a single project, when you onboard a GCP organization, Defender for Cloud creates a security connector for each project under the organization (unless specific projects were excluded).
- If you choose to turn on the Microsoft Defender for Containers plan, ensure that you meet the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for it.
+### Organization details
-1. Select **Configure access** and make the following selections:
+In the first section, you need to add the basic properties of the connection between your GCP organization and Defender for Cloud.
- 1. Select the deployment type:
- - **Default access**: Allows Defender for Cloud to scan your resources and automatically include future capabilities.
- - **Least privilege access**: Grants Defender for Cloud access to only the current permissions needed for the selected plans. If you select the least privileged permissions, you receive notifications on any new roles and permissions that are required to get full functionality for connector health.
+Here you name your connector and select a subscription and resource group that are used to create an ARM template resource called a security connector. The security connector represents a configuration resource that holds the project settings.
- 1. Select the deployment method: **GCP Cloud Shell** or **Terraform**.
+You also select a location and add the organization ID for your project.
- :::image type="content" source="media/quickstart-onboard-gcp/add-gcp-project-configure-access.png" alt-text="Screenshot that shows deployment options and instructions for configuring access.":::
+When you onboard an organization, you can also choose to exclude project numbers and folder IDs.
-1. Follow the on-screen instructions for the selected deployment method to complete the required dependencies on GCP.
+### Select plans for your organization
-1. Select **Next: Review and generate**.
+After entering your organization's details, you'll then be able to select which plans to enable.
-1. Select **Create**.
- > [!NOTE]
- > The following APIs must be enabled in order to discover your GCP resources and allow the authentication process to occur:
- > - `iam.googleapis.com`
- > - `sts.googleapis.com`
- > - `cloudresourcemanager.googleapis.com`
- > - `iamcredentials.googleapis.com`
- > - `compute.googleapis.com`
- > If you don't enable these APIs at this time, you can enable them during the onboarding process by running the GCloud script.
+From here, you can decide which resources you want to protect based on the security value you want to receive.
+
+### Configure access for your organization
+
+Once you've selected the plans you want to enable and the resources you want to protect, you have to configure access between Defender for Cloud and your GCP organization.
++
+When you onboard an organization, there's a section that includes management project details. Similar to other GCP projects, the organization is also considered a project and is utilized by Defender for Cloud to create all of the required resources needed to connect the organization to Defender for Cloud.
+
+In the management project details section, you have the choice of:
+
+- Dedicating a management project for Defender for Cloud to include in the GCloud script.
+- Providing the details of an existing project to be used as the management project with Defender for Cloud.
+
+You need to decide which option is best for your organization's architecture. We recommend creating a dedicated project for Defender for Cloud.
+
+The GCloud script is generated based on the plans you selected to onboard. The script creates the following required resources in your GCP environment so that Defender for Cloud can operate and provide security value:
+
+- Workload identity pool
+- Workload identity provider for each plan
+- Custom role to grant Defender for Cloud access to discover and get the project under the onboarded organization
+- A service account for each plan
+- A service account for the autoprovisioning service
+- Organization level policy bindings for each service account
+- API enablements at the management project level
+
+Some of the APIs aren't used directly by the management project. Instead, the APIs authenticate through this project and use one of the APIs from another project. The APIs must still be enabled on the management project.
+
+### Review and generate the connector for your organization
+
+The final step for onboarding is to review all of your selections and to create the connector.
++
+> [!NOTE]
+> The following APIs must be enabled in order to discover your GCP resources and allow the authentication process to occur:
+>
+> - `iam.googleapis.com`
+> - `sts.googleapis.com`
+> - `cloudresourcemanager.googleapis.com`
+> - `iamcredentials.googleapis.com`
+> - `compute.googleapis.com`
+> If you don't enable these APIs at this time, you can enable them during the onboarding process by running the GCloud script.
After you create the connector, a scan starts on your GCP environment. New recommendations appear in Defender for Cloud after up to 6 hours. If you enabled autoprovisioning, Azure Arc and any enabled extensions are installed automatically for each newly detected resource.
Microsoft Defender for Servers doesn't install the OS Config agent to a VM that
Alternatively, you can manually connect your VM instances to Azure Arc for servers. Instances in projects with the Defender for Servers plan enabled that aren't connected to Azure Arc are surfaced by the recommendation **GCP VM instances should be connected to Azure Arc**. Select the **Fix** option in the recommendation to install Azure Arc on the selected machines.
-The respective Azure Arc servers for EC2 instances or GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of [Disconnected or Expired](/azure/azure-arc/servers/overview)) are removed after seven days. This process removes irrelevant Azure Arc entities to ensure that only Azure Arc servers related to existing instances are displayed.
+The respective Azure Arc servers for GCP virtual machines that no longer exist (and the respective Azure Arc servers with a status of [Disconnected or Expired](/azure/azure-arc/servers/overview)) are removed after seven days. This process removes irrelevant Azure Arc entities to ensure that only Azure Arc servers related to existing instances are displayed.
Ensure that you fulfill the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud).
Enable these other extensions on the Azure Arc-connected machines:
- Microsoft Defender for Endpoint
- A vulnerability assessment solution (Microsoft Defender Vulnerability Management or Qualys)
-- The Log Analytics agent on Azure Arc-connected machines or the Azure Monitor agent
-Make sure the selected Log Analytics workspace has a security solution installed. The Log Analytics agent and the Azure Monitor agent are currently configured at the *subscription* level. All the multicloud accounts and projects (from both AWS and GCP) under the same subscription inherit the subscription settings for the Log Analytics agent and the Azure Monitor agent. [Learn more about monitoring components for Defender for Servers](monitoring-components.md).
-Defender for Servers assigns tags to your GCP resources to manage the autoprovisioning process. You must have these tags properly assigned to your resources so that Defender for Servers can manage your resources: `Cloud`, `InstanceName`, `MDFCSecurityConnector`, `MachineId`, `ProjectId`, and `ProjectNumber`.
+Defender for Servers assigns tags to your Azure Arc GCP resources to manage the autoprovisioning process. You must have these tags properly assigned to your resources so that Defender for Servers can manage your resources: `Cloud`, `InstanceName`, `MDFCSecurityConnector`, `MachineId`, `ProjectId`, and `ProjectNumber`.
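To spot-check that an Arc-connected machine carries these tags, one option is a quick Azure CLI query; this is a sketch with placeholder resource names.

```bash
# Placeholder names - replace with your Arc-enabled server and its resource group.
az resource show \
  --resource-group "rg-arc-gcp" \
  --name "gcp-vm-01" \
  --resource-type "Microsoft.HybridCompute/machines" \
  --query "tags"
```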
To configure the Defender for Servers plan:
To configure the Defender for Databases plan:
1. Follow the [steps to connect your GCP project](#connect-your-gcp-project).
-1. On the **Select plans** tab, select **Configure**.
-
- :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot that shows the link for configuring the Defender for Databases plan.":::
+1. On the **Select plans** tab, in **Databases**, select **Settings**.
-1. On the **Auto-provisioning configuration** pane, turn the toggles to **On** or **Off**, depending on your need.
+1. On the **Plan configuration** pane, turn the toggles to **On** or **Off**, depending on your need.
:::image type="content" source="media/quickstart-onboard-gcp/auto-provision-databases-screen.png" alt-text="Screenshot that shows the toggles for the Defender for Databases plan.":::
Microsoft Defender for Containers brings threat detection and advanced defenses
> - If you choose to disable the available configuration options, no agents or components will be deployed to your clusters. [Learn more about feature availability](supported-machines-endpoint-solutions-clouds-containers.md).
> - Defender for Containers when deployed on GCP, might incur external costs such as [logging costs](https://cloud.google.com/stackdriver/pricing), [pub/sub costs](https://cloud.google.com/pubsub/pricing) and [egress costs](https://cloud.google.com/vpc/network-pricing#:~:text=Platform%20SKUs%20apply.-%2cInternet%20egress%20rates%2c-Premium%20Tier%20pricing).

-- **Kubernetes audit logs to Defender for Cloud**: Enabled by default. This configuration is available at the GCP project level only. It provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud back end for further analysis. Defender for Containers requires control plane audit logs to provide [runtime threat protection](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters). To send Kubernetes audit logs to Microsoft Defender, toggle the setting to **On.**
+- **Kubernetes audit logs to Defender for Cloud**: Enabled by default. This configuration is available at the GCP project level only. It provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud back end for further analysis. Defender for Containers requires control plane audit logs to provide [runtime threat protection](defender-for-containers-introduction.md#run-time-protection-for-kubernetes-nodes-and-clusters). To send Kubernetes audit logs to Microsoft Defender, toggle the setting to **On**.
> [!NOTE]
> If you disable this configuration, then the `Threat detection (control plane)` feature will be disabled. Learn more about [features availability](supported-machines-endpoint-solutions-clouds-containers.md).
To view all the active recommendations for your resources by resource type, use
:::image type="content" source="./media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png" alt-text="Screenshot of GCP options in the asset inventory page's resource type filter." lightbox="media/quickstart-onboard-gcp/gcp-resource-types-in-inventory.png":::
+> [!NOTE]
+> As the Log Analytics agent (also known as MMA) is set to retire in [August 2024](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/), all Defender for Servers features and security capabilities that currently depend on it, including those described on this page, will be available through either [Microsoft Defender for Endpoint integration](integration-defender-for-endpoint.md) or [agentless scanning](concept-agentless-data-collection.md), before the retirement date. For more information about the roadmap for each of the features that currently rely on the Log Analytics agent, see [this announcement](upcoming-changes.md#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation).
+ ## Integrate with Microsoft Defender XDR

When you enable Defender for Cloud, Defender for Cloud alerts are automatically integrated into the Microsoft Defender Portal. No further steps are needed.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
If you're looking for items older than six months, you can find them in the [Arc
|Date | Update |
|-|-|
+| February 13 | [AWS container vulnerability assessment powered by Trivy retired](#aws-container-vulnerability-assessment-powered-by-trivy-retired) |
| February 8 | [Recommendations released for preview: four recommendations for Azure Stack HCI resource type](#recommendations-released-for-preview-four-recommendations-for-azure-stack-hci-resource-type) |
+### AWS container vulnerability assessment powered by Trivy retired
+
+February 13, 2024
+
+The container vulnerability assessment powered by Trivy has been retired. Any customers who were previously using this assessment should upgrade to the new [AWS container vulnerability assessment powered by Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md). For instructions on how to upgrade, see [How do I upgrade from the retired Trivy vulnerability assessment to the AWS vulnerability assessment powered by Microsoft Defender Vulnerability Management?](/azure/defender-for-cloud/faq-defender-for-containers#how-do-i-upgrade-from-the-retired-trivy-vulnerability-assessment-to-the-aws-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-)
+ ### Recommendations released for preview: four recommendations for Azure Stack HCI resource type

February 8, 2024
Support for Windows images was released in public preview as part of Vulnerabili
December 13, 2023
-The [container vulnerability assessment powered by Trivy](defender-for-containers-vulnerability-assessment-elastic.md) is now on a retirement path to be completed by February 13. This capability is now deprecated and will continue to be available to existing customers using this capability until February 13. We encourage customers using this capability to upgrade to the new [AWS container vulnerability assessment powered by Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md) by February 13.
+The container vulnerability assessment powered by Trivy is now on a retirement path to be completed by February 13. This capability is now deprecated and will continue to be available to existing customers using this capability until February 13. We encourage customers using this capability to upgrade to the new [AWS container vulnerability assessment powered by Microsoft Defender Vulnerability Management](agentless-vulnerability-assessment-aws.md) by February 13.
### Agentless container posture for AWS in Defender for Containers and Defender CSPM (Preview)
defender-for-cloud Support Matrix Defender For Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/support-matrix-defender-for-containers.md
This article summarizes support information for Container capabilities in Microsoft Defender for Cloud.

> [!NOTE]
-> Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> - Specific features are in preview. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> - Only the versions of AKS, EKS and GKE supported by the cloud vendor are officially supported by Defender for Cloud.
## Azure
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
| Security posture management | Docker CIS | EC2 | Preview | - | Log Analytics agent | Defender for Servers Plan 2 |
| Security posture management | Control plane hardening | - | - | - | - | - |
| Security posture management | Kubernetes data plane hardening | EKS | GA| - | Azure Policy for Kubernetes | Defender for Containers |
-| Vulnerability Assessment | [Deprecated] Registry scan (powered by Trivy)| ECR | Preview | - | Agentless | Defender for Containers |
| [Vulnerability assessment](agentless-vulnerability-assessment-aws.md) | Agentless registry scan (powered by Microsoft Defender Vulnerability Management) [supported packages](#registries-and-images-support-for-awsvulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)| ECR | Preview | Preview | Agentless | Defender for Containers or Defender CSPM |
| [Vulnerability assessment](agentless-vulnerability-assessment-aws.md) | Agentless/agent-based runtime (powered by Microsoft Defender Vulnerability Management) [supported packages](#registries-and-images-support-for-awsvulnerability-assessment-powered-by-microsoft-defender-vulnerability-management)| EKS | Preview | Preview | Agentless **OR/AND** Defender agent | Defender for Containers or Defender CSPM |
| Runtime protection| Control plane | EKS | Preview | Preview | Agentless | Defender for Containers |
Learn how to [use Azure Private Link to connect networks to Azure Monitor](../az
| Operating systems | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6-9 <br> • CentOS 6-9<br> • Oracle Linux 6-9 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap, openSUSE Tumbleweed <br> • SUSE Enterprise Linux 11-15 <br> • Debian GNU/Linux 7-12 <br> • Google Distroless (based on Debian GNU/Linux 7-12)<br> • Ubuntu 12.04-22.04 <br> • Fedora 31-37<br> • Mariner 1-2<br> • Windows server 2016, 2019, 2022|
| Language specific packages <br><br> | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
-### Kubernetes distributions/configurations support - AWS
+### Kubernetes distributions/configurations support for AWS - Runtime threat protection
| Aspect | Details |
|--|--|
Outbound proxy without authentication and outbound proxy with basic authenticati
| Operating systems | **Supported** <br> • Alpine Linux 3.12-3.16 <br> • Red Hat Enterprise Linux 6-9 <br> • CentOS 6-9<br> • Oracle Linux 6-9 <br> • Amazon Linux 1, 2 <br> • openSUSE Leap, openSUSE Tumbleweed <br> • SUSE Enterprise Linux 11-15 <br> • Debian GNU/Linux 7-12 <br> • Google Distroless (based on Debian GNU/Linux 7-12)<br> • Ubuntu 12.04-22.04 <br> • Fedora 31-37<br> • Mariner 1-2<br> • Windows server 2016, 2019, 2022|
| Language specific packages <br><br> | **Supported** <br> • Python <br> • Node.js <br> • .NET <br> • JAVA <br> • Go |
-### Kubernetes distributions/configurations support - GCP
+### Kubernetes distributions/configurations support for GCP - Runtime threat protection
| Aspect | Details |
|--|--|
defender-for-iot Alert Engine Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alert-engine-messages.md
Alert severities on this page are listed by the severity as shown in the Azure p
| **Malware alerts** | Triggered when the Malware engine detects malicious network activity. For example, the engine detects a known attack such as Conficker. |
| **Anomaly alerts** | Triggered when the Anomaly engine detects a deviation. For example, a device is performing network scans but isn't defined as a scanning device. |
+Defender for IoT's alert detection policy steers the different alert engines to trigger alerts based on business impact and network context, and to reduce low-value, IT-related alerts. For more information, see [Focused alerts in OT/IT environments](alerts.md#focused-alerts-in-otit-environments).
+ ## Supported alert categories

Each alert has one of the following categories:
defender-for-iot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md
For more information, see:
Alert options also differ depending on your location and user role. For more information, see [Azure user roles and permissions](roles-azure.md) and [On-premises users and roles](roles-on-premises.md).
+## Focused alerts in OT/IT environments
+
+Organizations where sensors are deployed between OT and IT networks deal with many alerts, related to both OT and IT traffic. The volume of alerts, some of which are irrelevant, can cause alert fatigue and affect overall performance. To address these challenges, Defender for IoT's detection policy steers its different [alert engines](alert-engine-messages.md#supported-alert-types) to focus on alerts with business impact and relevance to an OT network, and to reduce low-value, IT-related alerts. For example, the **Unauthorized internet connectivity** alert is highly relevant in an OT network, but has relatively low value in an IT network.
+
+To focus the alerts triggered in these environments, all alert engines, except for the *Malware* engine, trigger alerts only if they detect a related OT subnet or protocol.
+However, to maintain triggering of alerts that indicate critical scenarios:
+
+- The *Malware* engine triggers malware alerts regardless of whether the alerts are related to OT or IT devices.
+- The other engines include exceptions for critical scenarios. For example, the *Operational* engine triggers alerts related to sensor traffic, regardless of whether the alert is related to OT or IT traffic.
+ ## Managing OT alerts in a hybrid environment

Users working in hybrid environments might be managing OT alerts in [Defender for IoT](https://portal.azure.com/#view/Microsoft_Azure_IoT_Defender/IoTDefenderDashboard/~/Getting_started) on the Azure portal, the OT sensor, and an on-premises management console.
defender-for-iot Configure Sensor Settings Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md
For a bandwidth cap, define the maximum bandwidth you want the sensor to use for
To configure an NTP server for your sensor from the Azure portal, define an IP/Domain address of a valid IPv4 NTP server using port 123.
-### Subnet
+### Local subnets
-To focus the Azure device inventory on devices that are in your IoT/OT scope, you need to manually edit the subnet list to include only the locally monitored subnets that are in your IoT/OT scope. Once the subnets are configured, the network location of the devices is shown in the *Network location* (Public preview) column in the Azure device inventory. All of the devices associated with the listed subnets are displayed as *local*, while devices associated with detected subnets not included in the list are displayed as *routed*.
+To focus the Azure device inventory on devices that are in your OT scope, you need to manually edit the subnet list to include only the locally monitored subnets that are in your OT scope.
-**To configure your subnets in the Azure portal**:
+Subnets in the subnet list are automatically configured as ICS subnets, which means that Defender for IoT recognizes these subnets as OT networks. You can edit this setting when you [configure the subnets](#configure-subnets-in-the-azure-portal).
+
+Once the subnets are configured, the network location of the devices is shown in the *Network location* (Public preview) column in the Azure device inventory. All of the devices associated with the listed subnets are displayed as *local*, while devices associated with detected subnets not included in the list are displayed as *routed*.
+
+#### Configure subnets in the Azure portal
1. In the Azure portal, go to **Sites and sensors** > **Sensor settings**.
-1. Under **Subnets**, review the configured subnets. To focus the device inventory and view local devices in the inventory, delete any subnets that are not in your IoT/OT scope by selecting the options menu (...) on any subnet you want to delete.
+1. Under **Local subnets**, review the configured subnets. To focus the device inventory and view local devices in the inventory, delete any subnets that are not in your IoT/OT scope by selecting the options menu (...) on any subnet you want to delete.
1. To modify additional settings, select any subnet and then select **Edit** for the following options (an example import format is sketched after this list):

    - Select **Import subnets** to import a comma-separated list of subnet IP addresses and masks. Select **Export subnets** to export a list of currently configured data, or **Clear all** to start from scratch.
    - Enter values in the **IP Address**, **Mask**, and **Name** fields to add subnet details manually. Select **Add subnet** to add additional subnets as needed.
+
+ - **ICS Subnet** is on by default, which means that Defender for IoT recognizes the subnet as an OT network. To mark a subnet as non-ICS, toggle off **ICS Subnet**.
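The import file format isn't spelled out here, so as a purely hypothetical illustration, a comma-separated subnet list with IP address, mask, and name fields might look like this:

```bash
# Hypothetical contents of a subnet list to import (IP address, mask, name per line).
cat <<'EOF' > subnets.csv
10.10.10.0,255.255.255.0,Packaging-Line
10.10.20.0,255.255.255.0,Assembly-Line
192.168.5.0,255.255.255.0,SCADA-Servers
EOF
```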
### VLAN naming
defender-for-iot How To Control What Traffic Is Monitored https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-control-what-traffic-is-monitored.md
While the OT network sensor automatically learns the subnets in your network, we
|**Segregated** | Select to show this subnet separately when displaying the device map according to Purdue level. |
| **Remove subnet** | Select to remove any subnets that aren't related to your IoT/OT network scope.|
- In the subnet grid, subnets marked as **ICS subnet** are recognized as OT activity or protocols. This option is read-only in this grid, but you can [manually define a subnet as ICS](#manually-define-a-subnet-as-ics) if there's an OT subnet not being recognized correctly.
+ In the subnet grid, subnets marked as **ICS subnet** are recognized as OT networks. This option is read-only in this grid, but you can [manually define a subnet as ICS](#manually-define-a-subnet-as-ics) if there's an OT subnet not being recognized correctly.
1. When you're done, select **Save** to save your updates.
While the OT network sensor automatically learns the subnets in your network, we
If you have an OT subnet that isn't being marked automatically as an ICS subnet by the sensor, edit the device type for any of the devices in the relevant subnet to an ICS or IoT device type. The subnet will then be automatically marked by the sensor as an ICS subnet.

> [!NOTE]
-> To manually change the subnet to be marked as ICS, the device type must be changed in device inventory in the OT sensor, and not from the Azure portal.
+> To manually change the subnet to be marked as ICS, change the device type in the device inventory in the OT sensor. In the Azure portal, subnets in the subnet list are marked as ICS by default in the [sensor settings](configure-sensor-settings-portal.md#local-subnets).
**To change the device type to manually update the subnet**:
defender-for-iot How To Manage Cloud Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-cloud-alerts.md
For more information, see [Azure user roles and permissions for Defender for IoT
| **Destination device address** | The IP address of the destination device. |
| **Destination device** | The destination IP or MAC address, or the destination device name.|
| **First detection** | The first time the alert was detected in the network. |
- | **ID** |The unique alert ID.|
+ | **Id** |The unique alert ID, aligned with the ID on the sensor console.<br><br>**Note:** If the [alert was merged with other alerts](alerts.md#alert-management-options) from sensors that detected the same alert, the Azure portal displays the alert ID of the first sensor that generated the alerts. |
| **Last activity** | The last time the alert was changed, including manual updates for severity or status, or automated changes for device updates or device/alert deduplication |
| **Protocol** | The protocol detected in the network traffic for the alert.|
| **Sensor** | The sensor that detected the alert.|
defender-for-iot How To View Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-view-alerts.md
For more information, see [On-premises users and roles for OT monitoring with De
| **Last detection** | The last time the alert was detected. <br><br>- If an alert's status is **New**, and the same traffic is seen again, the **Last detection** time is updated for the same alert. <br>- If the alert's status is **Closed** and traffic is seen again, the **Last detection** time is *not* updated, and a new alert is triggered. |
| **Status** |The alert status: *New*, *Active*, *Closed*<br><br>For more information, see [Alert statuses and triaging options](alerts.md#alert-statuses-and-triaging-options).|
| **Source Device** | The source device IP address, MAC, or device name. |
+ | **Id** | The unique alert ID, aligned with the ID on the Azure portal.<br><br> **Note:** If the [alert was merged with other alerts](alerts.md#alert-management-options) from sensors that detected the same alert, the Azure portal displays the alert ID of the first sensor that generated the alerts. |
1. To view more details, select the :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/edit-columns-icon.png" border="false"::: **Edit Columns** button.
defender-for-iot Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/whats-new.md
Features released earlier than nine months ago are described in the [What's new
> Noted features listed below are in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
>
+## February 2024
+
+|Service area |Updates |
+|||
+| **OT networks** | - [Focused alerts in OT/IT environments](#focused-alerts-in-otit-environments)<br>- [Alert ID now aligned on the Azure portal and sensor console](#alert-id-now-aligned-on-the-azure-portal-and-sensor-console)<br>- [New setting to focus local networks in the device inventory](#new-setting-to-focus-local-networks-in-the-device-inventory) |
+
+### Focused alerts in OT/IT environments
+
+Organizations where sensors are deployed between OT and IT networks deal with many alerts, related to both OT and IT traffic. The volume of alerts, some of which are irrelevant, can cause alert fatigue and affect overall performance.
+
+To address these challenges, we've updated Defender for IoT's detection policy to automatically trigger alerts based on business impact and network context, and to reduce low-value, IT-related alerts.
+
+For more information, see [Focused alerts in OT/IT environments](alerts.md#focused-alerts-in-otit-environments).
+
+### Alert ID now aligned on the Azure portal and sensor console
+
+The alert ID in the **Id** column on the Azure portal **Alerts** page now displays the same alert ID as the sensor console. [Learn more about alerts on the Azure portal](how-to-manage-cloud-alerts.md#view-alerts-on-the-azure-portal).
+
+> [!NOTE]
+> If the [alert was merged with other alerts](alerts.md#alert-management-options) from sensors that detected the same alert, the Azure portal displays the alert ID of the first sensor that generated the alerts.
+
+### New setting to focus local networks in the device inventory
+
+To better focus the Azure device inventory on devices that are in your OT scope, we've added the **ICS** toggle in the **Subnets** sensor setting. This toggle marks the subnet as a subnet with OT networks. [Learn more](configure-sensor-settings-portal.md#configure-subnets-in-the-azure-portal).
## January 2024

|Service area |Updates |
|||
-| **OT networks** | - [Sensor update in Azure portal now supports selecting a specific version](#sensor-update-in-azure-portal-now-supports-selecting-a-specific-version) <br> |
+| **OT networks** | [Sensor update in Azure portal now supports selecting a specific version](#sensor-update-in-azure-portal-now-supports-selecting-a-specific-version) |
### Sensor update in Azure portal now supports selecting a specific version
See and filter which devices are defined as *local* or *routed*, according to yo
Configure your subnets either on the Azure portal or on your OT sensor. For more information, see:

- [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
+- [Configure OT sensor settings from the Azure portal](configure-sensor-settings-portal.md#local-subnets)
- [Fine tune your subnet list](how-to-control-what-traffic-is-monitored.md#fine-tune-your-subnet-list)

### Configure OT sensor settings from the Azure portal (Public preview)
dms Concepts Migrate Azure Mysql Replicate Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/concepts-migrate-azure-mysql-replicate-changes.md
# MySQL to Azure Database for MySQL Data Migration - MySQL Replicate Changes
-Running a Replicate changes Migration, with our offline scenario with "Enable Transactional Consistency," enables businesses to migrate their databases to Azure while the databases remain operational. In other words, migrations can be completed with minimum downtime for critical applications, limiting the impact on service level availability and inconvenience to their end customers.
+Running a Replicate Changes migration, combined with our offline scenario with "Enable Transactional Consistency," enables businesses to migrate their databases to Azure while the databases remain operational. In other words, migrations can be completed with minimum downtime for critical applications, limiting the impact on service level availability and inconvenience to their end customers.
> [!NOTE]
> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
To complete the replicate changes migration successfully, ensure that the follow
- When performing a replicate changes migration, the name of the database on the target server must be the same as the name on the source server.
- Support is limited to the ROW binlog format.
-- DDL changes replication is supported only when migrating to a v8.0 Azure Database for MySQL Flexible Server target server and when you have selected the option for **Replicate data definition and administration statements for selected objects** on DMS UI. The replication feature supports replicating data definition and administration statements that occur after the initial load and are logged in the binary log to the target.
+- DDL changes replication is supported only when you have selected the option for **Replicate data definition and administration statements for selected objects** on DMS UI. The replication feature supports replicating data definition and administration statements that occur after the initial load and are logged in the binary log to the target.
- Renaming databases or tables is not supported when replicating changes.

## Next steps
dms Tutorial Mysql Azure External Replicate Changes Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-mysql-azure-external-replicate-changes-portal.md
+
+ Title: "Tutorial: Migrate from MySQL to Azure Database for MySQL - Flexible Server using DMS Replicate Changes via the Azure portal"
+
+description: "Learn to perform an online migration from MySQL to Azure Database for MySQL - Flexible Server by using Azure Database Migration Service Replicate Changes Scenario"
+++ Last updated : 08/07/2023+++
+ - sql-migration-content
++
+# Tutorial: Migrate from MySQL to Azure Database for MySQL - Flexible Server online using DMS Replicate Changes scenario
+
+> [!NOTE]
+> This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
+
+You can migrate your on-premises or other cloud services MySQL Server to Azure Database for MySQL – Flexible Server by using Azure Database Migration Service (DMS), a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms. In this tutorial, we'll perform an online migration of a sample database from an on-premises MySQL server to an Azure Database for MySQL - Flexible Server (both running version 5.7) using a DMS Replicate Changes migration activity.
+
+Running a Replicate Changes migration, combined with our offline scenario with "Enable Transactional Consistency," enables businesses to migrate their databases to Azure while the databases remain operational. In other words, migrations can be completed with minimum downtime for critical applications, limiting the impact on service level availability and inconvenience to their end customers.
+
+In this tutorial, you'll learn how to:
+
+> [!div class="checklist"]
+>
+> * Create a MySQL Replicate Changes migration project in DMS.
+> * Run the Replicate Changes migration.
+> * Monitor the migration.
+> * Perform post-migration steps.
+
+## Prerequisites
+
+To complete this tutorial, you need to:
+
+* Use the MySQL command line tool of your choice to determine whether **log_bin** is enabled on the source server. The Binlog isn't always turned on by default, so verify that it's enabled before starting the migration. To determine whether log_bin is enabled on the source server, run the command: **SHOW VARIABLES LIKE 'log_bin'**.
+* Ensure that the user has **"REPLICATION_APPLIER"** or **"BINLOG_ADMIN"** permission on target server for applying the bin log.
+* Ensure that the user has **"REPLICATION SLAVE"** permission on the target server.
+* Ensure that the user has **"REPLICATION CLIENT"** and **"REPLICATION SLAVE"** permission on the source server for reading and applying the bin log.
+* Run an offline migration scenario with **"Enable Transactional Consistency"** to get the bin log file and position.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/01-offline-migration-bin-log-pos.png" alt-text="Screenshot of binlog position of an Azure Database Migration Service offline migration.":::
+* If you're targeting a replicate changes migration, configure the **binlog_expire_logs_seconds** parameter on the source server to ensure that binlog files aren't purged before the replica commits the changes. We recommend starting with a value of at least two days. After a successful cutover, the value can be reset. A sketch of these prerequisite checks follows this list.
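A rough sketch of checking and setting these prerequisites with the `mysql` client is shown below; the host, admin account, and grantee are placeholders, and note that `binlog_expire_logs_seconds` applies to MySQL 8.0 while MySQL 5.7 uses `expire_logs_days` instead.

```bash
# Placeholder host and accounts - replace with your own source server and users.
SRC_HOST="source-mysql.example.com"

# Confirm binary logging is enabled and uses the ROW format on the source server.
mysql -h "$SRC_HOST" -u admin -p -e "SHOW VARIABLES LIKE 'log_bin'; SHOW VARIABLES LIKE 'binlog_format';"

# Keep binlog files for at least two days (172800 seconds) so they aren't purged before cutover (MySQL 8.0).
mysql -h "$SRC_HOST" -u admin -p -e "SET GLOBAL binlog_expire_logs_seconds = 172800;"

# Grant the replication permissions called out above to an existing migration user (placeholder name).
mysql -h "$SRC_HOST" -u admin -p -e "GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'dms_user'@'%';"
```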
+
+## Limitations
+
+As you prepare for the migration, be sure to consider the following limitations.
+
+* When performing a replicate changes migration, the name of the database on the target server must be the same as the name on the source server.
+* Support is limited to the ROW binlog format.
+* DDL changes replication is supported only when you have selected the option for **Replicate data definition and administration statements for selected objects** on DMS UI. The replication feature supports replicating data definition and administration statements that occur after the initial load and are logged in the binary log to the target.
+* Renaming databases or tables is not supported when replicating changes.
+
+### Create a Replicate Changes migration project
+
+To create a Replicate Changes migration project, perform the following steps.
+
+1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/10-dms-search.png" alt-text="Screenshot of a Locate all instances of Azure Database Migration Service.":::
+
+2. In the search results, select the DMS instance that you created for running the preliminary offline migration, and then select **+ New Migration Project**.
+
+ :::image type="content" source="media/tutorial-azure-mysql-single-to-flex-online/11-select-create.png" alt-text="Screenshot of a Select a new migration project.":::
+
+3. On the **New migration project** page, specify a name for the project. In the **Source server type** selection box, select **MySQL**; in the **Target server type** selection box, select **Azure Database For MySQL - Flexible Server**; in the **Migration activity type** selection box, select **Replicate changes**; and then select **Create and run activity**.
+
+### Configure the migration project
+
+To configure your DMS migration project, perform the following steps.
+
+1. On the **Select source** screen, input the source server name, server port, username, and password to your source server.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/02-replicate-changes-select-source.png" alt-text="Screenshot of an Add source details screen.":::
+
+2. Select **Next : Select target>>**, and then, on the **Select target** screen, locate the target server based on the subscription, location, and resource group. The user name is auto-populated; provide the password for the target flexible server.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/03-replicate-changes-select-target.png" alt-text="Screenshot of a Select target.":::
+
+3. Select **Next : Select binlog>>**, and then, on the **Select binlog** screen, input the binlog file name and binlog position as captured in the earlier run of offline migration scenario.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/04-replicate-changes-select-binlog.png" alt-text="Screenshot of a Select binlog.":::
+
+4. Select **Next : Select databases>>**, and then, on the **Select databases** tab, select the server database objects that you want to migrate.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/05-replicate-changes-select-db.png" alt-text="Screenshot of a Select database.":::
+
+5. Select **Next : Select tables>>** to navigate to the **Select tables** tab. Select all tables to be migrated.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/06-replicate-changes-select-table.png" alt-text="Screenshot of a Select table.":::
+
+6. After configuring for schema migration, select **Review and start migration**.
+ > [!NOTE]
+ > You only need to navigate to the **Configure migration settings** tab if you are trying to troubleshoot failing migrations.
+
+7. On the **Summary** tab, in the **Activity name** text box, specify a name for the migration activity, and then review the summary to ensure that the source and target details match what you previously specified.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/07-replicate-changes-summary.png" alt-text="Screenshot of a Summary.":::
+
+8. Select **Start migration**.
+
+ The migration activity window appears, and the Status of the activity is Initializing. The Status changes to Running when the table migrations start.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/08-replicate-changes-start.png" alt-text="Screenshot of a Running status.":::
+
+### Monitor the migration
+
+1. Monitor the **Seconds behind source** value, and as soon as it nears 0, start the cutover by navigating to the **Start Cutover** menu tab at the top of the migration activity screen.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/09-replicate-changes-cutover-open.png" alt-text="Screenshot of a Perform cutover.":::
+
+2. Follow the steps in the cutover window before you're ready to perform a cutover. After completing all steps, select **Confirm**, and then select **Apply**.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/10-replicate-changes-cutover-confirm.png" alt-text="Screenshot of a Perform cutover confirmation.":::
+Once the cutover is completed, you're all set to perform post-migration validations and steps.
+ :::image type="content" source="media/tutorial-mysql-to-azure-replicate-changes/11-replicate-changes-cutover-complete.png" alt-text="Screenshot of a Perform cutover completed.":::
+
+## Perform post-migration activities
+
+When the migration has finished, be sure to complete the following post-migration activities.
+
+* Perform sanity testing of the application against the target database to certify the migration.
+* Update the connection string to point to the new flexible server.
+* Delete the source server after you have ensured application continuity.
+* To clean up the DMS resources, perform the following steps:
+ 1. In the Azure portal, select **All services**, search for Azure Database Migration Service, and then select **Azure Database Migration Services**.
+ 2. Select your migration service instance from the search results, and then select **Delete service**.
+ 3. In the confirmation dialog box, in the **TYPE THE DATABASE MIGRATION SERVICE NAME** textbox, specify the name of the instance, and then select **Delete**.
+
+## Migration best practices
+
+When performing a migration, be sure to consider the following best practices.
+
+* As part of discovery and assessment, take the server SKU, CPU usage, storage, database sizes, and extensions usage as some of the critical data to help with migrations.
+* Perform test migrations before migrating for production:
+ * Test migrations are important for ensuring that you cover all aspects of the database migration, including application testing. The best practice is to begin by running a migration entirely for testing purposes. After a newly started migration enters the Replicate Data Changes phase with minimal lag, only use your Flexible Server target for running test workloads. Use that target for testing the application to ensure expected performance and results. If you're migrating to a higher MySQL version, test for application compatibility.
+ * After testing is completed, you can migrate the production databases. At this point, you need to finalize the day and time of production migration. Ideally, there's low application use at this time. All stakeholders who need to be involved should be available and ready. The production migration requires close monitoring. For an online migration, the replication must be completed before you perform the cutover, to prevent data loss.
+* Redirect all dependent applications to access the new primary database and make the source server read-only. Then, open the applications for production usage.
+* After the application starts running on the target flexible server, monitor the database performance closely to see if performance tuning is required.
+
+## Next steps
+
+* For information about Azure Database for MySQL - Flexible Server, see [Overview - Azure Database for MySQL Flexible Server](./../mysql/flexible-server/overview.md).
+* For information about Azure Database Migration Service, see [What is Azure Database Migration Service?](./dms-overview.md).
+* For information about known issues and limitations when migrating to Azure Database for MySQL - Flexible Server using DMS, see [Known Issues With Migrations To Azure Database for MySQL - Flexible Server](./known-issues-azure-mysql-fs-online.md).
+* For information about known issues and limitations when performing migrations using DMS, see [Common issues - Azure Database Migration Service](./known-issues-troubleshooting-dms.md).
+* For troubleshooting source database connectivity issues while using DMS, see article [Issues connecting source databases](./known-issues-troubleshooting-dms-source-connectivity.md).
energy-data-services Concepts Entitlements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md
The entitlement service of Azure Data Manager for Energy allows you to create gr
Different groups and associated user entitlements must be set for every *new data partition*, even in the same Azure Data Manager for Energy instance.
+## Types of OSDU groups
The entitlement service enables three use cases for authorization: ### Data groups
The entitlement service enables three use cases for authorization:
### User groups

- User groups are used for hierarchical grouping of user and service groups.
- These groups start with the word "users," such as `users.datalake.viewers` and `users.datalake.editors`.
-- Some user groups are created by default when a data partition is provisioned. For information on these groups and their hierarchy scope, see [Bootstrapped OSDU entitlement groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md).
+ **Nested hierarchy**
- If user_1 is part of data_group_1, and data_group_1 is added as a member of user_group_1, OSDU code checks for the nested membership and authorizes user_1 to access the entitlements for user_group_1. This is explained in [OSDU Entitlement Check API](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/service/EntitlementsAndCacheServiceImpl.java?ref_type=heads#L105) and [OSDU Retrieve Group API](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/master/provider/entitlements-v2-azure/src/main/java/org/opengroup/osdu/entitlements/v2/azure/spi/gremlin/retrievegroup/RetrieveGroupRepoGremlin.java#:~:text=public%20ParentTreeDto%20loadAllParents(EntityNode%20memberNode)%20%7B).
- You can add individual users to a `user group`. The `user group` is then added to a `data group`. The data group is added to the ACL of the data record. It enables abstraction for the data groups because individual users don't need to be added one by one to the data group. Instead, you can add users to the `user group`. Then you can use the `user group` repeatedly for multiple `data groups`. The nested structure helps provide scalability to manage memberships in OSDU.
-#### Peculiarity of `users@` group
+## Default groups
+- Some OSDU groups are created by default when a data partition is provisioned.
+- Data groups of `data.default.viewers` and `data.default.owners` are created by default.
+- Service groups to view, edit, and admin each service such as `service.entitlement.admin` and `service.legal.editor` are created by default.
+- User groups of `users`, `users.datalake.viewers`, `users.datalake.editors`, `users.datalake.admins`, `users.datalake.ops`, and `users.data.root` are created by default.
+- The chart of default members and groups in [Bootstrapped OSDU entitlement groups](https://community.opengroup.org/osdu/platform/deployment-and-operations/infra-azure-provisioning/-/blob/master/docs/osdu-entitlement-roles.md) shows the column header groups as members of the row header groups. For example, the `users` group is a member of `data.default.viewers` and `data.default.owners` by default. `users.datalake.admins` and `users.datalake.ops` are members of the `service.entitlement.admin` group.
+- The service principal (that is, the `client-id` or `app-id`) is the default owner of all the groups.
+
+### Peculiarity of `users@` group
- There's one exception to this naming rule: the "users" group. It gets created when a new data partition is provisioned, and its name follows the pattern of `users@{partition}.{domain}`.
- It has the list of all the users with any type of access in a specific data partition. Before you add a new user to any entitlement groups, you also need to add the new user to the `users@{partition}.{domain}` group.
-#### Peculiarity of `users.data.root@` group
+### Peculiarity of `users.data.root@` group
- The users.data.root entitlement group is the default member of all data groups when the groups are created. If you try to remove users.data.root from any data group, you get an error because this membership is enforced by OSDU.
- users.data.root automatically becomes the default and permanent owner of all data records when the records get created in the system, as explained in [OSDU validate owner access API](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/service/DataAuthorizationService.java?ref_type=heads#L66) and [OSDU users data root check API](https://community.opengroup.org/osdu/platform/system/storage/-/blob/master/storage-core/src/main/java/org/opengroup/osdu/storage/service/EntitlementsAndCacheServiceImpl.java#L98). As a result, irrespective of the user's OSDU membership, the system checks whether the user is a "DataManager", that is, part of the data.root group, to grant access to the data record.
- The default membership in users.data.root is only the `app-id` that is used to set up the instance. You can add other users explicitly to this group to give them default access to data records.
For each OSDU group, you can add a user as either an OWNER or a MEMBER:
## Entitlement APIs
-For a full list of Entitlement API endpoints, see [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api). A few illustrations of how to use Entitlement APIs are available in [Manage users](how-to-manage-users.md).
+For a full list of Entitlement API endpoints, see [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/release/0.15/docs). A few illustrations of how to use Entitlement APIs are available in [Manage users](how-to-manage-users.md).
> [!NOTE]
> The OSDU documentation refers to v1 endpoints, but the scripts noted in this documentation refer to v2 endpoints, which work and have been successfully validated.
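For example, a minimal sketch of calling the v2 groups endpoint to list the groups available to the caller; the instance URI and data partition ID are placeholders, and `$TOKEN` holds a valid access token.

```bash
# Placeholders: replace <adme-url> and <data-partition-id> with your instance values.
curl -X GET "https://<adme-url>/api/entitlements/v2/groups" \
  -H "Authorization: Bearer $TOKEN" \
  -H "data-partition-id: <data-partition-id>" \
  -H "Content-Type: application/json"
```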
energy-data-services How To Generate Auth Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-generate-auth-token.md
The `redirect-uri` of your app, where your app sends and receives the authentica
### Find the adme-url for your Azure Data Manager for Energy instance
-1. Create an [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md).
+1. Create an [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md) using the `client-id` generated above.
1. Go to your Azure Data Manager for Energy **Overview** page on the Azure portal. 1. On the **Essentials** pane, copy the URI.
You have two ways to get the list of data partitions in your Azure Data Manager
## Generate the client-id auth token
-Run the following curl command in Azure Cloud Bash after you replace the placeholder values with the corresponding values found earlier in the previous steps. The access token in the response is the `client-id` auth token.
+Run the following curl command in [Azure Cloud Bash](../cloud-shell/overview.md) after you replace the placeholder values with the corresponding values found earlier in the previous steps. The access token in the response is the `client-id` auth token.
**Request format**
Generating a user's auth token is a two-step process.
The first step to get an access token for many OpenID Connect (OIDC) and OAuth 2.0 flows is to redirect the user to the Microsoft identity platform `/authorize` endpoint. Microsoft Entra ID signs the user in and requests their consent for the permissions your app requests. In the authorization code grant flow, after consent is obtained, Microsoft Entra ID returns an authorization code to your app that it can redeem at the Microsoft identity platform `/token` endpoint for an access token.
-1. After you replace the parameters, you can paste the request in the URL of any browser and select Enter.
-1. Sign in to your Azure portal if you aren't signed in already.
-1. You might see the "Hmmm...can't reach this page" error message in the browser. You can ignore it.
-
- :::image type="content" source="media/how-to-generate-auth-token/localhost-redirection-error.png" alt-text="Screenshot of localhost redirection.":::
-
-1. The browser redirects to `http://localhost:8080/?code={authorization code}&state=...` upon successful authentication.
-1. Copy the response from the URL bar of the browser and fetch the text between `code=` and `&state`.
-1. Keep this `authorization-code` handy for future use.
-
+1. Prepare the request format using the parameters.
#### Request format ```bash
- https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/authorize?client_id={client-id}
+ https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/authorize?client_id=<client-id>
&response_type=code
- &redirect_uri={redirect-uri}
+ &redirect_uri=<redirect-uri>
&response_mode=query
- &scope={client-id}%2f.default&state=12345&sso_reload=true
+ &scope=<client-id>%2f.default&state=12345&sso_reload=true
```
-
-| Parameter | Required? | Description |
-| | | |
-|tenant-id|Required|Name of your Microsoft Entra tenant.|
-| client-id |Required |The application ID assigned to your app in the [Azure portal](https://portal.azure.com). |
-| response_type |Required |The response type, which must include `code` for the authorization code flow. You can receive an ID token if you include it in the response type, such as `code+id_token`, and in this case, the scope needs to include `openid`.|
-| redirect_uri |Required |The redirect URI of your app, where your app sends and receives the authentication responses. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL encoded. |
-| scope |Required |A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application needs a *refresh token* for extended access to resources. The client ID indicates the token issued is intended for use by an Azure Active Directory B2C registered client. The `https://{tenant-name}/{app-id-uri}/{scope}` indicates a permission to protected resources, such as a web API. |
-| response_mode |Recommended |The method that you use to send the resulting authorization code back to your app. It can be `query`, `form_post`, or `fragment`. |
-| state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used to prevent cross-site request forgery (CSRF) attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. |
+2. After you replace the parameters, you can paste the request in the URL of any browser and select Enter.
+3. Sign in to your Azure portal if you aren't signed in already.
+4. You might see the "Hmmm...can't reach this page" error message in the browser. You can ignore it.
+
+ :::image type="content" source="media/how-to-generate-auth-token/localhost-redirection-error.png" alt-text="Screenshot of localhost redirection.":::
+
+5. The browser redirects to `http://localhost:8080/?code={authorization code}&state=...` upon successful authentication.
+6. Copy the response from the URL bar of the browser and fetch the text between `code=` and `&state`.
#### Sample response
The first step to get an access token for many OpenID Connect (OIDC) and OAuth 2
http://localhost:8080/?code=0.BRoAv4j5cvGGr0...au78f&state=12345&session.... ```
-> [!NOTE]
-> The browser might say that the site can't be reached, but it should still have the authorization code in the URL bar.
+7. Keep this `authorization-code` handy for future use.
|Parameter| Description| | | |
The second step is to get the auth token and the refresh token. Your app uses th
#### Request format ```bash
- curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d 'client_id={client-id}
- &scope={client-id}%2f.default openid profile offline_access
- &code={authorization-code}
- &redirect_uri={redirect-uri}
+ curl -X POST -H "Content-Type: application/x-www-form-urlencoded" -d 'client_id=<client-id>
+ &scope=<client-id>%2f.default openid profile offline_access
+ &code=<authorization-code>
+ &redirect_uri=<redirect-uri>
&grant_type=authorization_code
- &client_secret={client-secret}' 'https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token'
+ &client_secret=<client-secret>' 'https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token'
```
-|Parameter |Required |Description |
-||||
-|tenant | Required | The `{tenant-id}` value in the path of the request can be used to control who can sign in to the application.|
-|client_id | Required | The application ID assigned to your app upon registration. |
-|scope | Required | A space-separated list of scopes. The scopes that your app requests in this leg must be equivalent to or a subset of the scopes that it requested in the first (authorization) leg. If the scopes specified in this request span multiple resource servers, the v2.0 endpoint returns a token for the resource specified in the first scope. |
-|code |Required |The authorization code that you acquired in the first step of the flow. |
-|redirect_uri | Required |The same redirect URI value that was used to acquire the authorization code. |
-|grant_type | Required | Must be `authorization_code` for the authorization code flow. |
-|client_secret | Required | The client secret that you created in the app registration portal for your app. It shouldn't be used in a native app because client secrets can't be reliably stored on devices. It's required for web apps and web APIs, which have the ability to store the client secret securely on the server side.|
- #### Sample response ```bash
energy-data-services How To Manage Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md
The Azure object ID (OID) is the Microsoft Entra user OID.
If you try to directly use your own access token for adding entitlements, it results in a 401 error. The `client-id` access token must be used to add the first set of users in the system. Those users (with admin access) can then manage more users with their own access token. 1. Use the `client-id` access token to do the following steps by using the commands outlined in the following sections:
- 1. Add the user to the `users@<data-partition-id>.<domain>` OSDU group.
- 2. Add the user to the `users.datalake.ops@<data-partition-id>.<domain>` OSDU group to give access of all the service groups.
- 3. Add the user to the `users.data.root@<data-partition-id>.<domain>` OSDU group to give access of all the data groups.
+    1. Add the user to the `users@<data-partition-id>.<domain>` OSDU group with the OWNER role.
+    2. Add the user to the `users.datalake.ops@<data-partition-id>.<domain>` OSDU group with the OWNER role to give access to all the service groups.
+    3. Add the user to the `users.data.root@<data-partition-id>.<domain>` OSDU group with the OWNER role to give access to all the data groups.
1. The user becomes the admin of the data partition. The admin can then add or remove more users to the required entitlement groups:
   1. Get the admin's auth token by using [Generate user access token](how-to-generate-auth-token.md#generate-the-user-auth-token) and by using the same `client-id` and `client-secret` values.
   1. Get the OSDU group, such as `service.legal.editor@<data-partition-id>.<domain>`, to which you want to add more users by using the admin's access token.
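As an illustration only, the first of those membership additions might look like the following curl sketch. It assumes the standard OSDU Entitlements v2 API path on an Azure Data Manager for Energy endpoint; substitute your own instance host, data partition ID, and the `client-id` access token.

```bash
# Add a user (identified by their Microsoft Entra object ID) to the users group with the OWNER role
curl -X POST "https://<adme-instance>.energy.azure.com/api/entitlements/v2/groups/users@<data-partition-id>.<domain>/members" \
  -H "Authorization: Bearer <client-id-access-token>" \
  -H "data-partition-id: <data-partition-id>" \
  -H "Content-Type: application/json" \
  -d '{ "email": "<azure-object-id-of-user>", "role": "OWNER" }'
```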
event-grid Mqtt Events Fabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-events-fabric.md
+
+ Title: Send MQTT events to Microsoft Fabric via Event Hubs
+description: Shows you how to use Event Grid to send events from MQTT clients to Microsoft Fabric via Azure Event Hubs.
+ Last updated : 02/13/2024
+# How to send MQTT events to Microsoft Fabric via Event Hubs using Azure Event Grid
+This article shows you how to use Azure Event Grid to send events from MQTT clients to Microsoft Fabric data stores via Azure Event Hubs.
+
+## High-level steps
+
+1. Create a namespace topic that receives events from MQTT clients.
+2. Create a subscription to the topic using Event Hubs as the destination.
+3. Create an event stream in Microsoft Fabric with the event hub as a source and a Fabric KQL database or Lakehouse as a destination.
+
+## Event flow
+
+1. MQTT client sends events to your Event Grid namespace topic.
+2. Event subscription to the namespace topic forwards those events to your event hub.
+3. Fabric event stream receives events from the event hub and stores them in a Fabric destination such as a KQL database or a lakehouse.
+
+## Detailed steps
+
+1. In the Azure portal, do these steps:
+ 1. [Create an Event Hubs namespace and an event hub](../event-hubs/event-hubs-create.md).
+ 1. [Create an Event Grid namespace and a topic](create-view-manage-namespace-topics.md#create-a-namespace-topic).
+ 1. [Enable managed identity on the namespace](event-grid-namespace-managed-identity.md).
+ 1. [Add the identity to Azure Event Hubs Data Sender role on the Event Hubs namespace or event hub](/azure/role-based-access-control/role-assignments-portal?tabs=delegate-condition).
+ 1. [Create a subscription to the topic using Azure Event Hubs as the destination type and select your event hub](mqtt-routing-to-event-hubs-portal.md#create-an-event-subscription-with-event-hubs-as-the-endpoint).
+ 1. [Create an Event Grid namespace and enable MQTT broker](mqtt-publish-and-subscribe-portal.md#create-a-namespace).
+ 1. [Enable routing to a namespace topic](mqtt-routing-to-event-hubs-portal.md#configure-routing-in-the-event-grid-namespace).
+1. In Microsoft Fabric, do these steps:
+ 1. [Create a lakehouse](/fabric/onelake/create-lakehouse-onelake#create-a-lakehouse).
+ 2. [Create an event stream](/fabric/real-time-analytics/event-streams/create-manage-an-eventstream#create-an-eventstream).
+ 3. [Add your event hub as an input source](/fabric/real-time-analytics/event-streams/add-manage-eventstream-sources#add-an-azure-event-hub-as-a-source).
+ 4. [Add your lakehouse as a destination](/fabric/real-time-analytics/event-streams/add-manage-eventstream-destinations#add-a-lakehouse-as-a-destination).
+1. [Publish events to the namespace topic](publish-deliver-events-with-namespace-topics.md#send-events-to-your-topic).
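As a quick check of the end-to-end path, you can also publish a test message from an MQTT client such as `mosquitto_pub`. The following is only a sketch: it assumes certificate-based client authentication and uses placeholders for the MQTT hostname of your Event Grid namespace, the client authentication name, and a topic that your routing configuration matches.

```bash
mosquitto_pub -V mqttv5 \
  -h <namespace-mqtt-hostname> -p 8883 \
  -i <client-name> -u <client-authentication-name> \
  --cert client.pem --key client.key \
  -t <topic> -q 1 \
  -m '{"temperature": 25}'
```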
+
+## Next steps
+Build a Power BI report as shown in the sample: [Build a near-real-time Power BI report with the event data ingested in a lakehouse](/fabric/real-time-analytics/event-streams/transform-and-stream-real-time-events-to-lakehouse).
firewall Enable Top Ten And Flow Trace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/enable-top-ten-and-flow-trace.md
Last updated 05/12/2023
-# Enable Top flows (preview) and Flow trace logs (preview) in Azure Firewall
+# Enable Top flows and Flow trace logs in Azure Firewall
Azure Firewall has two new diagnostics logs you can use to help monitor your firewall:
The following additional properties can be added:
### Enable the log
-Enable the log using the following Azure PowerShell commands or navigate to the Preview features in the portal and search for **Enable TCP Connection Logging**:
-
+Enable the log using the following Azure PowerShell commands, or navigate to the portal and search for **Enable TCP Connection Logging**:
```azurepowershell
Connect-AzAccount
firewall Firewall Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-preview.md
With the Azure Firewall Resource Health check, you can now diagnose and get supp
Starting in August 2023, this preview is automatically enabled on all firewalls and no action is required to enable this functionality. For more information, see [Resource Health overview](../service-health/resource-health-overview.md).
-### Top flows (preview) and Flow trace logs (preview)
-
-- The Top flows log shows the top connections that contribute to the highest throughput through the firewall.
-- Flow trace logs show the full journey of a packet in the TCP handshake.
-
-For more information, see [Enable Top flows (preview) and Flow trace logs (preview) in Azure Firewall](enable-top-ten-and-flow-trace.md).
- ### Auto-learn SNAT routes (preview) You can configure Azure Firewall to auto-learn both registered and private ranges every 30 minutes. For information, see [Azure Firewall SNAT private IP address ranges](snat-private-range.md#auto-learn-snat-routes-preview).
governance Policy For Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/policy-for-kubernetes.md
kubectl get pods -n gatekeeper-system
Lastly, verify that the latest add-on is installed by running this Azure CLI command, replacing `<rg>` with your resource group name and `<cluster-name>` with the name of your AKS cluster: `az aks show --query addonProfiles.azurepolicy -g <rg> -n <cluster-name>`. The result should look
-similar to the following output:
+similar to the following output for clusters using service principals:
```output {
similar to the following output:
"identity": null } ```-
+And the following output for clusters using managed identity:
+```output
+ {
+ "config": null,
+ "enabled": true,
+ "identity": {
+ "clientId": "########-####-####-####-############",
+ "objectId": "########-####-####-####-############",
+ "resourceId": "<resource-id>"
+ }
+ }
+```
## Install Azure Policy Extension for Azure Arc enabled Kubernetes

[Azure Policy for Kubernetes](./policy-for-kubernetes.md) makes it possible to manage and report on the compliance state of your Kubernetes clusters from one place. With Azure Policy's Extension for Arc-enabled Kubernetes clusters, you can govern your Arc-enabled Kubernetes cluster components, like pods and containers.
kubectl logs <azure-policy pod name> -n kube-system
kubectl logs <gatekeeper pod name> -n gatekeeper-system ```
+If you're troubleshooting a particular ComplianceReasonCode that appears in your compliance results, you can search the azure-policy pod logs for that code to see the full accompanying error.
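For example, a simple way to do that search (the reason code shown is a placeholder):

```bash
# Search the azure-policy pod logs for a specific compliance reason code
kubectl logs <azure-policy pod name> -n kube-system | grep "<ComplianceReasonCode>"
```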
+ For more information, see [Debugging Gatekeeper](https://open-policy-agent.github.io/gatekeeper/website/docs/debug/) in the Gatekeeper documentation.
To identify the Gatekeeper version that your Azure Policy Add-On is using, you c
Finally, to identify the AKS cluster version that you are using, follow the linked AKS guidance for this.

### Add-On versions available per each AKS cluster version
-
+#### 1.3.0
+Introduces an error state for policies that are in error, so they can be distinguished from policies in noncompliant states. Adds support for v1 constraint templates and use of the excludedNamespaces parameter in mutation policies. Adds an error status check on constraint templates post-installation.
+- Released February 2024
+- Kubernetes 1.25+
+- Gatekeeper 3.14.0
+
#### 1.2.1 - Released October 2023 - Kubernetes 1.25+
governance Disallowed Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/disallowed-resources.md
Title: "Tutorial: Disallow resource types in your cloud environment" description: In this tutorial, you use Azure Policy to enforce only certain resource types be used in your environment. Previously updated : 06/19/2023 Last updated : 02/13/2024 + # Tutorial: Disallow resource types in your cloud environment
-One popular goal of cloud governance is restricting what resource types are allowed in the environment. Businesses have many motivations behind resource type restrictions. For example, resource types may be costly or may go against business standards and strategies. Rather than using many policies for individual resource types, Azure Policy offers two built-in policies to achieve this goal:
+One popular goal of cloud governance is restricting what resource types are allowed in the environment. Businesses have many motivations behind resource type restrictions. For example, resource types might be costly or might go against business standards and strategies. Rather than using many policies for individual resource types, Azure Policy offers two built-in policies to achieve this goal:
-|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect |Version<br /><sub>(GitHub)</sub> |
||||| |[Allowed resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa08ec900-254a-4555-9bf5-e42af04b5c5c) |This policy enables you to specify the resource types that your organization can deploy. Only resource types that support 'tags' and 'location' are affected by this policy. To restrict all resources, duplicate this policy and change the 'mode' to 'All'. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/AllowedResourceTypes_Deny.json) | |[Not allowed resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6c112d4e-5bc7-47ae-a041-ea2d9dccd749) |Restrict which resource types can be deployed in your environment. Limiting resource types can reduce the complexity and attack surface of your environment while also helping to manage costs. Compliance results are only shown for non-compliant resources. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/General/InvalidResourceTypes_Deny.json) |
The first step in disabling resource types is to assign the **Not allowed resour
:::image source="../media/disallowed-resources/definition-details-red-outline.png" alt-text="Screenshot of definition details screen for 'Not allowed resource types' policy.":::
-1. Click the **Assign** button on the top of the page.
+1. Select the **Assign** button on the top of the page.
-1. On the **Basics** tab, select the **Scope** by selecting the ellipsis
- and choosing a management group, subscription, or resource group of choice. Ensure that the selected scope has at least one subscope. Learn more about [scopes](../concepts/scope.md). Then click **Select** at the bottom of the **Scope** page.
+1. On the **Basics** tab, set the **Scope** by selecting the ellipsis
+   and choosing a management group, subscription, or resource group. Ensure that the selected [scope](../concepts/scope.md) has at least one subscope. Then select **Select** at the bottom of the **Scope** page.
This example uses the **Contoso** subscription.
-
+   > [!NOTE]
+   > If you assign this policy definition to your root management group scope, the portal can detect disallowed resource types and disable them in the **All Services** view so that portal users are aware of the restriction before trying to deploy a disallowed resource.
- >
1. Resources can be excluded based on the **Scope**. **Exclusions** start at one level lower than the level of the **Scope**. **Exclusions** are optional, so leave it blank for now.
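If you prefer scripting over the portal walkthrough, the same assignment can be sketched with the Azure CLI. The parameter name and the example resource type below are assumptions made for illustration; confirm them on the definition's **Parameters** tab before running.

```bash
az policy assignment create \
  --name 'deny-disallowed-resource-types' \
  --display-name 'Not allowed resource types' \
  --policy '6c112d4e-5bc7-47ae-a041-ea2d9dccd749' \
  --scope '/subscriptions/<subscription-id>' \
  --params '{ "listOfResourceTypesNotAllowed": { "value": [ "Microsoft.Network/publicIPAddresses" ] } }'
```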
The first step in disabling resource types is to assign the **Not allowed resour
## View disabled resource types in Azure portal
-This step only applies when the policy has been assigned at the root management group scope.
+This step only applies when the policy was assigned at the root management group scope.
-Now that you've assigned a built-in policy definition, go to [All Services](https://portal.azure.com/#allservices/category/All). Azure portal is aware of the disallowed resource types from this policy assignment and disables them in the **All Services** page accordingly.
+Now that you assigned a built-in policy definition, go to [All Services](https://portal.azure.com/#allservices/category/All). Azure portal is aware of the disallowed resource types from this policy assignment and disables them in the **All Services** page. The **Create** option is unavailable for disabled resource types.
> [!NOTE]
-> If you assign this policy definition to your root management group, users will see the following notification when they log in for the first time or if the policy changes after they have logged in:
+> If you assign this policy definition to your root management group, users will see the following notification when they sign in for the first time or if the policy changes after they have signed in:
> > _**Policy changed by admin** > Your administrator has made changes to the policies for your account. It is recommended that you refresh the portal to use the updated policies._
Now that you've assigned a built-in policy definition, go to [All Services](http
## Create an exemption
-Now suppose that one subscope should be allowed to have the resource types disabled by this policy. Let's create an exemption on this scope so that otherwise restricted resources can be deployed there.
+Now suppose that one subscope should be allowed to have the resource types disabled by this policy. Let's create an exemption on this scope so that otherwise restricted resources can be deployed there.
> [!WARNING]
-> If you assign this policy definition to your *root management group* scope, Azure portal is unable to detect exemptions at lower level scopes from the All Services list. Resources disallowed by the policy assignment will still show as disabled from this view even if an exemption is in place at a lower scope. However, if the user has permissions on the exempt subscope, they will not be prevented from navigating to the service and performing actions there. At this point the false disabled status should no longer be present.
+> If you assign this policy definition to your root management group scope, Azure portal is unable to detect exemptions at lower level scopes. Resources disallowed by the policy assignment will show as disabled from the **All Services** list and the **Create** option is unavailable.
1. Select **Assignments** under **Authoring** in the left side of the Azure Policy page. 1. Search for the policy assignment you created.
-1. Select the **Create exemption** button on the top of the page.
+1. Select the **Create exemption** button on the top of the page.
1. In the **Basics** tab, select the **Exemption scope**, which is the subscope that should be allowed to have resources restricted by this policy assignment.
-1. Fill out **Exemption name** with the desired text, and leave **Exemption category** as the default of *Waiver*. Don't switch the toggle for **Exemption expiration setting**, because this exemption won't be set to expire. Optionally add an **Exemption description**, and select **Review + create**.
+1. Fill out **Exemption name** with the desired text, and leave **Exemption category** as the default of _Waiver_. Don't switch the toggle for **Exemption expiration setting**, because this exemption won't be set to expire. Optionally add an **Exemption description**, and select **Review + create**.
1. This tutorial bypasses the **Advanced** tab. From the **Review + create** tab, select **Create**.
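The equivalent exemption can also be created from the Azure CLI. This is a minimal sketch with placeholder names and scopes, not a step in the tutorial itself:

```bash
az policy exemption create \
  --name 'resource-type-waiver' \
  --policy-assignment '/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/<assignment-name>' \
  --exemption-category 'Waiver' \
  --scope '/subscriptions/<subscription-id>/resourceGroups/<exempt-resource-group>'
```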
In this tutorial, you successfully accomplished the following tasks:
> - Assigned the **Not allowed resource types** built-in policy to deny creation of disallowed resource types
> - Created an exemption for this policy assignment at a subscope
-With this built-in policy you specified resource types that are _not_ allowed. The alternative, more restrictive approach is to specify resource types that _are_ allowed using the **Allowed resource types** built-in policy.
+With this built-in policy you specified resource types that _aren't allowed_. The alternative, more restrictive approach is to specify resource types that _are allowed_ using the **Allowed resource types** built-in policy.
> [!NOTE] > Azure portal's **All Services** will only disable resources not specified in the allowed resource type policy if the `mode` is set to `All` and the policy is assigned at the root management group. This is because it checks all resource types regardless of `tags` and `locations`. If you want the portal to have this behavior, duplicate the [Allowed resource types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa08ec900-254a-4555-9bf5-e42af04b5c5c) built-in policy and change its `mode` from `Indexed` to `All`, then assign it to the root management group scope.
hdinsight Apache Hadoop Connect Hive Jdbc Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-connect-hive-jdbc-driver.md
Title: Query Apache Hive through the JDBC driver - Azure HDInsight
-description: Use the JDBC driver from a Java application to submit Apache Hive queries to Hadoop on HDInsight. Connect programmatically and from the SQuirrel SQL client.
+description: Use the JDBC driver from a Java application to submit Apache Hive queries to Hadoop on HDInsight.
Previously updated : 01/06/2023 Last updated : 02/12/2024 # Query Apache Hive through the JDBC driver in HDInsight [!INCLUDE [ODBC-JDBC-selector](../includes/hdinsight-selector-odbc-jdbc.md)]
-Learn how to use the JDBC driver from a Java application. To submit Apache Hive queries to Apache Hadoop in Azure HDInsight. The information in this document demonstrates how to connect programmatically, and from the SQuirreL SQL client.
+Learn how to use the JDBC driver from a Java application to submit Apache Hive queries to Apache Hadoop in Azure HDInsight. The information in this document demonstrates how to connect programmatically, and from the `SQuirreL SQL` client.
For more information on the Hive JDBC Interface, see [HiveJDBCInterface](https://cwiki.apache.org/confluence/display/Hive/HiveJDBCInterface).
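For a quick connectivity check before wiring up a Java application or SQuirreL SQL, you can exercise the same JDBC connection string from Beeline. This sketch assumes the default cluster login account; replace `CLUSTERNAME` and the credentials with your own.

```bash
beeline -u 'jdbc:hive2://CLUSTERNAME.azurehdinsight.net:443/default;ssl=true;transportMode=http;httpPath=/hive2' \
  -n admin -p '<cluster-login-password>' \
  -e 'SHOW TABLES;'
```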
SQuirreL SQL is a JDBC client that can be used to remotely run Hive queries with
:::image type="content" source="./media/apache-hadoop-connect-hive-jdbc-driver/hdinsight-driversicons.png" alt-text="SQuirreL SQL application drivers icon" border="true":::
-5. In the Add Driver dialog, add the following information:
+5. In the Add Driver dialog, add the following information:
|Property | Value | |||
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
### Connection disconnected by HDInsight
-**Symptoms**: When trying to download huge amount of data (say several GBs) through JDBC/ODBC, the connection is disconnected by HDInsight unexpectedly while downloading.
+**Symptoms**: HDInsight unexpectedly disconnects the connection when trying to download a huge amount of data (say several GBs) through JDBC/ODBC.
-**Cause**: This error is caused by the limitation on Gateway nodes. When getting data from JDBC/ODBC, all data needs to pass through the Gateway node. However, a gateway isn't designed to download a huge amount of data, so the Gateway might close the connection if it can't handle the traffic.
+**Cause**: The limitation on Gateway nodes causes this error. When getting data from JDBC/ODBC, all data needs to pass through the Gateway node. However, a gateway isn't designed to download a huge amount of data, so the Gateway might close the connection if it can't handle the traffic.
**Resolution**: Avoid using JDBC/ODBC driver to download huge amounts of data. Copy data directly from blob storage instead.
hdinsight Apache Hadoop Run Custom Programs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-run-custom-programs.md
description: When and how to run custom Apache MapReduce programs on Azure HDIns
Previously updated : 01/31/2023 Last updated : 02/12/2024 # Run custom MapReduce programs
Apache Hadoop-based big data systems such as HDInsight enable data processing us
| | | | | **Apache Hive using HiveQL** | <ul><li>An excellent solution for batch processing and analysis of large amounts of immutable data, for data summarization, and for on demand querying. It uses a familiar SQL-like syntax.</li><li>It can be used to produce persistent tables of data that can be easily partitioned and indexed.</li><li>Multiple external tables and views can be created over the same data.</li><li>It supports a simple data warehouse implementation that provides massive scale-out and fault-tolerance capabilities for data storage and processing.</li></ul> | <ul><li>It requires the source data to have at least some identifiable structure.</li><li>It isn't suitable for real-time queries and row level updates. It's best used for batch jobs over large sets of data.</li><li>It might not be able to carry out some types of complex processing tasks.</li></ul> | | **Apache Pig using Pig Latin** | <ul><li>An excellent solution for manipulating data as sets, merging and filtering datasets, applying functions to records or groups of records, and for restructuring data by defining columns, by grouping values, or by converting columns to rows.</li><li>It can use a workflow-based approach as a sequence of operations on data.</li></ul> | <ul><li>SQL users may find Pig Latin is less familiar and more difficult to use than HiveQL.</li><li>The default output is usually a text file and so can be more difficult to use with visualization tools such as Excel. Typically you'll layer a Hive table over the output.</li></ul> |
-| **Custom map/reduce** | <ul><li>It provides full control over the map and reduce phases, and execution.</li><li>It allows queries to be optimized to achieve maximum performance from the cluster, or to minimize the load on the servers and the network.</li><li>The components can be written in a range of well-known languages.</li></ul> | <ul><li>It's more difficult than using Pig or Hive because you must create your own map and reduce components.</li><li>Processes that require joining sets of data are more difficult to implement.</li><li>Even though there are test frameworks available, debugging code is more complex than a normal application because the code runs as a batch job under the control of the Hadoop job scheduler.</li></ul> |
-| **Apache HCatalog** | <ul><li>It abstracts the path details of storage, making administration easier and removing the need for users to know where the data is stored.</li><li>It enables notification of events such as data availability, allowing other tools such as Oozie to detect when operations have occurred.</li><li>It exposes a relational view of data, including partitioning by key, and makes the data easy to access.</li></ul> | <ul><li>It supports RCFile, CSV text, JSON text, SequenceFile, and ORC file formats by default, but you may need to write a custom SerDe for other formats.</li><li>HCatalog isn't thread-safe.</li><li>There are some restrictions on the data types for columns when using the HCatalog loader in Pig scripts. For more information, see [HCatLoader Data Types](https://cwiki.apache.org/confluence/display/Hive/HCatalog%20LoadStore#HCatalogLoadStore-HCatLoaderDataTypes) in the Apache HCatalog documentation.</li></ul> |
+| **Custom map/reduce** | <ul><li>It provides full control over the map and reduce phases, and execution.</li><li>It allows queries to be optimized to achieve maximum performance from the cluster, or to minimize the load on the servers and the network.</li><li>The components can be written in a range of well-known languages.</li></ul> | <ul><li>It's more difficult than using Pig or Hive because you must create your own map and reduce components.</li><li>Processes that require joining sets of data are more difficult to implement.</li><li>Even though there are test frameworks available, debugging code is more complex than a normal application because the code runs as a batch job under the control of the Hadoop job scheduler.</li></ul> |
+| `Apache HCatalog` | <ul><li>It abstracts the path details of storage, making administration easier and removing the need for users to know where the data is stored.</li><li>It enables notification of events such as data availability, allowing other tools such as Oozie to detect when operations have occurred.</li><li>It exposes a relational view of data, including partitioning by key, and makes the data easy to access.</li></ul> | <ul><li>It supports RCFile, CSV text, JSON text, SequenceFile, and ORC file formats by default, but you may need to write a custom SerDe for other formats.</li><li>`HCatalog` isn't thread-safe.</li><li>There are some restrictions on the data types for columns when using the `HCatalog` loader in Pig scripts. For more information, see [HCatLoader Data Types](https://cwiki.apache.org/confluence/display/Hive/HCatalog%20LoadStore#HCatalogLoadStore-HCatLoaderDataTypes) in the Apache `HCatalog` documentation.</li></ul> |
Typically, you use the simplest of these approaches that can provide the results you require. For example, you may be able to achieve such results by using just Hive, but for more complex scenarios you may need to use Pig, or even write your own map and reduce components. You may also decide, after experimenting with Hive or Pig, that custom map and reduce components can provide better performance by allowing you to fine-tune and optimize the processing.
hdinsight Hdinsight Os Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-os-patching.md
description: Learn how to configure OS patching schedule for Linux-based HDInsig
Previously updated : 02/01/2023 Last updated : 02/12/2024 # Configure the OS patching schedule for Linux-based HDInsight clusters
HDInsight provides support for you to perform common tasks on your cluster such
Patch on a representative non-production environment prior to deploying to production. Develop a plan to adequately test your system prior to your actual patching.
-From time-to-time, from an ssh session with your cluster, you may receive a message that security updates are available. The message may looks something like:
+From time-to-time, from an ssh session with your cluster, you may receive a message that security updates are available. The message might look something like:
``` 89 packages can be updated.
Patching is optional and at your discretion.
## Restart nodes
-The script [schedule-reboots](https://hdiconfigactions.blob.core.windows.net/linuxospatchingrebootconfigv02/schedule-reboots.sh), sets the type of reboot that will be performed on the machines in the cluster. When submitting the script action, set it to apply on all three node types: head node, worker node, and zookeeper. If the script isn't applied to a node type, the VMs for that node type won't be updated or restarted.
+The script [schedule-reboots](https://hdiconfigactions.blob.core.windows.net/linuxospatchingrebootconfigv02/schedule-reboots.sh) sets the type of reboot that will be performed on the machines in the cluster. When submitting the script action, set it to apply on all three node types: head node, worker node, and zookeeper. If the script isn't applied to a node type, the VMs for that node type won't be updated or restarted.
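One way to submit that script action without the portal is the Azure CLI. The following is a sketch rather than a documented step; replace the placeholders and pass whichever numeric parameter value (described next) you want.

```bash
az hdinsight script-action execute \
  --cluster-name <cluster-name> \
  --resource-group <resource-group> \
  --name schedule-reboots \
  --script-uri 'https://hdiconfigactions.blob.core.windows.net/linuxospatchingrebootconfigv02/schedule-reboots.sh' \
  --script-parameters '<numeric-parameter-value>' \
  --roles headnode workernode zookeepernode \
  --persist-on-success
```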
The `schedule-reboots` script accepts one numeric parameter:
hdinsight Interactive Query Troubleshoot Slow Reducer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-slow-reducer.md
Title: Reducer is slow in Azure HDInsight
-description: Reducer is slow in Azure HDInsight from possible data skewing
+description: Reducer is slow in Azure HDInsight from possible data skewing.
Previously updated : 01/31/2023 Last updated : 02/12/2024 # Scenario: Reducer is slow in Azure HDInsight
Open [beeline](../hadoop/apache-hadoop-use-hive-beeline.md) and verify the value
The value of this variable is meant to be set to true/false based on the nature of the data.
-If the partitions in the input table are less(say less than 10), and so is the number of output partitions, and the variable is set to `true`, this causes data to be globally sorted and written using a single reducer per partition. Even if the number of reducers available is larger, a few reducers may be lagging behind due to data skew and the max parallelism cannot be attained. When changed to `false`, more than one reducer may handle a single partition and multiple smaller files will be written out, resulting in faster insert. This might affect further queries though because of the presence of smaller files.
+If the input table has few partitions (say, less than 10), the number of output partitions is similarly small, and the variable is set to `true`, data is globally sorted and written using a single reducer per partition. Even if more reducers are available, a few reducers may lag behind due to data skew and the maximum parallelism can't be attained. When changed to `false`, more than one reducer may handle a single partition and multiple smaller files are written out, resulting in a faster insert. This might affect later queries though, because of the presence of smaller files.
-A value of `true` makes sense when the number of partitions is larger and data is not skewed. In such cases the result of the map phase will be written out such that each partition will be handled by a single reducer resulting in better subsequent query performance.
+A value of `true` makes sense when the number of partitions is larger and data isn't skewed. In such cases, the results of the map phase are written out such that each partition is handled by a single reducer, resulting in better subsequent query performance.
## Resolution 1. Try to repartition the data to normalize into multiple partitions.
-1. If #1 is not possible, set the value of the config to false in beeline session and try the query again. `set hive.optimize.sort.dynamic.partition=false`. Setting the value to false at a cluster level is not recommended. The value of `true` is optimal and set the parameter as necessary based on nature of data and query.
+1. If #1 isn't possible, set the value of the config to false in the beeline session and try the query again: `set hive.optimize.sort.dynamic.partition=false`. Setting the value to false at a cluster level isn't recommended. The value of `true` is optimal; set the parameter as necessary based on the nature of the data and query.
## Next steps
internet-peering Howto Subscription Association Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/internet-peering/howto-subscription-association-portal.md
Previously updated : 02/12/2024 Last updated : 02/13/2024 #CustomerIntent: As an administrator, I want to learn how to create a PeerASN resource so I can associate my peer ASN to Azure subscription and submit peering requests.
As an Internet Service Provider or Internet Exchange Provider, you must associat
## Register Peering provider
-In this section, you learn how to check if the peering provider is registered in your subscription and how to register it if not registered. Peering resource provider is required to set up peering. If you previously registered the peering resource provider, you can skip this section.
+In this section, you learn how to check whether the peering provider is registered in your subscription and how to register it if it isn't. The peering provider is required to set up peering. If you previously registered the peering provider, you can skip this section.
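If you prefer the Azure CLI to the portal steps that follow, a minimal sketch for checking and registering the provider:

```bash
# Check the current registration state of the peering provider
az provider show --namespace Microsoft.Peering --query registrationState --output tsv

# Register the provider if the state isn't "Registered"
az provider register --namespace Microsoft.Peering
```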
1. Sign in to the [Azure portal](https://portal.azure.com).
iot-central Howto Create Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-private-endpoint.md
To restrict public access for your devices to IoT Central, turn off access from
1. Select **Save**.
+> [!TIP]
+> If you choose to define a list of IP addresses/ranges that can connect to the public endpoint of your IoT Central application, be sure to include the IP address of any proxy that your devices use to connect to your IoT Central application.
+ ## Connect to a private endpoint When you disable public network access for your IoT Central application, your devices aren't able to connect to the Device Provisioning Service (DPS) global endpoint. This happens because the only FQDN for DPS has a direct IP address in your virtual network. The global endpoint is now unreachable.
kinect-dk Add Library To Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/add-library-to-project.md
Title: Add Azure Kinect library to your Visual Studio project description: Learn how to add the Azure Kinect NuGet package to your Visual Studio Project.--++ Last updated 06/26/2019
load-balancer Manage Rules How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-rules-how-to.md
Previously updated : 12/13/2022 Last updated : 02/12/2024
load-balancer Quickstart Load Balancer Standard Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-portal.md
During the creation of the load balancer, you configure:
- Inbound load-balancing rules

1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-
1. In the **Load balancer** page, select **Create**.
-
1. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:

    | Setting | Value |
    | ------- | ----- |
    | **Project details** |  |
- | Subscription | Select your subscription. |
+ | Subscription | Select your subscription. |
| Resource group | Select **load-balancer-rg**. | | **Instance details** | |
- | Name | Enter **load-balancer** |
+ | Name | Enter **load-balancer**. |
| Region | Select **East US**. | | SKU | Leave the default **Standard**. | | Type | Select **Internal**. |
During the creation of the load balancer, you configure:
:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/create-standard-internal-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true"::: 1. Select **Next: Frontend IP configuration** at the bottom of the page.- 1. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**, then enter or select the following information: | Setting | Value | | - | -- |
- | Name | Enter **lb-frontend** |
+ | Name | Enter **lb-frontend**. |
| Private IP address version | Select **IPv4** or **IPv6** depending on your requirements. | | Setting | Value | | - | -- |
- | Name | Enter **lb-frontend** |
- | Virtual network | Select **lb-vnet** |
- | Subnet | Select **backend-subnet** |
- | Assignment | Select **Dynamic** |
- | Availability zone | Select **Zone-redundant** |
+ | Name | Enter **lb-frontend**. |
+ | Virtual network | Select **lb-vnet**. |
+ | Subnet | Select **backend-subnet**. |
+ | Assignment | Select **Dynamic**. |
+ | Availability zone | Select **Zone-redundant**. |
1. Select **Add**. 1. Select **Next: Backend pools** at the bottom of the page.
During the creation of the load balancer, you configure:
| **Setting** | **Value** | | -- | |
- | Name | Enter **lb-HTTP-rule** |
+ | Name | Enter **lb-HTTP-rule**. |
| IP Version | Select **IPv4** or **IPv6** depending on your requirements. | | Frontend IP address | Select **lb-frontend**. | | Backend pool | Select **lb-backend-pool**. |
During the creation of the load balancer, you configure:
| Health probe | Select **Create new**. </br> In **Name**, enter **lb-health-probe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. | | Session persistence | Select **None**. | | Idle timeout (minutes) | Enter or select **15**. |
- | Enable TCP reset | Select **checkbox** . |
+ | Enable TCP reset | Select **checkbox**. |
   | Enable Floating IP | Leave the default of unselected. |

1. Select **Save**.
-
1. Select the blue **Review + create** button at the bottom of the page.
-
1. Select **Create**.

[!INCLUDE [load-balancer-create-2-virtual-machines](../../includes/load-balancer-create-2-virtual-machines.md)]
During the creation of the load balancer, you configure:
In this section, you create a VM named **lb-TestVM**. This VM is used to test the load balancer configuration. 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+1. In **Virtual machines**, select **+ Create** > **Azure virtual machine**.
+1. In **Create a virtual machine**, enter or select the values in the **Basics** tab:
-2. In **Virtual machines**, select **+ Create** > **Azure virtual machine**.
-
-2. In **Create a virtual machine**, type or select the values in the **Basics** tab:
-
- | Setting | Value |
+ | Setting | Value |
|-- | - | | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **load-balancer-rg** |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **load-balancer-rg**. |
| **Instance details** | |
- | Virtual machine name | Enter **lb-TestVM** |
- | Region | Select **(US) East US** |
- | Availability Options | Select **No infrastructure redundancy required** |
+ | Virtual machine name | Enter **lb-TestVM**. |
+ | Region | Select **(US) East US**. |
+ | Availability Options | Select **No infrastructure redundancy required**. |
| Security type | Select **Standard**. |
- | Image | Select **Windows Server 2022 Datacenter - x64 Gen2** |
+ | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. |
| Azure Spot instance | Leave the default of unselected. |
- | Size | Choose VM size or take default setting |
+ | Size | Choose VM size or take default setting. |
| **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter password. |
| **Inbound port rules** | | | Public inbound ports | Select **None**. |
-3. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-4. In the **Networking** tab, select or enter:
+1. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+1. In the **Networking** tab, select or enter:
| Setting | Value | |-|-| | **Network interface** | |
- | Virtual network | **lb-vnet** |
- | Subnet | **backend-subnet** |
+ | Virtual network | **lb-vnet**. |
+ | Subnet | **backend-subnet**. |
| Public IP | Select **None**. |
- | NIC network security group | Select **Advanced** |
+ | NIC network security group | Select **Advanced**. |
| Configure network security group | Select **lb-NSG** created in the previous step.|
-5. Select **Review + create**.
-
-6. Review the settings, and then select **Create**.
+1. Select **Review + create**.
+1. Review the settings, and then select **Create**.
## Install IIS

1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-
-2. Select **lb-vm1**.
-
-3. In the **Overview** page, select **Connect**, then **Bastion**.
-
-4. Enter the username and password entered during VM creation.
-
-5. Select **Connect**.
-
-6. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell** > **Windows PowerShell**.
-
-7. In the PowerShell Window, execute the following commands to:
-
- * Install the IIS server.
- * Remove the default iisstart.htm file.
- * Add a new iisstart.htm file that displays the name of the VM.
-
- ```powershell
-
+1. Select **lb-vm1**.
+1. In the **Overview** page, select **Connect**, then **Bastion**.
+1. Enter the username and password entered during VM creation.
+1. Select **Connect**.
+1. On the server desktop, navigate to **Windows Administrative Tools** > **Windows PowerShell** > **Windows PowerShell**.
+1. In the PowerShell Window, execute the following commands to:
+ 1. Install the IIS server.
+ 1. Remove the default iisstart.htm file.
+ 1. Add a new iisstart.htm file that displays the name of the VM.
+
+ ```powershell
    # Install IIS server role
    Install-WindowsFeature -name Web-Server -IncludeManagementTools
-
+    # Remove default htm file
+    Remove-Item C:\inetpub\wwwroot\iisstart.htm
-
+    # Add a new htm file that displays server name
+    Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
- ```
-
-8. Close the Bastion session with **lb-vm1**.
+ ```
-9. Repeat steps 1 through 8 to install IIS and the updated iisstart.htm file on **lb-VM2**.
+1. Close the Bastion session with **lb-vm1**.
+1. Repeat steps 1 through 8 to install IIS and the updated iisstart.htm file on **lb-VM2**.
## Test the load balancer

In this section, you test the load balancer by connecting to the **lb-TestVM** and verifying the webpage.

1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-
-2. Select **load-balancer**.
-
-3. Make note or copy the address next to **Private IP address** in the **Overview** of **load-balancer**. If you can't see the **Private IP address** field, select **See more** in the information window.
-
-4. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-
-5. Select **lb-TestVM**.
-
-6. In the **Overview** page, select **Connect**, then **Bastion**.
-
-7. Enter the username and password entered during VM creation.
-
-8. Open **Microsoft Edge** on **lb-TestVM**.
-
-9. Enter the IP address from the previous step into the address bar of the browser. The custom page displaying one of the backend server names is displayed on the browser. In this example, it's **10.1.0.4**.
+1. Select **load-balancer**.
+1. Make note or copy the address next to **Private IP address** in the **Overview** of **load-balancer**. If you can't see the **Private IP address** field, select **See more** in the information window.
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+1. Select **lb-TestVM**.
+1. In the **Overview** page, select **Connect**, then **Bastion**.
+1. Enter the username and password entered during VM creation.
+1. Open **Microsoft Edge** on **lb-TestVM**.
+1. Enter the IP address from the previous step into the address bar of the browser. The custom page displaying one of the backend server names is displayed on the browser. In this example, it's **10.1.0.4**.
:::image type="content" source="./media/quickstart-load-balancer-standard-internal-portal/load-balancer-test.png" alt-text="Screenshot shows a browser window displaying the customized page, as expected." border="true":::
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
# Quickstart: Create a public load balancer to load balance VMs using the Azure portal
-Get started with Azure Load Balancer by using the Azure portal to create a public load balancer for a backend pool with two virtual machines. Additional resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
+Get started with Azure Load Balancer by using the Azure portal to create a public load balancer for a backend pool with two virtual machines. Other resources include Azure Bastion, NAT Gateway, a virtual network, and the required subnets.
## Prerequisites
Sign in to the [Azure portal](https://portal.azure.com).
## Create load balancer
-In this section, you'll create a zone redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
+In this section, you create a zone redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
-During the creation of the load balancer, you'll configure:
+During the creation of the load balancer, you configure:
* Frontend IP address * Backend pool
During the creation of the load balancer, you'll configure:
    | Setting | Value |
    | ------- | ----- |
    | **Project details** |  |
- | Subscription | Select your subscription. |
- | Resource group | Select **load-balancer-rg**. |
+ | Subscription | Select your subscription |
+ | Resource group | Select **load-balancer-rg** |
| **Instance details** | |
- | Name | Enter **load-balancer** |
- | Region | Select **East US**. |
- | SKU | Leave the default **Standard**. |
- | Type | Select **Public**. |
- | Tier | Leave the default **Regional**. |
+ | Name | Enter **load-balancer** |
+ | Region | Select **East US** |
+ | SKU | Leave the default **Standard** |
+ | Type | Select **Public** |
+ | Tier | Leave the default **Regional** |
:::image type="content" source="./media/quickstart-load-balancer-standard-public-portal/create-standard-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true":::
During the creation of the load balancer, you'll configure:
| Setting | Value | | - | -- | | Name | Enter **lb-HTTP-rule** |
- | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **lb-frontend (To be created)**. |
- | Backend pool | Select **lb-backend-pool**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**. |
- | Backend port | Enter **80**. |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements |
+ | Frontend IP address | Select **lb-frontend (To be created)** |
+ | Backend pool | Select **lb-backend-pool** |
+ | Protocol | Select **TCP** |
+ | Port | Enter **80** |
+ | Backend port | Enter **80** |
| Health probe | Select **Create new**. </br> In **Name**, enter **lb-health-probe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **Save**. | | Session persistence | Select **None**. |
- | Idle timeout (minutes) | Enter or select **15**. |
- | Enable TCP reset | Select checkbox. |
- | Enable Floating IP | Leave unchecked. |
+ | Idle timeout (minutes) | Enter or select **15** |
+ | Enable TCP reset | Select checkbox |
+ | Enable Floating IP | Leave unchecked |
| Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** | 1. Select **Save**.
During the creation of the load balancer, you'll configure:
## Install IIS

1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
-
1. Select **myVM1**.
-
1. On the **Overview** page, select **Connect**, then **Bastion**.
-
1. Enter the username and password entered during VM creation.
-
1. Select **Connect**.
-
1. On the server desktop, navigate to **Start** > **Windows PowerShell** > **Windows PowerShell**.
-
1. In the PowerShell Window, run the following commands to:
- * Install the IIS server
- * Remove the default iisstart.htm file
+ * Install the IIS server.
+ * Remove the default iisstart.htm file.
   * Add a new iisstart.htm file that displays the name of the VM:

   ```powershell
During the creation of the load balancer, you'll configure:
   Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
   ```
-
1. Close the Bastion session with **myVM1**.
-
1. Repeat steps 1 to 8 to install IIS and the updated iisstart.htm file on **myVM2**.

## Test the load balancer

1. In the search box at the top of the page, enter **Public IP**. Select **Public IP addresses** in the search results.
-
1. In **Public IP addresses**, select **frontend-ip**.
-
1. Copy the item in **IP address**. Paste the public IP into the address bar of your browser. The custom VM page of the IIS Web server is displayed in the browser.

   :::image type="content" source="./media/quickstart-load-balancer-standard-public-portal/load-balancer-test.png" alt-text="Screenshot of load balancer test":::
logic-apps Logic Apps Http Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-http-endpoint.md
Previously updated : 01/09/2024 Last updated : 02/13/2024 # Create workflows that you can call, trigger, or nest using HTTPS endpoints in Azure Logic Apps
This guide shows how to create a callable endpoint for your workflow by adding t
* A logic app workflow where you want to use the request-based trigger to create the callable endpoint. You can start with either a blank workflow or an existing workflow where you can replace the current trigger. This example starts with a blank workflow.
+* To test the URL for the callable endpoint that you create, you'll need a tool or app such as [Postman](https://www.postman.com/downloads/).
+ ## Create a callable endpoint Based on whether you have a Standard or Consumption logic app workflow, follow the corresponding steps:
Based on whether you have a Standard or Consumption logic app workflow, follow t
1. Save your workflow.
- The **HTTP POST URL** box now shows the generated callback URL that other services can use to call and trigger your logic app. This URL includes query parameters that specify a Shared Access Signature (SAS) key, which is used for authentication.
+ The **HTTP POST URL** box now shows the generated callback URL that other services can use to call and trigger your logic app workflow. This URL includes query parameters that specify a Shared Access Signature (SAS) key, which is used for authentication.
![Screenshot shows Standard workflow, Request trigger, and generated callback URL for endpoint.](./media/logic-apps-http-endpoint/endpoint-url-standard.png)
Based on whether you have a Standard or Consumption logic app workflow, follow t
* To the right of the **HTTP POST URL** box, select **Copy URL** (copy files icon).
- * Make this call by using the method that the Request trigger expects. This example uses the `POST` method:
-
- `POST https://management.azure.com/{logic-app-resource-ID}/triggers/{endpoint-trigger-name}/listCallbackURL?api-version=2016-06-01`
- * Copy the callback URL from your workflow's **Overview** page. 1. On your workflow menu, select **Overview**.
Based on whether you have a Standard or Consumption logic app workflow, follow t
:::image type="content" source="./media/logic-apps-http-endpoint/find-trigger-url-standard.png" alt-text="Screenshot shows Standard workflow and Overview page with workflow URL." lightbox="./media/logic-apps-http-endpoint/find-trigger-url-standard.png":::
+1. To test the callback URL that you now have for the Request trigger, use a tool or app such as [Postman](https://www.postman.com/downloads/), and send the request using the method that the Request trigger expects.
+
+ This example uses the `POST` method:
+
+ `POST https://{logic-app-name}.azurewebsites.net:443/api/{workflow-name}/triggers/{trigger-name}/invoke?api-version=2022-05-01&sp=%2Ftriggers%2F{trigger-name}%2Frun&sv=1.0&sig={shared-access-signature}`
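   For example, a curl call shaped like the following exercises the endpoint. The JSON body is an arbitrary placeholder; send whatever your Request trigger's schema expects.

   ```bash
   curl -X POST \
     -H "Content-Type: application/json" \
     -d '{ "address": { "streetNumber": "00000", "streetName": "AnyStreet" } }' \
     '<callback-URL-copied-from-the-HTTP-POST-URL-box>'
   ```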
+ ### [Consumption](#tab/consumption) 1. In the [Azure portal](https://portal.azure.com), open your Consumption logic app resource and blank workflow in the designer.
Based on whether you have a Standard or Consumption logic app workflow, follow t
1. Save your workflow.
- The **HTTP POST URL** box now shows the generated callback URL that other services can use to call and trigger your logic app. This URL includes query parameters that specify a Shared Access Signature (SAS) key, which is used for authentication.
+ The **HTTP POST URL** box now shows the generated callback URL that other services can use to call and trigger your logic app workflow. This URL includes query parameters that specify a Shared Access Signature (SAS) key, which is used for authentication.
![Screenshot shows Consumption workflow, Request trigger, and generated callback URL for endpoint.](./media/logic-apps-http-endpoint/endpoint-url-consumption.png)
Based on whether you have a Standard or Consumption logic app workflow, follow t
* To the right of the **HTTP POST URL** box, select **Copy Url** (copy files icon).
- * Make this call by using the method that the Request trigger expects. This example uses the `POST` method:
-
- `POST https://management.azure.com/{logic-app-resource-ID}/triggers/{endpoint-trigger-name}/listCallbackURL?api-version=2016-06-01`
- * Copy the callback URL from your logic app's **Overview** page. 1. On your logic app menu, select **Overview**.
Based on whether you have a Standard or Consumption logic app workflow, follow t
:::image type="content" source="./media/logic-apps-http-endpoint/find-trigger-url-consumption.png" alt-text="Screenshot shows Consumption logic app Overview page with workflow URL." lightbox="./media/logic-apps-http-endpoint/find-trigger-url-consumption.png":::
+1. To test the callback URL that you now have for the Request trigger, use a tool or app such as [Postman](https://www.postman.com/downloads/), and send the request using the method that the Request trigger expects.
+
+ This example uses the `POST` method:
+
+ `POST https://{server-name}.{region}.logic.azure.com/workflows/{workflow-ID}/triggers/{trigger-name}/paths/invoke/?api-version=2016-10-01&sp=%2Ftriggers%2F{trigger-name}%2Frun&sv=1.0&sig={shared-access-signature}`
+ <a name="select-method"></a>
logic-apps Monitor Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps.md
ms.suite: integration Previously updated : 09/29/2023 Last updated : 02/13/2024 # Monitor workflow run status, review trigger and workflow run history, and set up alerts in Azure Logic Apps
To set up alerts without using [Azure Monitor](../azure-monitor/logs/log-query-o
1. On the **Create an alert rule** page, from the **Signal name** list, select the signal for which you want to get an alert.
- For example, to send an alert when a trigger fails, follow these steps:
+ > [!NOTE]
+ >
+ > Available alert signals differ between Consumption and Standard logic apps. For example,
+ > Consumption logic apps have many trigger-related signals, such as **Triggers Completed**
+ > and **Triggers Failed**, while Standard workflows have the **Workflow Triggers Completed Count**
+ > and **Workflow Triggers Failure Rate** signals.
+
+ For example, to send an alert when a trigger fails in a Consumption workflow, follow these steps:
1. From the **Signal name** list, select the **Triggers Failed** signal.
To set up alerts without using [Azure Monitor](../azure-monitor/logs/log-query-o
For example, the finished condition looks similar to the following example, and the **Create an alert rule** page now shows the cost for running that alert:
- ![Screenshot shows the alert rule condition.](./media/monitor-logic-apps/set-up-condition-for-alert.png)
+ ![Screenshot shows Consumption logic app and alert rule condition.](./media/monitor-logic-apps/set-up-condition-for-alert.png)
1. When you're ready, select **Review + Create**.
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
Previously updated : 01/04/2023 Last updated : 02/13/2024 # MLOps: Model management, deployment, and monitoring with Azure Machine Learning
Last updated 01/04/2023
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-In this article, learn how to apply Machine Learning Operations (MLOps) practices in Azure Machine Learning for the purpose of managing the lifecycle of your models. Applying MLOps practices can improve the quality and consistency of your machine learning solutions.
+In this article, learn about machine learning operations (MLOps) practices in Azure Machine Learning that you can use to manage the lifecycle of your models. Applying MLOps practices can improve the quality and consistency of your machine learning solutions.
## What is MLOps?
-MLOps is based on [DevOps](https://azure.microsoft.com/overview/what-is-devops/) principles and practices that increase the efficiency of workflows. Examples include continuous integration, delivery, and deployment. MLOps applies these principles to the machine learning process, with the goal of:
+MLOps is based on [DevOps](https://azure.microsoft.com/overview/what-is-devops/) principles and practices that increase the efficiency of workflows. These principles include continuous integration, delivery, and deployment. MLOps applies these principles to the machine learning lifecycle, with the goal of:
* Faster experimentation and development of models. * Faster deployment of models into production. * Quality assurance and end-to-end lineage tracking.
-## MLOps in Machine Learning
+<!-- ## MLOps capabilities -->
-Machine Learning provides the following MLOps capabilities:
+MLOps provides the following capabilities to the machine learning process:
- **Create reproducible machine learning pipelines.** Use machine learning pipelines to define repeatable and reusable steps for your data preparation, training, and scoring processes. - **Create reusable software environments.** Use these environments for training and deploying models.-- **Register, package, and deploy models from anywhere.** You can also track associated metadata required to use the model.-- **Capture the governance data for the end-to-end machine learning lifecycle.** The logged lineage information can include who is publishing models and why changes were made. It can also include when models were deployed or used in production.-- **Notify and alert on events in the machine learning lifecycle.** Event examples include experiment completion, model registration, model deployment, and data drift detection.
+- **Register, package, and deploy models from anywhere.** Track associated metadata required to use a model.
+- **Capture governance data for the end-to-end machine learning lifecycle.** The logged lineage information can include who is publishing models and why changes were made. It can also include when models were deployed or used in production.
+- **Notify and alert on events in the machine learning lifecycle.** Events include experiment completion, model registration, model deployment, and data drift detection.
- **Monitor machine learning applications for operational and machine learning-related issues.** Compare model inputs between training and inference. Explore model-specific metrics. Provide monitoring and alerts on your machine learning infrastructure.-- **Automate the end-to-end machine learning lifecycle with Machine Learning and Azure Pipelines.** By using pipelines, you can frequently update models. You can also test new models. You can continually roll out new machine learning models alongside your other applications and services.
+- **Automate the end-to-end machine learning lifecycle with Azure Machine Learning and Azure Pipelines.** Use pipelines to frequently test and update models. You can continually roll out new machine learning models alongside your other applications and services.
-For more information on MLOps, see [Machine learning DevOps](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-mlops).
+For more information on MLOps, see [Machine learning operations](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-mlops).
## Create reproducible machine learning pipelines
-Use machine learning pipelines from Machine Learning to stitch together all the steps in your model training process.
+Use Azure Machine Learning pipelines to stitch together all the steps in your model training process. A machine learning pipeline can contain steps that include data preparation, feature extraction, hyperparameter tuning, and model evaluation.
-A machine learning pipeline can contain steps from data preparation to feature extraction to hyperparameter tuning to model evaluation. For more information, see [Machine learning pipelines](concept-ml-pipelines.md).
+If you use the [Azure Machine Learning designer](concept-designer.md) to create a machine learning pipeline, you can clone the pipeline to iterate over its design without losing your old versions. To clone a pipeline at any time in the designer, go to the upper-right corner to select **...** > **Clone**.
-If you use the [designer](concept-designer.md) to create your machine learning pipelines, you can at any time select the **...** icon in the upper-right corner of the designer page. Then select **Clone**. When you clone your pipeline, you iterate your pipeline design without losing your old versions.
+For more information on Azure Machine Learning pipelines, see [Machine learning pipelines](concept-ml-pipelines.md).
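
As an illustration only, the following minimal sketch uses the Azure Machine Learning Python SDK v2 to chain a data preparation step and a training step into one pipeline job. The workspace connection, compute cluster, environment, scripts, and data paths are all assumptions, not values from this article.

```python
from azure.ai.ml import MLClient, Input, Output, command
from azure.ai.ml.dsl import pipeline
from azure.identity import DefaultAzureCredential

# Assumes a local config.json that points at your workspace.
ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Hypothetical reusable steps; prep.py and train.py live in ./src.
prep_step = command(
    code="./src",
    command="python prep.py --raw ${{inputs.raw}} --prepped ${{outputs.prepped}}",
    inputs={"raw": Input(type="uri_folder")},
    outputs={"prepped": Output(type="uri_folder")},
    environment="azureml:my-training-env:1",  # assumed registered environment
    compute="cpu-cluster",                    # assumed compute cluster
)

train_step = command(
    code="./src",
    command="python train.py --training_data ${{inputs.training_data}}",
    inputs={"training_data": Input(type="uri_folder")},
    environment="azureml:my-training-env:1",
    compute="cpu-cluster",
)

@pipeline(description="Repeatable prep + train pipeline")
def training_pipeline(raw_data):
    prep = prep_step(raw=raw_data)
    train_step(training_data=prep.outputs.prepped)

pipeline_job = training_pipeline(
    raw_data=Input(
        type="uri_folder",
        path="azureml://datastores/workspaceblobstore/paths/raw/",  # assumed path
    )
)
ml_client.jobs.create_or_update(pipeline_job)
```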
## Create reusable software environments
-By using Machine Learning environments, you can track and reproduce your projects' software dependencies as they evolve. You can use environments to ensure that builds are reproducible without manual software configurations.
+By using Azure Machine Learning environments, you can track and reproduce your projects' software dependencies as they evolve. You can use environments to ensure that builds are reproducible without manual software configurations.
-Environments describe the pip and conda dependencies for your projects. You can use them for training and deployment of models. For more information, see [What are Machine Learning environments?](concept-environments.md).
+Environments describe the pip and conda dependencies for your projects. You can use environments for model training and deployment. For more information on environments, see [What are Azure Machine Learning environments?](concept-environments.md).
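
As a hedged sketch (the environment name, base image, and conda file path are assumptions), registering a reusable environment with the Python SDK v2 looks roughly like this:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Environment
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Hypothetical conda specification file checked into the project repo.
env = Environment(
    name="my-training-env",
    description="Pinned pip and conda dependencies for training and deployment",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",
    conda_file="./environments/train-conda.yml",
)
ml_client.environments.create_or_update(env)
```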
## Register, package, and deploy models from anywhere
The following sections discuss how to register, package, and deploy models.
With model registration, you can store and version your models in the Azure cloud, in your workspace. The model registry makes it easy to organize and keep track of your trained models.
-> [!TIP]
-> A registered model is a logical container for one or more files that make up your model. For example, if you have a model that's stored in multiple files, you can register them as a single model in your Machine Learning workspace. After registration, you can then download or deploy the registered model and receive all the files that were registered.
+A registered model is a logical container for one or more files that make up your model. For example, if you have a model that is stored in multiple files, you can register the files as a single model in your Azure Machine Learning workspace. After registration, you can then download or deploy the registered model and receive all the component files.
-Registered models are identified by name and version. Each time you register a model with the same name as an existing one, the registry increments the version. More metadata tags can be provided during registration. These tags are then used when you search for a model. Machine Learning supports any model that can be loaded by using Python 3.5.2 or higher.
+You can identify registered models by name and version. Whenever you register a model with the same name as an existing model, the registry increments the version number. You can provide metadata tags during registration and use these tags when you search for a model. Azure Machine Learning supports any model that can be loaded by using Python 3.5.2 or higher.
> [!TIP]
-> You can also register models trained outside Machine Learning.
+> You can also register models trained outside Azure Machine Learning.
> [!IMPORTANT]
-> * When you use the **Filter by** `Tags` option on the **Models** page of Azure Machine Learning Studio, instead of using `TagName : TagValue`, use `TagName=TagValue` without spaces.
+> * When you use the **Filter by** `Tags` option on the **Models** page of Azure Machine Learning studio, instead of using `TagName : TagValue`, use `TagName=TagValue` without spaces.
> * You can't delete a registered model that's being used in an active deployment.
-For more information, [Work with models in Azure Machine Learning](./how-to-manage-models.md).
+For more information on how to use models in Azure Machine Learning, see [Work with models in Azure Machine Learning](./how-to-manage-models.md).
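
For example, a minimal, non-authoritative sketch of registering a model folder with the Python SDK v2; the model path, name, and tags are assumptions:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Register a locally trained model folder as a single versioned model asset.
model = Model(
    path="./outputs/credit-model",      # hypothetical folder with all model files
    name="credit-default-model",
    type=AssetTypes.CUSTOM_MODEL,       # use MLFLOW_MODEL for MLflow-format models
    description="Model trained inside or outside Azure Machine Learning",
    tags={"team": "risk", "framework": "scikit-learn"},
)
registered = ml_client.models.create_or_update(model)
print(registered.name, registered.version)  # version increments when you re-register the same name
```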
### Package and debug models
-Before you deploy a model into production, it's packaged into a Docker image. In most cases, image creation happens automatically in the background during deployment. You can manually specify the image.
-
-If you run into problems with the deployment, you can deploy on your local development environment for troubleshooting and debugging.
+Before you deploy a model into production, it needs to be packaged into a Docker image. In most cases, image creation automatically happens in the background during deployment; however, you can manually specify the image.
-For more information, see [How to troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md).
+It's useful to first deploy to your local development environment so that you can troubleshoot and debug before deploying to the cloud. This practice can help you avoid having problems with your deployment to Azure Machine Learning. For more information on how to resolve common deployment issues, see [How to troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md).
### Convert and optimize models
-Converting your model to [Open Neural Network Exchange](https://onnx.ai) (ONNX) might improve performance. On average, converting to ONNX can double performance.
+You can convert your model to [Open Neural Network Exchange](https://onnx.ai) (ONNX) to try to improve performance. Typically, converting to ONNX can double performance.
For more information on ONNX with Machine Learning, see [Create and accelerate machine learning models](concept-onnx.md).
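
As an illustrative sketch only, exporting a hypothetical PyTorch model to ONNX typically looks like the following; the layer sizes and tensor names are made up for the example:

```python
import torch

# Hypothetical trained model and representative input shape.
model = torch.nn.Sequential(
    torch.nn.Linear(20, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)
model.eval()

dummy_input = torch.randn(1, 20)

# Export to ONNX so the model can run on ONNX Runtime for inference.
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["features"],
    output_names=["scores"],
    dynamic_axes={"features": {0: "batch"}, "scores": {0: "batch"}},
)
```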
-### Use models
+### Deploy models
-Trained machine learning models are deployed as [endpoints](concept-endpoints.md) in the cloud or locally. Deployments use CPU, GPU for inferencing.
+You can deploy trained machine learning models as [endpoints](concept-endpoints.md) in the cloud or locally. Deployments use CPU and GPU for inferencing.
-When deploying a model as an endpoint, you provide the following items:
+When deploying a model as an endpoint, you need to provide the following items:
-* The models that are used to score data submitted to the service or device.
-* An entry script. This script accepts requests, uses the models to score the data, and returns a response.
-* A Machine Learning environment that describes the pip and conda dependencies required by the models and entry script.
-* Any other assets such as text and data that are required by the models and entry script.
+* The __model__ that is used to score data submitted to the service or device.
+* An __entry script__<sup>1</sup>. This script accepts requests, uses the models to score the data, and returns a response.
+* An __environment__<sup>2</sup> that describes the pip and conda dependencies required by the models and entry script.
+* Any __other assets__, such as text and data that are required by the models and entry script.
-You also provide the configuration of the target deployment platform. For example, the VM family type, available memory, and number of cores. When the image is created, components required by Azure Machine Learning are also added. For example, assets needed to run the web service.
+You also provide the configuration of the target deployment platform. For example, the virtual machine (VM) family type, available memory, and number of cores. When the image is created, components required by Azure Machine Learning, such as assets needed to run the web service, are also added.
+
+<sup>1,2</sup> When you deploy an MLflow model, you don't need to provide an entry script, also known as a scoring script. You also don't need to provide an environment for the deployment. For more information on deploying MLflow models, see [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).
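
Assuming the model and environment registered in the earlier sketches, deploying to a managed online endpoint with the Python SDK v2 looks roughly like this; the endpoint name, scoring script, and VM size are assumptions:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    ManagedOnlineEndpoint,
    ManagedOnlineDeployment,
    CodeConfiguration,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Create the endpoint, then add a deployment behind it.
endpoint = ManagedOnlineEndpoint(name="credit-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="credit-endpoint",
    model="azureml:credit-default-model:1",        # assumed registered model
    environment="azureml:my-training-env:1",       # assumed registered environment
    code_configuration=CodeConfiguration(code="./src", scoring_script="score.py"),
    instance_type="Standard_DS3_v2",               # assumed VM size
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```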
#### Batch scoring
-Batch scoring is supported through batch endpoints. For more information, see [endpoints](concept-endpoints.md).
+Batch scoring is supported through batch endpoints. For more information on batch scoring, see [Batch endpoints](concept-endpoints-batch.md).
-#### Online endpoints
+#### Real-time scoring
-You can use your models with an online endpoint. Online endpoints can use the following compute targets:
+You can use your models with an online endpoint for real-time scoring. Online endpoints can use the following compute targets:
* Managed online endpoints * Azure Kubernetes Service * Local development environment
-To deploy the model to an endpoint, you must provide the following items:
+To deploy a model to an endpoint, you must provide the following items:
* The model or ensemble of models. * Dependencies required to use the model. Examples are a script that accepts requests and invokes the model and conda dependencies. * Deployment configuration that describes how and where to deploy the model.
-For more information, see [Deploy online endpoints](how-to-deploy-online-endpoints.md).
+For more information on deployment for real-time scoring, see [Deploy online endpoints](how-to-deploy-online-endpoints.md).
-#### Controlled rollout
+#### Controlled rollout for online endpoints
When deploying to an online endpoint, you can use controlled rollout to enable the following scenarios:
-* Create multiple versions of an endpoint for a deployment
+* Create multiple versions of an endpoint for a deployment.
* Perform A/B testing by routing traffic to different deployments within the endpoint.
-* Switch between endpoint deployments by updating the traffic percentage in endpoint configuration.
+* Switch between endpoint deployments by updating the traffic percentage in the endpoint configuration.
-For more information, see [Controlled rollout of machine learning models](./how-to-safely-rollout-online-endpoints.md).
+For more information on deployment using a controlled rollout, see [Perform safe rollout of new deployments for real-time inference](./how-to-safely-rollout-online-endpoints.md).
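
Continuing the earlier deployment sketch, and assuming a second deployment named `green` already exists on the same endpoint, shifting traffic between deployments is a small update to the endpoint's traffic map:

```python
# Route 90% of traffic to the existing deployment and 10% to the new one.
endpoint = ml_client.online_endpoints.get("credit-endpoint")
endpoint.traffic = {"blue": 90, "green": 10}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```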
### Analytics
-Microsoft Power BI supports using machine learning models for data analytics. For more information, see [Machine Learning integration in Power BI (preview)](/power-bi/service-machine-learning-integration).
+Microsoft Power BI supports using machine learning models for data analytics. For more information, see [Azure Machine Learning integration in Power BI](/power-bi/transform-model/dataflows/dataflows-machine-learning-integration).
## Capture the governance data required for MLOps
-Machine Learning gives you the capability to track the end-to-end audit trail of all your machine learning assets by using metadata. For example:
+Azure Machine Learning gives you the capability to track the end-to-end audit trail of all your machine learning assets by using metadata. For example:
-- [Machine Learning datasets](how-to-create-register-datasets.md) help you track, profile, and version data.-- [Interpretability](how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for specific input.-- Machine Learning Job history stores a snapshot of the code, data, and computes used to train a model.-- The [Machine Learning Model Registry](./how-to-manage-models.md?tabs=use-local#create-a-model-in-the-model-registry) captures all the metadata associated with your model. For example, metadata includes which experiment trained it, where it's being deployed, and if its deployments are healthy.-- [Integration with Azure](how-to-use-event-grid.md) allows you to act on events in the machine learning lifecycle. Examples are model registration, deployment, data drift, and training (job) events.
+- [Azure Machine Learning data assets](how-to-create-register-datasets.md) help you track, profile, and version data.
+- [Model interpretability](how-to-machine-learning-interpretability.md) allows you to explain your models, meet regulatory compliance, and understand how models arrive at a result for a given input.
+- Azure Machine Learning Job history stores a snapshot of the code, data, and computes used to train a model.
+- [Azure Machine Learning model registry](./how-to-manage-models.md?tabs=use-local#create-a-model-in-the-model-registry) captures all the metadata associated with your model. For example, which experiment trained the model, where the model is being deployed, and if the model's deployments are healthy.
+- [Integration with Azure](how-to-use-event-grid.md) allows you to act on events, such as model registration, deployment, data drift, and training (job) events, in the machine learning lifecycle.
> [!TIP]
-> While some information on models and datasets is automatically captured, you can add more information by using _tags_. When you look for registered models and datasets in your workspace, you can use tags as a filter.
+> While some information on models and data assets is automatically captured, you can add more information by using _tags_. When you look for registered models and data assets in your workspace, you can use tags as a filter.
## Notify, automate, and alert on events in the machine learning lifecycle
-Machine Learning publishes key events to Azure Event Grid, which can be used to notify and automate on events in the machine learning lifecycle. For more information, see [Use Event Grid](how-to-use-event-grid.md).
+Azure Machine Learning publishes key events to Azure Event Grid, which can be used to notify and automate on events in the machine learning lifecycle. For more information on how to set up event-driven processes based on Azure Machine Learning events, see [Custom CI/CD and event-driven workflows](how-to-use-event-grid.md).
## Automate the machine learning lifecycle
-You can use GitHub and Azure Pipelines to create a continuous integration process that trains a model. In a typical scenario, when a data scientist checks a change into the Git repo for a project, Azure Pipelines starts a training job. The results of the job can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
+You can use GitHub and [Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) to create a continuous integration process that trains a model. In a typical scenario, when a data scientist checks a change into a project's Git repo, Azure Pipelines starts a training job. The results of the job can then be inspected to see the performance characteristics of the trained model. You can also create a pipeline that deploys the model as a web service.
-The [Machine Learning extension](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml) makes it easier to work with Azure Pipelines. It provides the following enhancements to Azure Pipelines:
+The [Machine Learning extension](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml) makes it easier to work with Azure Pipelines. The extension provides the following enhancements to Azure Pipelines:
* Enables workspace selection when you define a service connection. * Enables release pipelines to be triggered by trained models created in a training pipeline.
-For more information on using Azure Pipelines with Machine Learning, see:
-
-* [Continuous integration and deployment of machine learning models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
-* [Machine Learning MLOps](https://github.com/Azure/mlops-v2) repository
+For more information on using Azure Pipelines with Machine Learning, see [Use Azure Pipelines with Azure Machine Learning](how-to-devops-machine-learning.md).
-## Next steps
+## Related content
-Learn more by reading and exploring the following resources:
+- [Set up MLOps with Azure DevOps](how-to-setup-mlops-azureml.md)
+- [Learning path: End-to-end MLOps with Azure Machine Learning](/training/paths/build-first-machine-operations-workflow/)
+- [CI/CD of machine learning models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
-+ [Set up MLOps with Azure DevOps](how-to-setup-mlops-azureml.md)
-+ [Learning path: End-to-end MLOps with Azure Machine Learning](/training/paths/build-first-machine-operations-workflow/)
-+ [How to deploy a model to an online endpoint](how-to-deploy-online-endpoints.md) with Machine Learning
-+ [Tutorial: Train and deploy a model](tutorial-train-deploy-notebook.md)
-+ [CI/CD of machine learning models with Azure Pipelines](/azure/devops/pipelines/targets/azure-machine-learning)
-+ [Machine learning at scale](/azure/architecture/data-guide/big-data/machine-learning-at-scale)
-+ [Azure AI reference architectures and best practices repo](https://github.com/microsoft/AI)
machine-learning How To Managed Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-managed-network.md
To enable the [serverless Spark jobs](how-to-submit-spark-jobs.md) for the manag
``` > [!NOTE]
- > - When **Allow Only Approved Outbound** is enabled (`isolation_mode: allow_only_approved_outbound`), conda package dependencies defined in Spark session configuration will fail to install. To resolve this problem, upload a self-contained Python package wheel with no external dependencies to an Azure storage account and create private endpoint to this storage account. Use the path to Python package wheel as `py_files` parameter in your Spark job.
- > - If the workspace was created with `isolation_mode: allow_internet_outbound`, it can not be updated later to use `isolation_mode: allow_only_approved_outbound`.
+ > When **Allow Only Approved Outbound** is enabled (`isolation_mode: allow_only_approved_outbound`), conda package dependencies defined in Spark session configuration fail to install. To resolve this problem, upload a self-contained Python package wheel with no external dependencies to an Azure storage account, and create a private endpoint to this storage account. Use the path to the Python package wheel as the `py_files` parameter in your Spark job. Setting an FQDN outbound rule doesn't bypass this issue because Spark doesn't support FQDN rule propagation.
# [Python SDK](#tab/python)
machine-learning How To Prevent Data Loss Exfiltration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prevent-data-loss-exfiltration.md
For more information, see [How to secure training environments](./v1/how-to-secu
## 3. Enable storage endpoint for the subnet
+Use the following steps to enable a storage endpoint for the subnet that contains your Azure Machine Learning compute clusters and compute instances:
+ 1. From the [Azure portal](https://portal.azure.com), select the __Azure Virtual Network__ for your Azure Machine Learning workspace.
-1. From the left of the page, select __Subnets__ and then select the subnet that contains your compute cluster/instance resources.
+1. From the left of the page, select __Subnets__ and then select the subnet that contains your compute cluster and compute instance.
1. In the form that appears, expand the __Services__ dropdown and then enable __Microsoft.Storage__. Select __Save__ to save these changes. 1. Apply the service endpoint policy to your workspace subnet.
machine-learning Reference Yaml Deployment Managed Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| | - | -- | - | | `request_timeout_ms` | integer | The scoring timeout in milliseconds. Note that the maximum value allowed is `180000` milliseconds. See [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints) for more. | `5000` | | `max_concurrent_requests_per_instance` | integer | The maximum number of concurrent requests per instance allowed for the deployment. <br><br> **Note:** If you're using [Azure Machine Learning Inference Server](how-to-inference-server-http.md) or [Azure Machine Learning Inference Images](concept-prebuilt-docker-images-inference.md), your model must be configured to handle concurrent requests. To do so, pass `WORKER_COUNT: <int>` as an environment variable. For more information about `WORKER_COUNT`, see [Azure Machine Learning Inference Server Parameters](how-to-inference-server-http.md#server-parameters) <br><br> **Note:** Set to the number of requests that your model can process concurrently on a single node. Setting this value higher than your model's actual concurrency can lead to higher latencies. Setting this value too low might lead to under utilized nodes. Setting too low might also result in requests being rejected with a 429 HTTP status code, as the system will opt to fail fast. For more information, see [Troubleshooting online endpoints: HTTP status codes](how-to-troubleshoot-online-endpoints.md#http-status-codes). | `1` |
-| `max_queue_wait_ms` | integer | The maximum amount of time in milliseconds a request will stay in the queue. | `500` |
+| `max_queue_wait_ms` | integer | (Deprecated) The maximum amount of time in milliseconds that a request stays in the queue. Instead, increase `request_timeout_ms` to account for any networking or queue delays. | `500` |
### ProbeSettings
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-connect-data-ui.md
- Previously updated : 09/28/2021+ Last updated : 02/09/2024 #Customer intent: As low code experience data scientist, I need to make my data in storage on Azure available to my remote compute to train my ML models. # Connect to data with the Azure Machine Learning studio
-In this article, learn how to access your data with the [Azure Machine Learning studio](https://ml.azure.com). Connect to your data in storage services on Azure with [Azure Machine Learning datastores](how-to-access-data.md), and then package that data for tasks in your ML workflows with [Azure Machine Learning datasets](how-to-create-register-datasets.md).
+This article shows how to access your data with the [Azure Machine Learning studio](https://ml.azure.com). Connect to your data in Azure storage services with [Azure Machine Learning datastores](how-to-access-data.md). Then, package that data for ML workflow tasks with [Azure Machine Learning datasets](how-to-create-register-datasets.md).
-The following table defines and summarizes the benefits of datastores and datasets.
+This table defines and summarizes the benefits of datastores and datasets.
-|Object|Description| Benefits|
+|Object|Description| Benefits|
||||
-|Datastores| Securely connect to your storage service on Azure, by storing your connection information, like your subscription ID and token authorization in your [Key Vault](https://azure.microsoft.com/services/key-vault/) associated with the workspace | Because your information is securely stored, you <br><br> <li> Don't&nbsp;put&nbsp;authentication&nbsp;credentials&nbsp;or&nbsp;original&nbsp;data sources at risk. <li> No longer need to hard code them in your scripts.
-|Datasets| By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. With datasets you can, <br><br><li> Access data during model training.<li> Share data and collaborate with other users.<li> Use open-source libraries, like pandas, for data exploration. | Because datasets are lazily evaluated, and the data remains in its existing location, you <br><br><li>Keep a single copy of data in your storage.<li> Incur no extra storage cost <li> Don't risk unintentionally changing your original data sources.<li>Improve ML workflow performance speeds.
+|Datastores| To securely connect to your storage service on Azure, store your connection information (subscription ID, token authorization, etc.) in the [Key Vault](https://azure.microsoft.com/services/key-vault/) associated with the workspace | Because your information is securely stored, you don't put authentication credentials or original data sources at risk, and you no longer need to hard code these values in your scripts |
+|Datasets| Dataset creation also creates a reference to the data source location, along with a copy of its metadata. With datasets you can access data during model training, share data and collaborate with other users, and use open-source libraries, like pandas, for data exploration. | Since datasets are lazily evaluated, and the data remains in its existing location, you keep a single copy of data in your storage. Additionally, you incur no extra storage cost, you avoid unintentional changes to your original data sources, and improve ML workflow performance speeds.|
-To understand where datastores and datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
+To learn where datastores and datasets fit in the overall Azure Machine Learning data access workflow, visit [Securely access data](concept-data.md#data-workflow).
-For a code first experience, see the following articles to use the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/) to:
-* [Connect to Azure storage services with datastores](how-to-access-data.md).
-* [Create Azure Machine Learning datasets](how-to-create-register-datasets.md).
+For more information about the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/) and a code-first experience, see:
+* [Connect to Azure storage services with datastores](how-to-access-data.md)
+* [Create Azure Machine Learning datasets](how-to-create-register-datasets.md)
## Prerequisites -- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/)
-- Access to [Azure Machine Learning studio](https://ml.azure.com/).
+- Access to [Azure Machine Learning studio](https://ml.azure.com/)
-- An Azure Machine Learning workspace. [Create workspace resources](../quickstart-create-resources.md).
+- An Azure Machine Learning workspace. [Create workspace resources](../quickstart-create-resources.md)
- - When you create a workspace, an Azure blob container and an Azure file share are automatically registered as datastores to the workspace. They're named `workspaceblobstore` and `workspacefilestore`, respectively. If blob storage is sufficient for your needs, the `workspaceblobstore` is set as the default datastore, and already configured for use. Otherwise, you need a storage account on Azure with a [supported storage type](how-to-access-data.md#supported-data-storage-service-types).
-
+ - When you create a workspace, an Azure blob container and an Azure file share are automatically registered to the workspace as datastores. They're named `workspaceblobstore` and `workspacefilestore`, respectively. The `workspaceblobstore` is set as the default datastore and is already configured for use. If blob storage meets your needs, you can use it as-is. Otherwise, you need an Azure storage account with a [supported storage type](how-to-access-data.md#supported-data-storage-service-types).
## Create datastores
-You can create datastores from [these Azure storage solutions](how-to-access-data.md#supported-data-storage-service-types). **For unsupported storage solutions**, and to save data egress cost during ML experiments, you must [move your data](how-to-access-data.md#move-data-to-supported-azure-storage-solutions) to a supported Azure storage solution. [Learn more about datastores](how-to-access-data.md).
+You can create datastores from [these Azure storage solutions](how-to-access-data.md#supported-data-storage-service-types). **For unsupported storage solutions**, and to save data egress cost during ML experiments, you must [move your data](how-to-access-data.md#move-data-to-supported-azure-storage-solutions) to a supported Azure storage solution. For more information, visit [Connect to Azure storage services with datastores](how-to-access-data.md).
-You can create datastores with credential-based access or identity-based access.
+You can create datastores with credential-based access or identity-based access.
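
If you prefer a code-first route instead of the studio form, a minimal sketch with the v1 Python SDK (`azureml-core`) registers a blob container as a credential-based datastore; the storage account, container, and datastore names here are hypothetical. The identity-based approach omits the account key.

```python
from azureml.core import Workspace, Datastore

ws = Workspace.from_config()  # assumes a local workspace config.json

# Hypothetical storage account, container, and key for a credential-based datastore.
blob_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name="my_blob_datastore",
    container_name="training-data",
    account_name="mystorageaccount",
    account_key="<account-key>",
)
print(blob_datastore.name)
```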
# [Credential-based](#tab/credential)
-Create a new datastore in a few steps with the Azure Machine Learning studio.
+Create a new datastore with the Azure Machine Learning studio.
> [!IMPORTANT]
-> If your data storage account is in a virtual network, additional configuration steps are required to ensure the studio has access to your data. See [Network isolation & privacy](../how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied.
+> If your data storage account is located in a virtual network, additional configuration steps are required to ensure that the studio can access your data. Visit [Network isolation & privacy](../how-to-enable-studio-virtual-network.md) for more information about the appropriate configuration steps.
1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/). 1. Select **Data** on the left pane under **Assets**. 1. At the top, select **Datastores**. 1. Select **+Create**.
-1. Complete the form to create and register a new datastore. The form intelligently updates itself based on your selections for Azure storage type and authentication type. See the [storage access and permissions section](#access-validation) to understand where to find the authentication credentials you need to populate this form.
+1. Complete the form to create and register a new datastore. The form intelligently updates itself based on your selections for Azure storage type and authentication type. For more information about where to find the authentication credentials needed to populate this form, visit the [storage access and permissions section](#access-validation).
-The following example demonstrates what the form looks like when you create an **Azure blob datastore**:
+This screenshot shows the **Azure blob datastore** creation panel:
-![Form for a new datastore](media/how-to-connect-data-ui/new-datastore-form.png)
# [Identity-based](#tab/identity)
-Create a new datastore in a few steps with the Azure Machine Learning studio. Learn more about [identity-based data access](how-to-identity-based-data-access.md).
+Create a new datastore with the Azure Machine Learning studio. For more information, visit [identity-based data access](how-to-identity-based-data-access.md).
> [!IMPORTANT]
-> If your data storage account is in a virtual network, additional configuration steps are required to ensure the studio has access to your data. See [Network isolation & privacy](../how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied.
+> If your data storage account resides in a virtual network, additional configuration steps are required to ensure that Studio can access your data. Visit [Network isolation & privacy](../how-to-enable-studio-virtual-network.md) to ensure that the appropriate configuration steps are applied.
1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/). 1. Select **Data** on the left pane under **Assets**.
Create a new datastore in a few steps with the Azure Machine Learning studio. Le
1. Select **+Create**.
1. Complete the form to create and register a new datastore. The form intelligently updates itself based on your selections for Azure storage type. See [which storage types support identity-based](how-to-identity-based-data-access.md#storage-access-permissions) data access.
1. Choose the storage account and container name that you want to use. The Blob reader role (for ADLS Gen 2 and Blob storage) is required; the user who creates the datastore needs permission to see the contents of the storage, and the Reader role on the subscription and resource group.
1. Select **No** to not **Save credentials with the datastore for data access**.
-The following example demonstrates what the form looks like when you create an **Azure blob datastore**:
+This screenshot shows the **Azure blob datastore** creation panel:
+ ![Form for a new datastore](media/how-to-connect-data-ui/new-id-based-datastore-form.png)
The following example demonstrates what the form looks like when you create an *
## Create data assets
-After you create a datastore, create a dataset to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training. [Learn more about datasets](how-to-create-register-datasets.md).
+After you create a datastore, create a dataset to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks - for example, training. Visit [Create Azure Machine Learning datasets](how-to-create-register-datasets.md) for more information about datasets.
-There are two types of datasets, FileDataset and TabularDataset.
-[FileDatasets](how-to-create-register-datasets.md#filedataset) create references to single or multiple files or public URLs. Whereas [TabularDatasets](how-to-create-register-datasets.md#tabulardataset) represent your data in a tabular format. You can create TabularDatasets from .csv, .tsv, .parquet, .jsonl files, and from SQL query results.
+Datasets have two types: FileDataset and TabularDataset. [FileDatasets](how-to-create-register-datasets.md#filedataset) create references to single or multiple files, or public URLs. [TabularDatasets](how-to-create-register-datasets.md#tabulardataset) represent data in a tabular format. You can create TabularDatasets from SQL query results and from these file types (see the code sketch after this list):
+- .csv
+- .tsv
+- .parquet
+- .jsonl
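
As a hedged, code-first illustration (the datastore name and file path are assumptions), creating and registering a TabularDataset with the v1 Python SDK looks roughly like this:

```python
from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()
datastore = Datastore.get(ws, "my_blob_datastore")  # hypothetical datastore name

# Create a TabularDataset from delimited files and register it in the workspace.
tabular_ds = Dataset.Tabular.from_delimited_files(
    path=[(datastore, "datasets/titanic/*.csv")]    # hypothetical path
)
tabular_ds = tabular_ds.register(
    workspace=ws,
    name="titanic-tabular",
    description="Example tabular dataset",
    create_new_version=True,
)
```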
The following steps describe how to create a dataset in [Azure Machine Learning studio](https://ml.azure.com).
The following steps describe how to create a dataset in [Azure Machine Learning
1. Navigate to [Azure Machine Learning studio](https://ml.azure.com) 1. Under __Assets__ in the left navigation, select __Data__. On the Data assets tab, select Create
-1. Give your data asset a name and optional description. Then, under **Type**, select one of the Dataset types, either **File** or **Tabular**.
+1. Give the data asset a name and optional description. Then, under **Type**, select a Dataset type, either **File** or **Tabular**.
-1. You have a few options for your data source. If your data is already stored in Azure, choose "From Azure storage". If you want to upload data from your local drive, choose "From local files". If your data is stored at a public web location, choose "From web files". You can also create a data asset from a SQL database, or from [Azure Open Datasets](../../open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md).
+1. The **Data source** pane opens next, as shown in this screenshot:
-1. For the file selection step, select where you want your data to be stored in Azure, and what data files you want to use.
- 1. Enable skip validation if your data is in a virtual network. Learn more about [virtual network isolation and privacy](../how-to-enable-studio-virtual-network.md).
-1. Follow the steps to set the data parsing settings and schema for your data asset. The settings will be pre-populated based on file type and you can further configure your settings prior to creating the data asset.
+You have different options for your data source. For data already stored in Azure, choose "From Azure storage." To upload data from your local drive, choose "From local files." For data stored at a public web location, choose "From web files." You can also create a data asset from a SQL database, or from [Azure Open Datasets](../../open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md).
+
+1. At the file selection step, select the location where Azure should store your data, and the data files you want to use.
+ 1. Enable skip validation if your data is in a virtual network. Learn more about [virtual network isolation and privacy](../how-to-enable-studio-virtual-network.md).
-1. Once you reach the Review step, click Create on the last page
+1. Follow the steps to set the data parsing settings and schema for your data asset. The settings prepopulate based on file type, and you can further configure your settings before data asset creation.
-<a name="profile"></a>
+1. Once you reach the Review step, select Create on the last page
### Data preview and profile
-After you create your dataset, verify you can view the preview and profile in the studio with the following steps:
+After you create your dataset, verify that you can view the preview and profile in the studio:
1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com/) 1. Under __Assets__ in the left navigation, select __Data__.
After you create your dataset, verify you can view the preview and profile in th
1. Select the **Profile** tab. :::image type="content" source="media\how-to-connect-data-ui\explore-generate-profile.png" alt-text="Screenshot shows dataset column metadata in the Profile tab.":::
-You can get a vast variety of summary statistics across your data set to verify whether your data set is ML-ready. For non-numeric columns, they include only basic statistics like min, max, and error count. For numeric columns, you can also review their statistical moments and estimated quantiles.
+You can use summary statistics across your data set to verify whether your data set is ML-ready. For non-numeric columns, these statistics include only basic statistics - for example, min, max, and error count. Numeric columns offer statistical moments and estimated quantiles.
-Specifically, Azure Machine Learning dataset's data profile includes:
+The Azure Machine Learning dataset data profile includes:
>[!NOTE] > Blank entries appear for features with irrelevant types.
-|Statistic|Description
-||
-|Feature| Name of the column that is being summarized.
-|Profile| In-line visualization based on the type inferred. For example, strings, booleans, and dates will have value counts, while decimals (numerics) have approximated histograms. This allows you to gain a quick understanding of the distribution of the data.
-|Type distribution| In-line value count of types within a column. Nulls are their own type, so this visualization is useful for detecting odd or missing values.
-|Type|Inferred type of the column. Possible values include: strings, booleans, dates, and decimals.
-|Min| Minimum value of the column. Blank entries appear for features whose type doesn't have an inherent ordering (like, booleans).
+|Statistic|Description|
+|--|--|
+|Feature| The summarized column name
+|Profile| In-line visualization based on the inferred type. Strings, booleans, and dates have value counts. Decimals (numerics) have approximated histograms. These visualizations offer a quick understanding of the data distribution
+|Type distribution| In-line value count of types within a column. Nulls are their own type, so this visualization can detect odd or missing values
+|Type|Inferred column type. Possible values include: strings, booleans, dates, and decimals
+|Min| Minimum value of the column. Blank entries appear for features whose type doesn't have an inherent ordering (for example, booleans)
|Max| Maximum value of the column.
-|Count| Total number of missing and non-missing entries in the column.
-|Not missing count| Number of entries in the column that aren't missing. Empty strings and errors are treated as values, so they won't contribute to the "not missing count."
-|Quantiles| Approximated values at each quantile to provide a sense of the distribution of the data.
-|Mean| Arithmetic mean or average of the column.
-|Standard deviation| Measure of the amount of dispersion or variation of this column's data.
-|Variance| Measure of how far spread out this column's data is from its average value.
-|Skewness| Measure of how different this column's data is from a normal distribution.
-|Kurtosis| Measure of how heavily tailed this column's data is compared to a normal distribution.
+|Count| Total number of missing and nonmissing entries in the column
+|Not missing count| Number of entries in the column that aren't missing. Empty strings and errors are treated as values, so they don't contribute to the "not missing count."
+|Quantiles| Approximated values at each quantile, to provide a sense of the data distribution
+|Mean| Arithmetic mean or average of the column
+|Standard deviation| Measure of the amount of dispersion or variation for the data of this column
+|Variance| Measure of how far the data of this column spreads out from its average value
+|Skewness| Measures the difference of this column's data from a normal distribution
+|Kurtosis| Measures the degree of "tailness" of this column's data, compared to a normal distribution
## Storage access and permissions
-To ensure you securely connect to your Azure storage service, Azure Machine Learning requires that you have permission to access the corresponding data storage. This access depends on the authentication credentials used to register the datastore.
+To ensure that you securely connect to your Azure storage service, Azure Machine Learning requires that you have permission to access the corresponding data storage. This access depends on the authentication credentials used to register the datastore.
### Virtual network
-If your data storage account is in a **virtual network**, extra configuration steps are required to ensure Azure Machine Learning has access to your data. See [Use Azure Machine Learning studio in a virtual network](../how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied when you create and register your datastore.
+If your data storage account is in a **virtual network**, extra configuration steps are required to ensure that Azure Machine Learning has access to your data. See [Use Azure Machine Learning studio in a virtual network](../how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied when you create and register your datastore.
### Access validation > [!WARNING]
-> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the Azure Machine Learning Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
+> Cross-tenant access to storage accounts is not supported. If your scenario needs cross-tenant access, please reach out to the Azure Machine Learning Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
-**As part of the initial datastore creation and registration process**, Azure Machine Learning automatically validates that the underlying storage service exists and the user provided principal (username, service principal, or SAS token) has access to the specified storage.
+**As part of the initial datastore creation and registration process**, Azure Machine Learning automatically validates that the underlying storage service exists and that the user-provided principal (username, service principal, or SAS token) has access to the specified storage.
-**After datastore creation**, this validation is only performed for methods that require access to the underlying storage container, **not** each time datastore objects are retrieved. For example, validation happens if you want to download files from your datastore; but if you just want to change your default datastore, then validation doesn't happen.
+**After datastore creation**, this validation is only performed for methods that require access to the underlying storage container. The validation is **not** performed each time datastore objects are retrieved. For example, validation happens when you download files from your datastore. However, if you want to change your default datastore, validation doesn't occur.
-To authenticate your access to the underlying storage service, you can provide either your account key, shared access signatures (SAS) tokens, or service principal according to the datastore type you want to create. The [storage type matrix](how-to-access-data.md#supported-data-storage-service-types) lists the supported authentication types that correspond to each datastore type.
+To authenticate your access to the underlying storage service, provide either your account key, shared access signatures (SAS) tokens, or service principal, according to the datastore type you want to create. The [storage type matrix](how-to-access-data.md#supported-data-storage-service-types) lists the supported authentication types that correspond to each datastore type.
-You can find account key, SAS token, and service principal information on your [Azure portal](https://portal.azure.com).
+You can find account key, SAS token, and service principal information in the [Azure portal](https://portal.azure.com).
-* If you plan to use an account key or SAS token for authentication, select **Storage Accounts** on the left pane, and choose the storage account that you want to register.
+* To obtain an account key for authentication, select **Storage Accounts** in the left pane, and choose the storage account that you want to register
* The **Overview** page provides information such as the account name, container, and file share name.
- 1. For account keys, go to **Access keys** on the **Settings** pane.
- 1. For SAS tokens, go to **Shared access signatures** on the **Settings** pane.
+ * Expand the **Security + networking** node in the left nav
+ * Select **Access keys**
+ * The available key values serve as **Account key** values
+* To obtain a SAS token for authentication, select **Storage Accounts** in the left pane, and choose the storage account that you want
+ * Expand the **Security + networking** node in the left nav
+ * Select **Shared access signature**
+ * Complete the process to generate the SAS value
-* If you plan to use a [service principal](../../active-directory/develop/howto-create-service-principal-portal.md) for authentication, go to your **App registrations** and select which app you want to use.
- * Its corresponding **Overview** page will contain required information like tenant ID and client ID.
+* To use a [service principal](../../active-directory/develop/howto-create-service-principal-portal.md) for authentication, go to your **App registrations** and select which app you want to use.
+ * Its corresponding **Overview** page contains required information like tenant ID and client ID.
> [!IMPORTANT]
-> * If you need to change your access keys for an Azure Storage account (account key or SAS token), be sure to sync the new credentials with your workspace and the datastores connected to it. Learn how to [sync your updated credentials](../how-to-change-storage-access-key.md). <br> <br>
-> * If you unregister and re-register a datastore with the same name, and it fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created prior to October 2020. For information on how to enable soft-delete, see [Turn on Soft Delete for an existing key vault](../../key-vault/general/soft-delete-change.md#turn-on-soft-delete-for-an-existing-key-vault).
+> * To change your access keys for an Azure Storage account (account key or SAS token), be sure to sync the new credentials with both your workspace and the datastores connected to it. For more information, visit [sync your updated credentials](../how-to-change-storage-access-key.md).
+> * If you unregister and then re-register a datastore with the same name, and that re-registration fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created prior to October 2020. For more information about how to enable soft-delete, visit [Turn on Soft Delete for an existing key vault](../../key-vault/general/soft-delete-change.md#turn-on-soft-delete-for-an-existing-key-vault).
### Permissions
-For Azure blob container and Azure Data Lake Gen 2 storage, make sure your authentication credentials have **Storage Blob Data Reader** access. Learn more about [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader). An account SAS token defaults to no permissions.
+For Azure blob container and Azure Data Lake Gen 2 storage, ensure that your authentication credentials have **Storage Blob Data Reader** access. Learn more about [Storage Blob Data Reader](../../role-based-access-control/built-in-roles.md#storage-blob-data-reader). By default, an account SAS token has no permissions.
* For data **read access**, your authentication credentials must have a minimum of list and read permissions for containers and objects. * For data **write access**, write and add permissions also are required.
Use your datasets in your machine learning experiments for training ML models. [
## Next steps
-* [A step-by-step example of training with TabularDatasets and automated machine learning](../tutorial-first-experiment-automated-ml.md).
+* [A step-by-step example of training with TabularDatasets and automated machine learning](../tutorial-first-experiment-automated-ml.md)
-* [Train a model](how-to-set-up-training-targets.md).
+* [Train a model](how-to-set-up-training-targets.md)
-* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
+* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/)
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
ms. Previously updated : 02/12/2024 Last updated : 02/13/2024
## Update (January 2024) -- Public preview: Using the RVTools XLSX, you can import on-premises servers' configuration into a VMware environment and create quick business case by assessing the cost of Azure and Azure VMware Solution (AVS) environments. [Learn more](migrate-support-matrix-vmware.md#import-servers-using-rvtools-xlsx-preview).
+- Public preview: Using the RVTools XLSX, you can import an on-premises VMware environment's server configuration data into Azure Migrate to create a quick business case and assess the cost of hosting these workloads on Azure or Azure VMware Solution (AVS). [Learn more](migrate-support-matrix-vmware.md#import-servers-using-rvtools-xlsx-preview).
## Update (December 2023)
mysql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/overview.md
To migrate from Azure Database for MySQL single server to Azure Database for MyS
For more information, see [Select the right tools for migration to Azure Database for MySQL flexible server](../../mysql/how-to-decide-on-right-migration-tools.md).
-## Azure regions
+### Azure regions
One advantage of running your workload in Azure is its global reach. Azure Database for MySQL flexible server is available today in the following Azure regions:
One advantage of running your workload in Azure is its global reach. Azure Datab
| Brazil South | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Canada East | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Central India | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
+| Central India | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Central US | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| China East 2 | :heavy_check_mark: | :heavy_check_mark: | :x: | :x: |
| China East 3 | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
One advantage of running your workload in Azure is its global reach. Azure Datab
| South India | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| Southeast Asia | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Sweden Central | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: :heavy_check_mark: |
+| Switzerland North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Switzerland West | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| UAE Central | :heavy_check_mark: | :heavy_check_mark: | :x: | :heavy_check_mark: |
| UAE North | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :x: |
mysql Whats Happening To Mysql Single Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/whats-happening-to-mysql-single-server.md
After years of evolving the Azure Database for MySQL - Single Server service, it
Azure Database for MySQL - Flexible Server is a fully managed production-ready database service designed for more granular control and flexibility over database management functions and configuration settings. For more information about Flexible Server, visit **[Azure Database for MySQL - Flexible Server](../flexible-server/overview.md)**.
-If you currently have an Azure Database for MySQL - Single Server service hosting production servers, we're glad to let you know that you can migrate your Azure Database for MySQL - Single Server servers to the Azure Database for MySQL - Flexible Server service free of cost using Azure Database Migration Service (classic) . Review the different ways to migrate using Azure Data Migration Service (DMS) in the section below.
+If you currently have an Azure Database for MySQL - Single Server service hosting production servers, we're glad to let you know that you can migrate your Azure Database for MySQL - Single Server servers to the Azure Database for MySQL - Flexible Server service free of cost using Azure Database for MySQL Import, in-place auto-migration, or Azure Database Migration Service (classic). Review the different ways to migrate in the section below.
## Migrate from Single Server to Flexible Server
Learn how to migrate from Azure Database for MySQL - Single Server to Azure Data
| Scenario | Tool(s) | Details |
|---|---|---|
+| Offline/Online | Azure Database for MySQL Import and the Azure CLI | [Tutorial: Azure Database for MySQL Import with the Azure CLI](../migrate/migrate-single-flexible-mysql-import-cli.md) |
| Offline | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (offline)](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) |
-| Offline | Azure Database for MySQL Import and the Azure CLI | [Tutorial: Azure Database for MySQL Import with the Azure CLI (offline)](../migrate/migrate-single-flexible-mysql-import-cli.md) |
| Online | Database Migration Service (classic) and the Azure portal | [Tutorial: DMS (classic) with the Azure portal (online)](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) |
+| Offline | In-place auto-migration nomination [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u) | [In-place auto-migration from Azure Database for MySQL Single to Flexible Server](../migrate/migrate-single-flexible-in-place-auto-migration.md) |
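For the Azure Database for MySQL Import path, the CLI call is sketched below. Treat it as a hypothetical illustration only: the `az mysql flexible-server import create` parameters shown are assumptions, so confirm the exact syntax in the linked tutorial before running it.

```azurecli-interactive
# Hypothetical sketch: parameter names and values are assumptions; see the MySQL Import tutorial for exact syntax.
az mysql flexible-server import create \
  --name target-flexible-server \
  --resource-group example-rg \
  --data-source-type mysql_single \
  --data-source source-single-server \
  --sku-name Standard_D2ds_v4 \
  --tier GeneralPurpose
```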
For more information on migrating from Single Server to Flexible Server using other migration tools, visit [Select the right tools for migration to Azure Database for MySQL](../migrate/how-to-decide-on-right-migration-tools.md).

> [!NOTE]
-> In-place auto-migration from Azure Database for MySQL ΓÇô Single Server to Flexible Server is a service-initiated in-place migration during planned maintenance window for Single Server database workloads with Basic or General Purpose SKU, data storage used <= 20 GiB and no complex features (CMK, AAD, Read Replica, Private Link) enabled. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. All other Single Server workloads are recommended to use user-initiated migration tooling offered by Azure - Azure DMS, Azure Database for MySQL Import to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
+> In-place auto-migration from Azure Database for MySQL – Single Server to Flexible Server is a service-initiated in-place migration during a planned maintenance window for select Single Server database workloads. The eligible servers are identified by the service and are sent an advance notification detailing steps to review migration details. If you own a Single Server workload with a Basic or General Purpose SKU, data storage used <= 20 GiB, and no complex features (CMK, AAD, Read Replica, Private Link) enabled, you can now nominate yourself (if not already scheduled by the service) for auto-migration by submitting your server details through this [form](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR4lhLelkCklCuumNujnaQ-ZUQzRKSVBBV0VXTFRMSDFKSUtLUDlaNTA5Wi4u). All other Single Server workloads are recommended to use the user-initiated migration tooling offered by Azure (Azure DMS or Azure Database for MySQL Import) to migrate. Learn more about in-place auto-migration [here](../migrate/migrate-single-flexible-in-place-auto-migration.md).
## Migration Eligibility
To upgrade to Azure Database for MySQL Flexible Server, it's important to know w
**Q. What happens to my existing Azure Database for MySQL single server instances?**
-**A.** Your existing Azure Database for MySQL single server workloads will continue to function as before and will be officially supported until the sunset date. However, no new updates will be released for Single Server and we strongly advise you to start migrating to Azure Database for MySQL Flexible Server at the earliest.
+**A.** Your existing Azure Database for MySQL single server workloads will continue to function as before and will be officially supported until the sunset date. However, no new updates will be released for Single Server, and we strongly advise you to start migrating to Azure Database for MySQL Flexible Server as soon as possible. After the sunset date, the Azure Database for MySQL Single Server platform will be deprecated and will no longer be available to host any existing instances.
**Q. Can I choose to continue running Single Server beyond the sunset date?**
-**A.** Unfortunately, we don't plan to support Single Server beyond the sunset date of **September 16, 2024**, and hence we strongly advise that you start planning your migration as soon as possible.
+**A.** Unfortunately, we don't plan to support Single Server beyond the sunset date of **September 16, 2024**, and hence we strongly advise that you start planning your migration as soon as possible. After the sunset date, the Azure Database for MySQL Single Server platform will be deprecated and will no longer be available to host any existing instances.
**Q. After the Single Server retirement announcement, what if I still need to create a new single server to meet my business needs?**
To upgrade to Azure Database for MySQL Flexible Server, it's important to know w
**Q. Are there additional costs associated with performing the migration?**
-**A.** When running the migration, you pay for the target flexible server and the source single server. The configuration and compute of the target flexible server determines the additional costs incurred. For more information, see, [Pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/). Once you've decommissioned the source single server post successful migration, you only pay for your running flexible server. There are no costs incurred while running the migration through the Azure Database Migration Service (classic) migration tooling.
+**A.** When running the migration, you pay for the target flexible server and the source single server. The configuration and compute of the target flexible server determine the additional costs incurred. For more information, see [Pricing](https://azure.microsoft.com/pricing/details/mysql/flexible-server/). Once you've decommissioned the source single server post successful migration, you only pay for your running flexible server. There are no costs incurred while running the migration through the Azure Database Migration Service (classic), in-place auto-migration, or Azure Database for MySQL Import migration tooling.
**Q. Will my billing be affected by running Flexible Server as compared to Single Server?**
To upgrade to Azure Database for MySQL Flexible Server, it's important to know w
**Q. Do I need to incur downtime when migrating Single Server to Flexible Server?**
-**A.** To limit any downtime you might incur, perform an online migration to Flexible Server, which provides minimal downtime.
+**A.** To limit any downtime you might incur, perform an online migration to Flexible Server, which provides minimal downtime.
**Q. Will there be future updates to Single Server to support latest MySQL versions?**
To upgrade to Azure Database for MySQL Flexible Server, it's important to know w
**Q. What migration options are available to help me migrate my single server to a flexible server?**
-**A.** You can use Database Migration Service (classic) to run [online](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) or [offline](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) migrations (recommended). In addition, you can use community tools such as m[ydumper/myloader together with Data-in replication](../migrate/how-to-migrate-single-flexible-minimum-downtime.md) to perform migrations.
+**A.** You can use [Azure Database for MySQL Import (recommended)](../migrate/migrate-single-flexible-mysql-import-cli.md) to migrate. Additionally, you can use Database Migration Service (classic) to run [online](../../dms/tutorial-mysql-Azure-single-to-flex-online-portal.md) or [offline](../../dms/tutorial-mysql-azure-single-to-flex-offline-portal.md) migrations.
**Q. My single server is deployed in a region that doesn't support flexible server. How should I proceed with migration?**
To upgrade to Azure Database for MySQL Flexible Server, it's important to know w
**A.** You can perform any number of test migrations, and after gaining confidence through testing, perform the final migration. A test migration doesn't affect the source single server, which remains operational and continues replicating until you perform the actual migration. If there are any errors during the test migration, you can choose to postpone the final migration and keep your source server running. You can then reattempt the final migration after you resolve the errors. After you've performed a final migration to Flexible Server and the source single server has been shut down, you can't perform a rollback from Flexible Server to Single Server.
-**Q. The size of my database is greater than 1 TB, so how should I proceed with an Azure Database Migration Service initiated migration?**
+**Q. The size of my database is greater than 1 TB, so how should I proceed with my migration?**
-**A.** To support Azure Database Migration Service (DMS) migrations of databases that are 1 TB+, raise a support ticket with Azure Database Migration Service to scale-up the migration agent to support your 1 TB+ database migrations.
+**A.** You can use [Azure Database for MySQL Import (recommended)](../migrate/migrate-single-flexible-mysql-import-cli.md) to migrate, which is highly performant for heavier workloads.
**Q. Is cross-region migration supported?**
nat-gateway Tutorial Nat Gateway Load Balancer Internal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md
Previously updated : 05/24/2022 Last updated : 02/13/2024
In this section, you test the NAT gateway. You first discover the public IP of t
1. Select **public-ip-nat**.
-1. Make note of the public IP address:
+1. Make note of the public IP address.
:::image type="content" source="./media/quickstart-create-nat-gateway-portal/find-public-ip.png" alt-text="Screenshot of public IP address of NAT gateway." border="true":::
In this section, you test the NAT gateway. You first discover the public IP of t
1. Select **Use Bastion**.
-1. Enter the username and password entered during VM creation. Select **Connect**.
+1. Enter the username and password entered during virtual machine creation. Select **Connect**.
-1. In the bash prompt, enter the following command:
+1. In the bash prompt, enter the following command.
```bash
curl ifconfig.me
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
The following table provides a list of high-level features and capabilities comp
| Burstable SKUs | No | Yes |
| Ability to scale across compute tiers | Can't scale Basic tier | Yes. Can scale across tiers |
| Stop/Start | No | Yes (for all compute SKUs). Only compute is stopped/started |
-| Max. Storage size | 1 TB (Basic), 4 TB or 16 TB (GP, MO). Note: Not all regions support 16 TB. | 16 TB |
+| Max. Storage size | 1 TB (Basic), 4 TB or 16 TB (GP, MO). Note: Not all regions support 16 TB. | 64 TB. Note: Not all regions support 64 TB.|
| Min storage size | 5 GB (Basic), 100 GB (GP, MO) | 32 GB |
| Storage auto-grow | Yes | Yes |
-| Max IOPS | Basic - Variable. GP/MO: up to 18 K | Up to 18 K |
+| Max IOPS | Basic - Variable. GP/MO: up to 18 K | Up to 80 K |
| **Networking/Security** | | |
| Supported networking | Virtual network, private link, public access | Private access (VNET injection in a delegated subnet), public access |
| Public access control | Firewall | Firewall |
The following table provides a list of high-level features and capabilities comp
| Cost | 1x | 2x (compute + storage) |
| Availability with non-HA configuration | Automatic restart, compute relocation | Automatic restart, compute relocation |
| Protect from zone failure | Compute - Yes. Storage - No | Compute & storage - Yes |
-| Protect from region failure | No | No |
+| Protect from region failure | No | Yes |
| Mode of HA replication | N/A | Postgres physical streaming replication in SYNC mode |
| Standby can be used for read purposes | N/A | No |
| Application performance impact | No (not replicating) | Yes (Due to sync replication. Depends on the workload) |
The following table provides a list of high-level features and capabilities comp
| Ability to restore on a different zone | N/A | Yes |
| Ability to restore to a different VNET | No | Yes |
| Ability to restore to a different region | Yes (Geo-redundant) | Yes (in [selected regions](overview.md#azure-regions)) |
-| Ability to restore a deleted server | Limited via API | Limited via support ticket |
+| Ability to restore a deleted server | Limited via API | Limited via API |
| **Read Replica** | | |
| Support for read replicas | Yes | Yes |
| Number of read replicas | 5 | 5 |
The following table provides a list of high-level features and capabilities comp
| Maintenance period | Anytime within 15-hrs window | 1 hr window |
| **Metrics** | | |
| Errors | Failed connections | Failed connections |
-| Latency | Max lag across replicas, Replica lag | N/A |
+| Latency | Max lag across replicas, Replica lag | Max lag across replicas, Replica lag |
| Saturation | Backup storage used, CPU %, IO %, Memory %, Server log storage limit, server log storage %, server log storage used, Storage limit, Storage %, Storage used | Backup storage used, CPU credits consumed, CPU credits remaining, CPU %, Disk queue depth, IOPS, Memory %, Read IOPS, Read throughput bytes/s, storage free, storage %, storage used, Transaction log storage used, Write IOPS, Write throughput bytes/s |
| Traffic | Active connections, Network In, Network out | Active connections, Max. used transaction ID, Network In, Network Out, succeeded connections |
| **Extensions** | | (offers latest versions)|
The following table provides a list of high-level features and capabilities comp
| Secure Sockets Layer support (SSL) | Yes | Yes |
| **Other features** | | |
| Alerts | Yes | Yes |
-| Microsoft Defender for Cloud | Yes | No |
+| Microsoft Defender for Cloud | Yes | Yes |
| Resource health | Yes | Yes |
| Service health | Yes | Yes |
-| Performance insights (iPerf) | Yes | Yes. Not available in portal |
+| Performance insights (iPerf) | Yes | Yes |
| Major version upgrades support | No | Yes |
| Minor version upgrades | Yes. Automatic during maintenance window | Yes. Automatic during maintenance window |
postgresql Concepts Networking Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking-private-link.md
The following situations and outcomes are possible when you use Private Link in
- If you don't configure any public traffic or service endpoint and you create private endpoints, then the Azure Database for PostgreSQL flexible server instance is accessible only through the private endpoints. If you don't configure public traffic or a service endpoint, after all approved private endpoints are rejected or deleted, no traffic will be able to access the Azure Database for PostgreSQL flexible server instance.
+## Troubleshooting connectivity issues with Private Endpoint based networking
+
+The following are basic areas to check if you're having connectivity issues with Private Endpoint based networking (a CLI sketch follows this list):
+
+1. **Verify IP address assignments:** Check that the private endpoint has the correct IP address assigned and that there are no conflicts with other resources. For more information on private endpoints and IP addresses, see the [private endpoint documentation](../../private-link/manage-private-endpoint.md).
+2. **Check network security groups (NSGs):** Review the NSG rules for the private endpoint's subnet to ensure the necessary traffic is allowed and there are no conflicting rules. For more information on NSGs, see the [NSG overview](../../virtual-network/network-security-groups-overview.md).
+3. **Validate route table configuration:** Ensure the route tables associated with the private endpoint's subnet and the connected resources are correctly configured with the appropriate routes.
+4. **Use network monitoring and diagnostics:** Use Azure Network Watcher to monitor and diagnose network traffic with tools like Connection Monitor or packet capture. For more information on network diagnostics, see the [Network Watcher overview](../../network-watcher/network-watcher-overview.md).
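As a quick illustration of steps 1, 2, and 4, the following Azure CLI sketch inspects a private endpoint's IP configuration, the NSG rules on its subnet, and connectivity from a VM in the virtual network; all resource names are placeholders.

```azurecli-interactive
# Placeholder names (example-rg, pg-private-endpoint, pe-subnet-nsg, sourceVm) are assumptions for illustration.

# Step 1: confirm the private IP addresses assigned to the private endpoint.
az network private-endpoint show \
  --name pg-private-endpoint \
  --resource-group example-rg \
  --query "customDnsConfigs[].ipAddresses"

# Step 2: review the NSG rules applied to the private endpoint's subnet.
az network nsg rule list \
  --nsg-name pe-subnet-nsg \
  --resource-group example-rg \
  --output table

# Step 4: test connectivity from a VM in the virtual network with Network Watcher.
az network watcher test-connectivity \
  --resource-group example-rg \
  --source-resource sourceVm \
  --dest-address 10.0.1.4 \
  --dest-port 5432
```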
+
+Further details on troubleshooting private endpoint connectivity are available in this [guide](../../private-link/troubleshoot-private-endpoint-connectivity.md).
+
+## Troubleshooting DNS resolution with Private Endpoint based networking
+
+The following are basic areas to check if you're having DNS resolution issues with Private Endpoint based networking (a CLI sketch follows this list):
+
+1. **Validate DNS resolution:** Check that the DNS server or service used by the private endpoint and the connected resources is functioning correctly, and ensure the private endpoint's DNS settings are accurate. For more information on private endpoints and DNS zone settings, see the [private endpoint DNS documentation](../../private-link/private-endpoint-dns.md).
+2. **Clear the DNS cache:** Clear the DNS cache on the private endpoint or client machine so that the latest DNS information is retrieved and inconsistent errors are avoided.
+3. **Analyze DNS logs:** Review DNS logs for error messages or unusual patterns, such as DNS query failures, server errors, or timeouts. For more on DNS metrics, see the [DNS metrics and alerts documentation](../../dns/dns-alerts-metrics.md).
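A minimal sketch for step 1, assuming a private DNS zone named `privatelink.postgres.database.azure.com` linked to your virtual network and placeholder resource names:

```azurecli-interactive
# Placeholder names (example-rg, pg-private-endpoint) and the zone name are assumptions for illustration.

# Show the FQDNs and private IPs that clients are expected to resolve for this private endpoint.
az network private-endpoint show \
  --name pg-private-endpoint \
  --resource-group example-rg \
  --query "customDnsConfigs"

# List the A records in the private DNS zone and confirm they match.
az network private-dns record-set a list \
  --zone-name "privatelink.postgres.database.azure.com" \
  --resource-group example-rg \
  --output table
```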
## Next steps

- Learn how to create an Azure Database for PostgreSQL flexible server instance by using the **Private access (VNet integration)** option in [the Azure portal](how-to-manage-virtual-network-portal.md) or [the Azure CLI](how-to-manage-virtual-network-cli.md).
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
Along with data migration, the tool automatically provides the following built-i
- Migration of permissions of database objects on your source server such as GRANTS/REVOKES to the target server.

> [!NOTE]
-> This functionality is enabled by default for flexible servers in all Azure public regions. It will be enabled for flexible servers in gov clouds and China regions soon. Also, please note that this feature is currently disabled for PostgreSQL version 16 servers, and support for it will be introduced in the near future.
+> This functionality is enabled by default for flexible servers in all Azure public regions and gov clouds. It will be enabled for flexible servers in China regions soon. Also, please note that this feature is currently disabled for PostgreSQL version 16 servers, and support for it will be introduced in the near future.
## Limitations
reliability Reliability Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-elastic-san.md
+
+ Title: Reliability in Azure Elastic SAN
+description: Find out about reliability in Azure Elastic SAN.
+++++ Last updated : 02/13/2024++
+# Reliability in Elastic SAN
+
+This article describes reliability support in Azure Elastic SAN and covers both regional resiliency with availability zones and disaster recovery and business continuity.
+
+## Availability zone support
++
+Azure Elastic SAN supports availability zone deployment with locally redundant storage (LRS) and regional deployment with zone-redundant storage (ZRS).
+
+### Prerequisites
+
+LRS and ZRS Elastic SAN are currently only available in a subset of regions. For a list of regions, see [Scale targets for Elastic SAN](../storage/elastic-san/elastic-san-scale-targets.md).
++
+#### Create a resource using availability zones
+
+To create an Elastic SAN with an availability zone enabled, see [Deploy an Elastic SAN](../storage/elastic-san/elastic-san-create.md).
++
+### Zone down experience
+
+When deploying an Elastic SAN, if you select ZRS for your SAN's redundancy option, zonal failover is supported by the platform without manual intervention. An elastic SAN using ZRS is designed to self-heal and rebalance itself to take advantage of healthy zones automatically.
+
+If you deployed an LRS elastic SAN, you may need to deploy a new SAN, using snapshots exported to managed disks.
+
+### Low-latency design
+
+The latency differences between an elastic SAN on LRS and an elastic SAN on ZRS aren't particularly high. However, for workloads sensitive to latency spikes, consider an elastic SAN on LRS since it offers the lowest latency.
+
+### Availability zone redeployment and migration
+
+To migrate an elastic SAN on LRS to ZRS, you must snapshot your elastic SAN's volumes, export them to managed disk snapshots, deploy an elastic SAN on ZRS, and then create volumes on the SAN on ZRS using those disk snapshots. To learn how to use snapshots (preview), see [Snapshot Azure Elastic SAN Preview volumes (preview)](../storage/elastic-san/elastic-san-snapshots.md).
+
+## Disaster recovery and business continuity
++
+### Single and Multi-region disaster recovery
+
+For Azure Elastic SAN, you're responsible for the DR experience. You can [take snapshots](../storage/elastic-san/elastic-san-snapshots.md) of your volumes and [export them](../storage/elastic-san/elastic-san-snapshots.md#export-volume-snapshot) to managed disk snapshots. Then, you can [copy an incremental snapshot to a new region](../virtual-machines/disks-copy-incremental-snapshot-across-regions.md) to store your data in a region other than the region your elastic SAN is in. You should export to regions that are geographically distant from your primary region to reduce the possibility of multiple regions being affected due to a disaster.
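As a rough illustration of the cross-region copy step, the following Azure CLI sketch copies an exported, incremental managed disk snapshot to a secondary region; the resource names, regions, and snapshot ID are placeholders.

```azurecli-interactive
# Placeholder values: the source snapshot ID, target region, and resource group are assumptions for illustration.
az snapshot create \
  --name exported-volume-snapshot-copy \
  --resource-group example-rg \
  --location westus3 \
  --source "/subscriptions/<subscription-id>/resourceGroups/example-rg/providers/Microsoft.Compute/snapshots/exported-volume-snapshot" \
  --incremental true \
  --copy-start true
```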
+
+#### Outage detection, notification, and management
+
+You can find outage declarations in [Service Health - Microsoft Azure](https://portal.azure.com/#view/Microsoft_Azure_Health/AzureHealthBrowseBlade/~/serviceIssues).
+
+### Capacity and proactive disaster recovery resiliency
+
+Microsoft and its customers operate under the [Shared Responsibility Model](./availability-zones-overview.md#shared-responsibility-model). Shared responsibility means that for customer-enabled DR (customer-responsible services), you must address DR for any service you deploy and control. You should prevalidate that any service you deploy will work with Elastic SAN. To ensure that recovery is proactive, you should always predeploy secondaries because there's no guarantee of capacity at time of impact for those who haven't preallocated.
+
+## Next steps
+
+- [Plan for deploying an Elastic SAN](../storage/elastic-san/elastic-san-planning.md)
+- [Snapshot Azure Elastic SAN volumes (preview)](../storage/elastic-san/elastic-san-snapshots.md)
reliability Reliability Guidance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-guidance-overview.md
For a more detailed overview of reliability principles in Azure, see [Reliabilit
|Azure Data Share| |[Disaster recovery for Azure Data Share](../data-share/disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)|
|Azure Deployment Environments| [Reliability in Azure Deployment Environments](reliability-deployment-environments.md)|[Reliability in Azure Deployment Environments](reliability-deployment-environments.md)|
|Azure DevOps|| [Azure DevOps Data protection - data availability](/azure/devops/organizations/security/data-protection?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json&preserve-view=true&#data-availability)|
+|Azure Elastic SAN|[Availability zone support](reliability-elastic-san.md#availability-zone-support)|[Disaster recovery and business continuity](reliability-elastic-san.md#disaster-recovery-and-business-continuity)|
|Azure Health Data Services - Azure API for FHIR|| [Disaster recovery for Azure API for FHIR](../healthcare-apis/azure-api-for-fhir/disaster-recovery.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
|Azure IoT Hub| [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json)| [IoT Hub high availability and disaster recovery](../iot-hub/iot-hub-ha-dr.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
|Azure Machine Learning Service|| [Failover for business continuity and disaster recovery](../machine-learning/v1/how-to-high-availability-machine-learning.md?toc=/azure/reliability/toc.json&bc=/azure/reliability/breadcrumb/toc.json) |
role-based-access-control Classic Administrators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/classic-administrators.md
Previously updated : 01/26/2024 Last updated : 02/13/2024
# Azure classic subscription administrators > [!IMPORTANT]
-> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting March 26, 2024, you won't be able to add new Co-Administrators. This date was recently extended. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
Microsoft recommends that you manage access to Azure resources using Azure role-based access control (Azure RBAC). However, if you are still using the classic deployment model, you'll need to use a classic subscription administrator role: Service Administrator and Co-Administrator. For information about how to migrate your resources from classic deployment to Resource Manager deployment, see [Azure Resource Manager vs. classic deployment](../azure-resource-manager/management/deployment-models.md).
Will Co-Administrators lose access after August 31, 2024?
What is the equivalent Azure role I should assign for Co-Administrators? -- [Owner](built-in-roles.md#owner) role at subscription scope has the equivalent access. However, Owner is a [privileged administrator role](role-assignments-steps.md#privileged-administrator-roles) and grants full access to manage Azure resources. You should consider another Azure role with fewer permissions or reduce the scope.
+- [Owner](built-in-roles.md#owner) role at subscription scope has the equivalent access. However, Owner is a [privileged administrator role](role-assignments-steps.md#privileged-administrator-roles) and grants full access to manage Azure resources. You should consider a job function role with fewer permissions, reduce the scope, or add a condition.
What should I do if I have a strong dependency on Co-Administrators? - Email ACARDeprecation@microsoft.com and describe your scenario.
-## View Co-Administrators
+## Prepare for Co-Administrators retirement
-Follow these steps to view the Co-Administrators for a subscription using the Azure portal.
+Use the following steps to help you prepare for the Co-Administrator role retirement.
+
+### Step 1: Review your current Co-Administrators
+
+1. Use the Azure portal to [get a list of your Co-Administrators](#view-classic-administrators).
+
+1. Review the [sign-in logs](/entra/identity/monitoring-health/concept-sign-ins) for your Co-Administrators to assess whether they are active users.
+
+### Step 2: Remove Co-Administrators that no longer need access
+
+1. If the user is no longer in your enterprise, [remove the Co-Administrator](#remove-a-co-administrator).
+
+1. If the user was deleted, but their Co-Administrator assignment wasn't removed, [remove the Co-Administrator](#remove-a-co-administrator).
+
+ Users that have been deleted typically include the text **(User was not found in this directory)**.
+
+ :::image type="content" source="media/classic-administrators/user-not-found.png" alt-text="Screenshot of user not found in directory and with Co-Administrator role." lightbox="media/classic-administrators/user-not-found.png":::
+
+1. After reviewing the user's activity, if the user is no longer active, [remove the Co-Administrator](#remove-a-co-administrator).
+
+### Step 3: Replace existing Co-Administrators with job function roles
+
+Most users don't need the same permissions as a Co-Administrator. Consider a job function role instead.
+
+1. If a user still needs some access, determine the appropriate [job function role](role-assignments-steps.md#job-function-roles) they need.
+
+1. Determine the [scope](scope-overview.md) the user needs.
+
+1. Follow steps to [assign a job function role to user](role-assignments-portal.md).
+
+1. [Remove Co-Administrator](#remove-a-co-administrator).
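For example, assigning a job function role from the Azure CLI looks roughly like the following sketch; the user, role, and scope values are placeholders you would replace with the ones identified in the steps above.

```azurecli-interactive
# Placeholder values: the user, role, and scope are assumptions for illustration.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```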
+
+### Step 4: Replace existing Co-Administrators with Owner role and conditions
+
+Some users might need more access than what a job function role can provide. If you must assign the [Owner](built-in-roles.md#owner) role, consider adding a condition to constrain the role assignment.
+
+1. Assign the [Owner role at subscription scope with conditions](role-assignments-portal-subscription-admin.md) to the user.
+
+1. [Remove Co-Administrator](#remove-a-co-administrator).
+
+## View classic administrators
+
+Follow these steps to view the Service Administrator and Co-Administrators for a subscription using the Azure portal.
1. Sign in to the [Azure portal](https://portal.azure.com) as an [Owner](built-in-roles.md#owner) of a subscription.
Follow these steps to view the Co-Administrators for a subscription using the Az
![Screenshot that opens Classic administrators.](./media/shared/classic-administrators.png)
-## Assess Co-Administrators
-
-Use the following table to assess how to remove or re-assign Co-Administrators.
-
-| Assessment | Next steps|
-| | |
-| User no longer needs access | Follow steps to [remove Co-Administrator](#remove-a-co-administrator). |
-| User still needs some access, but not full access | 1. Determine the Azure role the user needs.<br/>2. Determine the scope the user needs.<br/>3. Follow steps to [assign an Azure role to user](role-assignments-portal.md).<br/>4. [Remove Co-Administrator](#remove-a-co-administrator). |
-| User needs the same access as a Co-Administrator | 1. Assign the [Owner role at subscription scope](role-assignments-portal-subscription-admin.md).<br/>2. [Remove Co-Administrator](#remove-a-co-administrator). |
- ## Remove a Co-Administrator > [!IMPORTANT]
-> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting March 26, 2024, you won't be able to add new Co-Administrators. This date was recently extended. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
Follow these steps to remove a Co-Administrator.
Follow these steps to remove a Co-Administrator.
## Add a Co-Administrator > [!IMPORTANT]
-> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting March 26, 2024, you won't be able to add new Co-Administrators. This date was recently extended. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
> > You only need to add a Co-Administrator if the user needs to manage Azure classic deployments by using [Azure Service Management PowerShell Module](/powershell/azure/servicemanagement/install-azure-ps). If the user only uses the Azure portal to manage the classic resources, you won't need to add the classic administrator for the user.
For more information about Microsoft accounts and Microsoft Entra accounts, see
## Remove the Service Administrator
-You might want to remove the Service Administrator, for example, if they are no longer with the company. If you do remove the Service Administrator, you must have a user who is assigned the [Owner](built-in-roles.md#owner) role at subscription scope to avoid orphaning the subscription. A subscription Owner has the same access as the Service Administrator.
+To remove the Service Administrator, you must have a user who is assigned the [Owner](built-in-roles.md#owner) role at subscription scope without conditions to avoid orphaning the subscription. A subscription Owner has the same access as the Service Administrator.
1. Sign in to the [Azure portal](https://portal.azure.com) as an [Owner](built-in-roles.md#owner) of a subscription.
role-based-access-control Rbac And Directory Admin Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/rbac-and-directory-admin-roles.md
ms.assetid: 174f1706-b959-4230-9a75-bf651227ebf6 Previously updated : 02/09/2024 Last updated : 02/13/2024
Several Microsoft Entra roles span Microsoft Entra ID and Microsoft 365, such as
## Classic subscription administrator roles > [!IMPORTANT]
-> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting March 26, 2024, you won't be able to add new Co-Administrators. This date was recently extended. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
Account Administrator, Service Administrator, and Co-Administrator are the three classic subscription administrator roles in Azure. Classic subscription administrators have full access to the Azure subscription. They can manage resources using the Azure portal, Azure Resource Manager APIs, and the classic deployment model APIs. The account that is used to sign up for Azure is automatically set as both the Account Administrator and Service Administrator. Then, additional Co-Administrators can be added. The Service Administrator and the Co-Administrators have the equivalent access of users who have been assigned the Owner role (an Azure role) at the subscription scope. The following table describes the differences between these three classic subscription administrative roles.
role-based-access-control Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/troubleshooting.md
ms.assetid: df42cca2-02d6-4f3c-9d56-260e1eb7dc44 Previously updated : 01/26/2024 Last updated : 02/13/2024
If you're a Microsoft Entra Global Administrator and you don't have access to a
## Classic subscription administrators > [!IMPORTANT]
-> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting February 26, 2024, you won't be able to add new Co-Administrators. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
+> Classic resources and classic administrators will be [retired on August 31, 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Starting March 26, 2024, you won't be able to add new Co-Administrators. This date was recently extended. Remove unnecessary Co-Administrators and use Azure RBAC for fine-grained access control.
> > For more information, see [Azure classic subscription administrators](classic-administrators.md).
role-based-access-control Tutorial Role Assignments Group Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/tutorial-role-assignments-group-powershell.md
-+ Last updated 02/02/2019
To complete this tutorial, you will need:
- Permissions to create groups in Microsoft Entra ID (or have an existing group) - [Azure Cloud Shell](../cloud-shell/quickstart-powershell.md)
+- [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation)
## Role assignments
In Azure RBAC, to grant access, you create a role assignment. A role assignment
To assign a role, you need a user, group, or service principal. If you don't already have a group, you can create one. -- In Azure Cloud Shell, create a new group using the [New-AzureADGroup](/powershell/module/azuread/new-azureadgroup) command.
+- In Azure Cloud Shell, create a new group using the [New-MgGroup](/powershell/module/microsoft.graph.groups/new-mggroup) command.
```azurepowershell
- New-AzureADGroup -DisplayName "RBAC Tutorial Group" `
- -MailEnabled $false -SecurityEnabled $true -MailNickName "NotSet"
+ New-MgGroup -DisplayName "RBAC Tutorial Group" -MailEnabled:$false `
+ -SecurityEnabled:$true -MailNickName "NotSet"
```
- ```Example
- ObjectId DisplayName Description
- -- -- --
- 11111111-1111-1111-1111-111111111111 RBAC Tutorial Group
+ ```output
+ DisplayName Id MailNickname Description GroupTypes
+ -- -- -- -
+ RBAC Tutorial Group 11111111-1111-1111-1111-111111111111 NotSet {}
```

If you don't have permissions to create groups, you can try the [Tutorial: Grant a user access to Azure resources using Azure PowerShell](tutorial-role-assignments-user-powershell.md) instead.
You use a resource group to show how to assign a role at a resource group scope.
To grant access for the group, you use the [New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) command to assign a role. You must specify the security principal, role definition, and scope.
-1. Get the object ID of the group using the [Get-AzureADGroup](/powershell/module/azuread/new-azureadgroup) command.
+1. Get the object ID of the group using the [Get-MgGroup](/powershell/module/microsoft.graph.groups/get-mggroup) command.
```azurepowershell
- Get-AzureADGroup -SearchString "RBAC Tutorial Group"
+ Get-MgGroup -Filter "DisplayName eq 'RBAC Tutorial Group'"
```
- ```Example
- ObjectId DisplayName Description
- -- -- --
- 11111111-1111-1111-1111-111111111111 RBAC Tutorial Group
+ ```output
+ DisplayName Id MailNickname Description GroupTypes
+ -- -- -- -
+ RBAC Tutorial Group 11111111-1111-1111-1111-111111111111 NotSet {}
```

1. Save the group object ID in a variable.
To clean up the resources created by this tutorial, delete the resource group an
1. When asked to confirm, type **Y**. It will take a few seconds to delete.
-1. Delete the group using the [Remove-AzureADGroup](/powershell/module/azuread/remove-azureadgroup) command.
+1. Delete the group using the [Remove-MgGroup](/powershell/module/microsoft.graph.groups/remove-mggroup) command.
```azurepowershell
- Remove-AzureADGroup -ObjectId $groupId
+ Remove-MgGroup -GroupID $groupId
```

If you receive an error when you try to delete the group, you can also delete the group in the portal.
route-server Next Hop Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/next-hop-ip.md
Previously updated : 02/07/2024 Last updated : 02/13/2024+ #CustomerIntent: As an Azure administrator, I want to use the frontend IP address of the load balancer as the next hop IP so packets are routed to the load balancer to get to the NVAs that are in the backend pool.
-# Next Hop IP support
+# Next hop IP support
-With the support for Next Hop IP in Azure Route Server, you can peer with network virtual appliances (NVAs) that are deployed behind an Azure internal load balancer. The internal load balancer lets you set up active-passive connectivity scenarios and use load balancing to improve connectivity performance.
+With the support for Next hop IP in Azure Route Server, you can peer with network virtual appliances (NVAs) that are deployed behind an Azure internal load balancer. The internal load balancer lets you set up active-passive connectivity scenarios and use load balancing to improve connectivity performance.
:::image type="content" source="./media/next-hop-ip/route-server-next-hop.png" alt-text="Diagram of a Route Server peered with two NVAs behind an internal load balancer.":::
You can deploy a set of active-passive NVAs behind an internal load balancer to
## Active-active NVA connectivity

You can deploy a set of active-active NVAs behind an internal load balancer to optimize connectivity performance. With the support for Next hop IP, you can define the next hop for both NVA instances as the IP address of the internal load balancer. Traffic that reaches the load balancer is sent to both NVA instances.

> [!NOTE]
-> * Active-active NVA connectivity may result in asymmetric routing.
+> Active-active NVA connectivity may result in asymmetric routing.
## Next hop IP configuration
-Next hop IPs are set up in the BGP configuration of the target NVAs. The Next hop IP isn't part of the Azure Route Server configuration.
+Next hop IP addresses are set up in the BGP configuration of the target NVAs. The Next hop IP isn't part of the Azure Route Server configuration.
## Related content -- Learn how to [configure Azure Route Server](quickstart-configure-route-server-portal.md).-- Learn how to [monitor Azure Route Server](monitor-route-server.md).
+- [Configure Azure Route Server](quickstart-configure-route-server-portal.md).
+- [Monitor Azure Route Server](monitor-route-server.md).
sap Soft Stop Sap And Hana Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/center-sap-solutions/soft-stop-sap-and-hana-database.md
Use the [Stop-AzWorkloadsSapApplicationInstance](/powershell/module/az.workloads
```

### Using CLI
-Use the [az workloads sap-application-server-instance stop](/cli/azure/workloads/sap-application-server-instance?view=azure-cli-latest#az-workloads-sap-application-server-instance-stop) command:
+Use the [az workloads sap-application-server-instance stop](/cli/azure/workloads/sap-application-server-instance#az-workloads-sap-application-server-instance-stop) command:
```azurecli-interactive
az workloads sap-application-server-instance stop --id /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/applicationInstances/app0 --soft-stop-timeout-seconds 300
Use the [Stop-AzWorkloadsSapDatabaseInstance](/powershell/module/az.workloads/st
```

### Using CLI
-Use the [az workloads sap-database-instance stop](/cli/azure/workloads/sap-database-instance?view=azure-cli-latest#az-workloads-sap-database-instance-stop) command:
+Use the [az workloads sap-database-instance stop](/cli/azure/workloads/sap-database-instance#az-workloads-sap-database-instance-stop) command:
```azurecli-interactive
az workloads sap-database-instance stop --id /subscriptions/Sub1/resourceGroups/RG1/providers/Microsoft.Workloads/sapVirtualInstances/DB0/databaseInstances/ab0 --soft-stop-timeout-seconds 300
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
- ignite-2023 Previously updated : 01/16/2024 Last updated : 02/13/2024

# Make outbound connections through a shared private link

This article explains how to configure private, outbound calls from Azure AI Search to an Azure PaaS resource that runs within a virtual network.
-Setting up a private connection allows a search service to connect to a virtual network IP address instead of a port that's open to the internet. The object created for the connection is called a *shared private link*. On the connection, Search uses the shared private link internally to reach an Azure PaaS resource inside the network boundary.
+Setting up a private connection allows a search service to connect to a virtual network IP address instead of a port that's open to the internet. The object created for the connection is called a *shared private link*. On the connection, the search service uses the shared private link internally to reach an Azure PaaS resource inside the network boundary.
Shared private link is a premium feature that's billed by usage. When you set up a shared private link, charges for the private endpoint are added to your Azure invoice. As you use the shared private link, data transfer rates for inbound and outbound access are also invoiced. For details, see [Azure Private Link pricing](https://azure.microsoft.com/pricing/details/private-link/).
Azure AI Search makes outbound calls to other Azure PaaS resources in the follow
+ Encryption key requests to Azure Key Vault + Custom skill requests to Azure Functions or similar resource
-In service-to-service communications, Search typically sends a request over a public internet connection. However, if your data, key vault, or function should be accessed through a [private endpoint](../private-link/private-endpoint-overview.md), you can create a *shared private link*.
+In service-to-service communications, Azure AI Search typically sends a request over a public internet connection. However, if your data, key vault, or function should be accessed through a [private endpoint](../private-link/private-endpoint-overview.md), you must create a *shared private link*.
A shared private link is:

+ Created using Azure AI Search tooling, APIs, or SDKs
+ Approved by the Azure PaaS resource owner
-+ Used internally by Search on a private connection to a specific Azure resource
++ Used internally by Azure AI Search on a private connection to a specific Azure resource
-Only your search service can use the private links that it creates, and there can be only one shared private link created on your service for each resource and sub-resource combination.
+Only your search service can use the private links that it creates, and there can be only one shared private link created on your service for each resource and subresource combination.
-Once you set up the private link, it's used automatically whenever Search connects to that PaaS resource. You don't need to modify the connection string or alter the client you're using to issue the requests, although the device used for the connection must connect using an authorized IP in the Azure PaaS resource's firewall.
+Once you set up the private link, it's used automatically whenever the search service connects to that PaaS resource. You don't need to modify the connection string or alter the client you're using to issue the requests, although the device used for the connection must connect using an authorized IP in the Azure PaaS resource's firewall.
> [!NOTE] > There are two scenarios for using [Azure Private Link](../private-link/private-link-overview.md) and Azure AI Search together. Creating a shared private link is one scenario, relevant when an *outbound* connection to Azure PaaS requires a private connection. The second scenario is [configure search for a private *inbound* connection](service-create-private-endpoint.md) from clients that run in a virtual network. While both scenarios have a dependency on Azure Private Link, they are independent. You can create a shared private link without having to configure your own search service for a private endpoint.
When evaluating shared private links for your scenario, remember these constrain
+ Several of the resource types used in a shared private link are in preview. If you're connecting to a preview resource (Azure Database for MySQL, Azure Functions, or Azure SQL Managed Instance), use a preview version of the Management REST API to create the shared private link. These versions include `2020-08-01-preview` or `2021-04-01-preview`.
-+ Indexer execution must use the private execution environment that's specific to your search service. Private endpoint connections aren't supported from the multi-tenant environment. The configuration setting for this requirement is covered in this article.
++ Indexer execution must use the private execution environment that's specific to your search service. Private endpoint connections aren't supported from the multitenant environment. The configuration setting for this requirement is covered in this article.

## Prerequisites
When evaluating shared private links for your scenario, remember these constrain
+ An Azure PaaS resource from the following list of supported resource types, configured to run in a virtual network.
-+ To create a shared private link you must ensure that you have the following minimum permissions on both Azure AI Search and the data source:
-
- a) For the data source, you should have the permission to approve private endpoint connections. For instance, if you're using an Azure Storage account as your data source (such as Blob container, Azure Files share, Azure table), you need to assign the permission `Microsoft.Storage/storageAccounts/privateEndpointConnectionsApproval/action`.
-
- b) For the AI Search service, you need to have the permissions to read and write shared private link resources and read operation statuses. Specifically, you should have the permissions
- - `Microsoft.Search/searchServices/sharedPrivateLinkResources/write`
- - `Microsoft.Search/searchServices/sharedPrivateLinkResources/read`
- - `Microsoft.Search/searchServices/sharedPrivateLinkResources/operationStatuses/read`
++ Permissions on both Azure AI Search and the data source:
+ + On the Azure PaaS resource, you must have the permission to approve private endpoint connections. For instance, if you're using an Azure Storage account as your data source (such as Blob container, Azure Files share, Azure table), you need `Microsoft.Storage/storageAccounts/privateEndpointConnectionsApproval/action`.
+ + On the search service, you must have read and write permissions on shared private link resources and read operation statuses:
+ + `Microsoft.Search/searchServices/sharedPrivateLinkResources/write`
+ + `Microsoft.Search/searchServices/sharedPrivateLinkResources/read`
+ + `Microsoft.Search/searchServices/sharedPrivateLinkResources/operationStatuses/read`
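If no built-in role fits, the search-side permissions listed above could be packaged in a custom role. The following Azure CLI sketch is illustrative only; the role name, description, and assignable scope are placeholders.

```azurecli-interactive
# Illustrative custom role: the name, description, and assignable scope are placeholders.
az role definition create --role-definition '{
  "Name": "Search Shared Private Link Operator (example)",
  "Description": "Create and read shared private link resources on a search service.",
  "Actions": [
    "Microsoft.Search/searchServices/sharedPrivateLinkResources/write",
    "Microsoft.Search/searchServices/sharedPrivateLinkResources/read",
    "Microsoft.Search/searchServices/sharedPrivateLinkResources/operationStatuses/read"
  ],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```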
<a name="group-ids"></a>
When evaluating shared private links for your scenario, remember these constrain
You can create a shared private link for the following resources.
-| Resource type | Sub-resource (or Group ID) |
+| Resource type | Subresource (or Group ID) |
|--|-|
| Microsoft.Storage/storageAccounts <sup>1</sup> | `blob`, `table`, `dfs`, `file` |
| Microsoft.DocumentDB/databaseAccounts <sup>2</sup>| `Sql` |
You can create a shared private link for the following resources.
| Microsoft.Sql/managedInstances (preview) <sup>4</sup>| `managedInstance` |
| Microsoft.CognitiveServices/accounts (preview) <sup>5</sup>| `openai_account` |
-<sup>1</sup> If Azure Storage and Azure AI Search are in the same region, the connection to storage is made over the Microsoft backbone network, which means a shared private link is redundant for this configuration. However, if you already set up a private endpoint for Azure Storage, you should also set up a shared private link or the connection is refused on the storage side. Also, if you're using multiple storage formats for various scenarios in search, make sure to create a separate shared private link for each sub-resource.
+<sup>1</sup> If Azure Storage and Azure AI Search are in the same region, the connection to storage is made over the Microsoft backbone network, which means a shared private link is redundant for this configuration. However, if you already set up a private endpoint for Azure Storage, you should also set up a shared private link or the connection is refused on the storage side. Also, if you're using multiple storage formats for various scenarios in search, make sure to create a separate shared private link for each subresource.
<sup>2</sup> The `Microsoft.DocumentDB/databaseAccounts` resource type is used for indexer connections to Azure Cosmos DB for NoSQL. The provider name and group ID are case-sensitive.
-<sup>3</sup> The `Microsoft.Web/sites` resource type is used for App service and Azure functions. In the context of Azure AI Search, an Azure function is the more likely scenario. An Azure function is commonly used for hosting the logic of a custom skill. Azure Function has Consumption, Premium and Dedicated [App Service hosting plans](../app-service/overview-hosting-plans.md). The [App Service Environment (ASE)](../app-service/environment/overview.md) and [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) aren't supported at this time.
+<sup>3</sup> The `Microsoft.Web/sites` resource type is used for App service and Azure functions. In the context of Azure AI Search, an Azure function is the more likely scenario. An Azure function is commonly used for hosting the logic of a custom skill. Azure Function has Consumption, Premium, and Dedicated [App Service hosting plans](../app-service/overview-hosting-plans.md). The [App Service Environment (ASE)](../app-service/environment/overview.md) and [Azure Kubernetes Service (AKS)](../aks/intro-kubernetes.md) aren't supported at this time.
<sup>4</sup> See [Create a shared private link for a SQL Managed Instance](search-indexer-how-to-access-private-sql.md) for instructions.
Here are a few tips:
+ Give the private link a meaningful name. In the Azure PaaS resource, a shared private link appears alongside other private endpoints. A name like "shared-private-link-for-search" can remind you how it's used.
-When you complete the steps in this section, you have a shared private link that's provisioned in a pending state. **It takes several minutes to create the link**. Once it's created, the resource owner needs to approve the request before it's operational.
+When you complete the steps in this section, you have a shared private link that's provisioned in a pending state. **It takes several minutes to create the link**. Once it's created, the resource owner must approve the request before it's operational.
### [**Azure portal**](#tab/portal-create)
When you complete the steps in this section, you have a shared private link that
1. Under **Settings** on the left navigation pane, select **Networking**.
-1. On the **Shared Private Access** tab, select **+ Add Shared Private Access**.
+1. On the **Shared Private Access** page, select **+ Add Shared Private Access**.
1. Select either **Connect to an Azure resource in my directory** or **Connect to an Azure resource by resource ID or alias**. 1. If you select the first option (recommended), the portal helps you pick the appropriate Azure resource and fills in other properties, such as the group ID of the resource and the resource type.
- ![Screenshot of the "Add Shared Private Access" pane, showing a guided experience for creating a shared private link resource. ](media\search-indexer-howto-secure-access\new-shared-private-link-resource.png)
+ :::image type="content" source="media/search-indexer-howto-secure-access/new-shared-private-link-resource.png" lightbox="media/search-indexer-howto-secure-access/new-shared-private-link-resource.png" alt-text="Screenshot of the Add Shared Private Access page, showing a guided experience for creating a shared private link resource." :::
1. If you select the second option, enter the Azure resource ID manually and choose the appropriate group ID from the list at the beginning of this article.
- ![Screenshot of the "Add Shared Private Access" pane, showing the manual experience for creating a shared private link resource.](media\search-indexer-howto-secure-access\new-shared-private-link-resource-manual.png)
+ :::image type="content" source="media/search-indexer-howto-secure-access/new-shared-private-link-resource-manual.png" lightbox="media/search-indexer-howto-secure-access/new-shared-private-link-resource-manual.png" alt-text="Screenshot of the Add Shared Private Access page, showing the manual experience for creating a shared private link resource.":::
1. Confirm the provisioning status is "Updating".
- ![Screenshot of the "Add Shared Private Access" pane, showing the resource creation in progress. ](media\search-indexer-howto-secure-access\new-shared-private-link-resource-progress.png)
+ :::image type="content" source="media/search-indexer-howto-secure-access/new-shared-private-link-resource-progress.png" lightbox="media/search-indexer-howto-secure-access/new-shared-private-link-resource-progress.png" alt-text="Screenshot of the Add Shared Private Access page, showing the resource creation in progress.":::
1. Once the resource is successfully created, the provisioning state of the resource changes to "Succeeded".
- ![Screenshot of the "Add Shared Private Access" pane, showing the resource creation completed. ](media\search-indexer-howto-secure-access\new-shared-private-link-resource-success.png)
+ :::image type="content" source="media/search-indexer-howto-secure-access/new-shared-private-link-resource-success.png" lightbox="media/search-indexer-howto-secure-access/new-shared-private-link-resource-success.png" alt-text="Screenshot of the Add Shared Private Access page, showing the resource creation completed.":::
### [**REST API**](#tab/rest-create)
Because it's easy and quick, this section uses Azure CLI steps for getting a bea
az account get-access-token ```
-1. Switch to a REST client and set up a [GET Shared Private Link Resource](/rest/api/searchmanagement/shared-private-link-resources/get). This step allows you to review existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and sub-resource combination.
+1. Switch to a REST client and set up a [GET Shared Private Link Resource](/rest/api/searchmanagement/shared-private-link-resources/get). This step allows you to review existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and subresource combination.
```http GET https://management.azure.com/subscriptions/{{subscriptionId}}/resourceGroups/{{rg-name}}/providers/Microsoft.Search/searchServices/{{service-name}}/sharedPrivateLinkResources?api-version={{api-version}} ```
-1. On the **Authorization** tab, select **Bearer Token** and then paste in the token.
+1. On the **Authorization** page, select **Bearer Token** and then paste in the token.
1. Set the content type to JSON.
-1. Send the request. You should get a list of all shared private link resources that exist for your search service. Make sure there's no existing shared private link for the resource and sub-resource combination.
+1. Send the request. You should get a list of all shared private link resources that exist for your search service. Make sure there's no existing shared private link for the resource and subresource combination.
1. Formulate a PUT request to [Create or Update Shared Private Link](/rest/api/searchmanagement/shared-private-link-resources/create-or-update) for the Azure PaaS resource. Provide a URI and request body similar to the following example:
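As a rough illustration of that request, the body below targets a blob container in a storage account. It's a sketch only: the storage account resource ID and request message are placeholders, and the URI follows the same pattern as the GET request above with the name of the shared private link appended.

```json
{
  "properties": {
    "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>",
    "groupId": "blob",
    "requestMessage": "Requested by Azure AI Search indexers. Please approve."
  }
}
```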
Because it's easy and quick, this section uses Azure CLI steps for getting a bea
See [Manage with PowerShell](search-manage-powershell.md) for instructions on getting started.
-First, use [Get-AzSearchSharedPrivateLinkResource](/powershell/module/az.search/get-azsearchprivatelinkresource) to review any existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and sub-resource combination.
+First, use [Get-AzSearchSharedPrivateLinkResource](/powershell/module/az.search/get-azsearchprivatelinkresource) to review any existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and subresource combination.
```azurepowershell Get-AzSearchSharedPrivateLinkResource -ResourceGroupName <search-service-resource-group-name> -ServiceName <search-service-name>
Rerun the first request to monitor the provisioning state as it transitions from
See [Manage with the Azure CLI](search-manage-azure-cli.md) for instructions on getting started.
-First, use [az-search-shared-private-link-resource list](/cli/azure/search/shared-private-link-resource) to review any existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and sub-resource combination.
+First, use [az-search-shared-private-link-resource list](/cli/azure/search/shared-private-link-resource) to review any existing shared private links to ensure you're not duplicating a link. There can be only one shared private link for each resource and subresource combination.
```azurecli az search shared-private-link-resource list --service-name {{your-search-service-name}} --resource-group {{your-search-service-resource-group}}
A `202 Accepted` response is returned on success. The process of creating an out
## 2 - Approve the private endpoint connection
-The resource owner must approve the connection request you created. This section assumes the portal for this step, but you can also use the REST APIs of the Azure PaaS resource. [Private Endpoint Connections (Storage Resource Provider)](/rest/api/storagerp/privateendpointconnections) and [Private Endpoint Connections (Cosmos DB Resource Provider)](/rest/api/cosmos-db-resource-provider/2023-03-15/private-endpoint-connections) are two examples.
+Approval of the private endpoint connection is granted on the Azure PaaS side. It might be automatic if the service consumer has Azure role-based access control (RBAC) permissions on the service provider resource. Otherwise, manual approval is required. For details, see [Manage Azure private endpoints](/azure/private-link/manage-private-endpoint).
-1. In the Azure portal, open the **Networking** page of the Azure PaaS resource.
+This section assumes manual approval and the portal for this step, but you can also use the REST APIs of the Azure PaaS resource. [Private Endpoint Connections (Storage Resource Provider)](/rest/api/storagerp/privateendpointconnections) and [Private Endpoint Connections (Cosmos DB Resource Provider)](/rest/api/cosmos-db-resource-provider/2023-03-15/private-endpoint-connections) are two examples.
+
+1. In the Azure portal, open the **Networking** page of the Azure PaaS resource.
1. Find the section that lists the private endpoint connections. The following example is for a storage account.
- ![Screenshot of the Azure portal, showing the "Private endpoint connections" pane.](media\search-indexer-howto-secure-access\storage-privateendpoint-approval.png)
+ :::image type="content" source="media/search-indexer-howto-secure-access/storage-privateendpoint-approval.png" lightbox="media/search-indexer-howto-secure-access/storage-privateendpoint-approval.png" alt-text="Screenshot of the Azure portal, showing the Private endpoint connections pane.":::
1. Select the connection, and then select **Approve**. It can take a few minutes for the status to be updated in the portal.
- ![Screenshot of the Azure portal, showing an "Approved" status on the "Private endpoint connections" pane.](media\search-indexer-howto-secure-access\storage-privateendpoint-after-approval.png)
+ :::image type="content" source="media/search-indexer-howto-secure-access/storage-privateendpoint-after-approval.png" lightbox="media/search-indexer-howto-secure-access/storage-privateendpoint-after-approval.png" alt-text="Screenshot of the Azure portal, showing an Approved status on the Private endpoint connections pane.":::
After the private endpoint is approved, Azure AI Search creates the necessary DNS zone mappings in the DNS zone that's created for it.
+The private endpoint link on the page resolves to the private link definition in Azure AI Search only if the Azure AI Search backend private link and the Azure PaaS resource share the same tenant.
++
+A status message of `"The access token is from the wrong issuer"` and `must match the tenant associated with this subscription` appears because the backend private endpoint resource is provisioned in a Microsoft-managed tenant, while the linked resource (Azure AI Search) is in your tenant. By design, you can't access the private endpoint resource by selecting the private endpoint connection link.
+
+Follow the instructions in the next section to check the status of your shared private link.
+ ## 3 - Check shared private link status
-On the Azure AI Search side, you can confirm request approval by revisiting the Shared Private Access tab of the search service **Networking** page. Connection state should be approved.
+On the Azure AI Search side, you can confirm request approval by revisiting the **Shared Private Access** page under the search service's **Networking** settings. The connection state should be approved.
- ![Screenshot of the Azure portal, showing an "Approved" shared private link resource.](media\search-indexer-howto-secure-access\new-shared-private-link-resource-approved.png)
+ :::image type="content" source="media/search-indexer-howto-secure-access/new-shared-private-link-resource-approved.png" alt-text="Screenshot of the Azure portal, showing an Approved shared private link resource.":::
Alternatively, you can also obtain connection state by using the [GET Shared Private Link API](/rest/api/searchmanagement/2021-04-01-preview/shared-private-link-resources/get).
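If you use the REST API, the response for an approved link might resemble the following sketch. All values shown are illustrative placeholders; check `properties.status` and `properties.provisioningState`.

```json
{
  "name": "blob-shared-private-link",
  "properties": {
    "privateLinkResourceId": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Storage/storageAccounts/<storage-account-name>",
    "groupId": "blob",
    "requestMessage": "Requested by Azure AI Search indexers. Please approve.",
    "status": "Approved",
    "provisioningState": "Succeeded"
  }
}
```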
If the provisioning state (`properties.provisioningState`) of the resource is "S
## 4 - Configure the indexer to run in the private environment
-[Indexer execution](search-indexer-securing-resources.md#indexer-execution-environment) occurs in either a private environment that's specific to the search service, or a multi-tenant environment that's used internally to offload expensive skillset processing for multiple customers.
+[Indexer execution](search-indexer-securing-resources.md#indexer-execution-environment) occurs in either a private environment that's specific to the search service, or a multitenant environment that's used internally to offload expensive skillset processing for multiple customers.
-The execution environment is usually transparent, but once you start building firewall rules or establishing private connections, you have to take indexer execution into account. For a private connection, configure indexer execution to always run in the private environment.
+The execution environment is transparent, but once you start building firewall rules or establishing private connections, you must take indexer execution into account. For a private connection, configure indexer execution to always run in the private environment.
This step shows you how to configure the indexer to run in the private environment using the REST API. You can also set the execution environment using the JSON editor in the portal.
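As a minimal sketch of that REST call, the indexer definition below sets `executionEnvironment` to `private` in the `parameters.configuration` section. The indexer, data source, and index names are placeholders; send the JSON in a PUT request to the indexer's endpoint on your search service.

```json
{
  "name": "my-indexer",
  "dataSourceName": "my-data-source",
  "targetIndexName": "my-index",
  "parameters": {
    "configuration": {
      "executionEnvironment": "private"
    }
  }
}
```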
search Vector Search How To Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-how-to-query.md
api-key: {{admin-api-key}}
"select": "title, content, category", "vectorQueries": [ {
- "kind": "vector"
+ "kind": "vector",
"vector": [ -0.009154141, 0.018708462,
api-key: {{admin-api-key}}
"select": "title, content, category", "vectorQueries": [ {
- "kind": "vector"
+ "kind": "vector",
"vector": [ -0.009154141, 0.018708462,
api-key: {{admin-api-key}}
"vectorFilterMode": "preFilter", "vectorQueries": [ {
- "kind": "vector"
+ "kind": "vector",
"vector": [ -0.009154141, 0.018708462,
api-key: {{admin-api-key}}
"vectorFilterMode": "preFilter", "vectorQueries": [ {
- "kind": "vector"
+ "kind": "vector",
"vector": [ -0.009154141, 0.018708462,
api-key: {{admin-api-key}}
"select": "title, content, category", "vectorQueries": [ {
- "kind": "vector"
+ "kind": "vector",
"vector": [ -0.009154141, 0.018708462,
The following query example looks for similarity in both `myImageVector` and `my
"select": "title, content, category", "vectorQueries": [ {
- "kind": "vector"
+ "kind": "vector",
"vector": [ -0.009154141, 0.018708462,
POST https://{{search-service}}.search.windows.net/indexes/{{index}}/docs/search
"select": "title, genre, description", "vectorQueries": [ {
- "kind": "text"
+ "kind": "text",
"text": "mystery novel set in London", "fields": "descriptionVector", "k": 5
api-key: {{admin-api-key}}
"vectorFilterMode": "postFilter", "vectorQueries": [ {
- "kind": "text"
+ "kind": "text",
"text": "mystery novel set in London", "fields": "descriptionVector, synopsisVector", "k": 5
A vector query specifies the `k` parameter, which determines how many matches ar
If you're familiar with full text search, you know to expect zero results if the index doesn't contain a term or phrase. However, in vector search, the search operation is identifying nearest neighbors, and it will always return `k` results even if the nearest neighbors aren't that similar. So, it's possible to get results for nonsensical or off-topic queries, especially if you aren't using prompts to set boundaries. Less relevant results have a worse similarity score, but they're still the "nearest" vectors if there isn't anything closer. As such, a response with no meaningful results can still return `k` results, but each result's similarity score would be low.
-A [hybrid approach](hybrid-search-overview.md) that includes full text search can mitigate this problem. Another mitigation is to set a minimum threshold on the search score, but only if the query is a pure single vector query. Hybrid queries aren't conducive to minimum thresholds because the ranges are so much smaller and volatile.
+A [hybrid approach](hybrid-search-overview.md) that includes full text search can mitigate this problem. Another mitigation is to set a minimum threshold on the search score, but only if the query is a pure single vector query. Hybrid queries aren't conducive to minimum thresholds because the RRF ranges are so much smaller and volatile.
Query parameters affecting result count include:
search Vector Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-store.md
In Azure AI Search, there are two patterns for working with search results.
+ Generative search. Language models formulate a response to the user's query using data from Azure AI Search. This pattern usually includes an orchestration layer to coordinate prompts and maintain context. In this pattern, results are fed into prompt flows and chat models like GPT and Text-Davinci. This approach is based on [**Retrieval augmented generation (RAG)**](retrieval-augmented-generation-overview.md) architecture, where the search index provides the grounding data.
-+ Classic search. The search engine formulates a response based on content in your index, and you render those results in a client app. In a direct response from the search engine, results are returned in a flattened row set, and you can choose which fields are passed to the client app. It's expected that you would populate the vector store (search index) with nonvector content that's human readable so that you don't have to decode vectors for your response. The search engine matches on vectors, but returns nonvector values from the same search document.
++ Classic search. The search engine formulates a response based on content in your index, and you render those results in a client app. In a direct response from the search engine, results are returned in a flattened row set, and you can choose which fields are passed to the client app. It's expected that you would populate the vector store (search index) with nonvector content that's human readable for use in your response. The search engine matches on vectors, but can return nonvector values from the same search document. Your index schema should reflect your primary use case.
Your index schema should reflect your primary use case.
The following examples highlight the differences in field composition for solutions build for generative AI versus classic search.
-An index schema for a vector store requires a name, a key field, one or more vector fields, and a vector configuration. Nonvector fields are recommended for hybrid queries, or for returning human readable content that doesn't have to be decoded first. For step by step instructions, see [Create a vector store](vector-search-how-to-create-index.md).
+An index schema for a vector store requires a name, a key field, one or more vector fields, and a vector configuration. Nonvector fields are recommended for hybrid queries, or for returning verbatim human readable content that doesn't have to go through a language model. For instructions about vector configuration, see [Create a vector store](vector-search-how-to-create-index.md).
### Basic vector field configuration
-A vector field, such as "content_vector" in the following example, is of type `Collection(Edm.Single)`. It must be searchable and retrievable. It can't be filterable, facetable, or sortable, and it can't have analyzers, normalizers, or synonym map assignments. It must have dimensions set to a value supported by the embedding model. Text-embedding-ada-002 is the mostly commonly used embedding model and it generates embeddings using 1,536 dimensions. A vector search profile is specified in a vector search configuration and assigned to a vector field using the profile name.
+A vector field, such as "content_vector" in the following example, is of type `Collection(Edm.Single)`. It must be searchable and retrievable. It can't be filterable, facetable, or sortable, and it can't have analyzers, normalizers, or synonym map assignments. It must have dimensions set to the number of dimensions in the embeddings generated by the embedding model. For instance, text-embedding-ada-002 generates embeddings with 1,536 dimensions. A vector search profile is specified in a vector search configuration and assigned to a vector field using the profile name.
-Content (nonvector) fields are useful for human readable text returned directly from the search engine. If you're using language models exclusively for response formulation, you can skip nonvector content fields. This example assumes that "content" is the human readable equivalent of the "content_vector" field.
+Content (nonvector) fields are useful for human readable text returned directly from the search engine. If you're using language models exclusively for response formulation, you can skip nonvector content fields. The following example assumes that "content" is the human readable equivalent of the "content_vector" field.
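The following fields collection is a minimal sketch of the two fields described above, assuming a vector search profile named `my-vector-profile` is defined elsewhere in the index. Field names are illustrative, and the property that assigns the profile can differ across API versions.

```json
{
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true },
    { "name": "content", "type": "Edm.String", "searchable": true, "retrievable": true },
    {
      "name": "content_vector",
      "type": "Collection(Edm.Single)",
      "searchable": true,
      "retrievable": true,
      "dimensions": 1536,
      "vectorSearchProfile": "my-vector-profile"
    }
  ]
}
```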
Metadata fields are useful for filters, especially if metadata includes origin information about the source document.
sentinel Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-custom.md
Analytics rules search for specific events or sets of events across your environ
[Incidents](investigate-cases.md) created from alerts that are detected by rules mapped to MITRE ATT&CK tactics and techniques automatically inherit the rule's mapping. -- Set the alert **Severity** as appropriate.
+- Set the alert **Severity** as appropriate, matching the impact the activity triggering the rule might have on the target environment, should the rule be a true positive.
+ - **Informational**. No impact on your system, but the information might be indicative of future steps planned by a threat actor.
+ - **Low**. The immediate impact would be minimal. A threat actor would likely need to conduct multiple steps before achieving an impact on an environment.
+ - **Medium**. The threat actor could have some impact on the environment with this activity, but it would be limited in scope or require additional activity.
+ - **High**. The activity identified provides the threat actor with wide ranging access to conduct actions on the environment or is triggered by impact on the environment.
+
+  Default severity levels aren't a guarantee of the actual impact level in your environment. [Customize alert details](customize-alert-details.md) to set the severity, tactics, and other properties of a given instance of an alert with the values of relevant fields from the query output.
+
+  Severity definitions for Microsoft Sentinel analytics rule templates are relevant only for alerts created by analytics rules. For alerts ingested from other services, the severity is defined by the source security service.
+
- When you create the rule, its **Status** is **Enabled** by default, which means it will run immediately after you finish creating it. If you don't want it to run immediately, select **Disabled**, and the rule will be added to your **Active rules** tab and you can enable it from there when you need it. :::image type="content" source="media/tutorial-detect-threats-custom/general-tab.png" alt-text="Start creating a custom analytics rule":::
service-fabric Container Image Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/container-image-management.md
Title: Azure Service Fabric container image management description: How to use container image management in a service fabric cluster. --++ Last updated 06/22/2023 # Container Image Management
-The activation path during Service Fabric containers deployment, handles the downloading of the container images to the VM on which the containers are running. Once the containers have been removed from the cluster and their application types have been unregistered, there's a cleanup cycle that deletes the container images. This container image cleanup works only if the container image has been hard coded in the service manifest. For existing Service Fabric runtime versions, the configurations supporting the cleanup of the container images are as follows -
+The activation path during Service Fabric container deployment handles downloading the container images to the VM on which the containers run. Once the containers are removed from the cluster and their application types are unregistered, a cleanup cycle deletes the container images. This container image cleanup works only if the container image was hard coded in the service manifest. For existing Service Fabric runtime versions, the configurations supporting the cleanup of container images are as follows:
## Settings
The activation path during Service Fabric containers deployment, handles the dow
|CacheCleanupScanInterval |Setting in seconds determining how often the cleanup cycle runs. | ## Container Image Management v2
-Starting Service Fabric version 10.0 there's a newer version of the container image deletion flow. This flow cleans up container images irrespective of how the container images may have been defined - either hard coded or parameterized during application deployment. PruneContainerImages and ContainerImageDeletionEnabled configuration are mutually exclusive and cluster upgrade validation exists to ensure one or the other is switched on but not both. The configuration supporting this feature are as follows -
+Starting with Service Fabric version 10.0, there's a newer version of the container image deletion flow. This flow cleans up container images irrespective of how they were defined - either hard coded or parameterized during application deployment. The PruneContainerImages and ContainerImageDeletionEnabled configurations are mutually exclusive, and cluster upgrade validation ensures that one or the other is switched on, but not both. The configurations supporting this feature are as follows:
### Settings
Starting Service Fabric version 10.0 there's a newer version of the container im
|ContainerImageDeletionEnabled |Setting to enable or disable deletion of container images. | |ContainerImageCleanupInterval |Time interval for cleaning up unused container images. | |ContainerImageTTL |Time to live for container images once they're eligible for removal (not referenced by containers on the VM and the application is deleted(if ContainerImageDeletionOnAppInstanceDeletionEnabled is enabled)). |
- |ContainerImageDeletionOnAppInstanceDeletionEnabled |Setting to enable or disable deletion of expired ttl container images only after application has been deleted as well. |
+ |ContainerImageDeletionOnAppInstanceDeletionEnabled |Setting to enable or disable deletion of container images with an expired TTL only after the application is also deleted. |
|ContainerImagesToSkip |When set enables the container runtime to skip deleting images that match any of the set of regular expressions. The \| character separates each expression. Example: "mcr.microsoft.com/.+\|docker.io/library/alpine:latest" - this example matches everything prefixed with "mcr.microsoft.com/" and matches exactly "docker.io/library/alpine:latest". By default we don't delete the known Windows base images microsoft/windowsservercore or microsoft/nanoserver. | ## Next steps
service-fabric How To Managed Cluster Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-availability-zones.md
The update should be done via ARM template by setting the zonalUpdateMode proper
} }] ```
-2. Add a node to a cluster by using the [az sf cluster node add PowerShell command](/cli/azure/sf/cluster/node?view=azure-cli-latest#az-sf-cluster-node-add()).
+2. Add a node to a cluster by using the [az sf cluster node add Azure CLI command](/cli/azure/sf/cluster/node#az-sf-cluster-node-add()).
-3. Remove a node from a cluster by using the [az sf cluster node remove PowerShell command](/cli/azure/sf/cluster/node?view=azure-cli-latest#az-sf-cluster-node-remove()).
+3. Remove a node from a cluster by using the [az sf cluster node remove Azure CLI command](/cli/azure/sf/cluster/node#az-sf-cluster-node-remove()).
[sf-architecture]: ./media/service-fabric-cross-availability-zones/sf-cross-az-topology.png
service-fabric Service Fabric Reliable Services Reliable Collections Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-reliable-collections-guidelines.md
The guidelines are organized as simple recommendations prefixed with the terms *
* Consider dispose transaction as soon as possible after commit completes (especially if using ConcurrentQueue). * Do not perform any blocking code inside a transaction. * When [string](/dotnet/api/system.string) is used as the key for a reliable dictionary, the sorting order uses [default string comparer CurrentCulture](/dotnet/api/system.string.compare#system-string-compare(system-string-system-string)). Note that the CurrentCulture sorting order is different from [Ordinal string comparer](/dotnet/api/system.stringcomparer.ordinal).
+* Do not dispose or cancel a committing transaction. This is not supported and could crash the host process.
Here are some things to keep in mind:
storage Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-introduction.md
The Azure Storage platform includes the following data
- [Azure Blobs](../blobs/storage-blobs-introduction.md): A massively scalable object store for text and binary data. Also includes support for big data analytics through Data Lake Storage Gen2. - [Azure Files](../files/storage-files-introduction.md): Managed file shares for cloud or on-premises deployments.-- [Azure Elastic SAN](../elastic-san/elastic-san-introduction.md) (preview): A fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN in Azure.
+- [Azure Elastic SAN](../elastic-san/elastic-san-introduction.md): A fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN in Azure.
- [Azure Queues](../queues/storage-queues-introduction.md): A messaging store for reliable messaging between application components. - [Azure Tables](../tables/table-storage-overview.md): A NoSQL store for schemaless storage of structured data. - [Azure managed Disks](../../virtual-machines/managed-disks-overview.md): Block-level storage volumes for Azure VMs.
The following table compares Azure Storage services and shows example scenarios
|--|-|-| | **Azure Files** |Offers fully managed cloud file shares that you can access from anywhere via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview), [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System), and [Azure Files REST API](/rest/api/storageservices/file-service-rest-api).<br><br>You can mount Azure file shares from cloud or on-premises deployments of Windows, Linux, and macOS. | You want to "lift and shift" an application to the cloud that already uses the native file system APIs to share data between it and other applications running in Azure.<br/><br/>You want to replace or supplement on-premises file servers or NAS devices.<br><br> You want to store development and debugging tools that need to be accessed from many virtual machines. | | **Azure Blobs** | Allows unstructured data to be stored and accessed at a massive scale in block blobs.<br/><br/>Also supports [Azure Data Lake Storage Gen2](../blobs/data-lake-storage-introduction.md) for enterprise big data analytics solutions. | You want your application to support streaming and random access scenarios.<br/><br/>You want to be able to access application data from anywhere.<br/><br/>You want to build an enterprise data lake on Azure and perform big data analytics. |
-| **Azure Elastic SAN** (preview) | Azure Elastic SAN (preview) is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability. | You want large scale storage that is interoperable with multiple types of compute resources (such as SQL, MariaDB, Azure virtual machines, and Azure Kubernetes Services) accessed via the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.|
+| **Azure Elastic SAN** | Azure Elastic SAN is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability. | You want large scale storage that is interoperable with multiple types of compute resources (such as SQL, MariaDB, Azure virtual machines, and Azure Kubernetes Services) accessed via the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.|
| **Azure Disks** | Allows data to be persistently stored and accessed from an attached virtual hard disk. | You want to "lift and shift" applications that use native file system APIs to read and write data to persistent disks.<br/><br/>You want to store data that isn't required to be accessed from outside the virtual machine to which the disk is attached. | | **Azure Container Storage** (preview) | Azure Container Storage (preview) is a volume management, deployment, and orchestration service that integrates with Kubernetes and is built natively for containers. | You want to dynamically and automatically provision persistent volumes to store data for stateful applications running on Kubernetes clusters. | | **Azure Queues** | Allows for asynchronous message queueing between application components. | You want to decouple application components and use asynchronous messaging to communicate between them.<br><br>For guidance around when to use Queue Storage versus Service Bus queues, see [Storage queues and Service Bus queues - compared and contrasted](../../service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md). |
For more information about Azure Files, see [Introduction to Azure Files](../fil
Some SMB features aren't applicable to the cloud. For more information, see [Features not supported by the Azure File service](/rest/api/storageservices/features-not-supported-by-the-azure-file-service).
-## Azure Elastic SAN (preview)
+## Azure Elastic SAN
-Azure Elastic storage area network (SAN) is Microsoft's answer to the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Elastic SAN (preview) is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability.
+Azure Elastic storage area network (SAN) is Microsoft's answer to the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Elastic SAN is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability.
Elastic SAN is designed for large scale IO-intensive workloads and top tier databases such as SQL and MariaDB, and supports hosting the workloads on virtual machines or containers such as Azure Kubernetes Service. Elastic SAN volumes are compatible with a wide variety of compute resources through the [iSCSI](https://en.wikipedia.org/wiki/ISCSI) protocol. Other benefits of Elastic SAN include a simplified deployment and management interface, since you can manage storage for multiple compute resources from a single interface, and cost optimization.
-For more information about Azure Elastic SAN, see [What is Azure Elastic SAN? (preview)](../elastic-san/elastic-san-introduction.md).
+For more information about Azure Elastic SAN, see [What is Azure Elastic SAN?](../elastic-san/elastic-san-introduction.md).
## Azure Container Storage (preview)
storage Clone Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/clone-volume.md
You can clone persistent volumes in [Azure Container Storage](container-storage-
- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. - You'll need an Azure Kubernetes Service (AKS) cluster with a node pool of at least three virtual machines (VMs) for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs). -- This article assumes you've already installed Azure Container Storage on your AKS cluster, and that you've created a storage pool and persistent volume claim (PVC) using either [Azure Disks](use-container-storage-with-managed-disks.md) or [ephemeral disk (local storage)](use-container-storage-with-local-disk.md). Azure Elastic SAN Preview doesn't support resizing volumes.
+- This article assumes you've already installed Azure Container Storage on your AKS cluster, and that you've created a storage pool and persistent volume claim (PVC) using either [Azure Disks](use-container-storage-with-managed-disks.md) or [ephemeral disk (local storage)](use-container-storage-with-local-disk.md). Azure Elastic SAN doesn't support resizing volumes.
## Clone a volume
storage Container Storage Aks Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-aks-quickstart.md
## Getting started -- Take note of your Azure subscription ID. We recommend using a subscription on which you have a [Kubernetes contributor](../../role-based-access-control/built-in-roles.md#kubernetes-extension-contributor) role if you want to use Azure Disks or Ephemeral Disk as data storage. If you want to use Azure Elastic SAN Preview as data storage, you'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the Azure subscription.
+- Take note of your Azure subscription ID. We recommend using a subscription on which you have a [Kubernetes contributor](../../role-based-access-control/built-in-roles.md#kubernetes-extension-contributor) role if you want to use Azure Disks or Ephemeral Disk as data storage. If you want to use Azure Elastic SAN as data storage, you'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the Azure subscription.
- [Launch Azure Cloud Shell](https://shell.azure.com), or if you're using a local installation, sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
If the resource group was created successfully, you'll see output similar to thi
Before deploying Azure Container Storage, you'll need to decide which back-end storage option you want to use to create your storage pool and persistent volumes. Three options are currently available: -- **Azure Elastic SAN Preview**: Azure Elastic SAN preview is a good fit for general purpose databases, streaming and messaging services, CI/CD environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time.
+- **Azure Elastic SAN**: Azure Elastic SAN is a good fit for general purpose databases, streaming and messaging services, CI/CD environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time.
- **Azure Disks**: Azure Disks are a good fit for databases such as MySQL, MongoDB, and PostgreSQL. Storage is provisioned per target container storage pool size and maximum volume size.
Before deploying Azure Container Storage, you'll need to decide which back-end s
You'll specify the storage pool type when you install Azure Container Storage. > [!NOTE]
-> For Azure Elastic SAN Preview and Azure Disks, Azure Container Storage will deploy the backing storage for you as part of the installation. You don't need to create your own Elastic SAN or Azure Disk.
+> For Azure Elastic SAN and Azure Disks, Azure Container Storage will deploy the backing storage for you as part of the installation. You don't need to create your own Elastic SAN or Azure Disk.
## Choose a VM type for your cluster
-If you intend to use Azure Elastic SAN Preview or Azure Disks as backing storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes. If you intend to use Ephemeral Disk, choose a [storage optimized VM type](../../virtual-machines/sizes-storage.md) with NVMe drives such as **standard_l8s_v3**. In order to use Ephemeral Disk, the VMs must have NVMe drives. You'll specify the VM type when you create the cluster in the next section.
+If you intend to use Azure Elastic SAN or Azure Disks as backing storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes. If you intend to use Ephemeral Disk, choose a [storage optimized VM type](../../virtual-machines/sizes-storage.md) with NVMe drives such as **standard_l8s_v3**. In order to use Ephemeral Disk, the VMs must have NVMe drives. You'll specify the VM type when you create the cluster in the next section.
> [!IMPORTANT] > You must choose a VM type that supports [Azure premium storage](../../virtual-machines/premium-storage-performance.md). Each VM should have a minimum of four virtual CPUs (vCPUs). Azure Container Storage will consume one core for I/O processing on every VM the extension is deployed to.
kubectl get sp -n acstor
``` > [!IMPORTANT]
-> If you specified Azure Elastic SAN Preview as backing storage for your storage pool and you don't have owner-level access to the Azure subscription, only Azure Container Storage will be installed and a storage pool won't be created. In this case, you'll have to [create an Elastic SAN storage pool manually](use-container-storage-with-elastic-san.md).
+> If you specified Azure Elastic SAN as backing storage for your storage pool and you don't have owner-level access to the Azure subscription, only Azure Container Storage will be installed and a storage pool won't be created. In this case, you'll have to [create an Elastic SAN storage pool manually](use-container-storage-with-elastic-san.md).
## Install Azure Container Storage on an existing AKS cluster
To create persistent volumes, select the link for the backing storage type you s
- [Create persistent volume claim with Azure managed disks](use-container-storage-with-managed-disks.md#create-a-persistent-volume-claim) - [Create persistent volume claim with Ephemeral Disk](use-container-storage-with-local-disk.md#create-a-persistent-volume-claim)-- [Create persistent volume claim with Azure Elastic SAN Preview](use-container-storage-with-elastic-san.md#create-a-persistent-volume-claim)
+- [Create persistent volume claim with Azure Elastic SAN](use-container-storage-with-elastic-san.md#create-a-persistent-volume-claim)
storage Container Storage Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-faq.md
* <a id="azure-container-storage-preview-limitations"></a> **Which other Azure services does Azure Container Storage support?**
- During public preview, Azure Container Storage supports only Azure Kubernetes Service (AKS) with storage pools provided by Azure Disks, Ephemeral Disk, or Azure Elastic SAN Preview.
+ During public preview, Azure Container Storage supports only Azure Kubernetes Service (AKS) with storage pools provided by Azure Disks, Ephemeral Disk, or Azure Elastic SAN.
* <a id="azure-container-storage-rwx"></a> **Does Azure Container Storage support read-write-many (RWX) workloads?**
storage Container Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/container-storage-introduction.md
We'd like input on how you plan to use Azure Container Storage. Please complete
Azure Container Storage utilizes existing Azure Storage offerings for actual data storage and offers a volume orchestration and management solution purposely built for containers. You can choose any of the supported backing storage options to create a storage pool for your persistent volumes.
-Azure Container Storage offers persistent volume support with ReadWriteOnce access mode to Linux-based [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) clusters. Supported backing storage options include block storage offerings only: Azure Disks, Ephemeral Disks, and Azure Elastic SAN Preview. The following table summarizes the supported storage types, recommended workloads, and provisioning models.
+Azure Container Storage offers persistent volume support with ReadWriteOnce access mode to Linux-based [Azure Kubernetes Service (AKS)](../../aks/intro-kubernetes.md) clusters. Supported backing storage options include block storage offerings only: Azure Disks, Ephemeral Disks, and Azure Elastic SAN. The following table summarizes the supported storage types, recommended workloads, and provisioning models.
| **Storage type** | **Description** | **Workloads** | **Offerings** | **Provisioning model** | ||--||||
-| **[Azure Elastic SAN Preview](../elastic-san/elastic-san-introduction.md)** | Provision on demand, fully managed resource | General purpose databases, streaming and messaging services, CD/CI environments, and other tier 1/tier 2 workloads. | Azure Elastic SAN Preview | Provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time. |
+| **[Azure Elastic SAN](../elastic-san/elastic-san-introduction.md)** | Provision on demand, fully managed resource | General purpose databases, streaming and messaging services, CD/CI environments, and other tier 1/tier 2 workloads. | Azure Elastic SAN | Provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time. |
| **[Azure Disks](../../virtual-machines/managed-disks-overview.md)** | Granular control of storage SKUs and configurationsΓÇï | Azure Disks are a good fit for tier 1 and general purpose databases such as MySQL, MongoDB, and PostgreSQL. | Premium SSD, Premium SSD v2, Standard SSD, Ultra Disk | Provisioned per target container storage pool size and maximum volume size. | | **Ephemeral Disk** | Utilizes local storage resources on AKS nodes | Ephemeral disk is extremely latency sensitive (low sub-ms latency), so it's best for applications with no data durability requirement or with built-in data replication support such as Cassandra. | NVMe only (available on [storage optimized VM SKUs](../../virtual-machines/sizes-storage.md)) | Deployed as part of the VMs hosting an AKS cluster. AKS discovers the available ephemeral storage on AKS nodes and acquires them for volume deployment. |
Azure Container Storage is derived from [OpenEBS](https://openebs.io/), an open-
You can use Azure Container Storage to:
-* **Accelerate VM-to-container initiatives:** Azure Container Storage surfaces the full spectrum of Azure block storage offerings that were previously only available for VMs and makes them available for containers. This includes ephemeral disk that provides extremely low latency for workloads like Cassandra, as well as Azure Elastic SAN Preview that provides native iSCSI and shared provisioned targets.
+* **Accelerate VM-to-container initiatives:** Azure Container Storage surfaces the full spectrum of Azure block storage offerings that were previously only available for VMs and makes them available for containers. This includes ephemeral disk that provides extremely low latency for workloads like Cassandra, as well as Azure Elastic SAN that provides native iSCSI and shared provisioned targets.
* **Simplify volume management with Kubernetes:** By providing volume orchestration via the Kubernetes control plane, Azure Container Storage makes it easy to deploy and manage volumes within Kubernetes - without the need to move back and forth between different control planes.
storage Install Container Storage Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/install-container-storage-aks.md
If you don't have an Azure subscription, create a [free account](https://azure.m
## Getting started
-* Take note of your Azure subscription ID. We recommend using a subscription on which you have a [Kubernetes contributor](../../role-based-access-control/built-in-roles.md#kubernetes-extension-contributor) role if you want to use Azure Disks or Ephemeral Disk as data storage. If you want to use Azure Elastic SAN Preview as data storage, you'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the Azure subscription.
+* Take note of your Azure subscription ID. We recommend using a subscription on which you have a [Kubernetes contributor](../../role-based-access-control/built-in-roles.md#kubernetes-extension-contributor) role if you want to use Azure Disks or Ephemeral Disk as data storage. If you want to use Azure Elastic SAN as data storage, you'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role on the Azure subscription.
* [Launch Azure Cloud Shell](https://shell.azure.com), or if you're using a local installation, sign in to the Azure CLI by using the [az login](/cli/azure/reference-index#az-login) command.
Before you create your cluster, you should understand which back-end storage opt
### Data storage options
-* **[Azure Elastic SAN Preview](../elastic-san/elastic-san-introduction.md)**: Azure Elastic SAN preview is a good fit for general purpose databases, streaming and messaging services, CD/CI environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time.
+* **[Azure Elastic SAN](../elastic-san/elastic-san-introduction.md)**: Azure Elastic SAN is a good fit for general purpose databases, streaming and messaging services, CD/CI environments, and other tier 1/tier 2 workloads. Storage is provisioned on demand per created volume and volume snapshot. Multiple clusters can access a single SAN concurrently, however persistent volumes can only be attached by one consumer at a time.
* **[Azure Disks](../../virtual-machines/managed-disks-overview.md)**: Azure Disks are a good fit for databases such as MySQL, MongoDB, and PostgreSQL. Storage is provisioned per target container storage pool size and maximum volume size.
Before you create your cluster, you should understand which back-end storage opt
To use Azure Container Storage, you'll need a node pool of at least three Linux VMs. Each VM should have a minimum of four virtual CPUs (vCPUs). Azure Container Storage will consume one core for I/O processing on every VM the extension is deployed to.
-If you intend to use Azure Elastic SAN Preview or Azure Disks with Azure Container Storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes.
+If you intend to use Azure Elastic SAN or Azure Disks with Azure Container Storage, then you should choose a [general purpose VM type](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes.
If you intend to use Ephemeral Disk, choose a [storage optimized VM type](../../virtual-machines/sizes-storage.md) with NVMe drives such as **standard_l8s_v3**. In order to use Ephemeral Disk, the VMs must have NVMe drives.
Congratulations, you've successfully installed Azure Container Storage. You now
Now you can create a storage pool and persistent volume claim, and then deploy a pod and attach a persistent volume. Follow the steps in the appropriate how-to article.
-* [Use Azure Container Storage Preview with Azure Elastic SAN Preview](use-container-storage-with-elastic-san.md)
+* [Use Azure Container Storage Preview with Azure Elastic SAN](use-container-storage-with-elastic-san.md)
* [Use Azure Container Storage Preview with Azure Disks](use-container-storage-with-managed-disks.md) * [Use Azure Container Storage with Azure Ephemeral disk (NVMe)](use-container-storage-with-local-disk.md)
storage Resize Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/resize-volume.md
Shrinking persistent volumes isn't currently supported. You can't expand a volum
- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. - You'll need an Azure Kubernetes Service (AKS) cluster with a node pool of at least three virtual machines (VMs) for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs). -- This article assumes you've already installed Azure Container Storage on your AKS cluster, and that you've created a storage pool and persistent volume claim (PVC) using either [Azure Disks](use-container-storage-with-managed-disks.md) or [ephemeral disk (local storage)](use-container-storage-with-local-disk.md). Azure Elastic SAN Preview doesn't support resizing volumes.
+- This article assumes you've already installed Azure Container Storage on your AKS cluster, and that you've created a storage pool and persistent volume claim (PVC) using either [Azure Disks](use-container-storage-with-managed-disks.md) or [ephemeral disk (local storage)](use-container-storage-with-local-disk.md). Azure Elastic SAN doesn't support resizing volumes.
## Expand a volume
storage Use Container Storage With Elastic San https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/use-container-storage-with-elastic-san.md
Title: Use Azure Container Storage Preview with Azure Elastic SAN Preview
-description: Configure Azure Container Storage Preview for use with Azure Elastic SAN Preview. Create a storage pool, select a storage class, create a persistent volume claim, and attach the persistent volume to a pod.
+ Title: Use Azure Container Storage Preview with Azure Elastic SAN
+description: Configure Azure Container Storage Preview for use with Azure Elastic SAN. Create a storage pool, select a storage class, create a persistent volume claim, and attach the persistent volume to a pod.
-# Use Azure Container Storage Preview with Azure Elastic SAN Preview
-[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use Azure Elastic SAN Preview as back-end storage for your Kubernetes workloads. At the end, you'll have a pod that's using Elastic SAN as its storage.
+# Use Azure Container Storage Preview with Azure Elastic SAN
+[Azure Container Storage](container-storage-introduction.md) is a cloud-based volume management, deployment, and orchestration service built natively for containers. This article shows you how to configure Azure Container Storage to use Azure Elastic SAN as back-end storage for your Kubernetes workloads. At the end, you'll have a pod that's using Elastic SAN as its storage.
## Prerequisites
- If you haven't already installed Azure Container Storage Preview, follow the instructions in [Install Azure Container Storage](container-storage-aks-quickstart.md). > [!NOTE]
-> To use Azure Container Storage with Azure Elastic SAN Preview, your AKS cluster should have a node pool of at least three [general purpose VMs](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs).
+> To use Azure Container Storage with Azure Elastic SAN, your AKS cluster should have a node pool of at least three [general purpose VMs](../../virtual-machines/sizes-general.md) such as **standard_d4s_v5** for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs).
## Regional availability
First, create a storage pool, which is a logical grouping of storage for your Ku
If you enabled Azure Container Storage using `az aks create` or `az aks update` commands, you might already have a storage pool. Use `kubectl get sp -n acstor` to get the list of storage pools. If you have a storage pool already available that you want to use, you can skip this section and proceed to [Display the available storage classes](#display-the-available-storage-classes).
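If Azure Container Storage isn't enabled on the cluster yet, a hedged CLI sketch follows; the `--enable-azure-container-storage` flag and its `elasticSan` value are assumptions to verify against the current Azure Container Storage install guidance.

```azurecli
# Assumed flag and value; cluster and resource group names are placeholders.
az aks update \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --enable-azure-container-storage elasticSan
```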
-Follow these steps to create a storage pool with Azure Elastic SAN Preview.
+Follow these steps to create a storage pool with Azure Elastic SAN.
1. Use your favorite text editor to create a YAML manifest file such as `code acstor-storagepool.yaml`.
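As an illustration of what such a manifest might contain, here's a minimal sketch applied inline with a heredoc; the `apiVersion`, pool name, and field names are assumptions, so check the current Azure Container Storage documentation for the exact schema before applying.

```bash
# Sketch of a StoragePool manifest for Elastic SAN; schema details are assumed.
cat <<'EOF' | kubectl apply -f -
apiVersion: containerstorage.azure.com/v1beta1
kind: StoragePool
metadata:
  name: sanpool
  namespace: acstor
spec:
  poolType:
    elasticSan: {}
  resources:
    requests:
      storage: 1Ti
EOF
```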
Follow these steps to create a storage pool with Azure Elastic SAN Preview.
kubectl describe sp <storage-pool-name> -n acstor ```
-When the storage pool is created, Azure Container Storage will create a storage class on your behalf using the naming convention `acstor-<storage-pool-name>`. It will also create an Azure Elastic SAN Preview resource.
+When the storage pool is created, Azure Container Storage will create a storage class on your behalf using the naming convention `acstor-<storage-pool-name>`. It will also create an Azure Elastic SAN resource.
-## Assign Contributor role to AKS managed identity on Azure Elastic SAN Preview subscription
+## Assign Contributor role to AKS managed identity on Azure Elastic SAN subscription
-Next, you must assign the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) Azure RBAC built-in role to the AKS managed identity on your Azure Elastic SAN Preview subscription. You'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role for your Azure subscription in order to do this. If you don't have sufficient permissions, ask your admin to perform these steps.
+Next, you must assign the [Contributor](../../role-based-access-control/built-in-roles.md#contributor) Azure RBAC built-in role to the AKS managed identity on your Azure Elastic SAN subscription. You'll need an [Owner](../../role-based-access-control/built-in-roles.md#owner) role for your Azure subscription in order to do this. If you don't have sufficient permissions, ask your admin to perform these steps.
1. Sign in to the [Azure portal](https://portal.azure.com?azure-portal=true).
-1. Select **Subscriptions**, and locate and select the subscription associated with the Azure Elastic SAN Preview resource that Azure Container Storage created on your behalf. This will likely be the same subscription as the AKS cluster that Azure Container Storage is installed on. You can verify this by locating the Elastic SAN resource in the resource group that AKS created (`MC_YourResourceGroup_YourAKSClusterName_Region`).
+1. Select **Subscriptions**, and locate and select the subscription associated with the Azure Elastic SAN resource that Azure Container Storage created on your behalf. This will likely be the same subscription as the AKS cluster that Azure Container Storage is installed on. You can verify this by locating the Elastic SAN resource in the resource group that AKS created (`MC_YourResourceGroup_YourAKSClusterName_Region`).
1. Select **Access control (IAM)** from the left pane. 1. Select **Add > Add role assignment**. 1. Under **Assignment type**, select **Privileged administrator roles** and then **Contributor**, then select **Next**. If you don't have an Owner role on the subscription, you won't be able to add the Contributor role.
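As an alternative to the portal steps above, a CLI-shaped sketch of the same role assignment is shown below; resource names are placeholders, and which managed identity Azure Container Storage actually uses can vary, so treat the portal steps as authoritative.

```azurecli
# Look up the AKS cluster's system-assigned identity (names are placeholders).
identityId=$(az aks show \
  --name <cluster-name> \
  --resource-group <resource-group> \
  --query "identity.principalId" --output tsv)

# Assign Contributor on the subscription that holds the Elastic SAN resource.
az role assignment create \
  --assignee-object-id "$identityId" \
  --assignee-principal-type ServicePrincipal \
  --role "Contributor" \
  --scope "/subscriptions/<elastic-san-subscription-id>"
```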
storage Volume Snapshot Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/container-storage/volume-snapshot-restore.md
- This article requires version 2.0.64 or later of the Azure CLI. See [How to install the Azure CLI](/cli/azure/install-azure-cli). If you're using Azure Cloud Shell, the latest version is already installed. If you plan to run the commands locally instead of in Azure Cloud Shell, be sure to run them with administrative privileges. - You'll need an Azure Kubernetes Service (AKS) cluster with a node pool of at least three virtual machines (VMs) for the cluster nodes, each with a minimum of four virtual CPUs (vCPUs). -- This article assumes you've already installed Azure Container Storage on your AKS cluster, and that you've created a storage pool and persistent volume claim (PVC) using either [Azure Disks](use-container-storage-with-managed-disks.md) or [ephemeral disk (local storage)](use-container-storage-with-local-disk.md). Azure Elastic SAN Preview doesn't support volume snapshots.
+- This article assumes you've already installed Azure Container Storage on your AKS cluster, and that you've created a storage pool and persistent volume claim (PVC) using either [Azure Disks](use-container-storage-with-managed-disks.md) or [ephemeral disk (local storage)](use-container-storage-with-local-disk.md). Azure Elastic SAN doesn't support volume snapshots.
## Create a volume snapshot class
storage Elastic San Batch Create Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-batch-create-sample.md
Title: Create multiple Azure Elastic SAN Preview volumes in a batch
-description: Azure PowerShell Script Sample - Create multiple Elastic SAN Preview volumes in a batch.
+ Title: Create multiple Azure Elastic SAN volumes in a batch
+description: Azure PowerShell Script Sample - Create multiple Elastic SAN volumes in a batch.
Previously updated : 10/12/2022 Last updated : 02/13/2024
-# Create multiple elastic SAN Preview volumes in a batch
+# Create multiple elastic SAN volumes in a batch
To simplify creating multiple volumes as a batch, you can use a .csv with pre-filled values to create as many volumes of varying sizes as you like.
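The article's sample uses PowerShell; as a rough CLI-flavored equivalent, here's a sketch that loops over a hypothetical `volumes.csv`. The column names and the `--size-gib` flag are assumptions; `$EsanName` and `$RgName` follow the variable names used elsewhere in these articles.

```azurecli
# Assumes volumes.csv has a header row and columns: VolumeGroupName,VolumeName,SizeGiB.
tail -n +2 volumes.csv | while IFS=',' read -r vgName volName sizeGib; do
  az elastic-san volume create \
    --elastic-san-name $EsanName \
    --resource-group $RgName \
    --volume-group-name "$vgName" \
    --name "$volName" \
    --size-gib "$sizeGib"
done
```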
storage Elastic San Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-best-practices.md
Title: Best practices for configuring an Elastic SAN Preview
+ Title: Best practices for configuring an Elastic SAN
description: Elastic SAN best practices Previously updated : 10/19/2023 Last updated : 02/13/2024
-# Optimize the performance of your Elastic SAN Preview
+# Optimize the performance of your Elastic SAN
This article provides some general guidance on getting optimal performance with an environment that uses an Azure Elastic SAN.
mpclaim -L -M 2
Set-MPIOSetting -NewDiskTimeout 30 ```
-For more information regarding MPIO cmdlets, see [https://learn.microsoft.com/en-us/powershell/module/mpio/?view=windowsserver2022-ps](/powershell/module/mpio/?view=windowsserver2022-ps)
+For more information regarding MPIO cmdlets, see [MPIO reference](/powershell/module/mpio/).
#### Linux
Before deploying an Elastic SAN, determining the optimal size of the Elastic SAN
With your existing storage solution, select a time interval (day/week/quarter) to track performance. The best time interval is one that is a good snapshot of your applications/workloads. Over that time period, record the combined maximum IOPS and throughput for all workloads. If you use an interval higher than a minute, or if any of your workloads have bottlenecks with your current configuration, consider adding more base capacity to your Elastic SAN deployment. You should leave some headroom when determining your base capacity, to account for growth. The rest of your Elastic SAN's storage should use additional-capacity, to save on cost.
-For more information on performance, see [Elastic SAN Preview and virtual machine performance](elastic-san-performance.md).
+For more information on performance, see [Elastic SAN and virtual machine performance](elastic-san-performance.md).
storage Elastic San Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-configure-customer-managed-keys.md
Title: Use customer-managed keys with an Azure Elastic SAN Preview
+ Title: Use customer-managed keys with an Azure Elastic SAN
description: Learn how to configure Azure Elastic SAN encryption with customer-managed keys for an Elastic SAN volume group by using the Azure PowerShell module or Azure CLI.
Previously updated : 12/13/2023 Last updated : 02/13/2024
storage Elastic San Connect Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-aks.md
Title: Connect an Azure Elastic SAN Preview volume to an AKS cluster.
-description: Learn how to connect to an Azure Elastic SAN Preview volume an Azure Kubernetes Service cluster.
+ Title: Connect an Azure Elastic SAN volume to an AKS cluster.
+description: Learn how to connect to an Azure Elastic SAN volume from an Azure Kubernetes Service cluster.
Previously updated : 07/11/2023 Last updated : 02/13/2024
-# Connect Azure Elastic SAN Preview volumes to an Azure Kubernetes Service cluster
+# Connect Azure Elastic SAN volumes to an Azure Kubernetes Service cluster (Preview)
-This article explains how to connect an Azure Elastic storage area network (SAN) Preview volume from an Azure Kubernetes Service (AKS) cluster. To make this connection, enable the [Kubernetes iSCSI CSI driver](https://github.com/kubernetes-csi/csi-driver-iscsi) on your cluster. With this driver, you can access volumes on your Elastic SAN by creating persistent volumes on your AKS cluster, and then attaching the Elastic SAN volumes to the persistent volumes.
+This article explains how to connect an Azure Elastic storage area network (SAN) volume from an Azure Kubernetes Service (AKS) cluster. To make this connection, enable the [Kubernetes iSCSI CSI driver](https://github.com/kubernetes-csi/csi-driver-iscsi) on your cluster. With this driver, you can access volumes on your Elastic SAN by creating persistent volumes on your AKS cluster, and then attaching the Elastic SAN volumes to the persistent volumes.
## About the driver
The iSCSI CSI driver for Kubernetes is [licensed under the Apache 2.0 license](h
- Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell) - Meet the [compatibility requirements](https://github.com/kubernetes-csi/csi-driver-iscsi/blob/master/README.md#container-images--kubernetes-compatibility) for the iSCSI CSI driver-- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Deploy an Elastic SAN](elastic-san-create.md)
- [Configure a virtual network endpoint](elastic-san-networking.md) - [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
You've now successfully connected an Elastic SAN volume to your AKS cluster.
## Next steps
-[Plan for deploying an Elastic SAN Preview](elastic-san-planning.md)
+[Plan for deploying an Elastic SAN](elastic-san-planning.md)
<!-- LINKS - internal -->
-[Configure Elastic SAN networking Preview]: elastic-san-networking.md
+[Configure Elastic SAN networking]: elastic-san-networking.md
storage Elastic San Connect Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-linux.md
Title: Connect to an Azure Elastic SAN Preview volume - Linux.
-description: Learn how to connect to an Azure Elastic SAN Preview volume from a Linux client.
+ Title: Connect to an Azure Elastic SAN volume - Linux.
+description: Learn how to connect to an Azure Elastic SAN volume from a Linux client.
Previously updated : 01/19/2024 Last updated : 02/13/2024
-# Connect to Elastic SAN Preview volumes - Linux
+# Connect to Elastic SAN volumes - Linux
-This article explains how to connect to an Elastic storage area network (SAN) volume from an individual Linux client. For details on connecting from a Windows client, see [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md).
+This article explains how to connect to an Elastic storage area network (SAN) volume from an individual Linux client. For details on connecting from a Windows client, see [Connect to Elastic SAN volumes - Windows](elastic-san-connect-windows.md).
In this article, you'll add the Storage service endpoint to an Azure virtual network's subnet, then you'll configure your volume group to allow connections from your subnet. Finally, you'll configure your client environment to connect to an Elastic SAN volume and establish a connection.
-You must use a cluster manager when connecting an individual elastic SAN volume to multiple clients. For details, see [Use clustered applications on Azure Elastic SAN Preview](elastic-san-shared-volumes.md).
+You must use a cluster manager when connecting an individual elastic SAN volume to multiple clients. For details, see [Use clustered applications on Azure Elastic SAN](elastic-san-shared-volumes.md).
## Prerequisites - Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)-- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Deploy an Elastic SAN](elastic-san-create.md)
- [Configure a virtual network endpoint](elastic-san-networking.md) - [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
You need to use 32 sessions to each target volume to achieve its maximum IOPS an
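For orientation, a hedged open-iscsi sketch of establishing multiple sessions to a volume is shown below; the target IQN and portal IP are placeholders taken from the volume's connect details in the Azure portal, and the connection scripts referenced by the article may differ.

```bash
# Discover the target (placeholders for the portal IP).
sudo iscsiadm -m discovery -t sendtargets -p <portal-ip>:3260

# Ask open-iscsi to open multiple sessions per login (32 is the per-volume maximum noted above).
sudo iscsiadm -m node --targetname <target-iqn> -p <portal-ip>:3260 \
  --op update -n node.session.nr_sessions -v 32

# Log in; the configured number of sessions is established.
sudo iscsiadm -m node --targetname <target-iqn> -p <portal-ip>:3260 --login
```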
## Next steps
-[Configure Elastic SAN networking Preview](elastic-san-networking.md)
+[Configure Elastic SAN networking](elastic-san-networking.md)
storage Elastic San Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-connect-windows.md
Title: Connect to an Azure Elastic SAN Preview volume - Windows
-description: Learn how to connect to an Azure Elastic SAN Preview volume from a Windows client.
+ Title: Connect to an Azure Elastic SAN volume - Windows
+description: Learn how to connect to an Azure Elastic SAN volume from a Windows client.
Previously updated : 01/19/2024 Last updated : 02/13/2024
-# Connect to Elastic SAN Preview volumes - Windows
+# Connect to Elastic SAN volumes - Windows
-This article explains how to connect to an Elastic storage area network (SAN) volume from an individual Windows client. For details on connecting from a Linux client, see [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md).
+This article explains how to connect to an Elastic storage area network (SAN) volume from an individual Windows client. For details on connecting from a Linux client, see [Connect to Elastic SAN volumes - Linux](elastic-san-connect-linux.md).
In this article, you add the Storage service endpoint to an Azure virtual network's subnet, then you configure your volume group to allow connections from your subnet. Finally, you configure your client environment to connect to an Elastic SAN volume and establish a connection. For best performance, ensure that your VM and your Elastic SAN are in the same zone.
-You must use a cluster manager when connecting an individual elastic SAN volume to multiple clients. For details, see [Use clustered applications on Azure Elastic SAN Preview](elastic-san-shared-volumes.md).
+You must use a cluster manager when connecting an individual elastic SAN volume to multiple clients. For details, see [Use clustered applications on Azure Elastic SAN](elastic-san-shared-volumes.md).
## Prerequisites - Use either the [latest Azure CLI](/cli/azure/install-azure-cli) or install the [latest Azure PowerShell module](/powershell/azure/install-azure-powershell)-- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Deploy an Elastic SAN](elastic-san-create.md)
- [Configure a virtual network endpoint](elastic-san-networking.md) - [Configure virtual network rules](elastic-san-networking.md#configure-virtual-network-rules)
You need to use 32 sessions to each target volume to achieve its maximum IOPS an
## Next steps
-[Configure Elastic SAN networking Preview](elastic-san-networking.md)
+[Configure Elastic SAN networking](elastic-san-networking.md)
storage Elastic San Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-create.md
Title: Create an Azure Elastic SAN Preview
-description: Learn how to deploy an Azure Elastic SAN Preview with the Azure portal, Azure PowerShell module, or Azure CLI.
+ Title: Create an Azure Elastic SAN
+description: Learn how to deploy an Azure Elastic SAN with the Azure portal, Azure PowerShell module, or Azure CLI.
Previously updated : 11/07/2023 Last updated : 02/13/2024
-# Deploy an Elastic SAN Preview
+# Deploy an Elastic SAN
-This article explains how to deploy and configure an elastic storage area network (SAN). If you're interested in Azure Elastic SAN, or have any feedback you'd like to provide, fill out this optional survey [https://aka.ms/ElasticSANPreviewSignUp](https://aka.ms/ElasticSANPreviewSignUp).
+This article explains how to deploy and configure an elastic storage area network (SAN). If you're interested in Azure Elastic SAN, or have any feedback you'd like to provide, fill out [this](https://aka.ms/ElasticSANPreviewSignup) optional survey.
## Prerequisites
There are no extra registration steps required.
1. Select **Next : Volume groups**.
- :::image type="content" source="media/elastic-san-create/elastic-create-flow.png" alt-text="Screenshot of creation flow." lightbox="media/elastic-san-create/elastic-create-flow.png":::
+ :::image type="content" source="media/elastic-san-create/elastic-san-create-flow.png" alt-text="Screenshot of creation flow." lightbox="media/elastic-san-create/elastic-san-create-flow.png":::
# [PowerShell](#tab/azure-powershell)
Use one of these sets of sample code to create an Elastic SAN that uses locally
| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. | | `<VolumeName>` | The name of the Elastic SAN Volume to be created. | | `<Location>` | The region where the new resources will be created. |
-| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN will use locally-redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
+| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN will use locally redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
-The following command creates an Elastic SAN that uses **locally-redundant** storage.
+The following command creates an Elastic SAN that uses **locally redundant** storage.
```azurepowershell # Define some variables.
Use one of these sets of sample code to create an Elastic SAN that uses locally
| `<ElasticSanVolumeGroupName>` | The name of the Elastic SAN Volume Group to be created. | | `<VolumeName>` | The name of the Elastic SAN Volume to be created. | | `<Location>` | The region where the new resources will be created. |
-| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN uses locally-redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
+| `<Zone>` | The availability zone where the Elastic SAN will be created.<br> *Specify the same availability zone as the zone that will host your workload.*<br>*Use only if the Elastic SAN uses locally redundant storage.*<br> *Must be a zone supported in the target location such as `1`, `2`, or `3`.* |
-The following command creates an Elastic SAN that uses **locally-redundant** storage.
+The following command creates an Elastic SAN that uses **locally redundant** storage.
```azurecli # Define some variables.
az elastic-san volume create --elastic-san-name $EsanName -g $RgName -v $EsanVgN
## Next steps
-Now that you've deployed an Elastic SAN, Connect to Elastic SAN (preview) volumes from either [Windows](elastic-san-connect-windows.md) or [Linux](elastic-san-connect-linux.md) clients.
+Now that you've deployed an Elastic SAN, connect to Elastic SAN volumes from either [Windows](elastic-san-connect-windows.md) or [Linux](elastic-san-connect-linux.md) clients.
storage Elastic San Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-delete.md
Title: Delete an Azure Elastic SAN Preview
-description: Learn how to delete an Azure Elastic SAN Preview with the Azure portal, Azure PowerShell module, or the Azure CLI.
+ Title: Delete an Azure Elastic SAN
+description: Learn how to delete an Azure Elastic SAN with the Azure portal, Azure PowerShell module, or the Azure CLI.
Previously updated : 02/22/2023 Last updated : 02/13/2024
-# Delete an Elastic SAN Preview
+# Delete an Elastic SAN
Your Elastic storage area network (SAN) resources can be deleted at different resource levels. This article covers the overall deletion process, starting from disconnecting iSCSI connections to volumes, deleting the volumes themselves, deleting a volume group, and deleting an elastic SAN itself. Before you delete your elastic SAN, make sure it's not being used in any running workloads.
Copy the script from [here](https://github.com/Azure-Samples/azure-elastic-san/b
### Linux
-You can use the following script to create your connections. To execute it, you will require the following parameters:
+You can use the following script to create your connections. To execute it, you'll require the following parameters:
- subscription: Subscription ID - g: Resource Group Name
Copy the script from [here](https://github.com/Azure-Samples/azure-elastic-san/b
## Delete a SAN
-You can delete your SAN by using the Azure portal, Azure PowerShell, or Azure CLI. If you delete a SAN or a volume group, the corresponding child resources will be deleted along with it. The delete commands for each of the resource levels are below.
+You can delete your SAN by using the Azure portal, Azure PowerShell, or Azure CLI. If you delete a SAN or a volume group, the corresponding child resources are deleted along with it. The delete commands for each of the resource levels are below.
-The following commands delete your volumes. These commands use `ForceDelete false`, `-DeleteSnapshot false`, `--x-ms-force-delete false`, and `--x-ms-delete-snapshots false` parameters for PowerShell and CLI, respectively. If you set `ForceDelete` or `--x-ms-force-delete` to `true`, it'll cause volume deletion to succeed even if you've active iSCSI connections. If you set `-DeleteSnapshot` or `--x-ms-delete-snapshots` to `true`, it'll delete all snapshots associated with the volume, as well as the volume itself.
+The following commands delete your volumes. These commands use `ForceDelete false`, `-DeleteSnapshot false`, `--x-ms-force-delete false`, and `--x-ms-delete-snapshots false` parameters for PowerShell and CLI, respectively. If you set `ForceDelete` or `--x-ms-force-delete` to `true`, it causes volume deletion to succeed even if you have active iSCSI connections. If you set `-DeleteSnapshot` or `--x-ms-delete-snapshots` to `true`, it deletes all snapshots associated with the volume, and the volume itself.
# [PowerShell](#tab/azure-powershell)
az elastic-san delete -n $sanName -g $resourceGroupName
## Next steps
-[Plan for deploying an Elastic SAN Preview](elastic-san-planning.md)
+[Plan for deploying an Elastic SAN](elastic-san-planning.md)
storage Elastic San Encryption Manage Customer Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-encryption-manage-customer-keys.md
Title: Manage customer-managed keys for Elastic SAN Preview
+ Title: Manage customer-managed keys for Elastic SAN
-description: Learn how to manage customer-managed keys for Azure Elastic SAN Preview
+description: Learn how to manage customer-managed keys for Azure Elastic SAN
Previously updated : 12/13/2023 Last updated : 02/13/2024
-# Manage customer-managed keys for Azure Elastic SAN Preview
+# Manage customer-managed keys for Azure Elastic SAN
All data written to an Elastic SAN volume is automatically encrypted-at-rest with a data encryption key (DEK). Azure DEKs are always *platform-managed* (managed by Microsoft). Azure uses [envelope encryption](../../security/fundamentals/encryption-atrest.md#envelope-encryption-with-a-key-hierarchy), also referred to as wrapping, which involves using a Key Encryption Key (KEK) to encrypt the DEK. By default, the KEK is platform-managed, but you can create and manage your own KEK. [Customer-managed keys](elastic-san-encryption-overview.md#customer-managed-keys) offer greater flexibility to manage access controls and can help you meet your organization security and compliance requirements.
storage Elastic San Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-encryption-overview.md
Title: Encryption options for Azure Elastic SAN Preview
+ Title: Encryption options for Azure Elastic SAN
description: Azure Elastic SAN protects your data by encrypting it at rest. You can use platform-managed keys for the encryption of your Elastic SAN volumes or use customer-managed keys to manage encryption with your own keys. Previously updated : 12/13/2023 Last updated : 02/13/2024
-# Learn about encryption for an Azure Elastic SAN Preview
+# Learn about encryption for an Azure Elastic SAN
Azure Elastic SAN uses server-side encryption (SSE) to automatically encrypt data stored in an Elastic SAN. SSE protects your data and helps you meet your organizational security and compliance requirements.
storage Elastic San Expand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-expand.md
Title: Increase the size of an Azure Elastic SAN and its volumes Preview
-description: Learn how to increase the size of an Azure Elastic SAN Preview and its volumes with the Azure portal, Azure PowerShell module, or Azure CLI.
+ Title: Increase the size of an Azure Elastic SAN and its volumes
+description: Learn how to increase the size of an Azure Elastic SAN and its volumes with the Azure portal, Azure PowerShell module, or Azure CLI.
Previously updated : 01/05/2024 Last updated : 02/13/2024
-# Increase the size of an Elastic SAN Preview
+# Increase the size of an Elastic SAN
-This article covers increasing the size of an Elastic storage area network Preview and an individual volume, if you need additional storage or performance. Be sure you need the storage or performance before you increase the size because decreasing the size isn't supported, to prevent data loss.
+This article covers increasing the size of an Elastic storage area network (SAN) and an individual volume if you need additional storage or performance. Be sure you need the extra storage or performance before you increase the size; to prevent data loss, decreasing the size isn't supported.
## Expand SAN size
storage Elastic San Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-introduction.md
Title: Introduction to Azure Elastic SAN Preview
-description: An overview of Azure Elastic SAN Preview, a service that enables you to create a virtual SAN to act as the storage for multiple compute options.
+ Title: Introduction to Azure Elastic SAN
+description: An overview of Azure Elastic SAN, a service that enables you to create a virtual SAN to act as the storage for multiple compute options.
Previously updated : 11/07/2023 Last updated : 02/13/2024 - ignite-2023-elastic-SAN
-# What is Azure Elastic SAN? Preview
+# What is Azure Elastic SAN?
-Azure Elastic storage area network (SAN) is Microsoft's answer to the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Elastic SAN Preview is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability.
+Azure Elastic storage area network (SAN) is Microsoft's answer to the problem of workload optimization and integration between your large scale databases and performance-intensive mission-critical applications. Elastic SAN is a fully integrated solution that simplifies deploying, scaling, managing, and configuring a SAN, while also offering built-in cloud capabilities like high availability.
Elastic SAN is interoperable with multiple types of compute resources such as Azure Virtual Machines, Azure VMware Solutions, and Azure Kubernetes Service. Instead of having to deploy and manage individual storage options for each individual compute deployment, you can provision an Elastic SAN and use the SAN volumes as backend storage for all your workloads. Consolidating your storage like this can be more cost effective if you have a sizeable amount of large scale IO-intensive workloads and top tier databases.
The status of items in this table might change over time.
| Private endpoints | ✔️ | | Grant network access to specific Azure virtual networks| ✔️ | | Soft delete | ⛔ |
-| Snapshots | ✔️ |
+| Snapshots (preview) | ✔️ |
## Next steps For a video introduction to Azure Elastic SAN, see [Accelerate your SAN migration to the cloud](/shows/inside-azure-for-it/accelerate-your-san-migration-to-the-cloud).
-[Plan for deploying an Elastic SAN Preview](elastic-san-planning.md)
+[Plan for deploying an Elastic SAN](elastic-san-planning.md)
storage Elastic San Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-metrics.md
+
+ Title: Metrics for Azure Elastic SAN
+description: Learn about the available metrics for an Azure Elastic SAN.
+++ Last updated : 02/13/2024+++
+# Elastic SAN metrics
+
+Azure offers metrics in the Azure portal that provide insight into your Elastic SAN resources. This article provides definitions of the specific metrics you can select to monitor.
+
+## Metrics definitions
+The following metrics are currently available for your Elastic SAN resource. You can configure and view them in the Azure portal:
+
+|Metric|Definition|
+|||
+|**Used Capacity**|The total amount of storage used in your SAN resources. At the SAN level, it's the sum of capacity used by volume groups and volumes, in bytes. At the volume group level, it's the sum of the capacity used by all volumes in the volume group, in bytes.|
+|**Transactions**|The number of requests made to a storage service or the specified API operation. This number includes successful and failed requests, as well as requests that produced errors.|
+|**E2E Latency**|The average end-to-end latency of successful requests made to the resource or the specified API operation.|
+|**Server Latency**|The average time used to process a successful request. This value doesn't include the network latency specified in **E2E Latency**. |
+|**Ingress**|The amount of ingress data. This number includes ingress from an external client into the resource as well as ingress within Azure. |
+|**Egress**|The amount of egress data. This number includes egress from the resource to an external client as well as egress within Azure. |
+
+By default, all metrics are shown at the SAN level. To view these metrics at either the volume group or volume level, select a filter on your selected metric to view your data on a specific volume group or volume.
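Beyond the portal, the same metrics can be queried from the CLI; a minimal sketch follows, where the metric name is assumed from the display names above, so list the exact definitions first.

```azurecli
# Resource names are placeholders; -n is the Elastic SAN name.
sanId=$(az elastic-san show \
  -n <elastic-san-name> \
  --resource-group <resource-group> \
  --query id --output tsv)

# Confirm the exact metric IDs exposed by the resource.
az monitor metrics list-definitions --resource "$sanId" --output table

# Query hourly used capacity (metric name assumed from the table above).
az monitor metrics list \
  --resource "$sanId" \
  --metric "UsedCapacity" \
  --interval PT1H \
  --output table
```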
+
+## Next steps
+
+- [Azure Monitor Metrics overview](../../azure-monitor/essentials/data-platform-metrics.md)
+- [Azure Monitor Metrics aggregation and display explained](../../azure-monitor/essentials/metrics-aggregation-explained.md)
storage Elastic San Networking Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking-concepts.md
Title: Azure Elastic SAN Preview networking concepts
-description: An overview of Azure Elastic SAN Preview networking options, including storage service endpoints, private endpoints, and iSCSI.
+ Title: Azure Elastic SAN networking concepts
+description: An overview of Azure Elastic SAN networking options, including storage service endpoints, private endpoints, and iSCSI.
Previously updated : 01/16/2024 Last updated : 02/13/2024
-# Learn about networking configurations for Elastic SAN Preview
+# Learn about networking configurations for Elastic SAN
-Azure Elastic storage area network (SAN) Preview allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require. This article describes the options for allowing users and applications access to Elastic SAN volumes from an [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
+Azure Elastic storage area network (SAN) allows you to secure and control the level of access to your Elastic SAN volumes that your applications and enterprise environments require. This article describes the options for allowing users and applications access to Elastic SAN volumes from an [Azure virtual network infrastructure](../../virtual-network/vnet-integration-for-azure-services.md).
You can configure Elastic SAN volume groups to only allow access over specific endpoints on specific virtual network subnets. The allowed subnets can belong to a virtual network in the same subscription, or those in a different subscription, including subscriptions belonging to a different Microsoft Entra tenant. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group.
iSCSI sessions can periodically disconnect and reconnect over the course of the
## Next steps
-[Configure Elastic SAN networking Preview](elastic-san-networking.md)
+[Configure Elastic SAN networking](elastic-san-networking.md)
storage Elastic San Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md
Title: Configure networking for Azure Elastic SAN Preview
-description: Learn how to configure access to an Azure Elastic SAN Preview.
+ Title: Configure networking for Azure Elastic SAN
+description: Learn how to configure access to an Azure Elastic SAN.
Previously updated : 11/29/2023 Last updated : 02/13/2024
-# Configure network access for Azure Elastic SAN Preview
+# Configure network access for Azure Elastic SAN
-You can control access to your Azure Elastic storage area network (SAN) Preview volumes. Controlling access allows you to secure your data and meet the needs of your applications and enterprise environments.
+You can control access to your Azure Elastic storage area network (SAN) volumes. Controlling access allows you to secure your data and meet the needs of your applications and enterprise environments.
This article describes how to configure your Elastic SAN to allow access from your Azure virtual network infrastructure.
After you have enabled the desired endpoints and granted access in your network
## Next steps -- [Connect Azure Elastic SAN Preview volumes to an Azure Kubernetes Service cluster](elastic-san-connect-aks.md)-- [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md)-- [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md)
+- [Connect Azure Elastic SAN volumes to an Azure Kubernetes Service cluster](elastic-san-connect-aks.md)
+- [Connect to Elastic SAN volumes - Linux](elastic-san-connect-linux.md)
+- [Connect to Elastic SAN volumes - Windows](elastic-san-connect-windows.md)
storage Elastic San Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-performance.md
Title: Azure Elastic SAN Preview and virtual machine performance
+ Title: Azure Elastic SAN and virtual machine performance
description: Learn how your workload's performance is handled by Azure Elastic SAN and Azure Virtual Machines. - ignite-2023-elastic-SAN Previously updated : 02/06/2024 Last updated : 02/13/2024
-# How performance works when Virtual Machines are connected to Elastic SAN Preview volumes
+# How performance works when virtual machines are connected to Elastic SAN volumes
This article clarifies how Elastic SAN performance works, and how the combination of Elastic SAN limits and Azure Virtual Machines (VM) limits can affect the performance of your workloads.
In this scenario, all the workloads hit their spike at almost the same time. At
## Next steps
-[Deploy an Elastic SAN (preview)](elastic-san-create.md).
+[Deploy an Elastic SAN](elastic-san-create.md).
storage Elastic San Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-planning.md
Title: Planning for an Azure Elastic SAN Preview
+ Title: Planning for an Azure Elastic SAN
description: Understand planning for an Azure Elastic SAN deployment. Learn about storage capacity, performance, redundancy, and encryption. Previously updated : 06/09/2023 Last updated : 02/13/2024 - ignite-2023-elastic-SAN
-# Plan for deploying an Elastic SAN Preview
+# Plan for deploying an Elastic SAN
There are three main aspects to an elastic storage area network (SAN): the SAN itself, volume groups, and volumes. When deploying a SAN, you make selections while configuring the SAN, including the redundancy of the entire SAN, and how much performance and storage the SAN has. Then you create volume groups that are used to manage volumes at scale. Any settings applied to a volume group are inherited by volumes inside that volume group. Finally, you partition the storage capacity that was allocated at the SAN-level into individual volumes.
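To make the SAN, volume group, and volume hierarchy concrete, here's a minimal CLI sketch; resource names are placeholders, and flags such as `--base-size-tib`, `--extended-capacity-size-tib`, and `--size-gib` are assumptions to verify against `az elastic-san --help`.

```azurecli
# Create the SAN (redundancy, base capacity, and additional capacity are set here).
az elastic-san create \
  -n <san-name> -g <resource-group> -l <region> \
  --base-size-tib 1 \
  --extended-capacity-size-tib 6 \
  --sku "{name:Premium_LRS,tier:Premium}"

# Create a volume group; settings applied here are inherited by its volumes.
az elastic-san volume-group create \
  --elastic-san-name <san-name> -g <resource-group> -n <volume-group-name>

# Partition the SAN's capacity into an individual volume.
az elastic-san volume create \
  --elastic-san-name <san-name> -g <resource-group> \
  -v <volume-group-name> -n <volume-name> --size-gib 100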
-Before deploying an Elastic SAN Preview, consider the following:
+Before deploying an Elastic SAN, consider the following:
- How much storage do you need? - What level of performance do you need?
Using the same example of a 100 TiB SAN that has 500,000 IOPS and 20,000 MB/s. S
## Networking
-In the Elastic SAN Preview, you can enable or disable public network access at the Elastic SAN level. You can also configure access to volume groups in the SAN over both public [Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) and [private endpoints](../../private-link/private-endpoint-overview.md) from selected virtual network subnets. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group. If you disable public access at the SAN level, access to the volume groups within that SAN is only available over private endpoints, regardless of individual configurations for the volume group.
+In the Elastic SAN, you can enable or disable public network access at the Elastic SAN level. You can also configure access to volume groups in the SAN over both public [Storage service endpoints](../../virtual-network/virtual-network-service-endpoints-overview.md) and [private endpoints](../../private-link/private-endpoint-overview.md) from selected virtual network subnets. Once network access is configured for a volume group, the configuration is inherited by all volumes belonging to the group. If you disable public access at the SAN level, access to the volume groups within that SAN is only available over private endpoints, regardless of individual configurations for the volume group.
To allow network access for an individual volume group, you must [enable a service endpoint for Azure Storage](elastic-san-networking.md#configure-an-azure-storage-service-endpoint) or a [private endpoint](elastic-san-networking.md#configure-a-private-endpoint) in your virtual network, then [set up a network rule](elastic-san-networking.md#configure-virtual-network-rules) on the volume group for any service endpoints. You don't need a network rule to allow traffic from a private endpoint since the storage firewall only controls access through public endpoints. You can then mount volumes from [AKS](elastic-san-connect-aks.md), [Linux](elastic-san-connect-linux.md), or [Windows](elastic-san-connect-windows.md) clients in the subnet with the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol.
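A hedged CLI sketch of the service endpoint portion follows; names are placeholders, and the `--network-acls` shorthand for the volume group rule is an assumption to confirm with `az elastic-san volume-group update --help`.

```azurecli
# Enable the Azure Storage service endpoint on the client subnet.
az network vnet subnet update \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> \
  --name <subnet-name> \
  --service-endpoints "Microsoft.Storage"

# Then allow that subnet on the volume group (syntax assumed; verify before use).
subnetId=$(az network vnet subnet show \
  -g <resource-group> --vnet-name <vnet-name> -n <subnet-name> --query id -o tsv)

az elastic-san volume-group update \
  --elastic-san-name <san-name> -g <resource-group> -n <volume-group-name> \
  --network-acls "{virtual-network-rules:[{id:$subnetId,action:Allow}]}"
```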
Data in an Azure Elastic SAN is encrypted and decrypted transparently using 256-
For more information about the cryptographic modules underlying SSE, see [Cryptography API: Next Generation](/windows/desktop/seccng/cng-portal).
+## Migration
+
+There are currently two options for migrating your data into Azure Elastic SAN. Both paths require deploying and configuring an elastic SAN first, and then creating volumes through the migration process.
+
+- [Cirrus Data](https://www.cirrusdata.com/), which allows you to migrate from external locations such as an on-premises SAN.
+- [Managed disk snapshots (preview)](elastic-san-snapshots.md#create-a-volume-from-a-managed-disk-snapshot), which allows you to migrate from managed disks to elastic SAN volumes.
+ ## iSCSI support Elastic SAN supports the [internet Small Computer Systems Interface](https://en.wikipedia.org/wiki/ISCSI) (iSCSI) protocol. The following iSCSI commands are currently supported:
The following iSCSI features aren't currently supported:
## Next steps -- [Networking options for Elastic SAN Preview](elastic-san-networking-concepts.md)-- [Deploy an Elastic SAN Preview](elastic-san-create.md)
+- [Networking options for Elastic SAN](elastic-san-networking-concepts.md)
+- [Deploy an Elastic SAN](elastic-san-create.md)
For a video that goes over the general planning and deployment with a few example scenarios, see [Getting started with Azure Elastic SAN](/shows/inside-azure-for-it/getting-started-with-azure-elastic-san).
storage Elastic San Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-scale-targets.md
Title: Elastic SAN Preview scalability and performance targets
+ Title: Elastic SAN scalability and performance targets
description: Learn about the capacity, IOPS, and throughput rates for Azure Elastic SAN. Previously updated : 10/19/2023 Last updated : 02/13/2024
-# Scale targets for Elastic SAN Preview
+# Scale targets for Elastic SAN
There are three main components to an elastic storage area network (SAN): the SAN itself, volume groups, and volumes. ## The Elastic SAN
-An Elastic SAN Preview has three attributes that determine its performance: total capacity, IOPS, and throughput.
+An Elastic SAN has three attributes that determine its performance: total capacity, IOPS, and throughput.
### Capacity The total capacity of your Elastic SAN is determined by two different capacities, the base capacity and the additional capacity. Increasing the base capacity also increases the SAN's IOPS and throughput but is more costly than increasing the additional capacity. Increasing additional capacity doesn't increase IOPS or throughput.
-The maximum total capacity of your SAN is determined by the region where it's located and by its redundancy configuration. The minimum total capacity for an Elastic SAN is 1 tebibyte (TiB). Base or additional capacity can be increased in increments of 1 TiB.
+The region your SAN is located in and your SAN's redundancy determine its maximum total capacity. The minimum total capacity for an Elastic SAN is 1 tebibyte (TiB). Base or additional capacity can be increased in increments of 1 TiB.
### IOPS
The appliance scale targets vary depending on region and redundancy of the SAN i
#### LRS
-|Resource |France Central |Southeast Asia |Australia East |North Europe | West Europe | UK South | East US | East US 2 | South Central US| West US 2 | West US 3 | Sweden Central |
-|||||
-|Maximum number of Elastic SAN that can be deployed per subscription per region |5 |5 |5 |5 |5 |5 |5 |5 |5 | 5 | 5|5|
-|Maximum capacity units (TiB) |100 |100 |600 |600|600|600| 600 |600 |600 |600 | 100 | 100 |
-|Maximum base capacity units (TiB) |100 |100 |400 |400 | 400|400 |400 |400 |400 |400 | 100 |100 |
-|Minimum total SAN capacity (TiB) |1 |1 |1 |1 |1 |1 |1 |1 | 1 | 1 | 1 |1|
-|Maximum total IOPS |500,000 |500,000 |2,000,000 |2,000,000|2,000,000 |2,000,000 |2,000,000 |2,000,000 |2,000,000 |2,000,000 | 500,000 |500,000 |
-|Maximum total throughput (MB/s) |20,000 |20,000 |80,000 |80,000 |80,000|80,000 |80,000 |80,000 |80,000 |80,000 | 20,000|20,000|
+Different regions have varying levels of base storage capacity available. We break them down into two sets: regions with higher base storage capacity available and regions with lower base storage capacity available. Other than the base storage capacity differences, which directly affect the available performance that a SAN can distribute to its volumes and volume groups, there are no differences between these sets of regions.
+
+##### Higher available base storage capacity
+
+The following regions have higher base storage capacity available, and the table following the list outlines their scale targets: Australia East, Brazil South, Canada Central, Germany West, North Europe, West Europe, UK South, East US, East US 2, South Central US, US Central, and West US 2.
++
+|Resource |Values |
+|||
+|Maximum number of Elastic SANs that can be deployed per subscription per region | 5 |
+|Maximum capacity-only units (TiB) | 600 |
+|Maximum base capacity units (TiB) | 400 |
+|Minimum total SAN capacity (TiB) | 1 |
+|Maximum total IOPS |2,000,000 |
+|Maximum total throughput (MB/s) |80,000 |
++
+##### Lower available base storage capacity
++
+The following regions have lower base storage capacity available, and the table following the list outlines their scale targets: East Asia, Korea Central, South Africa North, France Central, Southeast Asia, West US 3, Sweden Central, Switzerland North.
+
+|Resource |Values |
+|||
+|Maximum number of Elastic SANs that can be deployed per subscription per region | 5 |
+|Maximum capacity-only units (TiB) | 100 |
+|Maximum base capacity units (TiB) | 100 |
+|Minimum total SAN capacity (TiB) | 1 |
+|Maximum total IOPS |500,000 |
+|Maximum total throughput (MB/s) |20,000 |
#### ZRS
ZRS is only available in France Central, North Europe, West Europe and West US 2
|Resource |France Central |North Europe | West Europe |West US 2 |
|--|--|--|--|--|
|Maximum number of Elastic SAN that can be deployed per subscription per region |5 |5 |5 |5 |
-|Maximum capacity units (TiB) |200 |200 |200 |200 |
+|Maximum capacity-only units (TiB) |200 |200 |200 |200 |
|Maximum base capacity units (TiB) |100 |100 |100 |100 |
|Minimum total SAN capacity (TiB) |1 |1 |1 |1 |
|Maximum total IOPS |500,000 |500,000 |500,000 |500,000 |
|Maximum total throughput (MB/s) |20,000 |20,000 |20,000 |20,000 |
-#### Quota and Capacity Increases
+#### Quota and capacity increases
To increase quota, raise a support ticket with the subscription ID and region information to request an increase in quota for the "Maximum number of Elastic SAN that can be deployed per subscription per region".
-For capacity increase requests, please raise a support ticket with the subscription ID and the region information and it will be evaluated.
+For capacity increase requests, raise a support ticket with the subscription ID and the region information, and the request will be evaluated.
## Volume group
The performance of an individual volume is determined by its capacity. The maxim
## Next steps
-[Plan for deploying an Elastic SAN Preview](elastic-san-planning.md)
+[Plan for deploying an Elastic SAN](elastic-san-planning.md)
storage Elastic San Shared Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-shared-volumes.md
Title: Use clustered applications on Azure Elastic SAN Preview
-description: Learn more about using clustered applications on an Elastic SAN Preview volume and sharing volumes between compute clients.
+ Title: Use clustered applications on Azure Elastic SAN
+description: Learn more about using clustered applications on an Elastic SAN volume and sharing volumes between compute clients.
Previously updated : 10/19/2023 Last updated : 02/13/2024 - references_regions - ignite-2023-elastic-SAN
-# Use clustered applications on Azure Elastic SAN Preview
+# Use clustered applications on Azure Elastic SAN
Azure Elastic SAN volumes can be simultaneously attached to multiple compute clients, allowing you to deploy or migrate cluster applications to Azure. You need to use a cluster manager, like Windows Server Failover Cluster (WSFC) or Pacemaker, to share an Elastic SAN volume. The cluster manager handles cluster node communications and write locking. Elastic SAN doesn't natively offer a fully managed filesystem that can be accessed over SMB or NFS.
storage Elastic San Snapshots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-snapshots.md
Title: Backup Azure Elastic SAN Preview volumes
-description: Learn about snapshots for Azure Elastic SAN Preview, including how to create and use them.
+ Title: Backup Azure Elastic SAN volumes (preview)
+description: Learn about snapshots (preview) for Azure Elastic SAN, including how to create and use them.
Previously updated : 01/17/2024 Last updated : 02/13/2024
-# Snapshot Azure Elastic SAN Preview volumes
+# Snapshot Azure Elastic SAN volumes (preview)
-Azure Elastic SAN Preview volume snapshots are incremental point-in-time backups of your volumes. The first snapshot you take is a full copy of your volume but every subsequent snapshot consists only of the changes since the last snapshot. Snapshots of your volumes don't have any separate billing, but they reside in your elastic SAN and consume the SAN's capacity. Snapshots can't be used to change the state of an existing volume, you can only use them to either deploy a new volume or export the data to a managed disk snapshot.
+Azure Elastic SAN volume snapshots (preview) are incremental point-in-time backups of your volumes. The first snapshot you take is a full copy of your volume, but every subsequent snapshot consists only of the changes since the last snapshot. Snapshots of your volumes don't have any separate billing, but they reside in your elastic SAN and consume the SAN's capacity. Snapshots can't be used to change the state of an existing volume; you can only use them to either deploy a new volume or export the data to a managed disk snapshot.
You can take as many snapshots of your volumes as you like, as long as there's available capacity in your elastic SAN. Snapshots persist until either the volume itself is deleted or the snapshots are deleted. Snapshots don't persist after the volume is deleted. If you need your data to persist after deleting a volume, [export your volume's snapshot to a managed disk snapshot](#export-volume-snapshot). +
+## Limitations
+
+- If a volume is larger than 4 TiB, export of a volume snapshot to a disk snapshot is not supported.
## General guidance You can take a snapshot anytime, but if you're taking snapshots while the VM is running, keep these things in mind:
Currently, you can only use the Azure portal to create Elastic SAN volumes from
1. Navigate to your SAN and select **volumes**. 1. Select **Create volume**. 1. For **Source type** select **Disk snapshot** and fill out the rest of the values.
-1. Select **Create**.
-
-## Limitations
--- If a volume is larger than 4 TiB, export of a volume snapshot to a disk snapshot is not supported.
+1. Select **Create**.
stream-analytics Machine Learning Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/machine-learning-udf.md
You can achieve low latency by ensuring that your Azure Kubernetes Service (AKS)
## Limitations
-If you're using an Azure ML Managed Endpoint service, Stream Analytics can currently only access endpoints that have public network access enabled. Read more about it on the page about [Azure ML private endpoints](/azure/machine-learning/concept-secure-online-endpoint?view=azureml-api-2&tabs=cli#secure-inbound-scoring-requests).
+If you're using an Azure ML Managed Endpoint service, Stream Analytics can currently only access endpoints that have public network access enabled. Read more about it on the page about [Azure ML private endpoints](/azure/machine-learning/concept-secure-online-endpoint#secure-inbound-scoring-requests).
## Next steps
stream-analytics Sql Database Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-database-output.md
The following table lists the property names and their description for creating
| Output alias |A friendly name used in queries to direct the query output to this database. | | Database | The name of the database where you're sending your output. | | Server name | The logical SQL server name or managed instance name. For SQL Managed Instance, it is required to specify the port 3342. For example, *sampleserver.public.database.windows.net,3342* |
-| Username | The username that has write access to the database. Stream Analytics supports only SQL authentication. |
+| Username | The username that has write access to the database. Stream Analytics supports three authentication modes: SQL Server authentication, system-assigned managed identity, and user-assigned managed identity. |
| Password | The password to connect to the database. | | Table | The table name where the output is written. The table name is case-sensitive. The schema of this table should exactly match the number of fields and their types that your job output generates. | |Inherit partition scheme| An option for inheriting the partitioning scheme of your previous query step, to enable fully parallel topology with multiple writers to the table. For more information, see [Azure Stream Analytics output to Azure SQL Database](stream-analytics-sql-output-perf.md).|
synapse-analytics Gateway Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/gateway-ip-addresses.md
The table below lists the individual Gateway IP addresses and also Gateway IP address ranges per region.
-Periodically, we will retire Gateways using old hardware and migrate the traffic to new Gateways as per the process outlined at [Azure SQL Database traffic migration to newer Gateways](/azure/azure-sql/database/gateway-migration?view=azuresql&tabs=in-progress-ip). We strongly encourage customers to use the **Gateway IP address subnets** in order to not be impacted by this activity in a region.
+Periodically, we will retire Gateways using old hardware and migrate the traffic to new Gateways as per the process outlined at [Azure SQL Database traffic migration to newer Gateways](/azure/azure-sql/database/gateway-migration). We strongly encourage customers to use the **Gateway IP address subnets** in order to not be impacted by this activity in a region.
> [!IMPORTANT] > - Logins for SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse can land on **any of the Gateways in a region**. For consistent connectivity to SQL Database or dedicated SQL pools (formerly SQL DW) in Azure Synapse, allow network traffic to and from **ALL** Gateway IP addresses and Gateway IP address subnets for the region.
synapse-analytics Sql Data Warehouse Tables Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-tables-identity.md
A surrogate key on a table is a column with a unique identifier for each row. Th
> [!NOTE] > In Azure Synapse Analytics: > - The IDENTITY value increases on its own in each distribution and does not overlap with IDENTITY values in other distributions. The IDENTITY value in Synapse is not guaranteed to be unique if the user explicitly inserts a duplicate value with "SET IDENTITY_INSERT ON" or reseeds IDENTITY. For details, see [CREATE TABLE (Transact-SQL) IDENTITY (Property)](/sql/t-sql/statements/create-table-transact-sql-identity-property?view=azure-sqldw-latest&preserve-view=true).
-> - UPDATE on distribution column does not guarantee IDENTITY value to be unique. Use [DBCC CHECKIDENT (Transact-SQL)](/sql/t-sql/database-console-commands/dbcc-checkident-transact-sql?view=azure-sqldw-latest) after UPDATE on distribution column to verify uniqueness.
+> - UPDATE on distribution column does not guarantee IDENTITY value to be unique. Use [DBCC CHECKIDENT (Transact-SQL)](/sql/t-sql/database-console-commands/dbcc-checkident-transact-sql?view=azure-sqldw-latest&preserve-view=true) after UPDATE on distribution column to verify uniqueness.
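As a rough illustration of that check, the following PowerShell sketch runs `DBCC CHECKIDENT` against a dedicated SQL pool with `Invoke-Sqlcmd` from the SqlServer module. The server, database, table, and credential names are hypothetical.

```azurepowershell
# Hedged sketch: after an UPDATE on the distribution column, check the identity value
# for the table (placeholder names; requires the SqlServer PowerShell module).
Invoke-Sqlcmd -ServerInstance "contoso-dw.sql.azuresynapse.net" `
              -Database "SalesDW" `
              -Username "sqladminuser" `
              -Password $env:SQL_PASSWORD `
              -Query "DBCC CHECKIDENT ('dbo.FactSales');"
```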
## Creating a table with an IDENTITY column
virtual-desktop Private Link Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-overview.md
Azure Virtual Desktop has three workflows with three corresponding resource type
3. **Connections to host pools**: every connection to a host pool has two sides - clients and session host virtual machines (VMs). To enable connections, you need to create a private endpoint for the *connection* sub-resource for each host pool you want to use with Private Link.
-The following table summarizes the private endpoints you need to create:
+The following high-level diagram shows how Private Link securely connects a local client to the Azure Virtual Desktop service. For more detailed information about client connections, see [Client connection sequence](#client-connection-sequence).
++
+The following table summarizes the private endpoints required:
| Purpose | Resource type | Target sub-resource | Quantity | |--|--|--|--|
The following table summarizes the private endpoints you need to create:
You can either share these private endpoints across your network topology or you can isolate your virtual networks so that each has its own private endpoint to the host pool or workspace.
-The following high-level diagram shows how Private Link securely connects a local client to the Azure Virtual Desktop service. For more detailed information about client connections, see [Client connection sequence](#client-connection-sequence).
-- ## Supported scenarios When adding Private Link with Azure Virtual Desktop, you have the following options to connect to Azure Virtual Desktop. Each can be enabled or disabled depending on your requirements.
When a user connects to Azure Virtual Desktop over Private Link, and Azure Virtu
Private Link with Azure Virtual Desktop has the following limitations: -- Before you use Private Link for Azure Virtual Desktop, you need to [enable the feature](private-link-setup.md#enable-the-feature) on each Azure subscription you want to Private Link with Azure Virtual Desktop.
+- Before you use Private Link for Azure Virtual Desktop, you need to [enable Private Link with Azure Virtual Desktop](private-link-setup.md#enable-private-link-with-azure-virtual-desktop-on-a-subscription) on each Azure subscription where you want to use Private Link with Azure Virtual Desktop.
- All [Remote Desktop clients to connect to Azure Virtual Desktop](users/remote-desktop-clients-overview.md) can be used with Private Link. If you're using the [Remote Desktop client for Windows](./users/connect-windows.md) on a private network without internet access and you're subscribed to both public and private feeds, you aren't able to access your feed.
Private Link with Azure Virtual Desktop has the following limitations:
- Early in the preview of Private Link with Azure Virtual Desktop, the private endpoint for the initial feed discovery (for the *global* sub-resource) shared the private DNS zone name of `privatelink.wvd.microsoft.com` with other private endpoints for workspaces and host pools. In this configuration, users are unable to establish private endpoints exclusively for host pools and workspaces. Starting September 1, 2023, sharing the private DNS zone in this configuration will no longer be supported. You need to create a new private endpoint for the *global* sub-resource to use the private DNS zone name of `privatelink-global.wvd.microsoft.com`. For the steps to do this, see [Initial feed discovery](private-link-setup.md#initial-feed-discovery). -- Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0.- ## Next steps - Learn how to [Set up Private Link with Azure Virtual Desktop](private-link-setup.md).
virtual-desktop Private Link Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/private-link-setup.md
In order to use Private Link with Azure Virtual Desktop, you need the following
- Azure PowerShell cmdlets for Azure Virtual Desktop that support Private Link are in preview. You'll need to download and install the [preview version of the Az.DesktopVirtualization module](https://www.powershellgallery.com/packages/Az.DesktopVirtualization/5.0.0-preview) to use these cmdlets, which have been added in version 5.0.0.
-## Enable the feature
+## Enable Private Link with Azure Virtual Desktop on a subscription
To use Private Link with Azure Virtual Desktop, you need to re-register the *Microsoft.DesktopVirtualization* resource provider on each subscription you want to use Private Link with Azure Virtual Desktop. > [!IMPORTANT] > For Azure for US Government and Azure operated by 21Vianet, you also need to register the feature for each subscription.
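As a minimal sketch, re-registering the resource provider with Azure PowerShell might look like the following; it assumes your session is already set to the target subscription.

```azurepowershell
# Re-register the resource provider on the current subscription (illustrative sketch).
Register-AzResourceProvider -ProviderNamespace Microsoft.DesktopVirtualization

# Confirm the registration state once the operation completes.
Get-AzResourceProvider -ProviderNamespace Microsoft.DesktopVirtualization |
    Select-Object ProviderNamespace, RegistrationState
```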
-### Register the feature (Azure for US Government and Azure operated by 21Vianet only)
+### Register Private Link with Azure Virtual Desktop (Azure for US Government and Azure operated by 21Vianet only)
To register the *Azure Virtual Desktop Private Link* feature:
Here's how to create a private endpoint for the *connection* sub-resource for co
1. Select **Create** to create the private endpoint for the connection sub-resource. - # [Azure PowerShell](#tab/powershell) Here's how to create a private endpoint for the *connection* sub-resource used for connections to a host pool using the [Az.Network](/powershell/module/az.network/) and [Az.DesktopVirtualization](/powershell/module/az.desktopvirtualization/) PowerShell modules.
Here's how to create a private endpoint for the *connection* sub-resource used f
New-AzPrivateEndpoint @parameters ```
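For context, the splatted `$parameters` hashtable passed above could be assembled roughly as follows. This is only a sketch, not the article's full script, and the host pool, virtual network, and resource names are hypothetical.

```azurepowershell
# Illustrative sketch: parameters for a private endpoint targeting the host pool's
# connection sub-resource (placeholder resource names).
$hostPool = Get-AzWvdHostPool -Name "hp01" -ResourceGroupName "rg-avd"
$vnet     = Get-AzVirtualNetwork -Name "vnet-avd" -ResourceGroupName "rg-network"

$serviceConnection = New-AzPrivateLinkServiceConnection -Name "hp01-connection" `
    -PrivateLinkServiceId $hostPool.Id -GroupId "connection"

$parameters = @{
    Name                         = "pe-hp01-connection"
    ResourceGroupName            = "rg-avd"
    Location                     = "eastus"
    Subnet                       = $vnet.Subnets[0]
    PrivateLinkServiceConnection = $serviceConnection
}
```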
- Your output should be similar to the following. Check that the value for **ProvisioningState** is **Succeeded**.
+ Your output should be similar to the following output. Check that the value for **ProvisioningState** is **Succeeded**.
```output ResourceGroupName Name Location ProvisioningState Subnet
Here's how to create a private endpoint for the *connection* sub-resource used f
--output table ```
- Your output should be similar to the following. Check that the value for **ProvisioningState** is **Succeeded**.
+ Your output should be similar to the following output. Check that the value for **ProvisioningState** is **Succeeded**.
```output CustomNetworkInterfaceName Location Name ProvisioningState ResourceGroup
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
description: Learn about recent changes to the Remote Desktop client for Windows
Previously updated : 02/07/2024 Last updated : 02/13/2024 # What's new in the Remote Desktop client for Windows
In this release, we've made the following changes:
- Made the following accessibility improvements: - Improved screen reader experience. - Greater contrast for background color of the connection bar remote commands drop-down menu. -- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
## Updates for version 1.2.5112
In this release, we've made the following changes:
- Fixed the [CVE-2024-21307](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-21307) security vulnerability. - Improved accessibility by making the **Change the size of text and apps** drop-down menu more visible in the High Contrast theme. - Improved client logging, diagnostics, and error classification to help admins troubleshoot connection and feed issues.
+- Fixed a Teams issue that caused incoming videos to flicker green during meeting calls.
>[!NOTE] >This release was originally 1.2.5102 in Insiders, but we changed the Public version number to 1.2.5105 after adding the security improvements addressing [CVE-2024-21307](https://msrc.microsoft.com/update-guide/vulnerability/CVE-2024-21307).
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/overview.md
The default resources supporting a virtual machine and how they're billed are de
| Virtual network | For giving your virtual machine the ability to communicate with other resources | [Virtual Network pricing](https://azure.microsoft.com/pricing/details/virtual-network/) | | A virtual Network Interface Card (NIC) | For connecting to the virtual network | There is no separate cost for NICs. However, there is a limit to how many NICs you can use based on your [VM's size](sizes.md). Size your VM accordingly and reference [Virtual Machine pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/). | | A private IP address and sometimes a public IP address. | For communication and data exchange on your network and with external networks | [IP Addresses pricing](https://azure.microsoft.com/pricing/details/ip-addresses/) |
-| Network security group (NSG) | For managing the network traffic too and from your VM. For example, you might need to open port 22 for SSH access, but you might want to block traffic to port 80. Blocking and allowing port access is done through the NSG.| There are no additional charges for network security groups in Azure. |
+| Network security group (NSG) | For managing the network traffic to and from your VM. For example, you might need to open port 22 for SSH access, but you might want to block traffic to port 80. Blocking and allowing port access is done through the NSG.| There are no additional charges for network security groups in Azure. |
| OS Disk and possibly separate disks for data. | It's a best practice to keep your data on a separate disk from your operating system, so that if a VM ever fails, you can simply detach the data disk and attach it to a new VM. | All new virtual machines have an operating system disk and a local disk. <br> Azure doesn't charge for local disk storage. <br> The operating system disk, which is usually 127GiB but is smaller for some images, is charged at the [regular rate for disks](https://azure.microsoft.com/pricing/details/managed-disks/). <br> You can see the cost for attaching Premium (SSD-based) and Standard (HDD-based) disks to your virtual machines on the [Managed Disks pricing page](https://azure.microsoft.com/pricing/details/managed-disks/). | | In some cases, a license for the OS | For licensing the OS that your virtual machine runs | The cost varies based on the number of cores on your VM, so [size your VM accordingly](sizes.md). The cost can be reduced through the [Azure Hybrid Benefit](https://azure.microsoft.com/pricing/hybrid-benefit/#overview). |
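As a brief, hedged illustration of the NSG behavior described in the table (allow SSH on port 22 and block HTTP on port 80), you could define inbound rules like the following with Azure PowerShell; the rule, group, and resource group names are made up for the example.

```azurepowershell
# Illustrative sketch: allow SSH (22) inbound and deny HTTP (80) inbound.
$allowSsh = New-AzNetworkSecurityRuleConfig -Name "Allow-SSH" -Access Allow -Protocol Tcp `
    -Direction Inbound -Priority 100 -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 22

$denyHttp = New-AzNetworkSecurityRuleConfig -Name "Deny-HTTP" -Access Deny -Protocol Tcp `
    -Direction Inbound -Priority 110 -SourceAddressPrefix "*" -SourcePortRange "*" `
    -DestinationAddressPrefix "*" -DestinationPortRange 80

New-AzNetworkSecurityGroup -Name "nsg-example" -ResourceGroupName "rg-example" `
    -Location "eastus" -SecurityRules $allowSsh, $denyHttp
```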
virtual-machines Attach Disk Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/attach-disk-ps.md
After you add an empty disk, you'll need to initialize it. To initialize the dis
The script file can contain code to initialize the disks, for example:
+> [!NOTE]
+> The example script uses MBR partition style. If your disk is two tebibytes (TiB) or larger, you must use GPT partitioning. If it's under two TiB, you can use either MBR or GPT.
+ ```azurepowershell-interactive $disks = Get-Disk | Where partitionstyle -eq 'raw' | sort number
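# The lines below are an illustrative continuation only, not necessarily the article's
# exact script: initialize each raw disk, create a partition, and format it.
# Use -PartitionStyle GPT instead of MBR if any disk is 2 TiB or larger.
$letters = 70..89 | ForEach-Object { [char]$_ }   # candidate drive letters F..Y
$count = 0
foreach ($disk in $disks) {
    $driveLetter = $letters[$count].ToString()
    $disk | Initialize-Disk -PartitionStyle MBR -PassThru |
        New-Partition -UseMaximumSize -DriveLetter $driveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "data$count" -Confirm:$false -Force
    $count++
}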
virtual-machines Attach Managed Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/attach-managed-disk-portal.md
This article shows you how to attach a new managed data disk to a Windows virtua
1. Select the Windows **Start** menu inside the running VM and enter **diskmgmt.msc** in the search box. The **Disk Management** console opens. 1. Disk Management recognizes that you have a new, uninitialized disk and the **Initialize Disk** window appears. 1. Verify the new disk is selected and then select **OK** to initialize it.+
+ > [!NOTE]
+ > If your disk is two tebibytes (TiB) or larger, you must use GPT partitioning. If it's under two TiB, you can use either MBR or GPT.
+ 1. The new disk appears as **unallocated**. Right-click anywhere on the disk and select **New simple volume**. The **New Simple Volume Wizard** window opens. 1. Proceed through the wizard, keeping all of the defaults, and when you're done select **Finish**. 1. Close **Disk Management**.
web-application-firewall Geomatch Custom Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/geomatch-custom-rules.md
If you're using the Geomatch operator, the selectors can be any of the following
## Next steps
-After you learn about custom rules, [create your own custom rules](create-custom-waf-rules.md).
+- [Create your own custom rules](create-custom-waf-rules.md)
+- [Use Azure WAF geomatch custom rules to enhance network security](../geomatch-custom-rules-examples.md)
web-application-firewall Geomatch Custom Rules Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/geomatch-custom-rules-examples.md
+
+ Title: Use Azure WAF geomatch custom rules to enhance network security
+description: This article shows you how to use Microsoft Azure Web Application Firewall (WAF) geomatch custom rules to enhance network security.
+++ Last updated : 02/13/2024++++
+# Use Azure WAF geomatch custom rules to enhance network security
+
+Web application firewalls (WAFs) are an important tool that helps protect web applications from harmful attacks. They can filter, monitor, and stop web traffic using both preset and custom rules. You can make your own rule that the WAF checks for every request it gets. Custom rules have higher priority than the managed rules and are checked first.
+
+One of the most powerful features of Azure Web Application Firewall is geomatch custom rules. These rules let you match web requests to the geographic location of where they come from. You might want to stop requests from certain places known for harmful activity, or you might want to allow requests from places important to your business. Geomatch custom rules can also help you follow data sovereignty and privacy laws by limiting access to your web applications based on the location of the people using them.
+
+Use the priority parameter wisely when using geomatch custom rules to avoid unnecessary processing or conflicts. Azure WAF evaluates rules in the order determined by the priority parameter, a numerical value ranging from 1 to 100, with lower values indicating higher priority. The priority must be unique across all custom rules. Assign higher priority to critical or specific rules for your web application security and lower priority to less essential or general rules. This ensures the WAF applies the most appropriate actions to your web traffic. For example, a rule that matches an explicit URI path is the most specific and should have a higher priority than rules for more general patterns. This protects a critical path on the application with the highest priority while allowing more generic traffic to be evaluated across other custom rules or managed rulesets.
+
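+To illustrate the priority ordering, here's a hedged Azure PowerShell sketch using the same cmdlets as the examples later in this article; the rule names, URI path, and match values are examples only. The specific URI rule gets the lower priority number, so the WAF evaluates it before the broader geo rule.
+
+```azurepowershell
+# Illustrative only: the specific URI rule (priority 5) is evaluated before the
+# broader geo rule (priority 50). Priorities must be unique across custom rules.
+$uriVar  = New-AzApplicationGatewayFirewallMatchVariable -VariableName RequestUri
+$uriCond = New-AzApplicationGatewayFirewallCondition -MatchVariable $uriVar -Operator Contains -MatchValue "/admin" -NegationCondition $false
+$uriRule = New-AzApplicationGatewayFirewallCustomRule -Name "ProtectAdminPath" -Priority 5 -RuleType MatchRule -MatchCondition $uriCond -Action Block
+
+$geoVar  = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$geoCond = New-AzApplicationGatewayFirewallCondition -MatchVariable $geoVar -Operator GeoMatch -MatchValue "US" -NegationCondition $true
+$geoRule = New-AzApplicationGatewayFirewallCustomRule -Name "AllowOnlyUS" -Priority 50 -RuleType MatchRule -MatchCondition $geoCond -Action Block
+```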
+Always test your rules before applying them to production and regularly monitor their performance and impact. By following these best practices, you can enhance your web application security by using the power of geomatch custom rules.
+
+This article introduces Azure WAF geomatch custom rules and shows you how to create and manage them using the Azure portal, Bicep and Azure PowerShell.
+
+## Geomatch custom rule patterns
+
+Geomatch custom rules enable you to meet diverse security goals, such as blocking requests from high-risk areas and permitting requests from trusted locations. They're particularly effective in mitigating distributed denial-of-service (DDoS) attacks, which seek to inundate your web application with a multitude of requests from various sources. With geomatch custom rules, you can promptly pinpoint and block regions generating the most DDoS traffic, while still granting access to legitimate users. In this article, you learn about various custom rule patterns that you can employ to optimize your Azure WAF using geomatch custom rules.
+
+## Scenario 1 - Block traffic from all countries except "x"
+
+Geomatch custom rules are useful when you want to block traffic from every country except one. For instance, if your web application serves only users in the United States, you can create a geomatch custom rule that blocks all requests not originating from the US. This strategy effectively minimizes your web application's attack surface and deters unauthorized access from other regions. The technique uses a negated match condition to achieve this traffic pattern. To create a geomatch custom rule that blocks traffic from all countries except the US, refer to the following portal, Bicep, and PowerShell examples:
+
+### Portal example - Application Gateway
++
+### Portal example - Front Door
++
+> [!NOTE]
+> Notice on the Azure Front Door WAF, you use `SocketAddr` as the match variable and not `RemoteAddr`. The `RemoteAddr` variable is the original client IP address that's usually sent via the `X-Forwarded-For` request header. The `SocketAddr` variable is the source IP address the WAF sees.
+
+### Bicep example - Application Gateway
+
+```bicep
+properties: {
+ customRules: [
+ {
+ name: 'GeoRule1'
+ priority: 10
+ ruleType: 'MatchRule'
+ action: 'Block'
+ matchConditions: [
+ {
+ matchVariables: [
+ {
+ variableName: 'RemoteAddr'
+ }
+ ]
+ operator: 'GeoMatch'
+ negationConditon: true
+ matchValues: [
+ 'US'
+ ]
+ transforms: []
+ }
+ ]
+ state: 'Enabled'
+ }
+  ]
+}
+```
+### Bicep example - Front Door
+
+```bicep
+properties: {
+ customRules: {
+ rules: [
+ {
+ name: 'GeoRule1'
+ enabledState: 'Enabled'
+ priority: 10
+ ruleType: 'MatchRule'
+ matchConditions: [
+ {
+ matchVariable: 'SocketAddr'
+ operator: 'GeoMatch'
+ negateCondition: true
+ matchValue: [
+ 'US'
+ ]
+ transforms: []
+ }
+ ]
+ action: 'Block'
+ }
+    ]
+  }
+}
+```
+
+### Azure PowerShell example - Application Gateway
+
+```azurepowershell
+$RGname = "rg-waf "
+$policyName = "waf-pol"
+$variable = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable -Operator GeoMatch -MatchValue "US" -NegationCondition $true
+$rule = New-AzApplicationGatewayFirewallCustomRule -Name GeoRule1 -Priority 10 -RuleType MatchRule -MatchCondition $condition -Action Block
+$policy = Get-AzApplicationGatewayFirewallPolicy -Name $policyName -ResourceGroupName $RGname
+$policy.CustomRules.Add($rule)
+Set-AzApplicationGatewayFirewallPolicy -InputObject $policy
+```
+
+### Azure PowerShell example - Front Door
+
+```azurepowershell
+$RGname = "rg-waf"
+$policyName = "wafafdpol"
+$matchCondition = New-AzFrontDoorWafMatchConditionObject -MatchVariable SocketAddr -OperatorProperty GeoMatch -MatchValue "US" -NegateCondition $true
+$customRuleObject = New-AzFrontDoorWafCustomRuleObject -Name "GeoRule1" -RuleType MatchRule -MatchCondition $matchCondition -Action Block -Priority 10
+$afdWAFPolicy= Get-AzFrontDoorWafPolicy -Name $policyName -ResourceGroupName $RGname
+Update-AzFrontDoorWafPolicy -InputObject $afdWAFPolicy -Customrule $customRuleObject
+```
+## Scenario 2 - Block traffic from all countries except "x" and "y" that target the URI "foo" or "bar"
+
+Consider a scenario where you need to use geomatch custom rules to block traffic from all countries, except for two or more specific ones, targeting a specific URI. Suppose your web application has specific URI paths intended only for users in the US and Canada. In this case, you create a geomatch custom rule that blocks all requests not originating from these countries.
+
+This pattern processes request payloads from the US and Canada through the managed rulesets, catching any malicious attacks, while blocking requests from all other countries. This approach ensures that only your target audience can access your web application, avoiding unwanted traffic from other regions.
+
+To minimize potential false positives, include the country code **ZZ** in the list to capture IP addresses not yet mapped to a country in Azure's dataset. This technique uses a negate condition for the Geolocation type and a non-negate condition for the URI match.
+
+To create a geomatch custom rule that blocks traffic from all countries except the US and Canada to a specified URI, refer to the portal, Bicep, and Azure PowerShell examples provided.
+
+### Portal example - Application Gateway
++
+### Portal example - Front Door
++
+### Bicep example - Application Gateway
+
+```bicep
+properties: {
+ customRules: [
+ {
+ name: 'GeoRule2'
+ priority: 11
+ ruleType: 'MatchRule'
+ action: 'Block'
+ matchConditions: [
+ {
+ matchVariables: [
+ {
+ variableName: 'RemoteAddr'
+ }
+ ]
+ operator: 'GeoMatch'
+ negationConditon: true
+ matchValues: [
+ 'US'
+ 'CA'
+ ]
+ transforms: []
+ }
+ {
+ matchVariables: [
+ {
+ variableName: 'RequestUri'
+ }
+ ]
+ operator: 'Contains'
+ negationConditon: false
+ matchValues: [
+ '/foo'
+ '/bar'
+ ]
+ transforms: []
+ }
+ ]
+ state: 'Enabled'
+ }
+  ]
+}
+```
+
+### Bicep example - Front Door
+
+```bicep
+properties: {
+ customRules: {
+ rules: [
+ {
+ name: 'GeoRule2'
+ enabledState: 'Enabled'
+ priority: 11
+ ruleType: 'MatchRule'
+ matchConditions: [
+ {
+ matchVariable: 'SocketAddr'
+ operator: 'GeoMatch'
+ negateCondition: true
+ matchValue: [
+ 'US'
+ 'CA'
+ ]
+ transforms: []
+ }
+ {
+ matchVariable: 'RequestUri'
+ operator: 'Contains'
+ negateCondition: false
+ matchValue: [
+ '/foo'
+ '/bar'
+ ]
+ transforms: []
+ }
+ ]
+ action: 'Block'
+ }
+    ]
+  }
+}
+```
+
+### Azure PowerShell example - Application Gateway
+
+```azurepowershell
+$RGname = "rg-waf "
+$policyName = "waf-pol"
+$variable1a = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition1a = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable1a -Operator GeoMatch -MatchValue @("US", "CA") -NegationCondition $true
+$variable1b = New-AzApplicationGatewayFirewallMatchVariable -VariableName RequestUri
+$condition1b = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable1b -Operator Contains -MatchValue @("/foo", "/bar") -NegationCondition $false
+$rule1 = New-AzApplicationGatewayFirewallCustomRule -Name GeoRule2 -Priority 11 -RuleType MatchRule -MatchCondition $condition1a, $condition1b -Action Block
+$policy = Get-AzApplicationGatewayFirewallPolicy -Name $policyName -ResourceGroupName $RGname
+$policy.CustomRules.Add($rule1)
+Set-AzApplicationGatewayFirewallPolicy -InputObject $policy
+```
+
+### Azure PowerShell example - Front Door
+
+```azurepowershell
+$RGname = "rg-waf"
+$policyName = "wafafdpol"
+$matchCondition1a = New-AzFrontDoorWafMatchConditionObject -MatchVariable SocketAddr -OperatorProperty GeoMatch -MatchValue @("US", "CA") -NegateCondition $true
+$matchCondition1b = New-AzFrontDoorWafMatchConditionObject -MatchVariable RequestUri -OperatorProperty Contains -MatchValue @("/foo", "/bar") -NegateCondition $false
+$customRuleObject1 = New-AzFrontDoorWafCustomRuleObject -Name "GeoRule2" -RuleType MatchRule -MatchCondition $matchCondition1a, $matchCondition1b -Action Block -Priority 11
+$afdWAFPolicy= Get-AzFrontDoorWafPolicy -Name $policyName -ResourceGroupName $RGname
+Update-AzFrontDoorWafPolicy -InputObject $afdWAFPolicy -Customrule $customRuleObject1
+```
+
+## Scenario 3 - Block traffic specifically from country "x"
+
+You can use geomatch custom rules to block traffic from specific countries. For instance, if your web application receives many malicious requests from country "x", create a geomatch custom rule to block all requests from that country. This protects your web application from potential attacks and reduces resource load. Apply this pattern to block multiple malicious or hostile countries. This technique requires a match condition for the traffic pattern. To block traffic from country "x", see the following portal, Bicep, and Azure PowerShell examples.
+
+### Portal example - Application Gateway
++
+### Portal example - Front Door
++
+### Bicep example - Application Gateway
+
+```bicep
+properties: {
+ customRules: [
+ {
+ name: 'GeoRule3'
+ priority: 12
+ ruleType: 'MatchRule'
+ action: 'Block'
+ matchConditions: [
+ {
+ matchVariables: [
+ {
+ variableName: 'RemoteAddr'
+ }
+ ]
+ operator: 'GeoMatch'
+ negationConditon: false
+ matchValues: [
+ 'US'
+ ]
+ transforms: []
+ }
+ ]
+ state: 'Enabled'
+ }
+  ]
+}
+```
+
+### Bicep example - Front Door
+
+```bicep
+properties: {
+ customRules: {
+ rules: [
+ {
+ name: 'GeoRule3'
+ enabledState: 'Enabled'
+ priority: 12
+ ruleType: 'MatchRule'
+ matchConditions: [
+ {
+ matchVariable: 'SocketAddr'
+ operator: 'GeoMatch'
+ negateCondition: false
+ matchValue: [
+ 'US'
+ ]
+ transforms: []
+ }
+ ]
+ action: 'Block'
+ }
+    ]
+  }
+}
+```
+
+### Azure PowerShell example - Application Gateway
+
+```azurepowershell
+$RGname = "rg-waf "
+$policyName = "waf-pol"
+$variable2 = New-AzApplicationGatewayFirewallMatchVariable -VariableName RemoteAddr
+$condition2 = New-AzApplicationGatewayFirewallCondition -MatchVariable $variable2 -Operator GeoMatch -MatchValue "US" -NegationCondition $false
+$rule2 = New-AzApplicationGatewayFirewallCustomRule -Name GeoRule3 -Priority 12 -RuleType MatchRule -MatchCondition $condition2 -Action Block
+$policy = Get-AzApplicationGatewayFirewallPolicy -Name $policyName -ResourceGroupName $RGname
+$policy.CustomRules.Add($rule2)
+Set-AzApplicationGatewayFirewallPolicy -InputObject $policy
+```
+
+### Azure PowerShell example - Front Door
+
+```azurepowershell
+$RGname = "rg-waf"
+$policyName = "wafafdpol"
+$matchCondition2 = New-AzFrontDoorWafMatchConditionObject -MatchVariable SocketAddr -OperatorProperty GeoMatch -MatchValue "US" -NegateCondition $false
+$customRuleObject2 = New-AzFrontDoorWafCustomRuleObject -Name "GeoRule3" -RuleType MatchRule -MatchCondition $matchCondition2 -Action Block -Priority 12
+$afdWAFPolicy= Get-AzFrontDoorWafPolicy -Name $policyName -ResourceGroupName $RGname
+Update-AzFrontDoorWafPolicy -InputObject $afdWAFPolicy -Customrule $customRuleObject2
+```
+
+## Geomatch custom rule anti-patterns
+
+Avoid anti-patterns when using geomatch custom rules, such as setting the custom rule action to `allow` instead of `block`. This can have unintended consequences, like allowing traffic to bypass the WAF and potentially exposing your web application to other threats.
+
+Instead of using an `allow` action, use a `block` action with a negate condition, as shown in previous patterns. This ensures only traffic from desired countries is allowed and the WAF blocks all other traffic.
+
+### Scenario 4 - Allow traffic from country "x"
+
+Avoid setting the geomatch custom rule to allow traffic from a specific country. For example, if you want to allow traffic from the United States because of a large customer base, creating a custom rule with the action `allow` and the value `United States` might seem like the solution. However, this rule allows all traffic from the United States, regardless of whether it has a malicious payload or not, as the `allow` action bypasses further rule processing of managed rulesets. Additionally, the WAF still processes traffic from all other countries, consuming resources. This exposes your web application to malicious requests from the United States that the WAF would otherwise block.
++
+### Scenario 5 - Allow traffic from all countries except "x"
+
+Avoid setting the rule action to `allow` and specifying a list of countries to exclude when using geomatch custom rules. For example, if you want to allow traffic from all countries except the United States, where you suspect malicious activity, this approach can have unintended consequences. It might allow traffic from unverified or unsafe countries or countries with low or no security standards, exposing your web application to potential vulnerabilities or attacks. Using the `allow` action for all countries except the US indicates to the WAF to stop processing request payloads against managed rulesets. All rule evaluation ceases once the custom rule with `allow` is processed, exposing the application to unwanted malicious attacks.
+
+Instead, use a more restrictive and specific rule action, such as block, and specify a list of countries to allow with a negate condition. This ensures only traffic from trusted and verified sources can access your web application while blocking any suspicious or unwanted traffic.
++
+## Next steps
+
+- [Geomatch custom rules](ag/geomatch-custom-rules.md)
+- [Create and use Web Application Firewall v2 custom rules on Application Gateway](ag/create-custom-waf-rules.md)