Updates from: 07/26/2024 01:10:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-sendgrid.md
The Localization element allows you to support multiple locales or languages in
<LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_send_new_code">Send new code</LocalizedString> <LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_change_claims">Change e-mail</LocalizedString> <!-- Claims-->
- <LocalizedString ElementType="ClaimType" ElementId="emailVerificationCode" StringId="DisplayName">Verification Code</LocalizedString>
- <LocalizedString ElementType="ClaimType" ElementId="emailVerificationCode" StringId="UserHelpText">Verification code received in the email.</LocalizedString>
- <LocalizedString ElementType="ClaimType" ElementId="emailVerificationCode" StringId="AdminHelpText">Verification code received in the email.</LocalizedString>
+ <LocalizedString ElementType="ClaimType" ElementId="VerificationCode" StringId="DisplayName">Verification Code</LocalizedString>
+ <LocalizedString ElementType="ClaimType" ElementId="VerificationCode" StringId="UserHelpText">Verification code received in the email.</LocalizedString>
+ <LocalizedString ElementType="ClaimType" ElementId="VerificationCode" StringId="AdminHelpText">Verification code received in the email.</LocalizedString>
<LocalizedString ElementType="ClaimType" ElementId="email" StringId="DisplayName">Email</LocalizedString> <!-- Email validation error messages--> <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfSessionDoesNotExist">You have exceeded the maximum time allowed.</LocalizedString>
ai-services Deployment Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/deployment-types.md
Standard deployments are optimized for low to medium volume workloads with high
> [!IMPORTANT] > Data might be processed outside of the resource's Azure geography, but data storage remains in its Azure geography. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).
-Global deployments are available in the same Azure OpenAI resources as non-global offers but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard will provide the highest default quota for new models and eliminates the need to load balance across multiple resources.
+Global deployments are available in the same Azure OpenAI resources as non-global deployment types but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard provides the highest default quota and eliminates the need to load balance across multiple resources.
-The deployment type is optimized for low to medium volume workloads with high burstiness. Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [quotas page to learn more](./quota.md).
-
-For customers that require the lower latency variance at large workload usage, we recommend purchasing provisioned throughput.
+Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [quotas page to learn more](./quota.md). For applications that require lower latency variance at large workload usage, we recommend purchasing provisioned throughput.
### How to disable access to global deployments in your subscription
ai-services Migration Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration-javascript.md
+
+ Title: How to migrate to OpenAI JavaScript v4.x
+
+description: Learn about migrating to the latest release of the OpenAI JavaScript library with Azure OpenAI.
+++++ Last updated : 07/11/2024+++
+# Migrating to the OpenAI JavaScript API library 4.x
+
+As of June 2024, we recommend migrating to the OpenAI JavaScript API library 4.x, the latest version of the official OpenAI JavaScript client library that supports the Azure OpenAI Service API version `2022-12-01` and later. This article helps bring you up to speed on the changes specific to Azure OpenAI.
+
+## Authenticating the client
+
+There are several ways to authenticate API requests to Azure OpenAI. We highly recommend using Microsoft Entra ID tokens. See the [Azure Identity documentation](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md) for more information.
+
+### Microsoft Entra ID
+
+There are several ways to authenticate with the Azure OpenAI service using Microsoft Entra ID tokens. The default way is to use the `DefaultAzureCredential` class from the `@azure/identity` package, but your application might be using a different credential class. For the purposes of this guide, we assume that you are using the `DefaultAzureCredential` class. A credential can be created as follows:
+
+```typescript
+import { DefaultAzureCredential } from "@azure/identity";
+const credential = new DefaultAzureCredential();
+```
+
+This object is then passed as the second argument to the `OpenAIClient` and `AssistantsClient` constructors.
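+
+For example, a minimal sketch with `OpenAIClient`; the endpoint value here is a hypothetical placeholder, and the `AssistantsClient` constructor follows the same pattern:
+
+```typescript
+import { DefaultAzureCredential } from "@azure/identity";
+import { OpenAIClient } from "@azure/openai";
+
+// Sketch only: replace the placeholder with your Azure OpenAI resource endpoint.
+const endpoint = "https://your-resource.openai.azure.com";
+const credential = new DefaultAzureCredential();
+const client = new OpenAIClient(endpoint, credential);
+```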
+
+To authenticate the `AzureOpenAI` client, however, use the `getBearerTokenProvider` function from the `@azure/identity` package. This function creates a token provider that `AzureOpenAI` uses internally to obtain tokens for each request. The token provider is created as follows:
+
+```typescript
+import { DefaultAzureCredential, getBearerTokenProvider } from "@azure/identity";
+const credential = new DefaultAzureCredential();
+const scope = "https://cognitiveservices.azure.com/.default";
+const azureADTokenProvider = getBearerTokenProvider(credential, scope);
+```
+
+`azureADTokenProvider` is passed to the options object when creating the `AzureOpenAI` client.
+
+### (Highly Discouraged) API Key
+
+API keys are not recommended for production use because they are less secure than other authentication methods. However, if you are using an API key to authenticate `OpenAIClient` or `AssistantsClient`, an `AzureKeyCredential` object must be created as follows:
+
+```typescript
+import { AzureKeyCredential } from "@azure/openai";
+const apiKey = new AzureKeyCredential("your API key");
+```
+
+Authenticating `AzureOpenAI` with an API key involves setting the `AZURE_OPENAI_API_KEY` environment variable or setting the `apiKey` string property in the options object when creating the `AzureOpenAI` client.
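+
+For example, a minimal sketch that sets the `apiKey` property explicitly; the endpoint and deployment values are placeholders, and the endpoint can also come from `AZURE_OPENAI_ENDPOINT`:
+
+```typescript
+import { AzureOpenAI } from "openai";
+
+// Sketch only: replace the placeholder values with your own resource details.
+const client = new AzureOpenAI({
+  apiKey: "your API key",
+  endpoint: "https://your-resource.openai.azure.com",
+  apiVersion: "2024-04-01-preview",
+  deployment: "Your Azure OpenAI deployment",
+});
+```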
+
+## Constructing the client
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+import { AzureOpenAI } from "openai";
+const deployment = "Your Azure OpenAI deployment";
+const apiVersion = "2024-04-01-preview";
+const options = { azureADTokenProvider, deployment, apiVersion }
+const client = new AzureOpenAI(options);
+```
+
+The endpoint of the Azure OpenAI resource can be specified by setting the `endpoint` option, but the client can also load it from the `AZURE_OPENAI_ENDPOINT` environment variable. Using the environment variable is the recommended approach because it lets the client run in different environments without code changes and helps keep the endpoint out of your code.
+
+The API version must be specified; this requirement ensures that existing code doesn't break between preview API versions. Refer to the [API versioning documentation](../api-version-deprecation.md) to learn more about Azure OpenAI API versions. Additionally, the `deployment` property isn't required, but setting it is recommended. Once `deployment` is set, it's used as the default deployment for all operations that require it. If the client isn't created with the `deployment` option, set the `model` property in the options object to the deployment name. However, audio operations such as `audio.transcriptions.create` require the client to be created with the `deployment` option set to the deployment name.
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+import { OpenAIClient } from "@azure/openai";
+const endpoint = "Your Azure OpenAI resource endpoint";
+const client = new OpenAIClient(endpoint, credential);
+```
+
+If not set, the API version defaults to the last known one before the release of the client. The client is also not locked to a single model deployment, meaning that the deployment name has to be passed to each method that requires it.
+++
+## API differences
+
+There are key differences between the `OpenAIClient` and `AssistantsClient` clients and the `AzureOpenAI` client:
+
+- Operations are represented as a flat list of methods in both `OpenAIClient` and `AssistantsClient`, for example `client.getChatCompletions`. In `AzureOpenAI`, operations are grouped in nested groups, for example `client.chat.completions.create`.
+- `OpenAIClient` and `AssistantsClient` rename many of the names used in the Azure OpenAI service API. For example, snake case is used in the API but camel case is used in the client. In `AzureOpenAI`, names are kept the same as in the Azure OpenAI service API.
+
+## Migration examples
+
+The following sections provide examples of how to migrate from `OpenAIClient` and `AssistantsClient` to `AzureOpenAI`.
+
+### Chat completions
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+const result = await client.chat.completions.create({ messages, model: '', max_tokens: 100 });
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+const result = await client.getChatCompletions(deploymentName, messages, { maxTokens: 100 });
+```
+++
+Note the following:
+- The `getChatCompletions` method has been replaced with the `chat.completions.create` method.
+- The `messages` parameter is now passed in the options object with the `messages` property.
+- The `maxTokens` property has been renamed to `max_tokens` and the `deploymentName` parameter has been removed. Generally, the names of the properties in the `options` object are the same as in the Azure OpenAI service API, following the snake case convention instead of the camel case convention used in the `OpenAIClient` and `AssistantsClient`. This is true for all the properties across all requests and responses in the `AzureOpenAI` client.
+- The `deploymentName` parameter isn't needed if the client was created with the `deployment` option. If the client wasn't created with the `deployment` option, set the `model` property in the options object to the deployment name, as shown in the sketch below.
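+
+A minimal sketch of that per-request pattern, assuming the client wasn't created with the `deployment` option (the deployment name is a placeholder):
+
+```typescript
+// Sketch: pass the deployment name as `model` on each request.
+const result = await client.chat.completions.create({
+  messages,
+  model: "Your Azure OpenAI deployment",
+  max_tokens: 100,
+});
+```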
+
+### Streaming chat completions
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+const stream = await client.chat.completions.create({ model: '', messages, max_tokens: 100, stream: true });
+```
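+
+The returned stream can then be consumed with `for await`, for example (a minimal sketch; with Azure, some chunks can arrive with an empty `choices` array):
+
+```typescript
+for await (const chunk of stream) {
+  // Each chunk carries a delta with the next piece of generated content.
+  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
+}
+```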
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+const stream = await client.streamChatCompletions(deploymentName, messages, { maxTokens: 100 });
+```
+++++
+### Azure On Your Data
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+import "@azure/openai/types";
+
+const azureSearchEndpoint = "Your Azure Search resource endpoint";
+const azureSearchIndexName = "Your Azure Search index name";
+const result = await client.chat.completions.create({ model: '', messages, data_sources: [{
+ type: "azure_search",
+ parameters: {
+ endpoint: azureSearchEndpoint,
+ index_name: azureSearchIndexName,
+ authentication: {
+ type: "system_assigned_managed_identity",
+ }
+ }
+ }]
+});
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+const azureSearchEndpoint = "Your Azure Search resource endpoint";
+const azureSearchIndexName = "Your Azure Search index name";
+const result = await client.getChatCompletions(deploymentName, messages, { azureExtensionOptions: {
+ data_sources: [{
+ type: "azure_search",
+ endpoint: azureSearchEndpoint,
+ indexName: azureSearchIndexName,
+ authentication: {
+ type: "system_assigned_managed_identity",
+ }
+ }]
+ }
+});
+```
+++
+- `"@azure/openai/types"` is imported, which adds Azure-specific definitions (for example, `data_sources`) to the client types.
+- The `azureExtensionOptions` property has been replaced with the inner `data_sources` property.
+- The `parameters` property has been added to wrap the parameters of the extension, which mirrors the schema of the Azure OpenAI service API.
+- Camel case properties have been replaced with snake case properties.
+
+### Audio transcription
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+import { createReadStream } from "fs";
+
+const result = await client.audio.transcriptions.create({
+ model: '',
+ file: createReadStream(audioFilePath),
+});
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+import { readFile } from "fs/promises";
+
+const audioFilePath = "path/to/audio/file";
+const audio = await readFile(audioFilePath);
+const result = await client.getAudioTranscription(deploymentName, audio);
+```
+++++
+- The `getAudioTranscription` method has been replaced with the `audio.transcriptions.create` method.
+- The `AzureOpenAI` client has to be constructed with the `deployment` option set to the deployment name in order to use audio operations such as `audio.transcriptions.create`; see the sketch after this list.
+- The `model` property must be set in the options object, but its value isn't used in the operation, so you can set it to any value.
+- The `file` property accepts various types including `Buffer`, `fs.ReadStream`, and `Blob`, but in this example, a file is streamed from disk using `fs.createReadStream`.
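+
+A minimal sketch of constructing such a client, assuming `azureADTokenProvider` is created as shown earlier (the deployment name is a placeholder for your audio model deployment):
+
+```typescript
+import { AzureOpenAI } from "openai";
+
+// Sketch only: audio operations need the client-level `deployment` option.
+const audioClient = new AzureOpenAI({
+  azureADTokenProvider,
+  apiVersion: "2024-04-01-preview",
+  deployment: "Your audio model deployment",
+});
+```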
+
+### Audio translation
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+import { createReadStream } from "fs";
+
+const result = await client.audio.translations.create({
+ model: '',
+ file: createReadStream(audioFilePath),
+});
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+import { readFile } from "fs/promises";
+
+const audioFilePath = "path/to/audio/file";
+const audio = await readFile(audioFilePath);
+const result = await client.getAudioTranslation(deploymentName, audio);
+```
++++
+- The `getAudioTranslation` method has been replaced with the `audio.translations.create` method.
+- All other changes are the same as in the audio transcription example.
+
+### Assistants
+
+The following examples show how to migrate some of the `AssistantsClient` methods.
+
+#### Assistant creation
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+const options = ...;
+const assistantResponse = await assistantsClient.beta.assistants.create(
+ options
+);
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+const options = {
+ model: azureOpenAIDeployment,
+ name: "Math Tutor",
+ instructions:
+ "You are a personal math tutor. Write and run JavaScript code to answer math questions.",
+ tools: [{ type: "code_interpreter" }],
+};
+const assistantResponse = await assistantsClient.createAssistant(options);
+```
+++++
+- The `createAssistant` method has been replaced with the `beta.assistants.create` method. A sketch with the options written out follows.
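+
+For completeness, here's that call with explicit options; these property names (`model`, `name`, `instructions`, `tools`) are already lowercase in both libraries, so they carry over unchanged:
+
+```typescript
+// Sketch: `azureOpenAIDeployment` is the Azure OpenAI deployment name, as in the previous example.
+const assistantResponse = await assistantsClient.beta.assistants.create({
+  model: azureOpenAIDeployment,
+  name: "Math Tutor",
+  instructions:
+    "You are a personal math tutor. Write and run JavaScript code to answer math questions.",
+  tools: [{ type: "code_interpreter" }],
+});
+```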
+
+#### Thread creation
+
+The following example shows how to migrate the `createThread` method call.
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+const assistantThread = await assistantsClient.beta.threads.create();
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+const assistantThread = await assistantsClient.createThread();
+```
+++++
+- The `createThread` method has been replaced with the `beta.threads.create` method.
+
+#### Message creation
+
+The following example shows how to migrate the `createMessage` method call.
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+const threadResponse = await assistantsClient.beta.threads.messages.create(
+ assistantThread.id,
+ {
+ role,
+ content: message,
+ }
+);
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+const threadResponse = await assistantsClient.createMessage(
+ assistantThread.id,
+ role,
+ message
+);
+```
++++
+- The `createMessage` method has been replaced with the `beta.threads.messages.create` method.
+- The message specification has been moved from a parameter list to an object.
+
+#### Runs
+
+To run an assistant on a thread, the `createRun` method is used to create a run, and then a loop is used to poll the run status until it is in a terminal state. The following example shows how to migrate the run creation and polling.
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+The create-and-poll loop shown in the previous library's tab can be migrated and simplified by using the `createAndPoll` method, which creates a run and polls it until it's in a terminal state.
+
+```typescript
+const runResponse = await assistantsClient.beta.threads.runs.createAndPoll(
+ assistantThread.id,
+ {
+ assistant_id: assistantResponse.id,
+ },
+ { pollIntervalMs: 500 }
+);
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+let runResponse = await assistantsClient.createRun(assistantThread.id, {
+ assistantId: assistantResponse.id,
+});
+
+do {
+ await new Promise((r) => setTimeout(r, 500));
+ runResponse = await assistantsClient.getRun(
+ assistantThread.id,
+ runResponse.id
+ );
+} while (
+ runResponse.status === "queued" ||
+ runResponse.status === "in_progress"
+);
+```
+++++
+- The `createRun` method has been replaced with the `beta.threads.runs.create` and `createAndPoll` methods.
+- The `createAndPoll` method is used to create a run and poll it until it is in a terminal state.
+
+#### Processing Run results
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+Pages can be looped through by using the `for await` loop.
+
+```typescript
+for await (const runMessageDatum of runMessages) {
+ for (const item of runMessageDatum.content) {
+ ...
+ }
+}
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+Without paging helpers, results had to be accessed manually page by page using the `data` property of the response object. For instance, the first page can be accessed as follows:
+
+```typescript
+for (const runMessageDatum of runMessages.data) {
+ for (const item of runMessageDatum.content) {
+ ...
+ }
+}
+```
++++
+### Embeddings
+
+The following example shows how to migrate the `getEmbeddings` method call.
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+const embeddings = await client.embeddings.create({ input, model: '' });
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+const embeddings = await client.getEmbeddings(deploymentName, input);
+```
++++
+- The `getEmbeddings` method has been replaced with the `embeddings.create` method.
+- The `input` parameter is now passed in the options object with the `input` property.
+- The `deploymentName` parameter has been removed; it isn't needed if the client was created with the `deployment` option. If the client wasn't created with the `deployment` option, set the `model` property in the options object to the deployment name.
+
+### Image generation
+
+The following example shows how to migrate the `getImages` method call.
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+ const results = await client.images.generate({ prompt, model: '', n, size });
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+const results = await client.getImages(deploymentName, prompt, { n, size });
+```
+++++
+- The `getImages` method has been replaced with the `images.generate` method.
+- The `prompt` parameter is now passed in the options object with the `prompt` property.
+- The `deploymentName` parameter has been removed; it isn't needed if the client was created with the `deployment` option. If the client wasn't created with the `deployment` option, set the `model` property in the options object to the deployment name.
+
+### Content filter
+
+Content filter results are part of the chat completions response types in `OpenAIClient`. However, `AzureOpenAI` does not have a direct equivalent to the `contentFilterResults` property in the `ChatCompletion.Choice` interface. The content filter results can be accessed by importing `"@azure/openai/types"` and accessing the `content_filter_results` property. The following example shows how to access the content filter results.
+
+# [OpenAI JavaScript (new)](#tab/javascript-new)
+
+```typescript
+import "@azure/openai/types";
+
+const results = await client.chat.completions.create({ model: '', messages });
+for (const choice of results.choices) {
+ const filterResults = choice.content_filter_results;
+ if (!filterResults) {
+ console.log("No content filter is found");
+ return;
+ }
+ if (filterResults.error) {
+ console.log(
+ `Content filter ran into the error ${filterResults.error.code}: ${filterResults.error.message}`);
+ }
+ const { hate, sexual, self_harm, violence } = filterResults;
+ ...
+}
+```
+
+# [Azure OpenAI JavaScript (previous)](#tab/javascript-old)
+
+```typescript
+const results = await client.getChatCompletions(deploymentName, messages);
+for (const choice of results.choices) {
+ if (!choice.contentFilterResults) {
+ console.log("No content filter is found");
+ return;
+ }
+ if (choice.contentFilterResults.error) {
+ console.log(
+ `Content filter ran into the error ${choice.contentFilterResults.error.code}: ${choice.contentFilterResults.error.message}`);
+ }
+ const { hate, sexual, selfHarm, violence } = choice.contentFilterResults;
+ ...
+}
+```
++++
+- Camel case properties have been replaced with snake case properties.
+- `"@azure/openai/types"` is imported, which adds Azure-specific definitions (for example, `content_filter_results`) to the client types. See the [Azure types](#azure-types) section for more information.
+
+## Comparing Types
+
+The following table explores several type names from `@azure/openai` and shows their nearest `openai` equivalent. The name differences illustrate several of the above-mentioned changes. This table provides an overview, and more detail and code samples are provided in the following sections.
+
+| Old Type Name | Nearest New Type | Symbol Type | Change description |
+| - | - | -- | -- |
+| `OpenAIClient` | `AzureOpenAI` | Class | This class replaces the former and has no methods in common with it. See the section on `AzureOpenAI` below. |
+| `AudioResult` | `Transcription`/`Translation` | Interface | Depending on the calling operation, the two interfaces replace the former one |
+| `AudioResultFormat` | inline union type of the `response_format` property | Alias | It doesn't exist |
+| `AudioResultSimpleJson` | `Transcription`/`Translation` | Interface | Depending on the calling operation, the two interfaces replace the former one |
+| `AudioResultVerboseJson` | N/A | Interface | |
+| `AudioSegment` | N/A | Interface | |
+| `AudioTranscriptionTask` | N/A | Alias | |
+| `AzureChatEnhancementConfiguration`, `AzureChatEnhancements`, `AzureChatExtensionConfiguration`, `AzureChatExtensionConfigurationUnion`, `AzureChatExtensionDataSourceResponseCitation`, `AzureChatExtensionsMessageContext`, `AzureChatExtensionType`, `AzureChatGroundingEnhancementConfiguration`, `AzureChatOCREnhancementConfiguration`, `AzureCosmosDBChatExtensionConfiguration`, `AzureCosmosDBFieldMappingOptions`, `AzureExtensionsOptions`, `AzureGroundingEnhancement`, `AzureGroundingEnhancementCoordinatePoint`, `AzureGroundingEnhancementLine`, `AzureGroundingEnhancementLineSpan`, `AzureMachineLearningIndexChatExtensionConfiguration`, `AzureSearchChatExtensionConfiguration`, `AzureSearchIndexFieldMappingOptions`, `AzureSearchQueryType`, `ContentFilterBlocklistIdResult`, `ContentFilterCitedDetectionResult`, `ContentFilterDetectionResult`, `ContentFilterErrorResults`, `ContentFilterResult`, `ContentFilterResultDetailsForPrompt`, `ContentFilterResultsForChoice`, `ContentFilterSeverity`, `ContentFilterResultsForPrompt`, `ContentFilterSuccessResultDetailsForPrompt`, `ContentFilterSuccessResultsForChoice`, `ElasticsearchChatExtensionConfiguration`, `ElasticsearchIndexFieldMappingOptions`, `ElasticsearchQueryType`, `ImageGenerationContentFilterResults`, `ImageGenerationPromptFilterResults`, `OnYourDataAccessTokenAuthenticationOptions`, `OnYourDataApiKeyAuthenticationOptions`, `OnYourDataAuthenticationOptions`, `OnYourDataAuthenticationOptionsUnion`, `OnYourDataConnectionStringAuthenticationOptions`, `OnYourDataDeploymentNameVectorizationSource`, `OnYourDataEncodedApiKeyAuthenticationOptions`, `OnYourDataEndpointVectorizationSource`, `OnYourDataKeyAndKeyIdAuthenticationOptions`, `OnYourDataModelIdVectorizationSource`, `OnYourDataSystemAssignedManagedIdentityAuthenticationOptions`, `OnYourDataUserAssignedManagedIdentityAuthenticationOptions`, `OnYourDataVectorizationSource`, `OnYourDataVectorizationSourceType`, `OnYourDataVectorizationSourceUnion`, `PineconeChatExtensionConfiguration`, `PineconeFieldMappingOptions` | N/A | Interfaces and Aliases | See the Azure types section below |
+| `AzureKeyCredential` | N/A | Class | The API key can be provided as a string value |
+| `ChatChoice` | `ChatCompletion.Choice` | Interface | |
+| `ChatChoiceLogProbabilityInfo` | `Logprobs` | Interface | |
+| `ChatCompletions` | `ChatCompletion` and `ChatCompletionChunk` | Interface | |
+| `ChatCompletionsFunctionToolCall` | `ChatCompletionMessageToolCall` | Interface | |
+| `ChatRequestFunctionMessage` | `ChatCompletionFunctionMessageParam` | Interface | |
+| `ChatRequestMessage` | `ChatCompletionMessageParam` | Interface | |
+| `ChatRequestMessageUnion` | `ChatCompletionMessageParam` | Alias | |
+| `ChatRequestSystemMessage` | `ChatCompletionSystemMessageParam` | Interface | |
+| `ChatRequestToolMessage` | `ChatCompletionToolMessageParam` | Interface | |
+| `ChatRequestUserMessage` | `ChatCompletionUserMessageParam` | Interface | |
+| `ChatResponseMessage` | `Delta` / `ChatCompletionMessage` | Interface | |
+| `ChatRole` | N/A | Alias | |
+| `ChatTokenLogProbabilityInfo` | `TopLogprob` | Interface | |
+| `ChatTokenLogProbabilityResult` | `ChatCompletionTokenLogprob` | Interface | |
+| `Choice` | `Choice` | Interface | |
+| `Completions` | `Completion` | Interface | |
+| `CompletionsFinishReason` | N/A | Alias | |
+| `CompletionsLogProbabilityModel` | `Logprobs` | Interface | |
+| `CompletionsUsage` | `CompletionUsage` | Interface | |
+| `EmbeddingItem` | `Embedding` | Interface | |
+| `Embeddings` | `CreateEmbeddingResponse` | Interface | |
+| `EmbeddingsUsage` | `CreateEmbeddingResponse.Usage` | Interface | |
+| `EventStream` | `Stream` | Interface | |
+| `FunctionCall` | `FunctionCall` | Interface | |
+| `FunctionCallPreset` | N/A | Alias | |
+| `FunctionDefinition` | `Function` | Interface | |
+| `FunctionName` | N/A | Alias | |
+| `GetAudioTranscriptionOptions` | `TranscriptionCreateParams` | Interface | |
+| `GetAudioTranslationOptions` | `TranslationCreateParams` | Interface | |
+| `GetChatCompletionsOptions` | `ChatCompletionCreateParamsNonStreaming` and `ChatCompletionCreateParamsStreaming` | Interface | |
+| `GetCompletionsOptions` | `CompletionCreateParams` | Interface | |
+| `GetEmbeddingsOptions` | `EmbeddingCreateParams` | Interface | |
+| `GetImagesOptions` | `ImageGenerateParams` | Interface | |
+| `ImageGenerationData` | `Image` | Interface | |
+| `ImageGenerationQuality` | N/A | Alias | |
+| `ImageGenerationResponseFormat` | N/A | Alias | |
+| `ImageGenerations` | `ImagesResponse` | Interface | |
+| `ImageGenerationStyle` | N/A | Alias | |
+| `ImageSize` | N/A | Alias | |
+| `MaxTokensFinishDetails` | N/A | Interface | |
+| `OpenAIClientOptions` | `AzureClientOptions` | Interface | |
+| `OpenAIError` | `OpenAIError` | Interface | |
+| `OpenAIKeyCredential` | N/A | Class | |
+| `StopFinishDetails` | N/A | Interface | |
+
+## Azure types
+
+`AzureOpenAI` connects to the Azure OpenAI service and can call all the operations available in the service. However, the request and response types are inherited from `OpenAI` and aren't yet updated to reflect the additional features supported exclusively by the Azure OpenAI service. TypeScript users need to import `"@azure/openai/types"` from `@azure/openai@2.0.0-beta.1`, which merges Azure-specific definitions into the existing types. Examples in the [Migration examples](#migration-examples) section show how to do this.
+
+## Next steps
+
+- [Azure OpenAI Assistants](../concepts/assistants.md)
ai-services Quotas Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md
The following sections provide you with a quick guide to the default quotas and
|Tier| Quota Limit in tokens per minute (TPM) | Requests per minute | ||::|::|
-|Enterprise agreement | 30 M | 60 K |
+|Enterprise agreement | 30 M | 180 K |
|Default | 450 K | 2.7 K | M = million | K = thousand
ai-services Avatar Gestures With Ssml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/avatar-gestures-with-ssml.md
-# Customize text to speech avatar gestures with SSML (preview)
-
+# Customize text to speech avatar gestures with SSML
The [Speech Synthesis Markup Language (SSML)](../speech-synthesis-markup-structure.md) with input text determines the structure, content, and other characteristics of the text to speech output. Most SSML tags can also work in text to speech avatar. Furthermore, text to speech avatar batch mode provides avatar gestures insertion ability by using the SSML bookmark element with the format `<bookmark mark='gesture.*'/>`.
ai-services Batch Synthesis Avatar Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/batch-synthesis-avatar-properties.md
-# Batch synthesis properties for text to speech avatar (preview)
-
+# Batch synthesis properties for text to speech avatar
Batch synthesis properties can be grouped as: avatar related properties, batch job related properties, and text to speech related properties, which are described in the following tables.
ai-services Batch Synthesis Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/batch-synthesis-avatar.md
-# How to use batch synthesis for text to speech avatar (preview)
+# How to use batch synthesis for text to speech avatar
-
-The batch synthesis API for text to speech avatar (preview) allows for the asynchronous synthesis of text into a talking avatar as a video file. Publishers and video content platforms can utilize this API to create avatar video content in a batch. That approach can be suitable for various use cases such as training materials, presentations, or advertisements.
+The batch synthesis API for text to speech avatar allows for the asynchronous synthesis of text into a talking avatar as a video file. Publishers and video content platforms can utilize this API to create avatar video content in a batch. That approach can be suitable for various use cases such as training materials, presentations, or advertisements.
The synthetic avatar video will be generated asynchronously after the system receives text input. The generated video output can be downloaded in batch mode synthesis. You submit text for synthesis, poll for the synthesis status, and download the video output when the status indicates success. The text input formats must be plain text or Speech Synthesis Markup Language (SSML) text.
To perform batch synthesis, you can use the following REST API operations.
| Operation | Method | REST API call | |-|||
-| [Create batch synthesis](#create-a-batch-synthesis-request) | PUT | avatar/batchsyntheses/{SynthesisId}?api-version=2024-04-15-preview |
-| [Get batch synthesis](#get-batch-synthesis) | GET | avatar/batchsyntheses/{SynthesisId}?api-version=2024-04-15-preview |
-| [List batch synthesis](#list-batch-synthesis) | GET | avatar/batchsyntheses/?api-version=2024-04-15-preview |
-| [Delete batch synthesis](#delete-batch-synthesis) | DELETE | avatar/batchsyntheses/{SynthesisId}?api-version=2024-04-15-preview |
+| [Create batch synthesis](#create-a-batch-synthesis-request) | PUT | avatar/batchsyntheses/{SynthesisId}?api-version=2024-08-01 |
+| [Get batch synthesis](#get-batch-synthesis) | GET | avatar/batchsyntheses/{SynthesisId}?api-version=2024-08-01 |
+| [List batch synthesis](#list-batch-synthesis) | GET | avatar/batchsyntheses/?api-version=2024-08-01 |
+| [Delete batch synthesis](#delete-batch-synthesis) | DELETE | avatar/batchsyntheses/{SynthesisId}?api-version=2024-08-01 |
You can refer to the code samples on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch-avatar).
curl -v -X PUT -H "Ocp-Apim-Subscription-Key: YourSpeechKey" -H "Content-Type: a
"talkingAvatarCharacter": "lisa", "talkingAvatarStyle": "graceful-sitting" }
-}' "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses/my-job-01?api-version=2024-04-15-preview"
+}' "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses/my-job-01?api-version=2024-08-01"
``` You should receive a response body in the following format:
To retrieve the status of a batch synthesis job, make an HTTP GET request using
Replace `YourSynthesisId` with your batch synthesis ID, `YourSpeechKey` with your Speech resource key, and `YourSpeechRegion` with your Speech resource region. ```azurecli-interactive
-curl -v -X GET "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses/YourSynthesisId?api-version=2024-04-15-preview" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+curl -v -X GET "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses/YourSynthesisId?api-version=2024-08-01" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
``` You should receive a response body in the following format:
To list all batch synthesis jobs for your Speech resource, make an HTTP GET requ
Replace `YourSpeechKey` with your Speech resource key and `YourSpeechRegion` with your Speech resource region. Optionally, you can set the `skip` and `top` (page size) query parameters in the URL. The default value for `skip` is 0, and the default value for `maxpagesize` is 100. ```azurecli-interactive
-curl -v -X GET "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses?skip=0&maxpagesize=2&api-version=2024-04-15-preview" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+curl -v -X GET "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses?skip=0&maxpagesize=2&api-version=2024-08-01" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
``` You receive a response body in the following format:
You receive a response body in the following format:
} } ],
- "nextLink": "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses/?api-version=2024-04-15-preview&skip=2&maxpagesize=2"
+ "nextLink": "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses/?api-version=2024-08-01&skip=2&maxpagesize=2"
} ```
After you have retrieved the audio output results and no longer need the batch s
To delete a batch synthesis job, make an HTTP DELETE request using the following URI format. Replace `YourSynthesisId` with your batch synthesis ID, `YourSpeechKey` with your Speech resource key, and `YourSpeechRegion` with your Speech resource region. ```azurecli-interactive
-curl -v -X DELETE "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses/YourSynthesisId?api-version=2024-04-15-preview" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
+curl -v -X DELETE "https://YourSpeechRegion.api.cognitive.microsoft.com/avatar/batchsyntheses/YourSynthesisId?api-version=2024-08-01" -H "Ocp-Apim-Subscription-Key: YourSpeechKey"
``` The response headers include `HTTP/1.1 204 No Content` if the delete request was successful.
ai-services Custom Avatar Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/custom-avatar-create.md
-# How to create a custom text to speech avatar (preview)
-
+# How to create a custom text to speech avatar
Getting started with a custom text to speech avatar is a straightforward process. All it takes is a few video files. If you'd like to train a [custom neural voice](../custom-neural-voice.md) for the same actor, you can do so separately.
ai-services Custom Avatar Record Video Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/custom-avatar-record-video-samples.md
keywords: how to record video samples for custom text to speech avatar
-# How to record video samples for custom text to speech avatar (preview)
-
+# How to record video samples for custom text to speech avatar
This article provides instructions on preparing high-quality video samples for creating a custom text to speech avatar.
ai-services Real Time Synthesis Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/real-time-synthesis-avatar.md
Title: Real-time synthesis for text to speech avatar (preview) - Speech service
+ Title: Real-time synthesis for text to speech avatar - Speech service
description: Learn how to use text to speech avatar with real-time synthesis.
-# How to do real-time synthesis for text to speech avatar (preview)
+# How to do real-time synthesis for text to speech avatar
-
-In this how-to guide, you learn how to use text to speech avatar (preview) with real-time synthesis. The synthetic avatar video will be generated in almost real time after the system receives the text input.
+In this how-to guide, you learn how to use text to speech avatar with real-time synthesis. The synthetic avatar video will be generated in almost real time after the system receives the text input.
## Prerequisites
ai-services What Is Custom Text To Speech Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/what-is-custom-text-to-speech-avatar.md
-# What is custom text to speech avatar? (preview)
-
+# What is custom text to speech avatar?
Custom text to speech avatar allows you to create a customized, one-of-a-kind synthetic talking avatar for your application. With custom text to speech avatar, you can build a unique and natural-looking avatar for your product or brand by providing video recording data of your selected actors. If you also create a [custom neural voice](#custom-voice-and-custom-text-to-speech-avatar) for the same actor and use it as the avatar's voice, the avatar will be even more realistic.
Here's an overview of the steps to create a custom text to speech avatar:
1. **Prepare training data:** Ensure that the video recording is in the right format. It's a good idea to shoot the video recording in a professional-quality video shooting studio to get a clean background image. The quality of the resulting avatar heavily depends on the recorded video used for training. Factors like speaking rate, body posture, facial expression, hand gestures, consistency in the actor's position, and lighting of the video recording are essential to create an engaging custom text to speech avatar.
-1. **Train the avatar model:** We'll start training the custom text to speech model after verifying the consent statement of the avatar talent. In the preview stage of this service, this step will be done manually by Microsoft. You'll be notified after the model is successfully trained.
+1. **Train the avatar model:** We'll start training the custom text to speech model after verifying the consent statement of the avatar talent. This step is currently done manually by Microsoft. You'll be notified after the model is successfully trained.
1. **Deploy and use your avatar model in your APPs**
ai-services What Is Text To Speech Avatar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/text-to-speech-avatar/what-is-text-to-speech-avatar.md
-# Text to speech avatar overview (preview)
-
+# Text to speech avatar overview
Text to speech avatar converts text into a digital video of a photorealistic human (either a prebuilt avatar or a [custom text to speech avatar](#custom-text-to-speech-avatar)) speaking with a natural-sounding voice. The text to speech avatar video can be synthesized asynchronously or in real time. Developers can build applications integrated with text to speech avatar through an API, or use a content creation tool on Speech Studio to create video content without coding.
The voice in the synthetic video could be a prebuilt neural voice available on A
Both batch synthesis and real-time synthesis output at 1920 x 1080 resolution and 25 frames per second (FPS). The batch synthesis codec can be h264 or h265 if the format is mp4, or vp9 if the format is `webm`; only `webm` can contain an alpha channel. The real-time synthesis codec is h264. Video bitrate can be configured for both batch synthesis and real-time synthesis in the request; the default value is 2000000. More detailed configurations can be found in the sample code.
-| | Batch synthesis | Real-Time synthesis |
+| | Batch synthesis | Real-time synthesis |
|||-| | **Resolution** | 1920 x 1080 | 1920 x 1080 | | **FPS** | 25 | 25 |
ai-studio Fine Tuning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/fine-tuning-overview.md
There isn't a single right answer to this question, but you should have clearly
Now that you know when to leverage fine-tuning for your use-case, you can go to Azure AI Studio to find several models available to fine-tune including: - Azure OpenAI models
+- Phi-3 family of models
- Meta Llama 2 family models - Meta Llama 3.1 family of models
The following Azure OpenAI models are supported in Azure AI Studio for fine-tuni
Please note for fine-tuning Azure OpenAI models, you must add a connection to an Azure OpenAI resource with a supported region to your project.
+### Phi-3 family models
+The following Phi-3 family models are supported in Azure AI Studio for fine-tuning:
+- `Phi-3-mini-4k-instruct`
+- `Phi-3-mini-128k-instruct`
+- `Phi-3-medium-4k-instruct`
+- `Phi-3-medium-128k-instruct`
+
+Fine-tuning of Phi-3 models is currently supported in projects located in East US2.
+ ### Llama 2 family models The following Llama 2 family models are supported in Azure AI Studio for fine-tuning: - `Meta-Llama-2-70b` - `Meta-Llama-2-7b` - `Meta-Llama-2-13b`
-Fine-tuning of Llama 2 models is currently supported in projects located in West US 3.
+Fine-tuning of Llama 2 models is currently supported in projects located in West US3.
### Llama 3.1 family models The following Llama 3.1 family models are supported in Azure AI Studio for fine-tuning: - `Meta-Llama-3.1-70b-Instruct`-- `Meta-Llama-3.1-7b-Instruct`
+- `Meta-Llama-3.1-8b-Instruct`
-Fine-tuning of Llama 3.1 models is currently supported in projects located in West US 3.
+Fine-tuning of Llama 3.1 models is currently supported in projects located in West US3.
## Related content - [Learn how to fine-tune an Azure OpenAI model in Azure AI Studio](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context) - [Learn how to fine-tune a Llama 2 model in Azure AI Studio](../how-to/fine-tune-model-llama.md)
+- [Learn how to fine-tune a Phi-3 model in Azure AI Studio](../how-to/fine-tune-phi-3.md)
+- [How to deploy Phi-3 family of small language models with Azure AI Studio](../how-to/deploy-models-phi-3.md)
ai-studio Deploy Models Cohere Rerank https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-rerank.md
+
+ Title: How to deploy Cohere Rerank models as serverless APIs
+
+description: Learn to deploy and use Cohere Rerank models with Azure AI Studio.
+++ Last updated : 07/24/2024++++++
+# How to deploy Cohere Rerank models with Azure AI Studio
++
+In this article, you learn about the Cohere Rerank models, how to use Azure AI Studio to deploy them as serverless APIs with pay-as-you-go token-based billing, and how to work with the deployed models.
+
+## Cohere Rerank models
+
+Cohere offers two Rerank models in [Azure AI Studio](https://ai.azure.com). These models are available in the model catalog for deployment as serverless APIs:
+
+* Cohere Rerank 3 - English
+* Cohere Rerank 3 - Multilingual
+
+You can browse the Cohere family of models in the [Model Catalog](model-catalog.md) by filtering on the Cohere collection.
+
+### Cohere Rerank 3 - English
+
+Cohere Rerank English is a reranking model used for semantic search and retrieval-augmented generation (RAG). Rerank enables you to significantly improve search quality by augmenting traditional keyword-based search systems with a semantic-based reranking system that can contextualize the meaning of a user's query beyond keyword relevance. Cohere's Rerank delivers higher quality results than embedding-based search, lexical search, and even hybrid search, and it requires only adding a single line of code into your application.
+
+Use Rerank as a ranker after initial retrieval; that is, after an initial search system finds the top 100 most relevant documents from a larger corpus of documents.
+
+Rerank supports JSON objects as documents where users can specify, at query time, the fields (keys) to use for semantic search. Some other attributes of Rerank include:
+
+* Context window of the model is 4,096 tokens
+* The max query length is 2,048 tokens
+
+Rerank English works well for code retrieval, semi-structured data retrieval, and long context.
+
+### Cohere Rerank 3 - Multilingual
+
+Cohere Rerank Multilingual is a reranking model used for semantic search and retrieval-augmented generation (RAG). Rerank Multilingual supports more than 100 languages and can be used to search within a language (for example, to search with a French query on French documents) and across languages (for example, to search with an English query on Chinese documents). Rerank enables you to significantly improve search quality by augmenting traditional keyword-based search systems with a semantic-based reranking system that can contextualize the meaning of a user's query beyond keyword relevance. Cohere's Rerank delivers higher quality results than embedding-based search, lexical search, and even hybrid search, and it requires only adding a single line of code into your application.
+
+Use Rerank as a ranker after initial retrieval; that is, after an initial search system finds the top 100 most relevant documents from a larger corpus of documents.
+
+Rerank supports JSON objects as documents where users can specify, at query time, the fields (keys) to use for semantic search. Some other attributes of Rerank Multilingual include:
+
+* Context window of the model is 4,096 tokens
+* The max query length is 2,048 tokens
+
+Rerank multilingual performs well on multilingual benchmarks such as Miracl.
+
+## Deploy Cohere Rerank models as serverless APIs
+
+Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+
+You can deploy the previously mentioned Cohere models as a service with pay-as-you-go billing. Cohere offers these models through the Microsoft Azure Marketplace and can change or update the terms of use and pricing of these models.
+
+### Prerequisites
+
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An [AI Studio hub](../how-to/create-azure-ai-resource.md). The serverless API model deployment offering for Cohere Rerank is only available with hubs created in these regions:
+
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md).
+
+- An [Azure AI Studio project](../how-to/create-projects.md).
+- Azure role-based access controls are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
++
+### Create a new deployment
+
+The following steps demonstrate the deployment of Cohere Rerank 3 - English, but you can use the same steps to deploy Cohere Rerank 3 - Multilingual by replacing the model name.
+
+To create a deployment:
+
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Select **Model catalog** from the left sidebar.
+1. Search for *Cohere*.
+1. Select **cohere-rerank-3-english** to open the Model Details page.
+1. Select **Deploy** to open a serverless API deployment window for the model.
+1. Alternatively, you can initiate a deployment by starting from your project in AI Studio.
+
+ 1. From the left sidebar of your project, select **Components** > **Deployments**.
+ 1. Select **+ Create deployment**.
+ 1. Search for and select **Cohere-rerank-3-english** to open the Model Details page.
+ 1. Select **Confirm** to open a serverless API deployment window for the model.
+
+1. Select the project in which you want to deploy your model.
+1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use.
+1. Select the **Pricing and terms** tab to learn about pricing for the selected model.
+1. Select the **Subscribe and Deploy** button. If this is your first time deploying the model in the project, you have to subscribe your project for the particular offering. This step requires that your account has the **Azure AI Developer role** permissions on the resource group, as listed in the prerequisites. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending. Currently, you can have only one deployment for each model within a project.
+1. Once you subscribe the project for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ project don't require subscribing again. If this scenario applies to you, there's a **Continue to deploy** option to select.
+
+1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
+
+1. Select **Deploy**. Wait until the deployment is ready and you're redirected to the Deployments page.
+1. On the Deployments page, select the deployment, and note the endpoint's **Target** URL and the Secret **Key**. For more information on using the APIs, see the [reference](#rerank-api-reference-for-cohere-rerank-models-deployed-as-a-service) section.
+1. You can always find the endpoint's details, URL, and access keys by navigating to your **Project overview** page. Then, from the left sidebar of your project, select **Components** > **Deployments**.
+
+To learn about billing for the Cohere models deployed as a serverless API with pay-as-you-go token-based billing, see [Cost and quota considerations for Cohere models deployed as a service](#cost-and-quota-considerations-for-models-deployed-as-a-service).
+
+### Consume the Cohere Rerank models as a service
+
+Cohere Rerank models deployed as serverless APIs can be consumed using the Rerank API.
+
+1. From your **Project overview** page, go to the left sidebar and select **Components** > **Deployments**.
+
+1. Find and select the deployment you created.
+
+1. Copy the **Target** URL and the **Key** value.
+
+1. Cohere currently exposes `v1/rerank` for inference with the Rerank 3 - English and Rerank 3 - Multilingual models schema. For more information on using the APIs, see the [reference](#rerank-api-reference-for-cohere-rerank-models-deployed-as-a-service) section.
+
+## Rerank API reference for Cohere Rerank models deployed as a service
+
+Cohere Rerank 3 - English and Rerank 3 - Multilingual accept the native Cohere Rerank API on `v1/rerank`. This section contains details about the Cohere Rerank API.
+
+#### v1/rerank request
+
+```json
+ POST /v1/rerank HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
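+
+For illustration, a hedged TypeScript sketch of such a request; `deploymentUri` and `apiKey` are placeholders for the endpoint **Target** URL and **Key** noted during deployment:
+
+```typescript
+// Sketch only: call the v1/rerank route directly with fetch.
+const response = await fetch(`${deploymentUri}/v1/rerank`, {
+  method: "POST",
+  headers: {
+    Authorization: `Bearer ${apiKey}`,
+    "Content-Type": "application/json",
+  },
+  body: JSON.stringify({
+    query: "What is the capital of the United States?",
+    documents: ["Carson City is the capital city of the American state of Nevada."],
+    top_n: 1,
+  }),
+});
+const { results } = await response.json();
+```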
+
+#### v1/rerank request schema
+
+Cohere Rerank 3 - English and Rerank 3 - Multilingual accept the following parameters for a `v1/rerank` API call:
+
+| Property | Type | Default | Description |
+| | | | |
+|`query` |`string` |Required |The search query. |
+|`documents` |`array` |None |A list of document objects or strings to rerank. |
+|`top_n` |`integer` |Length of `documents` |The number of most relevant documents or indices to return. |
+|`return_documents` |`boolean` |`FALSE` |If `FALSE`, returns results without the doc text - the API returns a list of {`index`, `relevance_score`} where index is inferred from the list passed into the request. </br>If `TRUE`, returns results with the doc text passed in - the API returns an ordered list of {`index`, `text`, `relevance_score`} where index + text refers to the list passed into the request. |
+|`max_chunks_per_doc` |`integer` |None |The maximum number of chunks to produce internally from a document.|
+|`rank_fields` |`array of strings` |None |If a JSON object is provided, you can specify which keys you would like to consider for reranking. The model reranks based on the order of the fields passed in (for example, `rank_fields=['title','author','text']` reranks, using the values in `title`, `author`, and `text` in that sequence. If the length of title, author, and text exceeds the context length of the model, the chunking won't reconsider earlier fields).<br> If not provided, the model uses the default text field for ranking. |
+
+#### v1/rerank response schema
+
+Response fields are fully documented on [Cohere's Rerank API reference](https://docs.cohere.com/reference/rerank). The response payload is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `id` | `string` |An identifier for the response. |
+| `results` | `array of objects`|An ordered list of ranked documents, where each document is described by an object that includes `index` and `relevance_score` and, optionally, `text`. |
+| `meta` | `array of objects` | An optional meta object containing a list of warning strings. |
+
+<br>
+
+The `results` object is a dictionary with the following fields:
+
+| Key | Type | Description |
+| | | |
+| `document` | `object` |The document objects or strings that were reranked. |
+| `index` | `integer` |The `index` in the original list of documents to which the ranked document belongs. For example, if the first value in the `results` object has an index value of 3, it means in the list of documents passed in, the document at `index=3` had the highest relevance.|
+| `relevance_score` | `float` |Relevance scores are normalized to be in the range `[0, 1]`. Scores close to one indicate a high relevance to the query, and scores close to zero indicate low relevance. A score of `0.9` _doesn't_ necessarily mean that a document is twice as relevant as another with a score of `0.45`. |
++
+## Examples
+
+#### Request example
+
+```json
+    {
+        "query": "What is the capital of the United States?",
+        "rank_fields": ["Title", "Content"],
+        "documents": [
+            {"Title": "Details about Washington D.C", "Content": "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America."},
+            {"Title": "Facts about Carson City", "Content": "Carson City is the capital city of the American state of Nevada. "},
+            {"Title": "North Dakota", "Content" : "North Dakota is a state in the United States. 672,591 people lived in North Dakota in the year 2010. The capital and seat of government is Bismarck."},
+            {"Title": "Micronesia", "Content" : "Micronesia, officially the Federated States of Micronesia, is an island nation in the Pacific Ocean, northeast of Papua New Guinea. The country is a sovereign state in free association with the United States. The capital city of Federated States of Micronesia is Palikir."}
+        ],
+        "top_n": 4
+    }
+```
+
+#### Response example
+
+```json
+ {
+ "id": "571e6744-3074-457f-8935-08646a3352fb",
+ "results": [
+ {
+ "document": {
+ "Content": "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America.",
+ "Title": "Details about Washington D.C"
+ },
+ "index": 0,
+ "relevance_score": 0.98347044
+ },
+ {
+ "document": {
+ "Content": "Carson City is the capital city of the American state of Nevada. ",
+ "Title": "Facts about Carson City"
+ },
+ "index": 1,
+ "relevance_score": 0.07172112
+ },
+ {
+ "document": {
+ "Content": "Micronesia, officially the Federated States of Micronesia, is an island nation in the Pacific Ocean, northeast of Papua New Guinea. The country is a sovereign state in free association with the United States. The capital city of Federated States of Micronesia is Palikir.",
+ "Title": "Micronesia"
+ },
+ "index": 3,
+ "relevance_score": 0.05281402
+ },
+ {
+ "document": {
+ "Content": "North Dakota is a state in the United States. 672,591 people lived in North Dakota in the year 2010. The capital and seat of government is Bismarck.",
+ "Title": "North Dakota"
+ },
+ "index": 2,
+ "relevance_score": 0.03138043
+ }
+ ]
+ }
+```
+
+#### More inference examples
+
+| Package | Sample Notebook |
+|---|---|
+|CLI using CURL and Python web requests| [cohere-rerank.ipynb](https://aka.ms/samples/cohere-rerank/webrequests)|
+|LangChain|[langchain.ipynb](https://aka.ms/samples/cohere-rerank/langchain)|
+|Cohere SDK|[cohere-sdk.ipynb](https://aka.ms/samples/cohere-rerank/cohere-python-sdk)|
+
+## Cost and quota considerations for models deployed as a service
+
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, deployments are currently limited to one per model per project. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
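+
+If a burst of traffic exceeds these limits, requests can be throttled. The following is a minimal client-side sketch of retrying with exponential backoff; the HTTP 429 status code and the backoff values are assumptions for illustration, not documented service behavior.
+
+```python
+# Minimal sketch, not documented service behavior: retry a request with
+# exponential backoff when the endpoint responds with HTTP 429 (rate limited).
+import time
+
+import requests
+
+def post_with_backoff(url, headers, payload, max_retries=5):
+    delay = 1.0
+    response = None
+    for _ in range(max_retries):
+        response = requests.post(url, headers=headers, json=payload)
+        if response.status_code != 429:
+            break
+        time.sleep(delay)  # wait before retrying
+        delay *= 2  # double the wait on each retry
+    return response
+```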
+
+Cohere models deployed as serverless APIs with pay-as-you-go billing are offered by Cohere through the Azure Marketplace and integrated with Azure AI Studio for use. You can find the Azure Marketplace pricing when deploying the model.
+
+Each time a project subscribes to a given offer from the Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
+
+For more information on how to track costs, see [monitor costs for models offered throughout the Azure Marketplace](./costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace).
+++
+## Related content
+
+- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [Azure AI FAQ article](../faq.yml)
+- [Region availability for models in serverless API endpoints](deploy-models-serverless-availability.md)
ai-studio Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md
Meta Llama 3.1 models - like `Meta Llama 3.1 405B Instruct` - can be deployed as
The following models are available in Azure Marketplace for Llama 3.1 and Llama 3 when deployed as a service with pay-as-you-go:
-* [Meta-Llama-3.1-405B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-405B-base)
-* [Meta-Llama-3.1-70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8B-refresh)
-* [Meta Llama-3.1-8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70B-refresh)
+* [Meta-Llama-3.1-405B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3.1-405B-instruct)
+* [Meta-Llama-3.1-70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3.1-70B-instruct)
+* [Meta Llama-3.1-8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3.1-8B-instruct)
* [Meta-Llama-3-70B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-70b-chat) * [Meta-Llama-3-8B-Instruct (preview)](https://aka.ms/aistudio/landing/meta-llama-3-8b-chat)
ai-studio Fine Tune Phi 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/fine-tune-phi-3.md
+
+ Title: Fine-tune Phi-3 models in Azure AI Studio
+
+description: This article introduces fine-tuning Phi-3 models in Azure AI Studio.
++++ Last updated : 7/16/2024+
+# Fine-tune Phi-3 models in Azure AI Studio
+
+Azure AI Studio lets you tailor large language models to your personal datasets by using a process known as fine-tuning. Fine-tuning provides significant value by enabling customization and optimization for specific tasks and applications. It leads to improved performance, cost efficiency, reduced latency, and tailored outputs.
+
+In this article, you learn how to fine-tune the Phi-3 family of small language models (SLMs) in Azure AI Studio as a service with pay-as-you-go billing.
+
+The Phi-3 family of SLMs is a collection of instruction-tuned generative text models. Phi-3 models are the most capable and cost-effective small language models available, outperforming models of the same size and the next size up across various language, reasoning, coding, and math benchmarks.
+
+## [Phi-3-mini](#tab/phi-3-mini)
+
+Phi-3 Mini is a 3.8B-parameter, lightweight, state-of-the-art open model built upon the datasets used for Phi-2 - synthetic data and filtered websites - with a focus on high-quality, reasoning-dense data. The model belongs to the Phi-3 model family, and the Mini version comes in two variants, 4K and 128K, which denote the context length (in tokens) that each variant can support.
+
+- [Phi-3-mini-4k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-4k-instruct/version/4/registry/azureml)
+- [Phi-3-mini-128k-Instruct](https://ai.azure.com/explore/models/Phi-3-mini-128k-instruct/version/4/registry/azureml)
+
+The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context, and logical reasoning, Phi-3 Mini-4K-Instruct and Phi-3 Mini-128K-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
++
+## [Phi-3-medium](#tab/phi-3-medium)
+Phi-3 Medium is a 14B-parameter, lightweight, state-of-the-art open model. Phi-3-Medium was trained with Phi-3 datasets that include both synthetic data and filtered, publicly available website data, with a focus on high-quality and reasoning-dense properties.
+
+The model belongs to the Phi-3 model family, and the Medium version comes in two variants, 4K and 128K, which denote the context length (in tokens) that each model variant can support.
+
+- Phi-3-medium-4k-Instruct
+- Phi-3-medium-128k-Instruct
+
+The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. When assessed against benchmarks that test common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Medium-4k-Instruct and Phi-3-Medium-128k-Instruct showcased robust, state-of-the-art performance among models with fewer than 13 billion parameters.
++++
+## [Phi-3-mini](#tab/phi-3-mini)
+
+The following models are available in Azure AI Studio for Phi-3 when fine-tuning as a service with pay-as-you-go billing:
+
+- `Phi-3-mini-4k-instruct` (preview)
+- `Phi-3-mini-128k-instruct` (preview)
+
+Fine-tuning of Phi-3 models is currently supported in projects located in East US 2.
+
+## [Phi-3-medium](#tab/phi-3-medium)
+
+The following models are available in Azure AI Studio for Phi-3 when fine-tuning as a service with pay-as-you-go billing:
+
+- `Phi-3-medium-4k-instruct` (preview)
+- `Phi-3-medium-128k-instruct` (preview)
+
+Fine-tuning of Phi-3 models is currently supported in projects located in East US 2.
+++
+### Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An [AI Studio hub](../how-to/create-azure-ai-resource.md).
+
+    > [!IMPORTANT]
+    > For Phi-3 family models, the pay-as-you-go model fine-tune offering is only available with hubs created in the **East US 2** region.
+
+- An [AI Studio project](../how-to/create-projects.md).
+- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group.
+
+ For more information on permissions, see [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+
+### Subscription provider registration
+
+Verify the subscription is registered to the `Microsoft.Network` resource provider.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Select **Subscriptions** from the left menu.
+1. Select the subscription you want to use.
+1. Select **AI project settings** > **Resource providers** from the left menu.
+1. Confirm that **Microsoft.Network** is in the list of resource providers. If it isn't, add it.
++
+### Data preparation
+
+Prepare your training and validation data to fine-tune your model. Your training data and validation data sets consist of input and output examples for how you would like the model to perform.
+
+Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset.
+
+This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses.
+
+Different model types require a different format of training data.
+
+### Chat completion
+
+The training and validation data you use **must** be formatted as a JSON Lines (JSONL) document. For `Phi-3-mini-128k-instruct`, the fine-tuning dataset must be formatted in the conversational format that's used by the Chat completions API.
+
+### Example file format
+
+```json
+ {"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Is Xbox better than PlayStation?"}, {"role": "assistant", "content": "I apologize, but I cannot provide personal opinions. My primary job is to assist you with any issues related to your Xbox device. Do you have any Xbox-related issues that need addressing?"}]}
+ {"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "Can I play PC games on my Xbox?"}, {"role": "assistant", "content": "While Xbox consoles don't natively support PC games, they can play an array of Xbox games, many of which are also available on PC. Can I assist you with finding games compatible with your Xbox device?"}]}
+ {"messages": [{"role": "system", "content": "You are an Xbox customer support agent whose primary goal is to help users with issues they are experiencing with their Xbox devices. You are friendly and concise. You only provide factual answers to queries, and do not provide answers that are not related to Xbox."}, {"role": "user", "content": "I'm having trouble connecting my Xbox to the Wi-Fi."}, {"role": "assistant", "content": "No worries, let's go through the network settings on your Xbox. Can you please tell me what happens when you try to connect it to the Wi-Fi?"}]}
+```
+The supported file type is JSON Lines. Files are uploaded to the default datastore and made available in your project.
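+
+Before you upload the file, you can catch formatting problems by parsing it locally. The following is a minimal sketch; the file name `training_data.jsonl` is a placeholder for your own dataset.
+
+```python
+# Minimal sketch: check that each line of a JSONL training file is valid JSON
+# and contains a "messages" list of role/content pairs.
+import json
+
+with open("training_data.jsonl", encoding="utf-8") as f:
+    for line_number, line in enumerate(f, start=1):
+        line = line.strip()
+        if not line:
+            continue  # skip blank lines
+        record = json.loads(line)  # raises ValueError if the line isn't valid JSON
+        messages = record["messages"]
+        assert isinstance(messages, list) and messages, f"line {line_number}: empty messages"
+        for message in messages:
+            assert message["role"] in {"system", "user", "assistant"}, f"line {line_number}: unexpected role"
+            assert isinstance(message["content"], str), f"line {line_number}: content must be a string"
+
+print("Training file format looks OK.")
+```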
+
+## Fine-tune a Phi-3 model
+
+# [Phi-3-mini](#tab/phi-3-mini)
+
+To fine-tune a Phi-3 model:
+
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+
+1. On the model's **Details** page, select **fine-tune**.
+
+1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **East US 2** region.
+1. On the fine-tune wizard, select the link to **Azure AI Studio Terms** to learn more about the terms of use. You can also select the **Azure AI Studio offer details** tab to learn about pricing for the selected model.
+1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-mini-128k-instruct) from Azure AI Studio. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Studio offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**.
+
+    > [!NOTE]
+    > Subscribing a project to a particular Azure AI Studio offering (in this case, Phi-3-mini-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
+
+1. Once you sign up the project for the particular Azure AI Studio offering, subsequent fine-tuning of the _same_ offering in the _same_ project doesn't require subscribing again. Therefore, you don't need the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**.
+
+1. Enter a name for your fine-tuned model and the optional tags and description.
+1. Select training data to fine-tune your model. See [data preparation](#data-preparation) for more information.
+
+    > [!NOTE]
+    > If your training/validation files are in a credential-less datastore, you need to allow the workspace managed identity access to your datastore in order to proceed with MaaS fine-tuning with credential-less storage. On the **Datastore** page, after selecting **Update authentication**, select the following option:
+
+ ![Use workspace managed identity for data preview and profiling in Azure Machine Learning Studio.](../media/how-to/fine-tune/phi-3/credentials.png)
+
+    Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset. This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses. You can also configure the following parameters:
+    - **Batch size**: The batch size to use for training. When set to -1, the batch size is calculated as 0.2% of the examples in the training set, with a maximum of 256.
+    - **Learning rate multiplier**: The fine-tuning learning rate is the original learning rate used for pretraining multiplied by this multiplier. We recommend experimenting with values between 0.5 and 2. Empirically, we've found that larger learning rates often perform better with larger batch sizes. The value must be between 0.0 and 5.0.
+    - **Epochs**: The number of training epochs. An epoch refers to one full cycle through the data set.
+
+1. Task parameters are an optional step and an advanced option. Tuning hyperparameters is essential for optimizing large language models (LLMs) in real-world applications. It allows for improved performance and efficient resource usage. You can keep the default settings, or advanced users can customize parameters like epochs or learning rate.
+
+1. Review your selections and proceed to train your model.
+
+Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Studio](./deploy-models-phi-3.md).
+
+# [Phi-3-medium](#tab/phi-3-medium)
+
+To fine-tune a Phi-3 model:
+
+1. Sign in to [Azure AI Studio](https://ai.azure.com).
+1. Choose the model you want to fine-tune from the Azure AI Studio [model catalog](https://ai.azure.com/explore/models).
+
+1. On the model's **Details** page, select **fine-tune**.
+
+1. Select the project in which you want to fine-tune your models. To use the pay-as-you-go model fine-tune offering, your workspace must belong to the **East US 2** region.
+1. On the fine-tune wizard, select the link to **Azure AI Studio Terms** to learn more about the terms of use. You can also select the **Azure AI Studio offer details** tab to learn about pricing for the selected model.
+1. If this is your first time fine-tuning the model in the project, you have to subscribe your project for the particular offering (for example, Phi-3-medium-128k-instruct) from Azure AI Studio. This step requires that your account has the Azure subscription permissions and resource group permissions listed in the prerequisites. Each project has its own subscription to the particular Azure AI Studio offering, which allows you to control and monitor spending. Select **Subscribe and fine-tune**.
+
+    > [!NOTE]
+    > Subscribing a project to a particular Azure AI Studio offering (in this case, Phi-3-medium-128k-instruct) requires that your account has **Contributor** or **Owner** access at the subscription level where the project is created. Alternatively, your user account can be assigned a custom role that has the Azure subscription permissions and resource group permissions listed in the [prerequisites](#prerequisites).
+
+1. Once you sign up the project for the particular Azure AI Studio offering, subsequent fine-tuning of the _same_ offering in the _same_ project doesn't require subscribing again. Therefore, you don't need the subscription-level permissions for subsequent fine-tune jobs. If this scenario applies to you, select **Continue to fine-tune**.
+
+1. Enter a name for your fine-tuned model and the optional tags and description.
+1. Select training data to fine-tune your model. See [data preparation](#data-preparation) for more information.
+
+    > [!NOTE]
+    > If your training/validation files are in a credential-less datastore, you need to allow the workspace managed identity access to your datastore in order to proceed with MaaS fine-tuning with credential-less storage. On the **Datastore** page, after selecting **Update authentication**, select the following option:
+
+ ![Use workspace managed identity for data preview and profiling in Azure Machine Learning Studio.](../media/how-to/fine-tune/phi-3/credentials.png)
+
+    Make sure all your training examples follow the expected format for inference. To fine-tune models effectively, ensure a balanced and diverse dataset. This involves maintaining data balance, including various scenarios, and periodically refining training data to align with real-world expectations, ultimately leading to more accurate and balanced model responses. You can also configure the following parameters:
+    - **Batch size**: The batch size to use for training. When set to -1, the batch size is calculated as 0.2% of the examples in the training set, with a maximum of 256.
+    - **Learning rate multiplier**: The fine-tuning learning rate is the original learning rate used for pretraining multiplied by this multiplier. We recommend experimenting with values between 0.5 and 2. Empirically, we've found that larger learning rates often perform better with larger batch sizes. The value must be between 0.0 and 5.0.
+    - **Epochs**: The number of training epochs. An epoch refers to one full cycle through the data set.
+
+1. Task parameters are an optional step and an advanced option. Tuning hyperparameters is essential for optimizing large language models (LLMs) in real-world applications. It allows for improved performance and efficient resource usage. You can keep the default settings, or advanced users can customize parameters like epochs or learning rate.
+
+1. Review your selections and proceed to train your model.
+
+Once your model is fine-tuned, you can deploy the model and can use it in your own application, in the playground, or in prompt flow. For more information, see [How to deploy Phi-3 family of large language models with Azure AI Studio](./deploy-models-phi-3.md).
+++
+## Cleaning up your fine-tuned models
+
+You can delete a fine-tuned model from the fine-tuning model list in [Azure AI Studio](https://ai.azure.com) or from the model details page. On the Fine-tuning page, select the fine-tuned model that you want to delete, and then select **Delete**.
+
+>[!NOTE]
+> You can't delete a custom model if it has an existing deployment. You must first delete your model deployment before you can delete your custom model.
+
+## Cost and quotas
+
+### Cost and quota considerations for Phi-3 models fine-tuned as a service
+
+Phi models fine-tuned as a service are offered by Microsoft and integrated with Azure AI Studio for use. You can find the pricing on the **Pricing and terms** tab of the deployment wizard when [deploying](./deploy-models-phi-3.md) or fine-tuning the models.
++
+## Content filtering
+
+Models deployed as a service with pay-as-you-go are protected by Azure AI Content Safety. When deployed to real-time endpoints, you can opt out of this capability. With Azure AI content safety enabled, both the prompt and completion pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. Learn more about [Azure AI Content Safety](../concepts/content-filtering.md).
++
+## Next steps
+- [What is Azure AI Studio?](../what-is-ai-studio.md)
+- [Learn more about deploying Phi-3 models](./deploy-models-phi-3.md)
+- [Azure AI FAQ article](../faq.yml)
ai-studio Llmops Azure Devops Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/llmops-azure-devops-prompt-flow.md
+
+ Title: LLMOps with prompt flow and Azure DevOps in Azure AI Studio
+
+description: Learn how to set up an LLMOps environment and pipeline in Azure DevOps for a prompt flow project by using Azure AI Studio.
+++++++
+ - cli-v2
+ - sdk-v2
+ - ignite-2024
+ - build-2024
++
+# Streamlining LLMOps with Prompt Flow and Azure DevOps: A Comprehensive Approach
+
+Large Language Operations, or LLMOps, is the cornerstone of efficient prompt engineering and LLM-infused application development and deployment. As the demand for LLM-infused applications continues to soar, organizations find themselves in need of a cohesive and streamlined process to manage their end-to-end lifecycle.
+
+Azure AI Studio allows you to integrate with Azure DevOps to automate the LLM-infused application development lifecycle with prompt flow.
+
+Azure AI Studio Prompt Flow provides a streamlined and structured approach to developing LLM-infused applications. Its well-defined process and lifecycle guide you through building, testing, optimizing, and deploying flows, culminating in the creation of fully functional LLM-infused solutions.
+
+## LLMOps Prompt Flow Features
+
+LLMOps with prompt flow is an "LLMOps template and guidance" to help you build LLM-infused apps using prompt flow. It provides the following features:
+
+**Core Features**:
+- This template can be used for both **Azure AI Studio and Azure Machine Learning**.
+
+- It can be used for both **AZURE and LOCAL execution**.
+
+- It supports all types of flows - **Python class flows, function flows, and YAML flows**.
+
+- It supports **GitHub, Azure DevOps, and Jenkins CI/CD orchestration**.
+
+- It also supports pure **Python-based evaluation** by using the promptflow-evals package.
+
+- It supports INNER-LOOP Experimentation and Evaluation.
+
+- It supports OUTER-LOOP Deployment and Inferencing.
+
+- It supports **Centralized Code Hosting** for multiple flows based on prompt flow, providing a single repository for all your flows. Think of this platform as a single repository where all your prompt flow code resides. It's like a library for your flows, making it easy to find, access, and collaborate on different projects.
+
+- Each flow enjoys its own **Lifecycle Management**, allowing for smooth transitions from local experimentation to production deployment.
+ :::image type="content" source="../media/prompt-flow/llmops/workflow.png" alt-text="Screenshot of workflow." lightbox = "../media/prompt-flow/llmops/workflow.png":::
+
+- It supports **Variant and Hyperparameter Experimentation**, so you can evaluate flow variants with ease. Variants and hyperparameters are like ingredients in a recipe. This platform allows you to experiment with different combinations of variants across multiple nodes in a flow.
+
+- The repo supports deployment of flows to **Azure App Services, Kubernetes, Azure Managed computes** driven through configuration ensuring that your flows can scale as needed. It also generates **Docker images** infused with Flow compute session and your flows for deployment to **any target platform and Operating system** supporting Docker.
+ :::image type="content" source="../media/prompt-flow/llmops/endpoints.png" alt-text="Screenshot of endpoints." lightbox = "../media/prompt-flow/llmops/endpoints.png":::
+
+- Seamlessly implement **A/B Deployment**, enabling you to compare different flow versions effortlessly. As in traditional A/B testing for websites, this platform facilitates A/B deployment for prompt flow. This means you can effortlessly compare different versions of a flow in a real-world setting to determine which performs best.
+ :::image type="content" source="../media/prompt-flow/llmops/a-b-deployments.png" alt-text="Screenshot of deployments." lightbox = "../media/prompt-flow/llmops/a-b-deployments.png":::
+
+- Accommodates **Many-to-many dataset/flow relationships** for each standard and evaluation flow, ensuring versatility in flow test and evaluation. The platform is designed to accommodate multiple datasets for each flow.
+
+- It supports **Conditional Data and Model registration** by creating a new version of the dataset in Azure AI Studio Data Assets and of the flows in the model registry only when there's a change in them, not otherwise.
+
+- Generates **Comprehensive Reporting** for each **variant configuration**, allowing you to make informed decisions. Provides detailed metric collection and variant bulk runs for all runs and experiments, enabling data-driven decisions in CSV as well as HTML files.
+ :::image type="content" source="../media/prompt-flow/llmops/variants.png" alt-text="Screenshot of flow variants report." lightbox = "../media/prompt-flow/llmops/variants.png":::
+
+Other features for customization:
+- Offers **BYOF** (bring-your-own-flows). A **complete platform** for developing multiple use-cases related to LLM-infused applications.
+
+- Offers **configuration-based development**. No need to write extensive boilerplate code.
+
+- Provides execution of both **prompt experimentation and evaluation** locally as well as in the cloud.
+
+- Provides endpoint testing within the pipeline after deployment to check its availability and readiness.
+
+- Provides optional human-in-the-loop validation of prompt metrics before deployment.
+
+LLMOps with prompt flow provides capabilities for both simple and complex LLM-infused apps. It's customizable to the needs of the application.
+
+## LLMOps Stages
+
+The lifecycle comprises four distinct stages:
+
+1. **Initialization:** Clearly define the business objective, gather relevant data samples, establish a basic prompt structure, and craft a flow that enhances its capabilities.
+
+2. **Experimentation:** Apply the flow to sample data, assess the prompt's performance, and refine the flow as needed. Continuously iterate until satisfied with the results.
+
+3. **Evaluation & Refinement:** Benchmark the flow's performance using a larger dataset, evaluate the prompt's effectiveness, and make refinements accordingly. Progress to the next stage if the results meet the desired standards.
+
+4. **Deployment:** Optimize the flow for efficiency and effectiveness, deploy it in a production environment including A/B deployment, monitor its performance, gather user feedback, and use this information to further enhance the flow.
+
+By adhering to this structured methodology, prompt flow empowers you to confidently develop, rigorously test, fine-tune, and deploy flows, leading to the creation of robust and sophisticated AI applications.
+
+The LLMOps prompt flow template formalizes this structured methodology with a code-first approach and helps you build LLM-infused apps with prompt flow, using tools and processes relevant to prompt flow. It offers a range of features including Centralized Code Hosting, Lifecycle Management, Variant and Hyperparameter Experimentation, A/B Deployment, reporting for all runs and experiments, and more.
+
+The repository for this article is available at [LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template)
+
+## LLMOps process Flow
++
+1. The prompt engineer/data scientist opens a feature branch where they work on the specific task or feature. The prompt engineer/data scientist iterates on the flow using prompt flow for Microsoft Visual Studio Code, periodically committing changes and pushing those changes to the feature branch.
+
+2. Once local development and experimentation are completed, the prompt engineer/data scientist opens a pull request from the Feature branch to the Main branch. The pull request (PR) triggers a PR pipeline. This pipeline runs fast quality checks that should include:
+
+ - Execution of experimentation flows
+ - Execution of configured unit tests
+ - Compilation of the codebase
+ - Static code analysis
+
+3. The pipeline can contain a step that requires at least one team member to manually approve the PR before merging. The approver can't be the committer, and they must have prompt flow expertise and familiarity with the project requirements. If the PR isn't approved, the merge is blocked. If the PR is approved, or there's no approval step, the feature branch is merged into the Main branch.
+
+4. The merge to Main triggers the build and release process for the Development environment. Specifically:
+
+    a. The CI pipeline is triggered from the merge to Main. The CI pipeline performs all the steps done in the PR pipeline, and the following steps:
+        1. Experimentation flow
+        2. Evaluation flow
+        3. Registers the flows in the AI Studio Registry when changes are detected
+    b. The CD pipeline is triggered after the completion of the CI pipeline. This flow performs the following steps:
+        1. Deploys the flow from the AI Studio registry to an AI Studio deployment
+        2. Runs integration tests that target the online endpoint
+        3. Runs smoke tests that target the online endpoint
+
+5. An approval process is built into the release promotion process: upon approval, the CI & CD processes described in steps 4.a. & 4.b. are repeated, targeting the Test environment. Steps 4.a. and 4.b. are the same, except that user acceptance tests are run after the smoke tests in the Test environment.
+
+6. The CI & CD processes described in steps 4.a. & 4.b. are run to promote the release to the Production environment after the Test environment is verified and approved.
+
+7. After release into a live environment, the operational tasks of monitoring performance metrics and evaluating the deployed language models take place. This includes but isn't limited to:
+ - Detecting data drifts
+ - Observing the infrastructure
+ - Managing costs
+ - Communicating the model's performance to stakeholders
+
+From here on, you can learn **LLMOps with prompt flow** by following the end-to-end samples we provided, which help you build LLM-infused applications using prompt flow and Azure DevOps. Its primary objective is to provide assistance in the development of such applications, leveraging the capabilities of prompt flow and LLMOps.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [Azure AI Studio](https://azure.microsoft.com/free/).
+- An Azure AI Studio Hub and Project.
+- Git running on your local machine.
+- An [organization](/azure/devops/organizations/accounts/create-organization) in Azure DevOps. An organization in Azure DevOps helps you collaborate, plan and track your work items, code defects, and issues, and set up continuous integration and deployment.
+- GitHub as the source control repository.
++
+> [!NOTE]
+>
+>Git version 2.27 or newer is required. For more information on installing the Git command, see https://git-scm.com/downloads and select your operating system
+
+> [!IMPORTANT]
+>The CLI commands in this article were tested using Bash. If you use a different shell, you may encounter errors.
++
+## Set up prompt flow
+
+Prompt flow uses a connections resource to connect to endpoints like Azure OpenAI, OpenAI, or Azure AI Search, and uses a compute session for the execution of the flows. These resources should be created before executing the flows in prompt flow.
+
+### Set up connections for prompt flow
+
+Connections can be created through **prompt flow portal UI** or using the **REST API**. Please follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md#setup-connections-for-prompt-flow) to create connections for prompt flow.
+
+> [!NOTE]
+>
+> The sample flows use 'aoai' connection and connection named 'aoai' should be created to execute them.
++
+## Set up Azure Service Principal
+
+An **Azure Service Principal** is a security identity that applications, services, and automation tools use to access Azure resources. It represents an application or service that needs to authenticate with Azure and access resources on your behalf. Please follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md#create-azure-service-principal) to create Service Principal in Azure.
+
+This Service Principal is later used to configure Azure DevOps Service connection and Azure DevOps to authenticate and connect to Azure Services. The jobs executed in Prompt Flow for both `experiment and evaluation runs` are under the identity of this Service Principal.
+
+> [!TIP]
+>
+>The setup provides `owner` permissions to the Service Principal.
+> * This is because the CD pipeline automatically grants the newly provisioned Azure AI Studio endpoint access to Azure AI Studio for reading connection information.
+> * It also adds the Service Principal to the Azure AI Studio associated key vault policy with `get` and `list` secret permissions.
+>
+>The owner permission can be changed to `contributor`-level permissions by changing the pipeline YAML code and removing the step related to permissions.
++
+## Set up Azure DevOps
+
+There are multiple steps for setting up the LLMOps process by using Azure DevOps.
+
+### Create new Azure DevOps project
+
+Please follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md#create-new-azure-devops-project) to create a new Azure DevOps project using **Azure DevOps UI**.
+
+### Set up authentication between Azure DevOps and Azure
+
+Please follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md#set-up-authentication-with-azure-and-azure-devops) to use the earlier created [Service Principal](#set-up-azure-service-principal) and set up authentication between Azure DevOps and Azure Services.
+
+This step configures a new Azure DevOps Service Connection that stores the Service Principal information. The pipelines in the project can read the connection information using the connection name. This helps to configure Azure DevOps pipeline steps to connect to Azure automatically.
++
+### Create an Azure DevOps Variable Group
+
+Please follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md#create-an-azure-devops-variable-group) to create a new Variable group and add a variable related to the Azure DevOps Service Connection.
+
+The Service Principal name is automatically available as an environment variable to the pipelines.
++
+### Configure Azure DevOps repository and pipelines
+
+This repo uses two branches - `main` and `development` - for code promotion and execution of pipelines in response to changes to the code in them. Please follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md#configure-azure-devops-local-and-remote-repository) to set up your own local and remote repository to use code from this repository.
+
+The steps involve cloning both the `main` and `development` branches from the repository and associating the code with the new Azure DevOps repository. Apart from code migration, both the PR and dev pipelines are configured so that they're executed automatically based on PR creation and merge triggers.
+
+The branch policy for the development branch should also be configured to execute the PR pipeline for any PR raised against the development branch from a feature branch. The 'dev' pipeline is executed when the PR is merged to the development branch. The 'dev' pipeline consists of both CI and CD phases.
+
+A **human-in-the-loop** step is also implemented within the pipelines. After the CI phase in the `dev` pipeline is executed, the CD phase follows after manual approval. The approval should happen from the Azure DevOps pipeline build execution UI. The default time-out is 60 minutes, after which the pipeline is rejected and the CD phase doesn't execute. Manually approving the execution leads to execution of the CD steps of the pipeline. The manual approval is configured to send notifications to 'replace@youremail.com'; replace it with an appropriate email ID.
+
+## Test the pipelines
+
+Please follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md#test-the-pipelines) to test the pipelines.
+
+The steps are:
+
+1. Raise a PR (pull request) from a feature branch to the development branch.
+2. The PR pipeline should execute automatically as a result of the branch policy configuration.
+3. The PR is then merged to the development branch.
+4. The associated 'dev' pipeline is executed. This results in full CI and CD execution and in provisioning or updating the existing Azure AI Studio deployment.
+
+The test outputs should be similar to the ones shown [here](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md#example-prompt-run-evaluation-and-deployment-scenario).
++
+## Local execution
+
+To harness the capabilities of the **local execution**, follow these installation steps:
+
+1. **Clone the Repository**: Begin by cloning the template's repository from its [GitHub repository](https://github.com/microsoft/llmops-promptflow-template.git).
+
+```bash
+git clone https://github.com/microsoft/llmops-promptflow-template.git
+```
+
+2. **Set up env file**: Create a `.env` file at the top folder level and provide values for the items mentioned. Some samples are shown next.
+
+```bash
+
+SUBSCRIPTION_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+AZURE_OPENAI_API_KEY=xxxxxxxxxxxxx
+AZURE_OPENAI_ENDPOINT=https://xxxxxxx
+MODEL_CONFIG_AZURE_ENDPOINT=https://xxxxxxx
+MODEL_CONFIG_API_KEY=xxxxxxxxxxx
+MAX_TOTAL_TOKEN=4096
+AOAI_API_KEY=xxxxxxxxxx
+AOAI_API_BASE=https://xxxxxxxxx
+```
+3. Prepare the local conda or virtual environment to install the dependencies.
+
+```bash
+
+python -m pip install -r ./.github/requirements/execute_job_requirements.txt
+
+```
+
+4. Change the value of `EXECUTION_TYPE` to `LOCAL` in the `config.py` file located within the `llmops/` directory. Then run the flows locally by using the `prompt_pipeline` module, as shown in the second code block that follows.
+
+```python
+
+EXECUTION_TYPE = "LOCAL"
+
+```
+
+```bash
+
+python -m llmops.common.prompt_pipeline --subscription_id xxxx --base_path math_coding --env_name dev --output_file run_id.txt --build_id 100
+
+```
+
+Evaluations can be run using the `prompt_eval.py` python script locally.
+
+```bash
+python -m llmops.common.prompt_eval --run_id run_id.txt --subscription_id xxxxx --base_path math_coding --env_name dev --build_id 100
+```
+
+5. Bring or write your flows into the template based on documentation [here](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/how_to_onboard_new_flows.md).
+
+## Next steps
+* [LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/) on GitHub
+* [LLMOps with Prompt flow template documentation](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md) on GitHub
+* [FAQS for LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/faqs.md)
+* [Prompt flow open source repository](https://github.com/microsoft/promptflow)
+* [Install and set up Python SDK v2](/python/api/overview/azure/ai-ml-readme)
ai-studio Llmops Github Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/llmops-github-prompt-flow.md
+
+ Title: LLMOps with prompt flow and GitHub in Azure AI Studio
+
+description: Learn how to set up an LLMOps environment and workflows in GitHub for a prompt flow project by using Azure AI Studio.
+++++++
+ - cli-v2
+ - sdk-v2
+ - ignite-2024
+ - build-2024
++
+# Elevating LLMOps with Prompt Flow and GitHub: A Unified Strategy for AI Workflows
+
+Large Language Operations, or LLMOps, is the cornerstone of efficient prompt engineering and LLM-infused application development and deployment. As the demand for LLM-infused applications continues to soar, organizations find themselves in need of a cohesive and streamlined process to manage their end-to-end lifecycle.
+
+Azure AI Studio allows you to integrate with GitHub to automate the LLM-infused application development lifecycle with prompt flow.
+
+Azure AI Studio Prompt Flow provides a streamlined and structured approach to developing LLM-infused applications. Its well-defined process and lifecycle guide you through building, testing, optimizing, and deploying flows, culminating in the creation of fully functional LLM-infused solutions.
+
+## LLMOps Prompt Flow Features
+
+LLMOps with prompt flow is an "LLMOps template and guidance" to help you build LLM-infused apps using prompt flow. It provides the following features:
+
+- This template can be used for both **Azure AI Studio and Azure Machine Learning**.
+
+- It can be used for both **AZURE and LOCAL execution**.
+
+- It supports all types of flows - **Python class flows, function flows, and YAML flows**.
+
+- It supports **GitHub, Azure DevOps, and Jenkins CI/CD orchestration**.
+
+- It also supports pure **Python-based evaluation** by using the promptflow-evals package.
+
+- It supports INNER-LOOP Experimentation and Evaluation.
+
+- It supports OUTER-LOOP Deployment and Inferencing.
+
+- It supports **Centralized Code Hosting** for multiple flows based on prompt flow, providing a single repository for all your flows. Think of this platform as a single repository where all your prompt flow code resides. It's like a library for your flows, making it easy to find, access, and collaborate on different projects.
+
+- Each flow enjoys its own **Lifecycle Management**, allowing for smooth transitions from local experimentation to production deployment.
+ :::image type="content" source="../media/prompt-flow/llmops/workflow.png" alt-text="Screenshot of workflow." lightbox = "../media/prompt-flow/llmops/workflow.png":::
+
+- It supports **Variant and Hyperparameter Experimentation**, so you can evaluate flow variants with ease. Variants and hyperparameters are like ingredients in a recipe. This platform allows you to experiment with different combinations of variants across multiple nodes in a flow.
+
+- The repo supports deployment of flows to **Azure App Services, Kubernetes, Azure Managed computes** driven through configuration ensuring that your flows can scale as needed. It also generates **Docker images** infused with Flow compute session and your flows for deployment to **any target platform and Operating system** supporting Docker.
+ :::image type="content" source="../media/prompt-flow/llmops/endpoints.png" alt-text="Screenshot of endpoints." lightbox = "../media/prompt-flow/llmops/endpoints.png":::
+
+- Seamlessly implement **A/B Deployment**, enabling you to compare different flow versions effortlessly. As in traditional A/B testing for websites, this platform facilitates A/B deployment for prompt flow. This means you can effortlessly compare different versions of a flow in a real-world setting to determine which performs best.
+ :::image type="content" source="../media/prompt-flow/llmops/a-b-deployments.png" alt-text="Screenshot of deployments." lightbox = "../media/prompt-flow/llmops/a-b-deployments.png":::
+
+- Accommodates **Many-to-many dataset/flow relationships** for each standard and evaluation flow, ensuring versatility in flow test and evaluation. The platform is designed to accommodate multiple datasets for each flow.
+
+- It supports **Conditional Data and Model registration** by creating a new version of the dataset in Azure AI Studio Data Assets and of the flows in the model registry only when there's a change in them, not otherwise.
+
+- Generates **Comprehensive Reporting** for each **variant configuration**, allowing you to make informed decisions. Provides detailed metric collection and variant bulk runs for all runs and experiments, enabling data-driven decisions in CSV as well as HTML files.
+ :::image type="content" source="../media/prompt-flow/llmops/variants.png" alt-text="Screenshot of flow variants report." lightbox = "../media/prompt-flow/llmops/variants.png":::
+
+Other features for customization:
+- Offers **BYOF** (bring-your-own-flows). A **complete platform** for developing multiple use-cases related to LLM-infused applications.
+
+- Offers **configuration-based development**. No need to write extensive boilerplate code.
+
+- Provides execution of both **prompt experimentation and evaluation** locally as well as in the cloud.
+
+- Provides endpoint testing within the workflow after deployment to check its availability and readiness.
+
+- Provides optional human-in-the-loop validation of prompt metrics before deployment.
+
+LLMOps with prompt flow provides capabilities for both simple and complex LLM-infused apps. It's customizable to the needs of the application.
+
+## LLMOps Stages
+
+The lifecycle comprises four distinct stages:
+
+1. **Initialization:** Clearly define the business objective, gather relevant data samples, establish a basic prompt structure, and craft a flow that enhances its capabilities.
+
+2. **Experimentation:** Apply the flow to sample data, assess the prompt's performance, and refine the flow as needed. Continuously iterate until satisfied with the results.
+
+3. **Evaluation & Refinement:** Benchmark the flow's performance using a larger dataset, evaluate the prompt's effectiveness, and make refinements accordingly. Progress to the next stage if the results meet the desired standards.
+
+4. **Deployment:** Optimize the flow for efficiency and effectiveness, deploy it in a production environment including A/B deployment, monitor its performance, gather user feedback, and use this information to further enhance the flow.
+
+By adhering to this structured methodology, Prompt Flow empowers you to confidently develop, rigorously test, fine-tune, and deploy flows, leading to the creation of robust and sophisticated AI applications.
+
+The LLMOps Prompt Flow template formalizes this structured methodology with a code-first approach and helps you build LLM-infused apps with Prompt Flow, using tools and processes relevant to Prompt Flow. It offers a range of features including Centralized Code Hosting, Lifecycle Management, Variant and Hyperparameter Experimentation, A/B Deployment, reporting for all runs and experiments, and more.
+
+The repository for this article is available at [LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template).
+
+## LLMOps process Flow
++
+1. The prompt engineer/data scientist opens a feature branch where they work on the specific task or feature. The prompt engineer/data scientist iterates on the flow using prompt flow for Microsoft Visual Studio Code, periodically committing changes and pushing those changes to the feature branch.
+
+2. Once local development and experimentation are completed, the prompt engineer/data scientist opens a pull request from the Feature branch to the Main branch. The pull request (PR) triggers a PR workflow. This workflow runs fast quality checks that should include:
+
+ 1. Execution of experimentation flows
+ 2. Execution of configured unit tests
+ 3. Compilation of the codebase
+ 4. Static code analysis
+
+3. The workflow can contain a step that requires at least one team member to manually approve the PR before merging. The approver can't be the committer, and they must have prompt flow expertise and familiarity with the project requirements. If the PR isn't approved, the merge is blocked. If the PR is approved, or there's no approval step, the feature branch is merged into the Main branch.
+
+4. The merge to Main triggers the build and release process for the Development environment. Specifically:
+
+    a. The CI workflow is triggered from the merge to Main. The CI workflow performs all the steps done in the PR workflow, and the following steps:
+        1. Experimentation flow
+        2. Evaluation flow
+        3. Registers the flows in the AI Studio Registry when changes are detected
+    b. The CD workflow is triggered after the completion of the CI workflow. This flow performs the following steps:
+        1. Deploys the flow from the AI Studio registry to an AI Studio deployment
+        2. Runs integration tests that target the online endpoint
+        3. Runs smoke tests that target the online endpoint
+
+5. An approval process is built into the release promotion process: upon approval, the CI & CD processes described in steps 4.a. & 4.b. are repeated, targeting the Test environment. Steps a. and b. are the same, except that user acceptance tests are run after the smoke tests in the Test environment.
+
+6. The CI & CD processes described in steps 4.a. & 4.b. are run to promote the release to the Production environment after the Test environment is verified and approved.
+
+7. After release into a live environment, the operational tasks of monitoring performance metrics and evaluating the deployed language models take place. This includes but isn't limited to:
+ - Detecting data drifts
+ - Observing the infrastructure
+ - Managing costs
+ - Communicating the model's performance to stakeholders
+
+From here on, you can learn **LLMOps with prompt flow** by following the end-to-end samples we provided, which help you build LLM-infused applications using prompt flow and GitHub. Its primary objective is to provide assistance in the development of such applications, using the capabilities of prompt flow and LLMOps.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [Azure AI Studio](https://azure.microsoft.com/free/).
+- An Azure AI Studio Hub and Project.
+- Git running on your local machine.
+- GitHub as the source control repository.
++
+> [!NOTE]
+>
+>Git version 2.27 or newer is required. For more information on installing the Git command, see https://git-scm.com/downloads and select your operating system
+
+> [!IMPORTANT]
+>The CLI commands in this article were tested using Bash. If you use a different shell, you may encounter errors.
++
+## Set up Prompt Flow
+
+Prompt Flow uses a connections resource to connect to endpoints like Azure OpenAI, OpenAI, or Azure AI Search, and uses a compute session for the execution of the flows. These resources should be created before executing the flows in Prompt Flow.
+
+### Set up connections for prompt flow
+
+Connections can be created through **prompt flow portal UI** or using the **REST API**. Follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/Azure_devops_how_to_setup.md#setup-connections-for-prompt-flow) to create connections for prompt flow.
++
+> [!NOTE]
+>
+> The sample flows use 'aoai' connection and connection named 'aoai' should be created to execute them.
++
+## Set up GitHub Repository
+
+There are multiple steps for setting up the LLMOps process by using a GitHub repository.
+
+### Fork and configure the repo
+
+Follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/github_workflows_how_to_setup.md#set-up-github-repo) to create a forked repo in your GitHub organization. This repo uses two branches - `main` and `development` - for code promotion and execution of workflows in response to changes to the code in them.
+
+### Set up authentication between GitHub and Azure
+
+Follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/github_workflows_how_to_setup.md#set-up-authentication-with-azure-and-github) to use the earlier created Service Principal and set up authentication between GitHub repository and Azure Services.
+
+This step configures a GitHub Secret that stores the Service Principal information. The workflows in the repository can read the connection information using the secret name. This helps to configure GitHub workflow steps to connect to Azure automatically.
++
+### Cloning the repo
+
+Follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/github_workflows_how_to_setup.md#cloning-the-repo) to create a new local repository.
+
+This helps you create a new feature branch from the development branch and incorporate changes.
++
+## Test the workflows
+
+Follow the [guidelines](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/github_workflows_how_to_setup.md#cloning-the-repos) to test the workflows. The steps are:
+
+1. Raise a PR (pull request) from a feature branch to the development branch.
+2. The PR workflow should execute automatically as a result of the branch policy configuration.
+3. The PR is then merged to the development branch.
+4. The associated 'dev' workflow is executed. This results in full CI and CD execution and in provisioning or updating the existing Azure AI Studio deployment.
+
+The test outputs should be similar to the ones shown [here](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/github_workflows_how_to_setup.md#example-prompt-run-evaluation-and-deployment-scenario).
++
+## Local execution
+
+To harness the capabilities of the **local execution**, follow these installation steps:
+
+1. **Clone the Repository**: Begin by cloning the template's repository from its [GitHub repository](https://github.com/microsoft/llmops-promptflow-template.git).
+
+```bash
+git clone https://github.com/microsoft/llmops-promptflow-template.git
+```
+
+2. **Set up env file**: Create a `.env` file at the top folder level and provide values for the items mentioned. Some samples are shown next.
+
+```bash
+
+SUBSCRIPTION_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
+AZURE_OPENAI_API_KEY=xxxxxxxxxxxxx
+AZURE_OPENAI_ENDPOINT=https://xxxxxxx
+MODEL_CONFIG_AZURE_ENDPOINT=https://xxxxxxx
+MODEL_CONFIG_API_KEY=xxxxxxxxxxx
+MAX_TOTAL_TOKEN=4096
+AOAI_API_KEY=xxxxxxxxxx
+AOAI_API_BASE=https://xxxxxxxxx
+```
+3. Prepare the local conda or virtual environment to install the dependencies.
+
+```bash
+
+python -m pip install -r ./.github/requirements/execute_job_requirements.txt
+
+```
+
+4. Change the value of `EXECUTION_TYPE` to `LOCAL` in the `config.py` file located within the `llmops/` directory. Then run the flows locally by using the `prompt_pipeline` module, as shown in the second code block that follows.
+
+```python
+
+EXECUTION_TYPE = "LOCAL"
+
+```
+
+```bash
+
+python -m llmops.common.prompt_pipeline --subscription_id xxxx --base_path math_coding --env_name dev --output_file run_id.txt --build_id 100
+
+```
+
+Evaluations can be run locally by using the `prompt_eval.py` Python script.
+
+```bash
+python -m llmops.common.prompt_eval --run_id run_id.txt --subscription_id xxxxx --base_path math_coding --env_name dev --build_id 100
+```
+
+5. Bring or write your flows into the template based on the documentation [here](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/how_to_onboard_new_flows.md).
+
+## Next steps
+* [LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/) on GitHub
+* [LLMOps with Prompt flow template documentation](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/github_workflows_how_to_setup.md) on GitHub
+* [FAQS for LLMOps with Prompt flow template](https://github.com/microsoft/llmops-promptflow-template/blob/main/docs/faqs.md)
+* [Prompt flow open source repository](https://github.com/microsoft/promptflow)
+* [Install and set up Python SDK v2](/python/api/overview/azure/ai-ml-readme)
ai-studio Model Catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md
Model | Managed compute | Serverless API (pay-as-you-go)
--|--|-- Llama family models | Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat <br> Llama-3-8B-Instruct <br> Llama-3-70B-Instruct <br> Llama-3-8B <br> Llama-3-70B | Llama-3-70B-Instruct <br> Llama-3-8B-Instruct <br> Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large (2402) <br> Mistral-large (2407) <br> Mistral-small <br> Mistral-Nemo
-Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual
+Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual <br> Cohere-rerank-3-english <br> Cohere-rerank-3-multilingual
JAIS | Not available | jais-30b-chat Phi3 family models | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct Nixtla | Not available | TimeGEN-1
Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-de
Mistral Large (2402) <br> Mistral-Large (2407) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available Mistral Nemo | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Japan | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
+Cohere-rerank-3-english <br> Cohere-rerank-3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
TimeGEN-1 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available jais-30b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available Phi-3-mini-4k-instruct <br> Phi-3-mini-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
aks Quick Kubernetes Automatic Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-automatic-deploy.md
To create an AKS Automatic cluster, use the [az aks create][az-aks-create] comma
az aks create \ --resource-group myResourceGroup \ --name myAKSAutomaticCluster \
- --sku automatic \
- --generate-ssh-keys
+ --sku automatic
``` After a few minutes, the command completes and returns JSON-formatted information about the cluster.
aks Troubleshoot Udp Packet Drops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/troubleshoot-udp-packet-drops.md
Title: Diagnose and solve UDP packet drops in Azure Kubernetes Service (AKS) description: Learn how to diagnose and solve UDP packet drops in Azure Kubernetes Service (AKS). Previously updated : 05/09/2024 Last updated : 07/25/2024
Once you apply the new values, you can access your VM to ensure the new values a
Your values should now be set to the values outlined in `linuxosconfig.json`.
+## Revert to original values
+
+If you want to restore the buffer size to its default value of *0.2 MB*, you can update the `linuxosconfig.json` file with the following values:
+
+```json
+{
+ "sysctls": {
+ "netCoreRmemMax": 212992,
+ "netCoreRmemDefault": 212992
+ }
+}
+```
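+
+How the file is applied depends on how you configured your nodes; as a reminder of the general shape, custom node configuration is supplied when a node pool is created. The following is a sketch with placeholder resource names, not a required step:
+
+```bash
+# Sketch: create a node pool that uses the reverted linuxosconfig.json values.
+# Replace the resource group, cluster, and node pool names with your own.
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myAKSCluster \
+    --name revertpool \
+    --linux-os-config ./linuxosconfig.json
+```
+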
+ ## Next steps In this article, you learned how to diagnose and solve UDP packet drops in Azure Kubernetes Service (AKS). For more information on how to troubleshoot issues in AKS, see the [Azure Kubernetes Service troubleshooting documentation](/troubleshoot/azure/azure-kubernetes/welcome-azure-kubernetes).
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
-ms. date: 06/28/2024
Last updated : 06/28/2024 # Customer intent: As an API program manager, I want to automate processes to register and update APIs in my Azure API center.
az apic api register --resource-group myResourceGroup \
* The command sets the API properties such as name and type from values in the definition file. * By default, the command sets the API's **Lifecycle stage** to *design*.
-* It creates a default API version named *1-0-0* and a default definition named according to the specification format (for example, *openapi*).
+* It creates an API version named according to the `version` property in the API definition (or *1-0-0* by default), and an API definition named according to the specification format (for example, *openapi*).
After registering an API, you can update the API's properties by using the [az apic api update](/cli/azure/apic/api#az_apic_api_update), [az apic api version update](/cli/azure/apic/api/version#az_apic_api_version_update), and [az apic api definition update](/cli/azure/apic/api/definition#az_apic_api_definition_update) commands.
To delete individual API versions and definitions, use [az apic api version dele
* See the [Azure CLI reference for Azure API Center](/cli/azure/apic) for a complete command list, including commands to manage [environments](/cli/azure/apic/environment), [deployments](/cli/azure/apic/api/deployment), [metadata schemas](/cli/azure/apic/metadata), and [services](/cli/azure/apic). * [Import APIs to your API center from API Management](import-api-management-apis.md) * [Use the Visual Studio extension for API Center](use-vscode-extension.md) to build and register APIs from Visual Studio Code.
+* [Register APIs in your API center using GitHub Actions](register-apis-github-actions.md)
api-center Register Apis Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/register-apis-github-actions.md
+
+ Title: Register APIs using GitHub Actions - Azure API Center
+description: Learn how to automate the registration of APIs in your API center using a CI/CD workflow based on GitHub Actions.
++ Last updated : 07/24/2024+++
+# Customer intent: As an API developer, I want to automate the registration of APIs in my API center using a CI/CD workflow based on GitHub Actions.
++
+# Register APIs in your API center using GitHub Actions
+
+This article shows how to set up a basic [GitHub Actions](https://docs.github.com/en/actions/learn-github-actions) workflow to register an API in your organization's [API center](overview.md) when an API specification file is added to a GitHub repository.
+
+Using a GitHub Actions workflow to register APIs in your API center provides a consistent and repeatable CI/CD process for every new or updated API. The workflow can be extended to include steps such as adding metadata to the API registration.
+
+The following diagram shows how API registration in your API center can be automated using a GitHub Actions workflow.
++
+1. Set up a GitHub Actions workflow in your repository that triggers when a pull request that adds an API definition file is merged.
+1. Create a branch from the main branch in your GitHub repository.
+1. Add an API definition file, commit the changes, and push them to the new branch.
+1. Create a pull request to merge the new branch into the main branch.
+1. Merge the pull request.
+1. The merge triggers a GitHub Actions workflow that registers the API in your API center.
+
+## Prerequisites
+
+* An API center in your Azure subscription. If you haven't created one already, see [Quickstart: Create your API center](set-up-api-center.md).
+* Permissions to create a service principal in the Microsoft Entra ID tenant
+* A GitHub account and a GitHub repo in which you can configure secrets and GitHub Actions workflows
+* For Azure CLI:
+ [!INCLUDE [include](~/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
+
+ [!INCLUDE [install-apic-extension](includes/install-apic-extension.md)]
+
+ > [!NOTE]
+ > Azure CLI command examples in this article can run in PowerShell or a bash shell. Where needed because of different variable syntax, separate command examples are provided for the two shells.
+
+## Set up a GitHub Actions workflow
+
+In this section, you set up the GitHub Actions workflow for this scenario:
+
+* Create a service principal to use for Azure credentials in the workflow.
+* Add the credentials as a secret in your GitHub repository.
+* Configure a GitHub Actions workflow that triggers when a pull request that adds an API definition file is merged. The workflow YAML file includes a step that uses the Azure CLI to register the API in your API center from the definition file.
+
+### Set up a service principal secret
+
+In the following steps, create a Microsoft Entra ID service principal that the workflow uses as credentials to authenticate with Azure.
+
+> [!NOTE]
+> Configuring a service principal is shown for demonstration purposes. The recommended way to authenticate with Azure for GitHub Actions is with OpenID Connect, an authentication method that uses short-lived tokens. Setting up OpenID Connect with GitHub Actions is more complex but offers hardened security. [Learn more](../app-service/deploy-github-actions.md?tabs=openid%2Caspnetcore#1-generate-deployment-credentials)
+
+Create a service principal using the [az ad sp create-for-rbac](/cli/azure/ad#az-ad-sp-create-for-rbac) command. The following example first uses the [az apic show](/cli/azure/apic#az-apic-show) command to retrieve the resource ID of the API center. The service principal is then created with the Contributor role for the API center.
+
+#### [Bash](#tab/bash)
+
+```azurecli
+#! /bin/bash
+apiCenter=<api-center-name>
+resourceGroup=<resource-group-name>
+spName=<service-principal-name>
+
+apicResourceId=$(az apic show --name $apiCenter --resource-group $resourceGroup --query "id" --output tsv)
+
+az ad sp create-for-rbac --name $spName --role Contributor --scopes $apicResourceId --json-auth
+```
+
+#### [PowerShell](#tab/powershell)
+
+```azurecli
+# PowerShell syntax
+$apiCenter = "<api-center-name>"
+$resourceGroup = "<resource-group-name>"
+$spName = "<service-principal-name>"
+
+$apicResourceId = $(az apic show --name $apiCenter --resource-group $resourceGroup --query "id" --output tsv)
+
+az ad sp create-for-rbac --name $spName --role Contributor --scopes $apicResourceId --json-auth
+```
++
+Copy the JSON output, which should look similar to the following:
+
+```json
+{
+ "clientId": "<GUID>",
+ "clientSecret": "<GUID>",
+ "subscriptionId": "<GUID>",
+ "tenantId": "<GUID>",
+ "activeDirectoryEndpointUrl": "https://login.microsoftonline.com",
+ "resourceManagerEndpointUrl": "https://management.azure.com/",
+ [...other endpoints...]
+}
+```
+
+### Add the service principal as a GitHub secret
+
+1. In [GitHub](https://github.com/), browse your repository. Select **Settings**.
+1. Under **Security**, select **Secrets and variables** > **Actions**
+1. Select **New repository secret**.
+1. Paste the entire JSON output from the Azure CLI command into the secret's value field. Name the secret `AZURE_CREDENTIALS`. Select **Add secret**.
+
+ The secret is listed under **Repository secrets**.
+
+ :::image type="content" source="media/register-apis-github-actions/repository-secrets-github-small.png" alt-text="Screenshot of secrets for Actions in a GitHub repository." lightbox="media/register-apis-github-actions/repository-secrets-github.png":::
++
+When you configure the GitHub workflow file later, you use the secret for the input `creds` of the [Azure/login](https://github.com/marketplace/actions/azure-login) action. For example:
+
+```yaml
+- uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+```
+
+### Add the workflow file to your GitHub repository
+
+A GitHub Actions workflow is represented by a YAML (.yml) definition file. This definition contains the various steps and parameters that make up the workflow. [Learn more](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
+
+The following is a basic workflow file for this example that you can use or modify.
+
+In this example:
+* The workflow is triggered when a pull request that adds a JSON definition in the `APIs` path is closed on the main branch.
+* The location of the definition is extracted from the pull request using a GitHub script, which is authenticated with the default GitHub token.
+* The Azure credentials saved in your repo are used to sign into Azure.
+* The [az apic api register](/cli/azure/apic/api#az-apic-api-register) command registers the API in the API center specified in the environment variables.
+
+To configure the workflow file:
+
+1. Copy and save the file under a name such as `register-api.yml`.
+1. Update the values for the environment variables to match your API center in Azure.
+1. Confirm or update the name of the repository folder (`APIs`) where you'll add the API definition file.
+1. Add this workflow file in the `/.github/workflows/` path in your GitHub repository.
+
+> [!TIP]
+> Using the [Visual Studio Code extension](use-vscode-extension.md) for Azure API Center, you can generate a starting workflow file by running an extension command. In the Command Palette, select **Azure API Center: Register APIs**. Select **CI/CD** > **GitHub**. You can then modify the file for your scenario.
+
+```yml
+name: Register API Definition to Azure API Center
+on:
+ pull_request:
+ types: [closed]
+ branches:
+ - main
+ paths:
+ - "APIs/**/*.json"
+permissions:
+ contents: read
+ pull-requests: read
+env:
+ # set this to your Azure API Center resource group name
+ RESOURCE_GROUP: <YOUR_RESOURCE_GROUP>
+ # set this to your Azure API Center service name
+ SERVICE_NAME: <YOUR_API_CENTER>
+jobs:
+ register:
+ runs-on: ubuntu-latest
+ environment: production
+ steps:
+ - uses: actions/checkout@v2
+
+ - name: Get specification file path in the PR
+ id: get-file-location
+ uses: actions/github-script@v5
+ with:
+ github-token: ${{ secrets.GITHUB_TOKEN }}
+ script: |
+ const pull_number = context.payload.pull_request.number;
+ const owner = context.repo.owner;
+ const repo = context.repo.repo;
+ const files = await github.rest.pulls.listFiles({
+ owner,
+ repo,
+ pull_number
+ });
+ if (files.data.length === 1) {
+ const filename = files.data[0].filename;
+          core.exportVariable('API_FILE_LOCATION', filename);
+          console.log(`API_FILE_LOCATION: ${filename}`);
+ }
+ else {
+ console.log('The PR does not add exactly one specification file.');
+ }
+ - name: Azure login
+ uses: azure/login@v1
+ with:
+ creds: ${{ secrets.AZURE_CREDENTIALS }}
+
+ - name: Register to API Center
+ uses: azure/CLI@v2
+ with:
+ azcliversion: latest
+ inlineScript: |
+ az apic api register -g ${{ env.RESOURCE_GROUP }} -n ${{ env.SERVICE_NAME }} --api-location ${{ env.API_FILE_LOCATION }}
+```
++
+## Add API definition file to the repository
+
+Test the workflow by adding an API definition file to the repository. Follow these high-level steps, which are typical of a development workflow. For details on working with GitHub branches, see the [GitHub documentation](https://docs.github.com/en/github/collaborating-with-pull-requests/getting-started/about-collaborative-development-models).
+
+1. Create a new working branch from the main branch in your repository.
+1. Add the API definition file to the repository in the `APIs` path. For example, `APIs/catfacts-api/07-15-2024.json`.
+1. Commit the changes and push them to the working branch.
+1. Create a pull request to merge the working branch into the main branch.
+1. After review, merge the pull request. The merge triggers the GitHub Actions workflow that registers the API in your API center.
+
+ :::image type="content" source="media/register-apis-github-actions/workflow-action.png" alt-text="Screenshot showing successful workflow run in GitHub.":::
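+
+As a command-line sketch of steps 1 through 4 (file, branch, and title names are illustrative), the sequence might look like the following; the pull request is then reviewed and merged in GitHub:
+
+```bash
+# Illustrative only: add an API definition on a working branch and open a PR.
+git checkout -b add-catfacts-api
+mkdir -p APIs/catfacts-api
+cp <path-to-definition>/catfacts-openapi.json APIs/catfacts-api/07-15-2024.json
+git add APIs/catfacts-api/07-15-2024.json
+git commit -m "Add Cat Facts API definition"
+git push --set-upstream origin add-catfacts-api
+gh pr create --base main --title "Add Cat Facts API" --body "Adds the Cat Facts API definition"
+```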
+
+## Verify the API registration
+
+Verify that the API is registered in your API center.
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API center.
+1. In the left menu, under **Assets**, select **APIs**.
+1. Verify that the newly registered API appears in the list of APIs.
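+
+If you prefer the Azure CLI, a quick alternative check is to list the APIs in the API center and confirm that the new entry appears. Resource names below are placeholders.
+
+```bash
+# Optional check: list registered APIs (requires the Azure CLI apic extension).
+az apic api list -g <resource-group> -n <api-center-name> --query "[].title" -o table
+```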
++
+## Add a new API version
+
+To add a new version to an existing API in your API center, follow the preceding steps, with a slight modification:
+
+1. Change to the same working branch in your repo, or create a new working branch.
+1. Add a new API definition file to the repository in the `APIs` path, in the folder for an existing API. For example, if you previously added a Cat Facts API definition, add a new version such as `APIs/catfacts-api/07-22-2024.json`.
+1. Commit the changes and push them to the working branch.
+1. Create a pull request to merge the working branch into the main branch.
+1. After review, merge the pull request. The merge triggers the GitHub Actions workflow that registers the new API version in your API center.
+1. In the Azure portal, navigate to your API center and confirm that the new version is registered.
+
+## Extend the scenario
+
+You can extend the GitHub Actions workflow to include other steps, such as adding metadata for the API registration. For example:
+
+1. Using the [metadata schema](metadata.md) in your API center, create a metadata JSON file to apply metadata values to your API registration.
+
+ For example, if the metadata schema includes properties such as `approver`, `team`, and `cost center`, a metadata JSON file might look like this:
+
+ ```json
+ {
+ "approver": "diego@contoso.com",
+ "team": "Store API dev team",
+ "costCenter": "12345"
+ }
+ ```
+1. Upload a metadata JSON file in the folder for each API in the repository.
+1. Add a workflow step to apply the metadata to the API registration using the [az apic api update](/cli/azure/apic/api#az-apic-api-update) command. In the following example, the API ID and metadata file are passed in environment variables, which would be set elsewhere in the workflow file.
+
+ ```yml
+ [...]
+ - name: Apply metadata to API in API Center
+ uses: azure/CLI@v2
+ with:
+ azcliversion: latest
+ inlineScript: |
+          az apic api update -g ${{ env.RESOURCE_GROUP }} -n ${{ env.SERVICE_NAME }} --api-id ${{ env.API_ID }} --custom-properties ${{ env.METADATA_FILE }}
+ ```
+
+## Related content
+
+* [Using secrets in GitHub Actions](https://docs.github.com/en/actions/reference/encrypted-secrets)
+* [Workflow syntax for GitHub Actions](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions)
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
> There are two automated migration features available to help you upgrade to App Service Environment v3. To learn more about those features and for help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). Consider one of the automated options for a quicker path to [App Service Environment v3](overview.md). >
-If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using the [the automated migration features](upgrade-to-asev3.md) if your environment meets the criteria described in the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree).
+If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using [the automated migration features](upgrade-to-asev3.md) if your environment meets the criteria described in the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree).
If your App Service Environment isn't supported for the migration features, you must use one of the manual methods to migrate to App Service Environment v3.
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
description: Learn how to migrate your App Service Environment v2 to App Service
Previously updated : 7/24/2024 Last updated : 7/25/2024 # Migration to App Service Environment v3 using the side-by-side migration feature
For ELB App Service Environments, get the public inbound IP address by running t
az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.networkingConfiguration.externalInboundIpAddresses ```
+> [!IMPORTANT]
+> If your migration includes a custom domain suffix, the default host name behavior for App Service Environment v3 is different than for App Service Environment v2. For App Service Environment v3, the default host name always uses the default domain suffix and is in the form *APP-NAME.ASE-NAME.appserviceenvironment.net*. Review all your dependent resources, such as App Gateway, that use the host names of your apps to ensure they're updated to account for this behavior. For more information on App Service Environment feature differences between the different versions, see [App Service Environment version comparison](version-comparison.md).
+>
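+
+One way to confirm which host names an app exposes after migration is to query the app with the Azure CLI. This is an optional check with placeholder names, not part of the migration steps:
+
+```bash
+# Optional: list the default and enabled host names of an app after migration.
+az webapp show --name <app-name> --resource-group <resource-group> \
+    --query "{defaultHostName:defaultHostName, hostNames:enabledHostNames}"
+```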
+ ### 11. Redirect customer traffic, validate your App Service Environment v3, and complete migration This step is your opportunity to test and validate your new App Service Environment v3.
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
Azure Attestation supports both platform- and guest-attestation of AMD SEV-SNP b
Azure Attestation provides comprehensive attestation services for multiple environments and distinctive use cases.
-### SGX enclave attestation
-
-[Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) refers to hardware-grade isolation, which is supported on certain Intel CPU models. SGX enables code to run in sanitized compartments known as SGX enclaves. Access and memory permissions are then managed by hardware to ensure a minimal attack surface with proper isolation.
+### AMD SEV-SNP attestation on Confidential VMs
-Client applications can be designed to take advantage of SGX enclaves by delegating security-sensitive tasks to take place inside those enclaves. Such applications can then make use of Azure Attestation to routinely establish trust in the enclave and its ability to access sensitive data.
+Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (CVM) is based on [AMD processors with SEV-SNP technology](../confidential-computing/virtual-machine-options.md). CVM offers VM OS disk encryption option with platform-managed keys or customer-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, SNP report containing the guest VM firmware measurements will be sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](../key-vault/managed-hsm/overview.md) or [Azure Key Vault](../key-vault/general/basic-concepts.md). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk and start the CVM. The attestation and key release process is performed automatically on each CVM boot, and the process ensures the CVM boots up only upon successful attestation of the hardware.
-Intel® Xeon® Scalable processors only support [ECDSA-based attestation solutions](https://software.intel.com/content/www/us/en/develop/topics/software-guard-extensions/attestation-services.html#Elliptic%20Curve%20Digital%20Signature%20Algorithm%20(ECDSA)%20Attestation) for remotely attesting SGX enclaves. Utilizing ECDSA based attestation model, Azure Attestation supports validation of Intel® Xeon® E3 processors and Intel® Xeon® Scalable processor-based server platforms.
+### AMD SEV-SNP attestation on Confidential Containers
-> [!NOTE]
-> To perform attestation of Intel® Xeon® Scalable processor-based server platforms using Azure Attestation, users are expected to install [Azure DCAP version 1.10.0](https://github.com/microsoft/Azure-DCAP-Client) or higher.
+Azure [Confidential Containers](../confidential-computing/confidential-containers.md) is based on [AMD processors with SEV-SNP technology](../confidential-computing/virtual-machine-options.md). Confidential containers, hosted on [Azure Container Instances](../container-instances/container-instances-confidential-overview.md) and on [Azure Kubernetes Service (in preview)](../aks/deploy-confidential-containers-default-policy.md) offer the ability to run groups of containers in an SEV-SNP protected trusted execution environment which isolates that group of containers from the container management control plane and other running containers. Attestation in confidential containers involves fetching the AMD hardware attestation report directly from the processor. This can be accomplished with our [SKR sidecar container](https://github.com/microsoft/confidential-sidecar-containers/tree/main/cmd/skr) or compiled directly into your application logic. The hardware report can then be exchanged with Azure Attestation and [managed-HSM](../key-vault/managed-hsm/overview.md) or Premium [Azure Key Vault (AKV)](../key-vault/general/basic-concepts.md) to retrieve secrets. You can also provide the hardware report to your own key vault system as desired.
-### Open Enclave attestation
-[Open Enclave](https://openenclave.io/sdk/) (OE) is a collection of libraries targeted at creating a single unified enclaving abstraction for developers to build TEE-based applications. It offers a universal secure app model that minimizes platform specificities. Microsoft views it as an essential stepping-stone toward democratizing hardware-based enclave technologies such as SGX and increasing their uptake on Azure.
+### Trusted Launch attestation
-OE standardizes specific requirements for verification of an enclave evidence. This qualifies OE as a highly fitting attestation consumer of Azure Attestation.
+Azure customers can [prevent bootkit and rootkit infections](https://www.youtube.com/watch?v=CQqu_rTSi0Q) by enabling [trusted launch](../virtual-machines/trusted-launch.md) for their virtual machines (VMs). When the VM is Secure Boot and vTPM enabled with guest attestation extension installed, vTPM measurements get submitted to Azure Attestation periodically for monitoring boot integrity. An attestation failure indicates potential malware, which is surfaced to customers via Microsoft Defender for Cloud, through Alerts and Recommendations.
### TPM attestation
OE standardizes specific requirements for verification of an enclave evidence. T
Client applications can be designed to take advantage of TPM attestation by delegating security-sensitive tasks to only take place after a platform has been validated to be secure. Such applications can then make use of Azure Attestation to routinely establish trust in the platform and its ability to access sensitive data.
-### AMD SEV-SNP attestation
+### SGX enclave attestation
+
+[Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) refers to hardware-grade isolation, which is supported on certain Intel CPU models. SGX enables code to run in sanitized compartments known as SGX enclaves. Access and memory permissions are then managed by hardware to ensure a minimal attack surface with proper isolation.
-Azure [Confidential VM](../confidential-computing/confidential-vm-overview.md) (CVM) is based on [AMD processors with SEV-SNP technology](../confidential-computing/virtual-machine-options.md). CVM offers VM OS disk encryption option with platform-managed keys or customer-managed keys and binds the disk encryption keys to the virtual machine's TPM. When a CVM boots up, SNP report containing the guest VM firmware measurements will be sent to Azure Attestation. The service validates the measurements and issues an attestation token that is used to release keys from [Managed-HSM](../key-vault/managed-hsm/overview.md) or [Azure Key Vault](../key-vault/general/basic-concepts.md). These keys are used to decrypt the vTPM state of the guest VM, unlock the OS disk and start the CVM. The attestation and key release process is performed automatically on each CVM boot, and the process ensures the CVM boots up only upon successful attestation of the hardware.
+Client applications can be designed to take advantage of SGX enclaves by delegating security-sensitive tasks to take place inside those enclaves. Such applications can then make use of Azure Attestation to routinely establish trust in the enclave and its ability to access sensitive data.
-### Trusted Launch attestation
+Intel® Xeon® Scalable processors only support [ECDSA-based attestation solutions](https://software.intel.com/content/www/us/en/develop/topics/software-guard-extensions/attestation-services.html#Elliptic%20Curve%20Digital%20Signature%20Algorithm%20(ECDSA)%20Attestation) for remotely attesting SGX enclaves. Utilizing ECDSA based attestation model, Azure Attestation supports validation of Intel® Xeon® E3 processors and Intel® Xeon® Scalable processor-based server platforms.
-Azure customers can [prevent bootkit and rootkit infections](https://www.youtube.com/watch?v=CQqu_rTSi0Q) by enabling [trusted launch](../virtual-machines/trusted-launch.md) for their virtual machines (VMs). When the VM is Secure Boot and vTPM enabled with guest attestation extension installed, vTPM measurements get submitted to Azure Attestation periodically for monitoring boot integrity. An attestation failure indicates potential malware, which is surfaced to customers via Microsoft Defender for Cloud, through Alerts and Recommendations.
+> [!NOTE]
+> To perform attestation of Intel® Xeon® Scalable processor-based server platforms using Azure Attestation, users are expected to install [Azure DCAP version 1.10.0](https://github.com/microsoft/Azure-DCAP-Client) or higher.
+
+### Open Enclave attestation
+[Open Enclave](https://openenclave.io/sdk/) (OE) is a collection of libraries targeted at creating a single unified enclaving abstraction for developers to build TEE-based applications. It offers a universal secure app model that minimizes platform specificities. Microsoft views it as an essential stepping-stone toward democratizing hardware-based enclave technologies such as SGX and increasing their uptake on Azure.
+
+OE standardizes specific requirements for verification of enclave evidence. This qualifies OE as a highly fitting attestation consumer of Azure Attestation.
## Azure Attestation runs in a TEE
automation Extension Based Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/extension-based-hybrid-runbook-worker.md
This article provides information on troubleshooting and resolving issues with A
To help troubleshoot issues with extension-based Hybrid Runbook Workers: -- Check the OS is supported and the prerequisites have been met. See [Prerequisites](../extension-based-hybrid-runbook-worker-install.md#prerequisites).
+- Check that the OS is supported and that the prerequisites have been met. See [Prerequisites](../extension-based-hybrid-runbook-worker-install.md#prerequisites).
- Check whether the system-assigned managed identity is enabled on the VM. Azure VMs and Arc enabled Azure Machines should be enabled with a system-assigned managed identity. - Check whether the extension is enabled with the right settings. Setting file should have right `AutomationAccountURL`. Cross-check the URL with Automation account property - `AutomationHybridServiceUrl`. 
- - For windows: you can find the settings file at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\RuntimeSettings`.
- - For Linux: you can find the settings file at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux/`.
+ - For Windows, you can find the settings file here:
+ > [!TIP]
+ > Replace `*` in the below path with the specific version that is installed if you know it.
+ ```
+ C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\*\RuntimeSettings
+ ```
+ - For Linux, you can find the settings file here:
+ ```
+ /var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux/
+ ```
- Check the error message shown in the Hybrid worker extension status/Detailed Status. It contains error message(s) and respective recommendation(s) to fix the issue. - Run the troubleshooter tool on the VM and it generates an output file. Open the output file and verify the errors identified by the troubleshooter tool.
- - For windows: you can find the troubleshooter at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin\troubleshooter\TroubleShootWindowsExtension.ps1`
- - For Linux: you can find the troubleshooter at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-<version>/Troubleshooter/LinuxTroubleshooter.py`
+ - For Windows, you can find the troubleshooter here:
+ > [!TIP]
+ > Replace `*` in the below path with the specific version that is installed if you know it.
+ ```
+ C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\*\bin\troubleshooter\TroubleShootWindowsExtension.ps1
+ ```
+ - For Linux, you can find the troubleshooter here:
+ > [!TIP]
+ > Replace `*` in the below path with the specific version that is installed if you know it.
+ ```
+ /var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-*/Troubleshooter/LinuxTroubleshooter.py
+ ```
- For Linux machines, the Hybrid worker extension creates a `hweautomation` user and starts the Hybrid worker under the user. Check whether the user `hweautomation` is set up with the correct permissions. If your runbook is trying to access any local resources, ensure that the `hweautomation` has the correct permissions to the local resources. - Check whether the hybrid worker process is running.
- - For Windows: check the `Hybrid Worker Service` service.
- - For Linux: check the `hwd` service.
+  - For Windows, check the `Hybrid Worker Service` (***HybridWorkerService***) service.
+ - For Linux, check the `hwd` service.
- Collect logs:
- - For Windows: Run the log collector tool in </br>`C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin\troubleshooter\PullLogs.ps1` </br>
- Logs are in `C:\HybridWorkerExtensionLogs`.
- - For Linux: Logs are in folders </br>`/var/log/azure/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux` and `/home/hweautomation`.
+ - For Windows, run the log collector tool located here:
+ > [!TIP]
+ > Replace `*` in the below path with the specific version that is installed if you know it.
+ ```
+ C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\*\bin\troubleshooter\PullLogs.ps1
+ ```
+ Logs will be located here:
+ ```
+ C:\HybridWorkerExtensionLogs
+ ```
+ - For Linux: Logs are in the following folders:
+ ```
+ /var/log/azure/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux
+ ```
+ and
+ ```
+ /home/hweautomation
+ ```
### Unable to update Az modules while using the Hybrid Worker
The Hybrid Runbook Worker jobs failed as it was unable to import Az modules.
As a workaround, you can follow these steps:
-1. Go to the folder: C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\7.3.1722.0\HybridAgent
-1. Edit the file with the name *Orchestrator.Sandbox.exe.config*
-1. Add the following lines inside the `<assemblyBinding>` tags:
+1. Navigate to the folder:
+ > [!TIP]
+ > Replace `*` in the below path with the specific version that is installed if you know it.
+ ```
+ C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\*\HybridAgent
+ ```
+
+1. Edit the file with the name `Orchestrator.Sandbox.exe.config`
+
+1. Add the following lines inside the `<assemblyBinding>` tag:
```xml <dependentAssembly> <assemblyIdentity name="Newtonsoft.Json" publicKeyToken="30ad4fe6b2a6aeed" culture="neutral" />
For Custom user on the Hybrid Runbook Worker, update the permissions in the foll
| Folder |Permissions | | | |
-| C:\ProgramData\AzureConnectedMachineAgent\Tokens | Read |
-| C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows | Read and Execute |
+| `C:\ProgramData\AzureConnectedMachineAgent\Tokens` | Read |
+| `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows` | Read and Execute |
### Scenario: Job failed to start as the Hybrid Worker wasn't available when the scheduled job started #### Issue
-Job fails to start on a Hybrid Worker and you see the following error:
+Job fails to start on a Hybrid Worker, and you see the following error:
*Failed to start, as hybrid worker wasn't available when scheduled job started, the hybrid worker was last active at mm/dd/yyyy*. #### Cause This error can occur due to the following reasons:-- The machines doesn't exist anymore.
+- The machine doesn't exist anymore.
- The machine is turned off and is unreachable. - The machine has a network connectivity issue. - The Hybrid Runbook Worker extension has been uninstalled from the machine.
Sometimes the installation process might get stuck.
### Resolution Follow the steps mentioned below to install Hybrid Worker extension again:
-1. Open PowerShell console
-1. Remove registry entry, if present: *HKLM:/Software/Microsoft/Azure/HybridWorker*
-1. Remove the registry entry, if present: *HKLM:/Software/Microsoft/HybridRunbookWorkerV2*
-1. Go to Hybrid Worker extension installation folder
- Cd "C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>"
-1. Install Hybrid Worker extension: `.\bin\install.ps1`
-1. Enable Hybrid Worker extension: `.\bin\enable.ps1`
+1. Open PowerShell console.
+
+1. **Remove the registry key**, if present: `HKLM:\Software\Microsoft\Azure\HybridWorker`
+
+    1. PowerShell code to remove the registry key along with any subkeys and values under it:
+
+ ```powershell
+ Get-Item HKLM:\Software\Microsoft\Azure\HybridWorker | Remove-Item -Recurse
+ ```
+
+1. **Remove the registry key**, if present: `HKLM:\Software\Microsoft\HybridRunbookWorkerV2`
+
+    1. PowerShell code to remove the registry key along with any subkeys and values under it:
+
+ ```powershell
+ Get-Item HKLM:\Software\Microsoft\HybridRunbookWorkerV2 | Remove-Item -Recurse
+ ```
+1. Navigate to the Hybrid Worker extension installation folder:
+
+ > [!TIP]
+ > Replace `*` in the below command with the specific version that is installed if you know it.
+ ```powershell
+ cd "C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\*"
+ ```
+1. **Install** the Hybrid Worker extension:
+
+ ```powershell
+ .\bin\install.ps1
+ ```
+1. **Enable** the Hybrid Worker extension:
+
+ ```powershell
+ .\bin\enable.ps1
+ ```
### Scenario: Uninstallation process of Hybrid Worker extension on Windows VM gets stuck
You have installed a Hybrid Worker extension on a Windows VM from the portal, bu
Sometimes the uninstallation process might get stuck. #### Resolution
-1. Open PowerShell console
-1. Go to Hybrid Worker extension installation folder
- Cd "C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version\>"
-1. Disable Hybrid Worker extension: `.\bin\disable.cmd`
-1. Uninstall Hybrid Worker extension: `.\bin\uninstall.ps1`
-1. Remove registry entry, if present: *HKLM:/Software/Microsoft/Azure/HybridWorker*
-1. Remove the registry entry, if present: *HKLM:/Software/Microsoft/HybridRunbookWorkerV2*
+1. Open PowerShell console.
+
+1. Navigate to the Hybrid Worker extension installation folder:
+
+ > [!TIP]
+ > Replace `*` in the below command with the specific version that is installed if you know it.
+ ```powershell
+ cd "C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\*"
+ ```
+1. **Disable** the Hybrid Worker extension:
+
+ ```powershell
+ .\bin\disable.cmd
+ ```
+1. **Uninstall** the Hybrid Worker extension:
+
+ ```powershell
+ .\bin\uninstall.ps1
+ ```
+1. **Remove registry key**, if present: `HKLM:\Software\Microsoft\Azure\HybridWorker`
+
+    1. PowerShell code to remove the registry key along with any subkeys and values under it:
+
+ ```powershell
+ Get-Item HKLM:\Software\Microsoft\Azure\HybridWorker | Remove-Item -Recurse
+ ```
+
+1. **Remove the registry key**, if present: `HKLM:\Software\Microsoft\HybridRunbookWorkerV2`
+
+    1. PowerShell code to remove the registry key along with any subkeys and values under it:
+
+ ```powershell
+ Get-Item HKLM:\Software\Microsoft\HybridRunbookWorkerV2 | Remove-Item -Recurse
+ ```
### Scenario: Installation process of Hybrid Worker extension on Linux VM gets stuck
You have installed a Hybrid Worker extension on a Linux VM from the portal, but
Sometimes the installation process might get stuck. #### Resolution
-1. Go to folder: `rm -r /home/hweautomation/state`
-1. Go to Hybrid Worker extension installation folder */var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-\<version\>/*
-1. Go to above folder and run command `rm mrseq`
-1. Install Hybrid Worker Extension: *"installCommand": "./extension_shim.sh -c ./HWExtensionHandlers.py -i"*
-1. Enable Hybrid Worker extension: *"enableCommand": "./extension_shim.sh -c ./HWExtensionHandlers.py -e"*
+1. **Delete** the `state` folder:
+ ```bash
+ rm -r /home/hweautomation/state
+ ```
+1. Navigate to the Hybrid Worker extension installation folder:
+ > [!TIP]
+ > Replace `*` in the below command with the specific version that is installed if you know it.
+ ```bash
+ cd /var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-*/
+ ```
+
+1. **Delete** the mrseq file:
+ ```bash
+ rm mrseq
+ ```
+1. **Install** the Hybrid Worker Extension:
+ ```bash
+    ./extension_shim.sh -c ./HWExtensionHandlers.py -i
+ ```
+
+1. **Enable** the Hybrid Worker extension:
+ ```bash
+    ./extension_shim.sh -c ./HWExtensionHandlers.py -e
+ ```
### Scenario: Uninstallation process of Hybrid Worker extension on Linux VM gets stuck
Sometimes the uninstallation process might get stuck.
#### Resolution Follow the steps mentioned below to completely uninstall Hybrid Worker extension:
-1. Go to Hybrid Worker Extension installation folder:
- */var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-\<version\>/*
-1. Disable the extension: `"disableCommand": "./extension_shim.sh -c ./HWExtensionHandlers.py -d" `
-1. Uninstall the extension: `"uninstallCommand": "./extension_shim.sh -c ./HWExtensionHandlers.py -u"`
+1. Navigate to the Hybrid Worker Extension installation folder:
+ > [!TIP]
+ > Replace `*` in the below command with the specific version that is installed if you know it.
+ ```bash
+ cd /var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-*/
+ ```
+1. **Disable** the Hybrid Worker extension:
+ ```bash
+    ./extension_shim.sh -c ./HWExtensionHandlers.py -d
+ ```
+1. **Uninstall** the Hybrid Worker extension:
+ ```bash
+    ./extension_shim.sh -c ./HWExtensionHandlers.py -u
+ ```
### Scenario: Runbook execution fails
Check the **Microsoft-SMA** event log for a corresponding event with the descrip
A runbook running on a Hybrid Runbook Worker fails with the following error message:
-`Connect-AzAccount : No certificate was found in the certificate store with thumbprint 0000000000000000000000000000000000000000`
-`At line:3 char:1`
-`+ Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -Appl ...`
-`+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~`
-` + CategoryInfo : CloseError: (:) [Connect-AzAccount],ArgumentException`
-` + FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.ConnectAzAccountCommand`
+```
+Connect-AzAccount : No certificate was found in the certificate store with thumbprint 0000000000000000000000000000000000000000
+At line:3 char:1
++ Connect-AzAccount -ServicePrincipal -Tenant $Conn.TenantID -Appl ...
++ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ + CategoryInfo : CloseError: (:) [Connect-AzAccount],ArgumentException
+ + FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.ConnectAzAccountCommand
+```
#### Cause
As a workaround, you can create a configuration file named `OrchestratorSandbox.
</configuration> ```
-Place this file in the same folder as the executable file `OrchestratorSandbox.exe`. For example,
-
-`%ProgramFiles%\Microsoft Monitoring Agent\Agent\AzureAutomation\7.3.702.0\HybridAgent`
+Place this file in the same folder as the executable file `OrchestratorSandbox.exe`. For example:
+> [!TIP]
+> Replace `*` in the below path with the specific version that is installed if you know it.
+```
+%ProgramFiles%\Microsoft Monitoring Agent\Agent\AzureAutomation\*\HybridAgent
+```
### Scenario: Microsoft Azure VMs automatically dropped from a hybrid worker group
azure-arc Ssh Arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/ssh-arc-overview.md
Authenticating with Microsoft Entra credentials has additional requirements:
> The Virtual Machine Administrator Login and Virtual Machine User Login roles use `dataActions` and can be assigned at the management group, subscription, resource group, or resource scope. We recommend that you assign the roles at the management group, subscription, or resource level and not at the individual VM level. This practice avoids the risk of reaching the [Azure role assignments limit](../../role-based-access-control/troubleshoot-limits.md) per subscription. ### Availability
-SSH access to Arc-enabled servers is currently supported in all regions supported by Arc-Enabled Servers with the following exceptions:
+SSH access to Arc-enabled servers is currently supported in all regions supported by Arc-enabled servers.
## Getting started
azure-arc Recover From Resource Bridge Deletion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/recover-from-resource-bridge-deletion.md
In this article, you learn how to recover the Azure Arc resource bridge connection into a working state in disaster scenarios such as accidental deletion. In such cases, the connection between on-premises infrastructure and Azure is lost and any operations performed through Arc fail.
+## Prerequisites
+
+1. The disaster recovery script must be run from the folder that contains the config (.yaml) files. These config files are on the machine that was used to run the script that deployed the Arc resource bridge.
+
+1. The machine being used to run the script must have bidirectional connectivity to the Arc resource bridge VM on port 6443 (Kubernetes API server) and 22 (SSH), and outbound connectivity to the Arc resource bridge VM on port 443 (HTTPS).
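+
+As an optional sanity check before running the script (not part of the official prerequisites), you can probe the required ports from the management machine, for example with `nc` on Linux or macOS. The IP address is a placeholder for your Arc resource bridge VM:
+
+```bash
+# Optional connectivity probe of the Arc resource bridge VM from the management machine.
+for port in 22 443 6443; do
+    nc -vz <resource-bridge-vm-ip> "$port"
+done
+```
+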
+ ## Recovering the Arc resource bridge if there is VM deletion To recover from Arc resource bridge VM deletion, you need to deploy a new resource bridge with the same resource ID as the current resource bridge using the following steps.
azure-functions Flex Consumption How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/flex-consumption-how-to.md
You can use Maven to create a Flex Consumption hosted function app and required
You can enable [virtual network integration](functions-networking-options.md#virtual-network-integration) for your app in a Flex Consumption plan. The examples in this section assume that you already have [created a virtual network with subnet](../virtual-network/quick-create-cli.md#create-a-virtual-network-and-subnet) in your account. You can enable virtual network integration when you create your app or at a later time.
+> [!IMPORTANT]
+> The Flex Consumption plan currently doesn't support subnets with names that contain underscore (`_`) characters.
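+
+As an optional check (names are placeholders), you can list the subnet names in your virtual network and confirm that the subnet you plan to use doesn't contain an underscore:
+
+```bash
+# Optional: list subnet names in the target virtual network.
+az network vnet subnet list --resource-group <resource-group> \
+    --vnet-name <vnet-name> --query "[].name" --output table
+```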
+ To enable virtual networking when you create your app: ### [Azure CLI](#tab/azure-cli)
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
Azure Functions supports C# and C# script programming languages. If you're looki
### Updating to target .NET 8 > [!NOTE]
-> Targeting .NET 8 with the in-process model is not yet enabled for Linux or for apps in sovereign clouds. Updates will be communicated on [this tracking thread on GitHub](https://github.com/Azure/azure-functions-host/issues/9951).
+> Targeting .NET 8 with the in-process model is not yet enabled for apps in sovereign clouds. Updates will be communicated on [this tracking thread on GitHub](https://github.com/Azure/azure-functions-host/issues/9951).
Apps using the in-process model can target .NET 8 by following the steps outlined in this section. However, if you choose to exercise this option, you should still begin planning your [migration to the isolated worker model](./migrate-dotnet-to-isolated-model.md) in advance of [support ending for the in-process model on November 10, 2026](https://aka.ms/azure-functions-retirements/in-process-model).
azure-monitor Azure Monitor Agent Troubleshoot Linux Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-linux-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
``` 3. Wait for 10-15 minutes as extension maybe in transitioning status. If it still doesn't show up as above, [uninstall and install the extension](./azure-monitor-agent-manage.md) again. 4. Check if you see any errors in extension logs located at `/var/log/azure/Microsoft.Azure.Monitor.AzureMonitorLinuxAgent/` on your machine
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
3. **Verify that the agent is running**: 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
``` systemctl status azuremonitoragent ```
- 3. Check if you see any errors in core agent logs located at `/var/opt/microsoft/azuremonitoragent/log/mdsd.*` on your machine
- 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
+ 3. Check if you see any errors in core agent logs located at `/var/opt/microsoft/azuremonitoragent/log/mdsd.*` on your machine
4. **Verify that the DCR exists and is associated with the virtual machine:** 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace. 2. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here. 3. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
5. **Verify that agent was able to download the associated DCR(s) from AMCS service:** 1. Check if you see the latest DCR downloaded at this location `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`
- 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
- ## Issues collecting Syslog
For more information on how to troubleshoot syslog issues with Azure Monitor Age
2. The parsed configuration is stored at `/etc/opt/microsoft/azuremonitoragent/config-cache/configchunks/`. Check that Syslog collection is defined and the log destinations are the same as constructed in DCR UI / DCR JSON. 1. If yes, proceed to step 3. If not, the issue is in the configuration workflow. 2. Investigate `mdsd.err`,`mdsd.warn`, `mdsd.info` files under `/var/opt/microsoft/azuremonitoragent/log` for possible configuration errors.
- 3. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog DCR not available' and **Problem type** as 'I need help configuring data collection from a VM'.
+ 3. Validate the layout of the Syslog collection workflow to ensure all necessary pieces are in place and accessible: 1. For `rsyslog` users, ensure the `/etc/rsyslog.d/10-azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `rsyslog` daemon (syslog user). 1. Check your rsyslog configuration at `/etc/rsyslog.conf` and `/etc/rsyslog.d/*` to see if you have any inputs bound to a non-default ruleset, as messages from these inputs won't be forwarded to Azure Monitor Agent. For instance, messages from an input configured with a non-default ruleset like `input(type="imtcp" port="514" `**`ruleset="myruleset"`**`)` won't be forward. 2. For `syslog-ng` users, ensure the `/etc/syslog-ng/conf.d/azuremonitoragent.conf` file is present, isn't empty, and is accessible by the `syslog-ng` daemon (syslog user). 3. Ensure the file `/run/azuremonitoragent/default_syslog.socket` exists and is accessible by `rsyslog` or `syslog-ng` respectively.
- 4. Check for a corresponding drop in count of processed syslog events in `/var/opt/microsoft/azuremonitoragent/log/mdsd.qos`. If such drop isn't indicated in the file, [file a ticket](#file-a-ticket) with **Summary** as 'Syslog data dropped in pipeline' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
5. Check that the syslog daemon queue isn't overflowing and causing the upload to fail by referring to the guidance here: [Rsyslog data not uploaded due to Full Disk space issue on AMA Linux Agent](./azure-monitor-agent-troubleshoot-linux-vm-rsyslog.md) 4. To debug syslog event ingestion further, you can append the trace flag **-T 0x2002** at the end of **MDSD_OPTIONS** in the file `/etc/default/azuremonitoragent`, and restart the agent: ```
For more information on how to troubleshoot syslog issues with Azure Monitor Age
5. After the issue is reproduced with the trace flag on, you'll find more debug information in `/var/opt/microsoft/azuremonitoragent/log/mdsd.info`. Inspect the file for the possible cause of the syslog collection issue, such as parsing, processing, configuration, or upload errors. > [!WARNING] > Be sure to remove the trace flag setting **-T 0x2002** after the debugging session, since it generates many trace statements that can fill up the disk quickly and make the log file hard to parse visually.
-6. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA fails to collect syslog events' and **Problem type** as 'I need help with Azure Monitor Linux Agent'.
## Troubleshooting issues on Arc-enabled server If, after checking the basic troubleshooting steps, you don't see the Azure Monitor Agent emitting logs or you find **'Failed to get MSI token from IMDS endpoint'** errors in the `/var/opt/microsoft/azuremonitoragent/log/mdsd.err` log file, it's likely that the `syslog` user isn't a member of the `himds` group. Add the `syslog` user to the `himds` group if it isn't already a member. Create the user `syslog` and the group `syslog`, if necessary, and make sure that the user is in that group. For more information, see the Azure Arc-enabled server authentication requirements [here](../../azure-arc/servers/managed-identity-authentication.md).
azure-monitor Azure Monitor Agent Troubleshoot Windows Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-arc.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
GC Service (gcarcservice) : running Extension Service (extensionservice) : running ```
- If instead you see `Agent Status: Disconnected` or any other status, [file a ticket](#file-a-ticket) with **Summary** as 'Arc agent or extensions service not working' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
3. Wait for 10-15 minutes as the extension may be in a transitioning state. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up. 4. If not, check if you see any errors in extension logs located at `C:\ProgramData\GuestConfig\extension_logs\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine
- 5. If none of the above works, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
3. **Verify that the agent is running**: 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
``` 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up. 3. If not, check if you see any errors in core agent logs located at `C:\Resources\Directory\AMADataStore\Configuration` on your machine
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
4. **Verify that the DCR exists and is associated with the Arc-enabled server:** 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace. 2. On your Arc-enabled server, verify the existence of the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.latest.xml`. If this file doesn't exist, the Arc-enabled server may not be associated with a DCR. 3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the Arc-enabled server listed here 4. If not listed, click 'Add' and select your Arc-enabled server from the resource picker. Repeat across all DCRs.
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
5. **Verify that agent was able to download the associated DCR(s) from AMCS service:** 1. Check if you see the latest DCR downloaded at this location `C:\Resources\Directory\AMADataStore\mcs\configchunks`
- 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
## Issues collecting Performance counters 1. Check that your DCR JSON contains a section for 'performanceCounters'. If not, fix your DCR. See [how to create DCR](./azure-monitor-agent-data-collection.md) or [sample DCR](./data-collection-rule-sample-agent.md).
-2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists.
3. Open the file and check if it contains `CounterSet` nodes as shown in the example below: ```xml <CounterSet storeType="Local" duration="PT1M"
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
<Counter>\PhysicalDisk(_Total)\Avg. Disk Queue Length</Counter> </CounterSet> ```
- If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
- ### Issues using 'Custom Metrics' as destination
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
6. Collect logs by running the command `C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\<version-number>\Monitoring\Agent\table2csv.exe C:\Resources\Directory\AMADataStore\Tables\MaMetricsExtensionEtw.tsf` 1. The command will generate the file 'MaMetricsExtensionEtw.csv' 2. Open it and look for any Level 2 errors and try to fix them.
-7. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to collect custom metrics' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
## Issues collecting Windows event logs 1. Check that your DCR JSON contains a section for 'windowsEventLogs'. If not, fix your DCR. See [how to create DCR](./azure-monitor-agent-data-collection.md) or [sample DCR](./data-collection-rule-sample-agent.md).
-2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+2. Check that the file `C:\Resources\Directory\AMADataStore\mcs\mcsconfig.lkg.xml` exists.
3. Open the file and check if it contains `Subscription` nodes as shown in the example below: ```xml <Subscription eventName="c9302257006473204344_14882095577508259570"
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
</Column> </Subscription> ```
- If there are no `Subscription` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
azure-monitor Azure Monitor Agent Troubleshoot Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-troubleshoot-windows-vm.md
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
``` 3. Wait for 10-15 minutes as the extension may be in a transitioning state. If it still doesn't show up, [uninstall and install the extension](./azure-monitor-agent-manage.md) again and repeat the verification to see the extension show up. 4. If not, check if you see any errors in extension logs located at `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent` on your machine
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension fails to install or provision' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
3. **Verify that the agent is running**: 1. Check if the agent is emitting heartbeat logs to Log Analytics workspace using the query below. Skip if 'Custom Metrics' is the only destination in the DCR:
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
``` 2. If not, open Task Manager and check if 'MonAgentCore.exe' process is running. If it is, wait for 5 minutes for heartbeat to show up. 3. If not, check if you see any errors in core agent logs located at `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Configuration` on your machine
- 4. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA extension provisioned but not running' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
4. **Verify that the DCR exists and is associated with the virtual machine:** 1. If using Log Analytics workspace as destination, verify that DCR exists in the same physical region as the Log Analytics workspace. 2. On your virtual machine, verify the existence of the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.latest.xml`. If this file doesn't exist: - The virtual machine may not be associated with a DCR. See step 3 - The virtual machine may not have Managed Identity enabled. [See here](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-during-creation-of-a-vm) on how to enable.
- - IMDS service isn't running/accessible from the virtual machine. [Check if you can access IMDS from the machine](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows). If not, [file a ticket](#file-a-ticket) with **Summary** as 'IMDS service not running' and **Problem type** as 'I need help configuring data collection from a VM'.
- - AMA can't access IMDS. Check if you see IMDS errors in `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file. If yes, [file a ticket](#file-a-ticket) with **Summary** as 'AMA cannot access IMDS' and **Problem type** as 'I need help configuring data collection from a VM'.
+ - IMDS service isn't running/accessible from the virtual machine. [Check if you can access IMDS from the machine](../../virtual-machines/windows/instance-metadata-service.md?tabs=windows).
+ - AMA can't access IMDS. Check if you see IMDS errors in `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MAEventTable.tsf` file.
3. Open Azure portal > select your data collection rule > Open **Configuration** : **Resources** from the pane on the left > You should see the virtual machine listed here 4. If not listed, click 'Add' and select your virtual machine from the resource picker. Repeat across all DCRs.
- 5. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'DCR not found or associated' and **Problem type** as 'I need help configuring data collection from a VM'.
5. **Verify that agent was able to download the associated DCR(s) from AMCS service:** 1. Check if you see the latest DCR downloaded at this location `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\configchunks`
- 2. If not, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to download DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
- ## Issues collecting Performance counters 1. Check that your DCR JSON contains a section for 'performanceCounters'. If not, fix your DCR. See [how to create DCR](./azure-monitor-agent-data-collection.md) or [sample DCR](./data-collection-rule-sample-agent.md).
-2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists.
3. Open the file and check if it contains `CounterSet` nodes as shown in the example below: ```xml <CounterSet storeType="Local" duration="PT1M"
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
<Counter>\PhysicalDisk(_Total)\Avg. Disk Queue Length</Counter> </CounterSet> ```
- If there are no `CounterSet` nodes, then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
- ### Issues using 'Custom Metrics' as destination
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
6. Collect logs by running the command `C:\Packages\Plugins\Microsoft.Azure.Monitor.AzureMonitorWindowsAgent\<version-number>\Monitoring\Agent\table2csv.exe C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\Tables\MaMetricsExtensionEtw.tsf` 1. The command will generate the file 'MaMetricsExtensionEtw.csv' 2. Open it and look for any Level 2 errors and try to fix them.
-7. If none of the above helps, [file a ticket](#file-a-ticket) with **Summary** as 'AMA unable to collect custom metrics' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
## Issues collecting Windows event logs 1. Check that your DCR JSON contains a section for 'windowsEventLogs'. If not, fix your DCR. See [how to create DCR](./azure-monitor-agent-data-collection.md) or [sample DCR](./data-collection-rule-sample-agent.md).
-2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists. If it doesn't exist, [file a ticket](#file-a-ticket) with **Summary** as 'AMA didn't run long enough to mark and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
+2. Check that the file `C:\WindowsAzure\Resources\AMADataStore.<virtual-machine-name>\mcs\mcsconfig.lkg.xml` exists.
3. Open the file and check if it contains `Subscription` nodes as shown in the example below: ```xml <Subscription eventName="c9302257006473204344_14882095577508259570"
Follow the steps below to troubleshoot the latest version of the Azure Monitor a
</Column> </Subscription> ```
- If there are no `Subscription`, nodes then the DCR wasn't parsed correctly. [File a ticket](#file-a-ticket) with **Summary** as 'AMA unable to parse DCR config' and **Problem type** as 'I need help with Azure Monitor Windows Agent'.
-
azure-monitor Data Collection Log Json https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-log-json.md
$tableParams = @'
{ "name": "FilePath", "type": "string"
- },
- {
- "name": "Computer",
- "type": "string"
} ] }
JSON files include a property name with each value, and the incoming stream in t
| `TimeGenerated` | datetime | The time the record was generated. | | `RawData` | string | This column will be empty for a JSON log. | | `FilePath` | string | If you add this column to the incoming stream in the DCR, it will be populated with the path to the log file. This column is not created automatically and can't be added using the portal. You must manually modify the DCR created by the portal or create the DCR using another method where you can explicitly define the incoming stream. |
-| `Computer` | string | If you add this column to the incoming stream in the DCR, it will be populated with the name of the computer. This column is not created automatically and can't be added using the portal. You must manually modify the DCR created by the portal or create the DCR using another method where you can explicitly define the incoming stream. |
### [Portal](#tab/portal)
Use the following ARM template to create a DCR for collecting text log files. In
"name": "FilePath", "type": "String" },
- {
- "name": "Computer",
- "type": "String"
- },
{ "name": "MyStringColumn", "type": "string"
azure-monitor Data Collection Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-syslog.md
The following facilities are supported with the Syslog collector:
| 0 | None | | 1 | Kern | | 2 | user |
-| 3 | mail |
+| 3 | mail |
| 4 | daemon |
-| 4 | auth |
-| 5 | syslog
-| 6 | lpr |
-| 7 | news |
-| 8 | uucp |
-| 9 | ftp |
-| 10 | ntp |
-| 11 | audit |
-| 12 | alert |
-| 13 | mark |
-| 14 | local0 |
-| 15 | local1 |
-| 16 | local2 |
-| 17 | local3 |
-| 18 | local4 |
-| 19 | local5 |
-| 20 | local6 |
-| 21 | local7 |
+| 5 | auth |
+| 6 | syslog |
+| 7 | lpr |
+| 8 | news |
+| 9 | uucp |
+| 10 | ftp |
+| 11 | ntp |
+| 12 | audit |
+| 13 | alert |
+| 14 | clock |
+| 15 | local0 |
+| 16 | local1 |
+| 17 | local2 |
+| 18 | local3 |
+| 19 | local4 |
+| 20 | local5 |
+| 21 | local6 |
+| 22 | local7 |
:::image type="content" source="../../sentinel/media/forward-syslog-monitor-agent/create-rule-data-source.png" lightbox="../../sentinel/media/forward-syslog-monitor-agent/create-rule-data-source.png" alt-text="Screenshot that shows the page to select the data source type and minimum log level.":::
azure-monitor Azure Monitor Workspace Scaling Best Practice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/azure-monitor-workspace-scaling-best-practice.md
+
+ Title: Best practices for scaling Azure Monitor Workspaces with Azure Monitor managed service for Prometheus
+description: Learn best practices for organizing your Azure Monitor Workspaces to meet your scale and growing volume of data ingestion
++++ Last updated : 07/24/2024+
+# customer intent: As an azure administrator I want to understand the best practices for scaling Azure Monitor Workspaces to meet a growing volume of data ingestion
+++
+# Best practices for scaling Azure Monitor Workspaces with Azure Monitor managed service for Prometheus
+
+Azure Monitor managed service for Prometheus allows you to collect and analyze metrics at scale. Prometheus metrics are stored in Azure Monitor Workspaces. The workspace supports analysis tools like Azure Managed Grafana, Azure Monitor metrics explorer with PromQL, and open source tools such as PromQL and Grafana.
+
+This article provides best practices for organizing your Azure Monitor Workspaces to meet your scale and growing volume of data ingestion.
++
+## Topology design criteria
+
+A single Azure Monitor workspace can be sufficient for many use cases. Some organizations create multiple workspaces to better meet their requirements. As you identify the criteria for creating additional workspaces, create the fewest workspaces that meet your requirements while keeping administrative overhead to a minimum.
+
+The following are scenarios that require splitting an Azure Monitor workspace into multiple workspaces:
+
+| Scenario | Best practice |
+|||
+|Sovereign clouds.| When working with more than one sovereign cloud, create an Azure Monitor workspace in each cloud.|
+| Compliance or regulatory requirements.| If you're subject to regulations that mandate the storage of data in specific regions, create an Azure Monitor workspace per region as per your requirements. |
+| Regional scaling. | When you're managing metrics for regionally diverse organizations such as large services or financial institutions with regional accounts, create an Azure Monitor workspace per region.
+| Azure tenants.| For multiple Azure tenants, create an Azure Monitor workspace in each tenant. Querying data across tenants isn't supported.
+| Deployment environments. | Create a separate workspace for each of your deployment environments to maintain discrete metrics for development, test, preproduction, and production environments.|
+| Service limits and quotas. | Azure Monitor workspaces have default ingestion limits, which can be increased by opening a support ticket. If you're approaching the limit, or estimate that you'll exceed the ingestion limit, consider requesting an increase, or splitting your workspace into two or more workspaces.|
+
+## Service limits and quotas
+
+Azure Monitor workspaces have a default ingestion limit of 1 million events per minute for metrics. As your usage grows and you need to ingest more metrics, you can request an increase. If your capacity requirements are exceptionally large and your data ingestion needs exceed the limits of a single Azure Monitor workspace, consider creating multiple Azure Monitor workspaces. Use the Azure Monitor workspace platform metrics to monitor utilization and limits. For more information on limits and quotas, see [Azure Monitor service limits and quotas](/azure/azure-monitor/service-limits#prometheus-metrics).
+
+Consider the following best practices for managing Azure Monitor workspace limits and quotas:
+
+| Best practice | Description |
+|||
+| Monitor and create an alert on ingestion limits and utilization.| In the Azure portal, navigate to your Azure Monitor Workspace. Go to Metrics and verify that the metrics Active Time Series % Utilization and Events Per Minute Ingested % Utilization are below 100%. Set an Azure Monitor Alert to monitor the utilization and fire when the utilization is greater than 80% of the limit. For more information on monitoring utilization and limits, see [How can I monitor the service limits and quotas](/azure/azure-monitor/essentials/prometheus-metrics-overview#how-can-i-monitor-the-service-limits-and-quota).|
+|Request a limit increase when utilization exceeds 80% of the current limit.|As your Azure usage grows, the volume of data ingested is likely to increase. We recommend that you request a limit increase if your data ingestion exceeds or is close to 80% of the ingestion limit. To request a limit increase, open a support ticket. To open a support ticket, see [Create an Azure support request](/azure/azure-supportability/how-to-create-azure-support-request).|
+|Estimate your projected scale.|As your usage grows and you ingest more metrics into your workspace, make an estimate of the projected scale and rate of growth. Based on your projections, request an increase in the limit.
+|Ingestion with remote write using the Azure Monitor side-car container. |If you're using the Azure Monitor side-car container and remote write to ingest metrics into an Azure Monitor workspace, consider the following limits: <li>The side-car container can process up to 150,000 unique time series.</li><li> The container might throw errors when serving requests for more than 150,000 time series because of the high number of concurrent connections. Mitigate this issue by increasing the remote batch size from the default of 500 to 1,000. Changing the remote batch size reduces the number of open connections.</li>|
+|DCR/DCE limits. |Limits apply to the data collection rules (DCR) and data collection endpoints (DCE) that send Prometheus metrics to your Azure Monitor workspace. For information on these limits, see [Prometheus Service limits](/azure/azure-monitor/service-limits#prometheus-metrics). These limits can't be increased. <p> Consider creating additional DCRs and DCEs to distribute the ingestion load across multiple endpoints. This approach helps optimize performance and ensures efficient data handling. For more information about creating DCRs and DCEs, see [How to create custom Data collection endpoint(DCE) and custom Data collection rule(DCR) for an existing Azure monitor workspace to ingest Prometheus metrics](https://github.com/Azure/prometheus-collector/tree/main/Azure-ARM-templates/Prometheus-RemoteWrite-DCR-artifacts)|
++
+## Optimizing performance for high volumes of data
+
+### Ingestion
+
+To optimize ingestion, consider the following best practices:
+
+| Best practice | Description |
+|||
+| Identify High cardinality Metrics. | Identify metrics that have a high cardinality, or metrics that are generating many time series. Once you identify high-cardinality metrics, optimize them to reduce the number of time series by dropping unnecessary labels.|
+| Use Prometheus config to optimize ingestion. | Azure Managed Prometheus provides Configmaps, which have settings that can be configured and used to optimize ingestion. For more information, see [ama-metrics-settings-configmap](https://aka.ms/azureprometheus-addon-settings-configmap) and [ama-metrics-prometheus-config-configmap](https://github.com/Azure/prometheus-collector/blob/main/otelcollector/configmaps/ama-metrics-prometheus-config-configmap.yaml). These configurations follow the same format as the Prometheus configuration file.<br> For information on customizing collection, see [Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](/azure/azure-monitor/containers/prometheus-metrics-scrape-configuration).<p> For example, consider the following: <li> **Tune Scrape Intervals**.</li> The default scrape frequency is 30 seconds, which can be changed per default target using the configmap. To balance the trade-off between data granularity and resource usage, adjust the `scrape_interval` and `scrape_timeout` based on the criticality of metrics. <li> **Drop unnecessary labels for high cardinality metrics**.</li> For high cardinality metrics, identify labels that aren't necessary and drop them to reduce the number of time series. Use the `metric_relabel_configs` to drop specific labels from ingestion. For more information, see [Prometheus Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config).|
+
+Use the configmap, change the settings as required, and apply the configmap to the kube-system namespace for your cluster. If you're using remote write into an Azure Monitor workspace, apply the customizations during ingestion directly in your Prometheus configuration. A minimal sketch of these settings is shown below.
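+
+The following sketch shows how a tuned scrape interval and a label-dropping relabel rule might look in a Prometheus scrape configuration. The job name `myapp`, the label `pod_ip`, and the metric pattern `myapp_debug_.*` are illustrative placeholders, not values from this article; adapt them to your own high-cardinality metrics.
+
+```yaml
+scrape_configs:
+  - job_name: myapp              # illustrative job name
+    scrape_interval: 60s         # lower frequency than the 30-second default to reduce ingested samples
+    scrape_timeout: 30s
+    metric_relabel_configs:
+      # Drop a high-cardinality label that isn't needed downstream (label name is illustrative).
+      - action: labeldrop
+        regex: pod_ip
+      # Drop entire metric families that are never queried (metric name is illustrative).
+      - source_labels: [__name__]
+        regex: myapp_debug_.*
+        action: drop
+```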
+
+### Queries
+
+To optimize queries, consider the following best practices:
+
+#### Use Recording rules to optimize query performance
+
+Prometheus recording rules are used to precompute frequently used or computationally expensive queries, making them more efficient and faster to query. Recording rules are especially useful for high-volume metrics where querying raw data can be slow and resource-intensive. For more information, see [Recording rules](https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules). Azure Managed Prometheus provides a managed and scalable way to create and update recording rules with the help of [Azure Managed Prometheus rule groups](/azure/azure-monitor/essentials/prometheus-rule-groups#rule-types).
+
+Once the rule groups are created, Azure Managed Prometheus automatically loads and starts evaluating them. Query rule groups from the Azure Monitor workspace like other Prometheus metrics.
+
+Recording rules have the following benefits:
++
+- **Improve query performance**
+ Recording rules can be used to precompute complex queries, making them faster to query later. Precomputing complex queries reduces the load on Prometheus when these metrics are queried.
+
+- **Efficiency and Reduced query time**
+ Recording rules precompute the query results, reducing the time taken to query the data. This is especially useful for dashboards with multiple panels or high cardinality metrics.
+
+- **Simplicity**
+ Recording rules simplify queries in Grafana or other visualization tools, as they can reference precomputed metrics.
+
+The following example shows a recording rule as defined in Azure Managed Prometheus rule group:
+``` yaml
+"record": "job:request_duration_seconds:avg ",
+"expression": "avg(rate(request_duration_seconds_sum[5m])) by (job)",
+"labels": { "workload_type": "job"
+ },
+"enabled": true
+```
+
+For more complex metrics, create recording rules that aggregate multiple metrics or perform more advanced calculations. In the following example, `instance:node_cpu_utilisation:rate5m` computes CPU utilization by excluding time the CPU spends in idle, iowait, and steal modes:
+
+```yaml
+"record": "instance:node_cpu_utilisation:rate5m",
+ "expression": "1 - avg without (cpu) (sum without (mode)(rate(node_cpu_seconds_total{job=\"node\", mode=~\"idle|iowait|steal\"}[5m])))",
+"labels": {
+ "workload_type": "job"
+ },
+"enabled": true
+```
++
+Consider the following best practices for optimizing recording rules:
+
+| Best practice | Description |
+|||
+| Identify High Volume Metrics. | Focus on metrics that are queried frequently and have a high cardinality. |
+| Optimize Rule Evaluation Interval. | To balance between data freshness and computational load, adjust the evaluation interval of your recording rules. |
+| Monitor Performance. | Monitor query performance and adjust recording rules as necessary. |
+| Optimize rules by limiting scope.|To make recording rules faster, limit them in scope to a specific cluster. For more information, see [Limiting rules to a specific cluster](/azure/azure-monitor/essentials/prometheus-rule-groups#limiting-rules-to-a-specific-cluster).|
++
+#### Using filters in queries
+
+Optimizing Prometheus queries using filters involves refining the queries to return only the necessary data, reducing the amount of data processed and improving performance. The following are some common techniques to refine Prometheus queries.
+
+| Best practice | Description |
+|||
+| Use label filters.|Label filters help to narrow down the data to only what you need. Prometheus allows filtering by using `{label_name="label_value"}` syntax. If you have a large number of metrics across multiple clusters, an easy way to limit time series is to use the `cluster` filter. <p> For example, instead of querying `container_cpu_usage_seconds_total`, filter by cluster `container_cpu_usage_seconds_total{cluster="cluster1"}`.|
+| Apply time range selectors.|Using specific time ranges can significantly reduce the amount of data queried.<p> For example, instead of querying all data points for the last seven days `http_requests_total{job="myapp"}`, query for the last hour using `http_requests_total{job="myapp"}[1h]`.|
+| Use aggregation and grouping.| Aggregation functions can be used to summarize data, which can be more efficient than processing raw data points. When aggregating data, use `by` to group by specific labels, or `without` to exclude specific labels.<p> For example, sum requests grouped by job: `sum(rate(http_requests_total[5m])) by (job)`.|
+|Filter early in the query.| To limit the dataset from the start, apply filters as early as possible in your query.<p> For example, instead of `sum(rate(http_requests_total[5m])) by (job)`, filter first, then aggregate as follows: `sum(rate(http_requests_total{job="myapp"}[5m])) by (job)`.|
+| Avoid regex where possible.| Regex filters can be powerful but are also computationally expensive. Use exact matches whenever possible.<p> For example, instead of `http_requests_total{job=~"myapp.*"}`, use `http_requests_total{job="myapp"}`.|
+| Use offset for historical data.| If you're comparing current data with historical data, use the `offset` modifier.<p> For example, to compare current requests against requests from 24 hours ago, use `rate(http_requests_total[5m]) - rate(http_requests_total[5m] offset 24h)`.|
+| Limit data points in charts.| When creating charts, limit the number of data points to improve rendering performance. Use the step parameter to control the resolution.<p> For example, in Grafana: Set a higher step value to reduce data points:`http_requests_total{job="myapp"}[1h:10s]`.|
++
+#### Parallel queries
+
+Running a high number of parallel queries in Prometheus can lead to performance bottlenecks and can affect the stability of your Prometheus server. To handle a large volume of parallel queries efficiently, follow the best practices below:
+
+| Best practice | Description |
+|||
+| Query Load Distribution.| Distribute the query load by spreading the queries across different time intervals or Prometheus instances.|
+|Staggered Queries.| Schedule queries to run at different intervals to avoid peaks of simultaneous query executions.|
+
+If you're still seeing issues with running many parallel queries, create a support ticket to request an increase in the query limits.
+
+### Alerts and recording rules
+
+#### Optimizing alerts and recording rules for high scale
+
+Prometheus alerts and recording rules can be defined as Prometheus rule groups. One rule group can contain up to 20 alerts or recording rules. Create up to 500 rule groups for each workspace to accommodate the number of alerts and rules required. To raise this limit, open a support ticket.
+
+When defining the recording rules, take into account the evaluation interval to optimize the number of time series per rule and the performance of rule evaluations. Evaluation intervals can be between 1 minute and 24 hours. The default is 1 minute.
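+
+As a sketch, an alert rule can be defined in the same rule group format by using the `alert` property instead of `record`. The threshold, duration, and severity below are illustrative values and assume the `job:request_duration_seconds:avg` recording rule shown earlier:
+
+```yaml
+"alert": "HighAverageRequestDuration",
+"expression": "job:request_duration_seconds:avg > 0.5",
+"for": "PT5M",
+"severity": 3,
+"labels": {
+    "workload_type": "job"
+ },
+"enabled": true
+```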
+
+### Use Resource Health to view the status of recording rules
+
+Set up Resource Health to view the health of your Prometheus rule group in the portal. Resource Health allows you to detect problems in your recording rules, such as incorrect configuration, or query throttling problems. For more information on setting up Resource Health, see [View the resource health states of your Prometheus rule groups](/azure/azure-monitor/essentials/prometheus-rule-groups#view-the-resource-health-states-of-your-prometheus-rule-groups)
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
Previously updated : 06/26/2024 Last updated : 07/25/2024
The following diagram demonstrates how customer-managed keys work with Azure Net
## Considerations
-* Customer-managed keys can only be configured on new volumes. You can't migrate existing volumes to customer-managed key encryption.
* To create a volume using customer-managed keys, you must select the *Standard* network features. You can't use customer-managed key volumes with volumes configured using Basic network features. Follow the instructions in [Set the Network Features option](configure-network-features.md#set-the-network-features-option) on the volume creation page. * For increased security, you can select the **Disable public access** option within the network settings of your key vault. When selecting this option, you must also select **Allow trusted Microsoft services to bypass this firewall** to permit the Azure NetApp Files service to access your encryption key. * Customer-managed keys support automatic Managed System Identity (MSI) certificate renewal. If your certificate is valid, you don't need to manually update it.
You can use an Azure Key Vault that is configured to use Azure role-based access
* [Create an SMB volume](azure-netapp-files-create-volumes-smb.md) * [Create a dual-protocol volume](create-volumes-dual-protocol.md)
+## <a name="transition"></a> Transition an Azure NetApp Files volume to customer-managed keys (preview)
+
+Azure NetApp Files supports the ability to transition existing volumes that use platform-managed keys to customer-managed keys. Once you complete the transition, you can't revert to platform-managed keys.
+
+### Register the feature
+
+Encryption key transition for Azure NetApp Files is currently in preview. Before using this feature for the first time, you need to register it.
+
+1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFMigratePmkToCmk
+ ```
+
+2. Check the status of the feature registration:
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFMigratePmkToCmk
+ ```
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
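+
+For example, a minimal Azure CLI equivalent of the PowerShell commands above might look like the following sketch (the namespace and feature name mirror the PowerShell example; the `--query` filter is optional):
+
+```azurecli-interactive
+# Register the feature
+az feature register --namespace Microsoft.NetApp --name ANFMigratePmkToCmk
+
+# Check the registration state (wait until it shows "Registered")
+az feature show --namespace Microsoft.NetApp --name ANFMigratePmkToCmk --query properties.state --output tsv
+```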
+
+### Transition volumes
+
+>[!NOTE]
+>When you transition volumes to use customer-managed keys, you must perform the transition for every virtual network where your Azure NetApp Files account has volumes.
+
+1. Ensure you [configured your Azure NetApp Files account to use customer-managed keys](#configure-a-netapp-account-to-use-customer-managed-keys).
+1. In the Azure portal, navigate to **Encryption**.
+1. Select the **CMK Migration** tab.
+1. From the drop-down menu, select the virtual network and key vault private endpoint you want to use.
+1. Azure generates a list of volumes to be encrypted by your customer-managed key.
+1. Select **Confirm** to initiate the migration.
+ ## Rekey all volumes under a NetApp account If you have already configured your NetApp account for customer-managed keys and have one or more volumes encrypted with customer-managed keys, you can change the key that is used to encrypt all volumes under the NetApp account. You can select any key that is in the same key vault. Changing key vaults isn't supported.
azure-netapp-files Faq Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-security.md
By default key management for Azure NetApp Files is handled by the service, usin
Alternatively, [customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md) can be used where keys are stored in [Azure Key Vault](../key-vault/general/basic-concepts.md). With customer-managed keys, you can fully manage the relationship between a key's life cycle, key usage permissions, and auditing operations on keys. The feature is generally available (GA) in [supported regions](configure-customer-managed-keys.md#supported-regions).
+Azure NetApp Files supports the ability to transition existing volumes that use platform-managed keys to customer-managed keys. Once you complete the transition, you can't revert to platform-managed keys. For more information, see [Transition an Azure NetApp Files volume to customer-managed keys](configure-customer-managed-keys.md#transition).
+ Also, customer-managed keys using Azure Dedicated HSM is supported on a controlled basis. Support is currently available in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access [with the Azure NetApp Files feedback form](https://aka.ms/ANFFeedback). As capacity becomes available, requests will be approved. ## Can I configure the NFS export policy rules to control access to the Azure NetApp Files service mount target?
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Previously updated : 07/19/2024 Last updated : 07/25/2024
Azure NetApp Files is updated regularly. This article provides a summary about t
## July 2024
+* [Transition a volume to customer-managed keys](configure-customer-managed-keys.md#transition) (Preview)
+
+ Azure NetApp Files now supports the ability to transition an existing volume to use customer-managed keys for volume encryption.
+ * [Customer-managed keys for Azure NetApp Files volume encryption](configure-customer-managed-keys.md#supported-regions) is now available in all US Gov regions * [Azure NetApp Files large volume enhancement:](large-volumes-requirements-considerations.md) increased throughput and maximum size limit of 2-PiB volume (preview)
azure-resource-manager Extension Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/extension-resource-types.md
Title: Extension resource types description: Lists the Azure resource types that are used to extend the capabilities of other resource types. Previously updated : 03/19/2024 Last updated : 07/24/2024 # Resource types that extend capabilities of other resources
An extension resource is a resource that adds to another resource's capabilities
* investigations * tenantActivityLogAlerts
+## Microsoft.App
+
+* functions
+* logicApps
+ ## Microsoft.Authorization * accessReviewHistoryDefinitions
An extension resource is a resource that adds to another resource's capabilities
* ec2Instances
-## Microsoft.AzureCIS
-
-* plannedQuotas
- ## Microsoft.AzureStackHCI
+* edgeDevices
* virtualMachineInstances ## Microsoft.Billing
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.Chaos
-* artifactSetDefinitions
-* artifactSetSnapshots
* targets ## Microsoft.ConnectedVMwarevSphere
An extension resource is a resource that adds to another resource's capabilities
* backupInstances
+## Microsoft.Edge
+
+* connectivityStatuses
+* Sites
+* updates
+
+## Microsoft.EdgeMarketplace
+
+* offers
+* publishers
+ ## Microsoft.EventGrid * eventSubscriptions
An extension resource is a resource that adds to another resource's capabilities
* diagnostics * discoverySolutions
+* plugins
+* simplifiedSolutions
* solutions * troubleshooters
+## Microsoft.HybridCompute
+
+* networkConfigurations
+* settings
+ ## Microsoft.HybridConnectivity * endpoints * solutionConfigurations
+## Microsoft.HybridContainerService
+
+* kubernetesVersions
+* provisionedClusterInstances
+ ## microsoft.insights * dataCollectionRuleAssociations
An extension resource is a resource that adds to another resource's capabilities
* metricDefinitions * metricNamespaces * metrics
-* myWorkbooks
* tenantactiongroups * topology * transactions
An extension resource is a resource that adds to another resource's capabilities
* namespaces * sourceControlConfigurations
+## Microsoft.KubernetesRuntime
+
+* bgpPeers
+* loadBalancers
+* services
+* storageClasses
+
+## Microsoft.LoadTestService
+
+* loadTestMappings
+* loadTestProfileMappings
+ ## Microsoft.Maintenance * applyUpdates * configurationAssignments
+* scheduledevents
* updates ## Microsoft.ManagedIdentity
An extension resource is a resource that adds to another resource's capabilities
* managementGroups
+## Microsoft.Marketplace
+
+* products
+
+## Microsoft.Monitor
+
+* investigations
+ ## Microsoft.Network * cloudServiceNetworkInterfaces
An extension resource is a resource that adds to another resource's capabilities
## Microsoft.Quota
+* groupQuotas
* quotaRequests * quotas * usages
An extension resource is a resource that adds to another resource's capabilities
* events * impactedResources
+## Microsoft.ResourceNotifications
+
+* eventGridFilters
+ ## Microsoft.Resources * links
-* snapshots
* tags ## Microsoft.ScVmm
An extension resource is a resource that adds to another resource's capabilities
* adaptiveNetworkHardenings * advancedThreatProtectionSettings
-* antiMalwareSettings
* apiCollections * applications * assessmentMetadata
An extension resource is a resource that adds to another resource's capabilities
* Compliances * customRecommendations * dataCollectionAgents
-* dataSensitivitySettings
* defenderForStorageSettings * deviceSecurityGroups * governanceRules
An extension resource is a resource that adds to another resource's capabilities
* InformationProtectionPolicies * integrations * jitPolicies
+* pricings
* secureScoreControls * secureScores * securityStandards * serverVulnerabilityAssessments * sqlVulnerabilityAssessments * standardAssignments
+* trustedIps
## Microsoft.SecurityInsights
An extension resource is a resource that adds to another resource's capabilities
* automationRules * billingStatistics * bookmarks
+* businessApplicationAgents
* cases * contentPackages * contentProductPackages * contentProductTemplates * contentTemplates
+* contenttranslators
* dataConnectorDefinitions * dataConnectors
-* dynamicSummaries
* enrichment
+* enrichmentWidgets
* entities * entityQueryTemplates * exportConnections
azure-resource-manager Tag Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/tag-resources.md
For REST API operations, see [Azure Billing REST API Reference](/rest/api/billin
## Unique tags pagination
-When calling the [Unique Tags API](/rest/api/resources/tags/list) there is a limit to the size of each API response page that is returned. A tag that has a large set of unique values will require the API to fetch the next page to retrieve the remaining set of values. When this happens the tag key is shown again to indicate that the vales are still under this key.
+When calling the [Unique Tags API](/rest/api/resources/tags/list) there is a limit to the size of each API response page that is returned. A tag that has a large set of unique values will require the API to fetch the next page to retrieve the remaining set of values. When this happens the tag key is shown again to indicate that the values are still under this key.
This can cause some tools, like the Azure portal, to show the tag key twice.
backup Azure Backup Move Vaults Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-move-vaults-across-regions.md
- Title: Move Azure Recovery Services vault to another region
-description: In this article, you'll learn how to ensure continued backups after moving the resources across regions.
- Previously updated : 09/24/2021----
-# Back up resources in Recovery Services vault after moving across regions
-
-Azure Resource Mover supports the movement of multiple resources across regions. While moving your resources from one region to another, you can ensure that your resources stay protected. As Azure Backup supports protection of several workloads, you may need to take some steps to continue having the same level of protection in the new region.
-
-To understand the detailed steps to achieve this, refer to the sections below.
-
->[!Note]
->Azure Backup currently doesn't support the movement of backup data from one Recovery Services vault to another. To protect your resource in the new region, the resource needs to be registered and backed up to a new/existing vault in the new region. When moving your resources from one region to another, backup data in your existing Recovery Services vaults in the older region can be retained/deleted based on your requirement. If you choose to retain data in the old vaults, you will incur backup charges accordingly.
-
-## Back up Azure Virtual Machine after moving across regions
-
-When an Azure Virtual Machine (VM) that's been protected by a Recovery Services vault is moved from one region to another, it can no longer be backed up to the older vault. The backups in the old vault will start failing with the errors **BCMV2VMNotFound** or [**ResourceNotFound**](./backup-azure-vms-troubleshoot.md#320001-resourcenotfoundcould-not-perform-the-operation-as-vm-no-longer-exists--400094-bcmv2vmnotfoundthe-virtual-machine-doesnt-exist--an-azure-virtual-machine-wasnt-found). For information on how to protect your VMs in the new region, see the following sections.
-
-### Prepare to move Azure VMs
-
-Before you move a VM, ensure the following prerequisites are met:
-
-1. See the [prerequisites associated with VM move](../resource-mover/tutorial-move-region-virtual-machines.md#prerequisites) and ensure that the VM is eligible for move.
-1. [Select the VM on the **Backup Items** tab](./backup-azure-delete-vault.md#delete-protected-items-in-the-cloud) of existing vault's dashboard and select **Stop protection** followed by retain/delete data as per your requirement. When the backup data for a VM is stopped with retain data, the recovery points remain forever and don't adhere to any policy. This ensures you always have your backup data ready for restore.
- >[!Note]
- >Retaining data in the older vault will incur backup charges. If you no longer wish to retain data to avoid billing, you need to delete the retained backup data using the [Delete data option](./backup-azure-manage-vms.md#delete-backup-data).
-1. Ensure that the VMs are turned on. All VMs' disks that need to be available in the destination region are attached and initialized in the VMs.
-1. Ensure that VMs have the latest trusted root certificates, and an updated certificate revocation list (CRL). To do so:
- - On Windows VMs, install the latest Windows updates.
- - On Linux VMs, refer to distributor guidance to ensure that machines have the latest certificates and CRL.
-1. Allow outbound connectivity from VMs:
- - If you're using a URL-based firewall proxy to control outbound connectivity, allow access to [these URLs](../resource-mover/support-matrix-move-region-azure-vm.md#url-access).
- - If you're using network security group (NSG) rules to control outbound connectivity, create [these service tag rules](../resource-mover/support-matrix-move-region-azure-vm.md#nsg-rules).
-
-### Move Azure VMs
-
-Move your VM to the new region using [Azure Resource Mover](../resource-mover/tutorial-move-region-virtual-machines.md).
-
-### Protect Azure VMs using Azure Backup
-
-Start protecting your VM in a new or existing Recovery Services vault in the new region. When you need to restore from your older backups, you can still do it from your old Recovery Services vault if you had chosen to retain the backup data.
-
-The above steps should help ensure that your resources are being backed up in the new region as well.
-
-## Back up Azure File Share after moving across regions
-
-Azure Backup offers [a snapshot management solution](./backup-afs.md) for your Azure Files today. This means, you don't move the file share data into the Recovery Services vaults. Also, as the snapshots don't move with your Storage Account, you'll effectively have all your backups (snapshots) in the existing region only and protected by the existing vault. However, if you move your Storage Accounts along with the file shares across regions or create new file shares in the new region, see to the following sections to ensure that they are protected by Azure Backup.
-
-### Prepare to move Azure File Share
-
-Before you move the Storage Account, ensure the following prerequisites are met:
-
-1. See the [prerequisites to move Storage Account](../storage/common/storage-account-move.md?tabs=azure-portal#prerequisites).
-1. Export and modify a Resource Move template. For more information, see [Prepare Storage Account for region move](../storage/common/storage-account-move.md?tabs=azure-portal#prepare).
-
-### Move Azure File Share
-
-To move your Storage Accounts along with the Azure File Shares in them from one region to another, see [Move an Azure Storage account to another region](../storage/common/storage-account-move.md).
-
->[!Note]
->When Azure File Share is copied across regions, its associated snapshots don't move along with it. In order to move the snapshots data to the new region, you need to move the individual files and directories of the snapshots to the Storage Account in the new region using [AzCopy](../storage/common/storage-use-azcopy-files.md#copy-all-file-shares-directories-and-files-to-another-storage-account).
-
-### Protect Azure File share using Azure Backup
-
-Start protecting the Azure File Share copied into the new Storage Account in a new or existing Recovery Services vault in the new region.
-
-Once the Azure File Share is copied to the new region, you can choose to stop protection and retain/delete the snapshots (and the corresponding recovery points) of the original Azure File Share as per your requirement. This can be done by selecting your file share on the [Backup Items tab](./backup-azure-delete-vault.md#delete-protected-items-in-the-cloud) of the original vault's dashboard. When the backup data for Azure File Share is stopped with retain data, the recovery points remain forever and don't adhere to any policy.
-
-This ensures that you will always have your snapshots ready for restore from the older vault.
-
-## Back up SQL Server/SAP HANA in Azure VM after moving across regions
-
-When you move a VM running SQL or SAP HANA servers to another region, the SQL and SAP HANA databases in those VMs can no longer be backed up in the vault of the earlier region. To protect the SQL and SAP HANA servers running in Azure VM in the new region, see the follow sections.
-
-### Prepare to move SQL Server/SAP HANA in Azure VM
-
-Before you move SQL Server/SAP HANA running in a VM to a new region, ensure the following prerequisites are met:
-
-1. See the [prerequisites associated with VM move](../resource-mover/tutorial-move-region-virtual-machines.md#prerequisites) and ensure that the VM is eligible for move.
-1. Select the VM on the [Backup Items tab](./backup-azure-delete-vault.md#delete-protected-items-in-the-cloud) of the existing vault's dashboard and select _the databases_ for which backup needs to be stopped. Select **Stop protection** followed by retain/delete data as per your requirement. When the backup data is stopped with retain data, the recovery points remain forever and don't adhere to any policy. This ensures that you always have your backup data ready for restore.
- >[!Note]
- >Retaining data in the older vault will incur backup charges. If you no longer wish to retain data to avoid billing, you need to delete the retained backup data using [Delete data option](./backup-azure-manage-vms.md#delete-backup-data).
-1. Ensure that the VMs to be moved are turned on. All VMs disks that need to be available in the destination region are attached and initialized in the VMs.
-1. Ensure that VMs have the latest trusted root certificates, and an updated certificate revocation list (CRL). To do so:
- - On Windows VMs, install the latest Windows updates.
- - On Linux VMs, refer to the distributor guidance and ensure that machines have the latest certificates and CRL.
-1. Allow outbound connectivity from VMs:
- - If you're using a URL-based firewall proxy to control outbound connectivity, allow access to [these URLs](../resource-mover/support-matrix-move-region-azure-vm.md#url-access).
- - If you're using network security group (NSG) rules to control outbound connectivity, create [these service tag rules](../resource-mover/support-matrix-move-region-azure-vm.md#nsg-rules).
-
-### Move SQL Server/SAP HANA in Azure VM
-
-Move your VM to the new region using [Azure Resource Mover](../resource-mover/tutorial-move-region-virtual-machines.md).
-
-### Protect SQL Server/SAP HANA in Azure VM using Azure Backup
-
-Start protecting the VM in a new/existing Recovery Services vault in the new region. When you need to restore from your older backups, you can still do it from your old Recovery Services vault.
-
-The above steps should help ensure that your resources are being backed up in the new region as well.
chaos-studio Chaos Studio Tutorial Aad Outage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-tutorial-aad-outage-portal.md
Title: Use a chaos experiment template to induce an outage on an Azure Active Directory instance
-description: Use the Azure portal to create an experiment from the AAD outage experiment template.
+ Title: Use a chaos experiment template to induce an outage on a Microsoft Entra ID instance
+description: Use the Azure portal to create an experiment from the Microsoft Entra ID outage experiment template.
Last updated 09/27/2023
-# Use a chaos experiment template to induce an outage on an Azure Active Directory instance
+# Use a chaos experiment template to induce an outage on a Microsoft Entra ID instance
-You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you induce an outage on an Azure Active Directory resource using a pre-populated experiment template and Azure Chaos Studio.
+You can use a chaos experiment to verify that your application is resilient to failures by causing those failures in a controlled environment. In this article, you induce an outage on a Microsoft Entra ID resource using a pre-populated experiment template and Azure Chaos Studio.
## Prerequisites
Now you can create your experiment from a pre-filled experiment template. A chao
1. In Chaos Studio, go to **Experiments** > **Create** > **New from template**. [![Screenshot that shows the Experiments screen, with the New from template button highlighted.](images/tutorial-aad-outage-create.png)](images/tutorial-aad-outage-create.png#lightbox)
-1. Select **AAD Outage**.
+1. Select **Microsoft Entra ID Outage**.
[![Screenshot that shows the experiment templates screen, with the AAD outage template button highlighted.](images/tutorial-aad-outage-select.png)](images/tutorial-aad-outage-select.png#lightbox) 1. Add a name for your experiment that complies with resource naming guidelines. Select **Next: Permissions**.
You're now ready to run your experiment.
1. When **Status** changes to *Running*, select **Details** for the latest run under **History** to see details for the running experiment. ## Next steps
-Now that you've run an AAD outage template experiment, you're ready to:
+Now that you've run a Microsoft Entra ID outage template experiment, you're ready to:
- [Manage your experiment](chaos-studio-run-experiment.md) - [Create an experiment that shut down all targets in a zone](chaos-studio-tutorial-dynamic-target-portal.md)
communication-services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/concepts.md
Sending SMS to any recipient requires getting a phone number. Choosing the right
\* See [Alphanumeric sender ID FAQ](./sms-faq.md#alphanumeric-sender-id) for detailed formatting requirements. > [!IMPORTANT]
-> Effective **April 19, 2024**, All UK alpha sender IDs now require a [registration application](https://forms.office.com/r/pK8Jhyhtd4) approval.
+> Effective **April 19, 2024**, all UK alpha sender IDs now require a [registration application](../../quickstarts/sms/enable-alphanumeric-sender-id.md#enable-preregistered-alphanumeric-sender-id) approval.
## Next steps
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
Azure confidential VMs offer strong security and confidentiality for tenants. Th
- Secure key release with cryptographic binding between the platform's successful attestation and the VM's encryption keys. - Dedicated virtual [Trusted Platform Module (TPM)](/windows/security/information-protection/tpm/trusted-platform-module-overview) instance for attestation and protection of keys and secrets in the virtual machine. - Secure boot capability similar to [Trusted launch for Azure VMs](../virtual-machines/trusted-launch.md)
+- Ultra disk capability is supported on confidential VMs
## Confidential OS disk encryption
Confidential VMs *don't support*:
- Microsoft Azure Virtual Machine Scale Sets with Confidential OS disk encryption enabled - Limited Azure Compute Gallery support - Shared disks-- Ultra disks - Accelerated Networking - Live migration - Screenshots under boot diagnostics
container-apps Java Eureka Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/java-eureka-server.md
Now that you have an existing environment, you can create your container app and
This command returns the URL of your container app that registers with the Eureka server component. Copy the URL to a text editor so you can use it in a coming step.
- Navigate top the `/allRegistrationStatus` route view all applications registered with the Eureka Server for Spring.
+ Navigate to the `/allRegistrationStatus` route to view all applications registered with the Eureka Server for Spring.
The binding injects several configurations into the application as environment variables, primarily the `eureka.client.service-url.defaultZone` property. This property indicates the internal endpoint of the Eureka Server Java component.
container-apps Sessions Code Interpreter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions-code-interpreter.md
Last updated 05/06/2024 + # Serverless code interpreter sessions in Azure Container Apps (preview)
container-apps Sessions Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/sessions-custom-container.md
Last updated 06/26/2024 + # Azure Container Apps custom container sessions (preview)
container-registry Monitor Container Registry Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/monitor-container-registry-reference.md
+
+ Title: Monitoring data reference for Azure Container Registry
+description: This article contains important reference material you need when you monitor Azure Container Registry.
Last updated : 06/17/2024+++++++
+# Azure Container Registry monitoring data reference
++
+See [Monitor Azure Container Registry](monitor-container-registry.md) for details on the data you can collect for Azure Container Registry and how to use it.
++
+### Supported metrics for Microsoft.ContainerRegistry/registries
+
+The following table lists the metrics available for the Microsoft.ContainerRegistry/registries resource type.
+++
+> [!NOTE]
+> Because of layer sharing, registry **Storage used** might be less than the sum of storage for individual repositories. When you [delete](container-registry-delete.md) a repository or tag, you recover only the storage used by manifest files and the unique layers referenced.
+++
+- **Geolocation**. The Azure region for a registry or [geo-replica](container-registry-geo-replication.md).
++
+### Supported resource logs for Microsoft.ContainerRegistry/registries
+++
+For a reference of all Azure Monitor Logs and Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
+
+### Container Registry Microsoft.ContainerRegistry/registries
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns). Entries from the Azure Activity log that provide insight into any subscription-level or management group level events that occurred in Azure.
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns). Metric data emitted by Azure services that measure their health and performance.
+- [ContainerRegistryLoginEvents](/azure/azure-monitor/reference/tables/containerregistryloginevents#columns). Registry authentication events and status, including the incoming identity and IP address.
+- [ContainerRegistryRepositoryEvents](/azure/azure-monitor/reference/tables/containerregistryrepositoryevents#columns). Operations on images and other artifacts in registry repositories. The following operations are logged: push, pull, untag, delete (including repository delete), purge tag, and purge manifest.
+
+ Purge events are logged only if a registry [retention policy](container-registry-retention-policy.md) is configured.
++
+- [Microsoft.ContainerRegistry resource provider operations](/azure/role-based-access-control/resource-provider-operations#microsoftcontainerregistry)
+
+The following table lists operations related to Azure Container Registry that can be created in the Activity log. This list isn't exhaustive.
+
+| Operation | Description |
+|:|:|
+| Create or Update Container Registry | Create a container registry or update a registry property |
+| Delete Container Registry | Delete a container registry |
+| List Container Registry Login Credentials | Show credentials for registry's admin account |
+| Import Image | Import an image or other artifact to a registry |
+| Create Role Assignment | Assign an identity a Role-based access control (RBAC) role to access a resource |
+
+## Related content
+
+- See [Monitor Azure Container Registry](monitor-container-registry.md) for a description of monitoring Container Registry.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
container-registry Monitor Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/monitor-container-registry.md
+
+ Title: Monitor Azure Container Registry
+description: Start here to learn how you can use the features of Azure Monitor to analyze and alert on data in Azure Container Registry.
Last updated : 06/17/2024+++++++
+# Monitor Azure Container Registry
++
+This article describes the monitoring data generated by Azure Container Registry and how you can use the features of Azure Monitor to analyze and alert on this data.
+
+## Monitor overview
+
+The **Overview** page in the Azure portal for each registry includes a brief view of recent resource usage and activity, such as push and pull operations. This high-level information is useful, but only a small amount of data is shown there.
++
+For more information about the resource types for Container Registry, see [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md).
+
+## Monitoring data
+
+Azure Container Registry collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
+
+See [Monitoring Azure Container Registry data reference](monitor-service-reference.md) for detailed information on the metrics and logs created by Azure Container Registry.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Container Registry are listed in [Azure Container Registry monitoring data reference](monitor-service-reference.md#resource-logs).
+
+> [!TIP]
+> You can also create registry diagnostic settings by navigating to your registry in the portal. In the menu, select **Diagnostic settings** under **Monitoring**.
+
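As a rough illustration, the following sketch creates a diagnostic setting for a registry programmatically with the `azure-mgmt-monitor` Python package, routing both resource log categories and all platform metrics to a Log Analytics workspace. The resource IDs, the setting name, and the exact model classes used here are assumptions to adapt to your environment, not a prescribed implementation.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    DiagnosticSettingsResource,
    LogSettings,
    MetricSettings,
)

subscription_id = "<subscription-id>"  # placeholder
registry_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.ContainerRegistry/registries/<registry-name>"
)
workspace_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Route both registry log categories and all platform metrics to Log Analytics.
client.diagnostic_settings.create_or_update(
    resource_uri=registry_id,
    name="registry-diagnostics",
    parameters=DiagnosticSettingsResource(
        workspace_id=workspace_id,
        logs=[
            LogSettings(category="ContainerRegistryRepositoryEvents", enabled=True),
            LogSettings(category="ContainerRegistryLoginEvents", enabled=True),
        ],
        metrics=[MetricSettings(category="AllMetrics", enabled=True)],
    ),
)
```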
+The metrics and logs you can collect are discussed in the following sections.
+++
+For a list of available metrics for Container Registry, see [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md#metrics).
+
+## Analyzing metrics
+
+You can analyze metrics for an Azure container registry with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
+
+> [!TIP]
+> You can also go to the metrics explorer by navigating to your registry in the portal. In the menu, select **Metrics** under **Monitoring**.
+
+For a list of the platform metrics collected for Azure Container Registry, see [Monitoring Azure Container Registry data reference metrics](monitor-service-reference.md#metrics).
+
+### Azure CLI
+
+The following Azure CLI commands can be used to get information about the Azure Container Registry metrics.
+
+- [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) - List metric definitions and dimensions
+- [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) - Retrieve metric values
+
+### REST API
+
+You can use the Azure Monitor REST API to get information programmatically about the Azure Container Registry metrics.
+
+- [List metric definitions and dimensions](/rest/api/monitor/metricdefinitions/list)
+- [Retrieve metric values](/rest/api/monitor/metrics/list)
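If you prefer a client library over raw REST calls, the `azure-monitor-query` Python package wraps the same API. A minimal sketch, assuming a placeholder registry resource ID and the **StorageUsed** metric described in the data reference:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder resource ID of the registry to query.
registry_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.ContainerRegistry/registries/<registry-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

# Average storage used over the last day, in one-hour buckets.
result = client.query_resource(
    registry_id,
    metric_names=["StorageUsed"],
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.AVERAGE],
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```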
++
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). The schema for Azure Container Registry resource logs is found in the [Azure Container Registry Data Reference](monitor-service-reference.md#schemas).
+
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of the types of resource logs collected for Azure Container Registry, see [Monitoring Azure Container Registry data reference](monitor-service-reference.md#resource-logs).
+
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Container Reference data reference](monitor-service-reference.md#azure-monitor-logs-tables).
+
+For the available resource log categories, their associated Log Analytics tables, and the log schemas for Container Registry, see [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md#resource-logs).
+++++
+For example, the following query retrieves the most recent 24 hours of data from the **ContainerRegistryRepositoryEvents** table:
+
+```Kusto
+ContainerRegistryRepositoryEvents
+| where TimeGenerated > ago(1d)
+```
+
+The following image shows sample output:
++
+Following are queries that you can use to help you monitor your registry resource.
+
+Error events from the last hour:
+
+```Kusto
+union Event, Syslog // Event table stores Windows event records, Syslog stores Linux records
+| where TimeGenerated > ago(1h)
+| where EventLevelName == "Error" // EventLevelName is used in the Event (Windows) records
+ or SeverityLevel== "err" // SeverityLevel is used in Syslog (Linux) records
+```
+
+100 most recent registry events:
+
+```Kusto
+ContainerRegistryRepositoryEvents
+| union ContainerRegistryLoginEvents
+| top 100 by TimeGenerated
+| project TimeGenerated, LoginServer, OperationName, Identity, Repository, DurationMs, Region , ResultType
+```
+
+Identity of user or object that deleted repository:
+
+```Kusto
+ContainerRegistryRepositoryEvents
+| where OperationName contains "Delete"
+| project LoginServer, OperationName, Repository, Identity, CallerIpAddress
+```
+
+Identity of user or object that deleted tag:
+
+```Kusto
+ContainerRegistryRepositoryEvents
+| where OperationName contains "Untag"
+| project LoginServer, OperationName, Repository, Tag, Identity, CallerIpAddress
+```
+
+Repository-level operation failures:
+
+```kusto
+ContainerRegistryRepositoryEvents
+| where ResultDescription contains "40"
+| project TimeGenerated, OperationName, Repository, Tag, ResultDescription
+```
+
+Registry authentication failures:
+
+```kusto
+ContainerRegistryLoginEvents
+| where ResultDescription != "200"
+| project TimeGenerated, Identity, CallerIpAddress, ResultDescription
+```
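You can also run these queries programmatically. The following is a minimal sketch using the `azure-monitor-query` Python package; the Log Analytics workspace ID is a placeholder for the workspace that your registry's diagnostic settings route logs to:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

workspace_id = "<log-analytics-workspace-id>"  # placeholder

client = LogsQueryClient(DefaultAzureCredential())

# Registry authentication failures over the last day (same query as above).
query = """
ContainerRegistryLoginEvents
| where ResultDescription != "200"
| project TimeGenerated, Identity, CallerIpAddress, ResultDescription
"""

response = client.query_workspace(workspace_id, query, timespan=timedelta(days=1))

# Assumes a fully successful query; partial results expose their data differently.
for table in response.tables:
    for row in table.rows:
        print(row)
```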
++
+### Azure Container Registry alert rules
+
+The following table lists some suggested alert rules for Container Registry. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md).
+
+| Alert type | Condition | Description |
+|:|:|:|
+| metric | Signal: Storage used<br/>Operator: Greater than<br/>Aggregation type: Average<br/>Threshold value: 5 GB| Alerts if the registry storage used exceeds a specified value.|
+
+### Example: Send email alert when registry storage used exceeds a value
+
+1. In the Azure portal, navigate to your registry.
+1. Select **Metrics** under **Monitoring**.
+1. In the metrics explorer, in **Metric**, select **Storage used**.
+1. Select **New alert rule**.
+1. In **Scope**, confirm the registry resource for which you want to create an alert rule.
+1. In **Condition**, select **Add condition**.
+ 1. In **Signal name**, select **Storage used**.
+ 1. In **Chart period**, select **Over the last 24 hours**.
+ 1. In **Alert logic**, in **Threshold value**, select a value such as *5*. In **Unit**, select a value such as *GB*.
+ 1. Accept default values for the remaining settings, and select **Done**.
+1. In **Actions**, select **Add action groups** > **+ Create action group**.
+ 1. Enter details of the action group.
+ 1. On the **Notifications** tab, select **Email/SMS message/Push/Voice** and enter a recipient such as *admin@contoso.com*. Select **Review + create**.
+1. Enter a name and description of the alert rule, and select the severity level.
+1. Select **Create alert rule**.
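The same alert rule can also be defined in code. Here's a rough sketch with the `azure-mgmt-monitor` Python package; the resource IDs, the action group, and the specific model classes are assumptions to verify against the SDK version you use:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertAction,
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

subscription_id = "<subscription-id>"            # placeholder
resource_group = "<resource-group>"              # placeholder
registry_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.ContainerRegistry/registries/<registry-name>"
)
action_group_id = "<action-group-resource-id>"   # placeholder

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Alert when average StorageUsed over a one-hour window exceeds 5 GB.
client.metric_alerts.create_or_update(
    resource_group,
    "registry-storage-used-alert",
    MetricAlertResource(
        location="global",
        description="Registry storage used exceeds 5 GB",
        severity=3,
        enabled=True,
        scopes=[registry_id],
        evaluation_frequency="PT15M",
        window_size="PT1H",
        criteria=MetricAlertSingleResourceMultipleMetricCriteria(
            all_of=[
                MetricCriteria(
                    name="StorageUsedOver5GB",
                    metric_name="StorageUsed",
                    operator="GreaterThan",
                    threshold=5 * 1024**3,  # 5 GB in bytes
                    time_aggregation="Average",
                )
            ]
        ),
        actions=[MetricAlertAction(action_group_id=action_group_id)],
    ),
)
```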
++
+## Related content
+
+- See [Azure Container Registry monitoring data reference](monitor-container-registry-reference.md) for a reference of the metrics, logs, and other important values created for Container Registry.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
container-registry Monitor Service Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/monitor-service-reference.md
- Title: Monitoring Azure Container Registry data reference
-description: Important reference material needed when you monitor your Azure container registry. Provides details about metrics, resource logs, and log schemas.
----- Previously updated : 10/31/2023--
-# Monitoring Azure Container Registry data reference
-
-See [Monitor Azure Container Registry](monitor-service.md) for details on collecting and analyzing monitoring data for Azure Container Registry.
-
-## Metrics
-
-### Container Registry metrics
-
-Resource Provider and Type: [Microsoft.ContainerRegistry/registries](../azure-monitor/essentials/metrics-supported.md#microsoftcontainerregistryregistries)
-
-| Metric | Exportable via Diagnostic Settings? | Unit | Aggregation Type | Description | Dimensions |
-|:-|:--|:--|:|:|:-- |
-| AgentPoolCPUTime | Yes | Seconds | Total | CPU time used by [ACR tasks](container-registry-tasks-overview.md) running on dedicated [agent pools](tasks-agent-pools.md) | None |
-| RunDuration | Yes | Milliseconds | Total | Duration of [ACR tasks](container-registry-tasks-overview.md) runs | None |
-| StorageUsed | No | Bytes | Average | Storage used by the container registry<br/><br/>Sum of storage for unique and shared layers, manifest files, and replica copies in all repositories<sup>1</sup> | Geolocation |
-| SuccessfulPullCount | Yes | Count | Total | Successful pulls of container images and other artifacts from the registry. Total get requests to registry v2 manifest api with 200 response. | None |
-| SuccessfulPushCount | Yes | Count | Total | Successful pushes of container images and other artifacts to the registry. | None |
-| TotalPullCount | Yes | Count | Total | Total pulls of container images and other artifacts from the registry. Total get requests to registry v2 manifest api. | None |
-| TotalPushCount | Yes | Count | Total | Total pushes of container images and other artifacts to the registry | None |
-
-<sup>1</sup>Because of layer sharing, registry storage used may be less than the sum of storage for individual repositories. When you [delete](container-registry-delete.md) a repository or tag, you recover only the storage used by manifest files and the unique layers referenced.
-
-For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-
-## Metric Dimensions
-
-For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
-
-Azure Container Registry has the following dimensions associated with its metrics.
-
-| Dimension Name | Description |
-| - | -- |
-| **Geolocation** | The Azure region for a registry or [geo-replica](container-registry-geo-replication.md). |
--
-## Resource logs
-
-This section lists the types of resource logs you can collect for Azure Container Registry.
-
-For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
-
-### Container Registries
-
-Resource Provider and Type: [Microsoft.ContainerRegistry/registries](../azure-monitor/essentials/resource-logs-categories.md#microsoftcontainerregistryregistries)
-
-| Category | Display Name | Details |
-|:|:-||
-| ContainerRegistryLoginEvents | Login Events | Registry authentication events and status, including the incoming identity and IP address |
-| ContainerRegistryRepositoryEvents | Repository Events | Operations on images and other artifacts in registry repositories<br/><br/> The following operations are logged: push, pull, untag, delete (including repository delete), purge tag, and purge manifest<sup>1</sup> |
-
-<sup>1</sup>Purge events are logged only if a registry [retention policy](container-registry-retention-policy.md) is configured.
-
-## Azure Monitor Logs tables
-
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Container Registry and available for query by Log Analytics.
-
-### Container Registry
-
-| Table | Description |
-|:|:-|
-| [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) | Entries from the Azure Activity log that provide insight into any subscription-level or management group level events that have occurred in Azure. |
-| [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics) | Metric data emitted by Azure services that measure their health and performance. |
-| [ContainerRegistryLoginEvents](/azure/azure-monitor/reference/tables/containerregistryloginevents) | Azure Container Registry Login Auditing Logs |
-| [ContainerRegistryRepositoryEvents](/azure/azure-monitor/reference/tables/containerregistryrepositoryevents) | Azure Container Registry Repository Auditing Logs |
-
-For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
-
-## Activity log
-
-The following table lists operations related to Azure Container Registry that may be created in the [Activity log](../azure-monitor/essentials/activity-log.md). This list is not exhaustive.
-
-| Operation | Description |
-|:|:|
-| Create or Update Container Registry | Create a container registry or update a registry property |
-| Delete Container Registry | Delete a container registry |
-| List Container Registry Login Credentials | Show credentials for registry's admin account |
-| Import Image | Import an image or other artifact to a registry |
-| Create Role Assignment | Assign an identity an RBAC role to access a resource |
--
-## Schemas
-
-The following schemas are in use by Azure Container Registry's resource logs.
-
-| Schema | Description |
-|: |:|
-| [ContainerRegistryLoginEvents](/azure/azure-monitor/reference/tables/ContainerRegistryLoginEvents) | Schema for registry authentication events and status, including the incoming identity and IP address |
-| [ContainerRegistryRepositoryEvents](/azure/azure-monitor/reference/tables/ContainerRegistryRepositoryEvents) | Schema for operations on images and other artifacts in registry repositories |
-## Next steps
--- See [Monitor Azure Container Registry](monitor-service.md) for a description of monitoring an Azure container registry.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/overview.md) for details on monitoring Azure resources.
cost-management-billing Tutorial Acm Opt Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-opt-recommendations.md
After you choose a suitable size, select **Resize** to start the resize action.
Resizing requires an actively running virtual machine to restart. If the virtual machine is in a production environment, we recommend that you run the resize operation after business hours. Scheduling the restart can reduce disruptions caused by momentary unavailability.
+> [!TIP]
+> If you're unsure whether shutting down a VM will cause a problem for others, you can first restrict access to the VM. Consider [Configuring role assignments for the VM](/entr) to assess the VM's usage and determine if limiting access caused problems for others.
+ ## Verify the action When the VM resizing completes successfully, an Azure notification is shown.
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
Depending on your subscription type, you might not be able to delete a subscript
> - The subscription is automatically deleted 90 days after you cancel a subscription. > - You can also contact Microsoft Support to help you remove a subscription. However you must make sure that you don't need the subscription anymore because the process only allows seven days to reactivate the subscription. > - If you have deleted all resources but the Delete your subscription page shows that you still have active resources, you might have active *hidden resources*. You can't delete a subscription if you have active hidden resources. To delete them, navigate to **Subscriptions** > select the subscription > **Resources**. At the top of the page, select **Manage view** and then select **Show hidden types**. Then, delete the resources.
+> - If you have a disabled Access to Azure Active Directory subscription, the subscription is automatically deleted 90 days after you cancel it. No action is needed to delete it manually.
## Reactivate a subscription
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbooks.md
The following properties are supported for QuickBooks linked service:
} ```
+### Handling refresh tokens for the linked service
+
+When you use the QuickBooks Online connector in a linked service, it's important to manage OAuth 2.0 refresh tokens from QuickBooks correctly. The linked service uses a refresh token to obtain new access tokens. However, QuickBooks Online periodically rotates the refresh token and invalidates the previous one. The linked service doesn't automatically update the refresh token in Azure Key Vault, so you need to keep the stored refresh token current to ensure uninterrupted connectivity. Otherwise, you might encounter authentication failures after the refresh token expires.
+
+You can manually update the refresh token in Azure Key Vault based on QuickBooks Online's refresh token expiry policy. Alternatively, you can automate updates with a scheduled task or [Azure Function](/samples/azure/azure-quickstart-templates/functions-keyvault-secret-rotation) that checks for a new refresh token and updates it in Azure Key Vault.
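For example, here's a minimal sketch of such an automated rotation step, assuming the refresh token is stored as a Key Vault secret. The vault URL, secret name, app credentials, and token endpoint shown are illustrative placeholders rather than values defined by the connector:

```python
import base64

import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<your-key-vault>.vault.azure.net"  # placeholder
SECRET_NAME = "quickbooks-refresh-token"                 # placeholder
CLIENT_ID = "<quickbooks-app-client-id>"                 # placeholder
CLIENT_SECRET = "<quickbooks-app-client-secret>"         # placeholder
TOKEN_URL = "https://oauth.platform.intuit.com/oauth2/v1/tokens/bearer"

secrets = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())
current_refresh_token = secrets.get_secret(SECRET_NAME).value

# Exchange the stored refresh token for a new access/refresh token pair.
basic_auth = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
response = requests.post(
    TOKEN_URL,
    headers={"Authorization": f"Basic {basic_auth}", "Accept": "application/json"},
    data={"grant_type": "refresh_token", "refresh_token": current_refresh_token},
)
response.raise_for_status()
new_refresh_token = response.json()["refresh_token"]

# Persist the rotated token so the linked service keeps authenticating.
if new_refresh_token != current_refresh_token:
    secrets.set_secret(SECRET_NAME, new_refresh_token)
```

A job like this could run on a schedule (for example, from an Azure Functions timer trigger) well before the refresh token's expiry window.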
+ ## Dataset properties For a full list of sections and properties available for defining datasets, see the [datasets](concepts-datasets-linked-services.md) article. This section provides a list of properties supported by QuickBooks dataset.
The Copy Activity in the service cannot copy data directly from Quickbooks Deskt
To learn details about the properties, check [Lookup activity](control-flow-lookup-activity.md). - ## Related content For a list of data stores supported as sources and sinks by the copy activity, see [supported data stores](copy-activity-overview.md#supported-data-stores-and-formats).
defender-for-cloud Gain End User Context Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/gain-end-user-context-ai.md
Title: Gain end-user context for AI alerts description: Learn how to enhance AI security by adding user context for alerts with Microsoft Defender for Cloud threat protection. Previously updated : 07/18/2024 Last updated : 07/25/2024 #Customer intent: I want to learn how to enhance the security of my AI workloads by adding end-user context for AI alerts with Microsoft Defender for Cloud threat protection for AI workloads.
The provided schema consists of the `SecurityContext` objects that contains seve
| Field name | Type | Description | Optional | Example | |||-|-||
-| EndUserId | string | Acts as a unique identifier for the end user within the generative AI application. If Microsoft Entra ID authorization is used to authenticate end-users in the generative AI application, this should be a Microsoft Entra ID (previously known as Microsoft Entra ID) user object ID, otherwise this can be a GUID or some other identifier that uniquely identify the user. | Yes | 1234a123-12a3-1234-1ab2-a1b2c34d56e |
-| EndUserIdType | string | Specifies the type of end user identifier. It should be set to Microsoft Entra ID when using Microsoft Entra (previously known as Microsoft Entra ID) user object ID. | Yes, unless EndUserId is passed, in that case this must be set to proper value. | Microsoft Entra ID, Google, Other |
+| EndUserId | string | Acts as a unique identifier for the end user within the generative AI application. If Microsoft Entra ID authorization is used to authenticate end users in the generative AI application, this should be a Microsoft Entra ID (previously known as Azure Active Directory) user object ID; otherwise, this can be a GUID or some other identifier that uniquely identifies the user. | Yes | 1234a123-12a3-1234-1ab2-a1b2c34d56e |
+| EndUserIdType | string | Specifies the type of end user identifier. It should be set to Microsoft Entra ID when using a Microsoft Entra (previously known as Azure Active Directory) user object ID. | Yes, unless EndUserId is passed, in which case this must be set to the proper value. | Microsoft Entra ID, Google, Other |
| EndUserTenantId | string | This property specifies the Microsoft 365 tenant ID the end user belongs to. It's required when the generative AI application is multitenant and end users from different tenants can sign-in. | Yes | 1234a123-12a3-1234-1ab2-a1b2c34d56e | | SourceIP | string | Captures the IP address of the client as seen directly by the server. It represents the most immediate client IP address that made the connection to the server. If the client connects through a proxy or load balancer, SourceIP is the IP address of that proxy or load balancer, not the original client's IP address: <br> - ASP.NET: HttpContext.Connection.RemoteIpAddress <br> - Python: request.remote_addr | Yes | 12.34.567.891, 1234:1:123a:123:1a2b:ab1:ab1c:ab12 | | SourceRequestHeaders | Dictionary<string, string> | Captures a subset of end user's request headers that proxies or load balancers add. Headers like X-Forwarded-For, X-Real-IP, or Forwarded are used by Microsoft Defender for Cloud to get the original client's IP address. User-Agent headers provide context about the client software initiating the API request. <br><br> Recommended header names include: User-Agent, X-Forwarded-For, X-Real-IP, Forwarded, CF-Connecting-IP, True-Client-IP, X-Client-IP, X-Forwarded, Forwarded-For | Yes | - |
dms Dms Tools Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-tools-matrix.md
description: Learn about the services and tools available to migrate databases a
Previously updated : 08/23/2023 Last updated : 07/25/2024
This article provides a matrix of the Microsoft and third-party services and too
The following tables identify the services and tools you can use to plan for data migration and complete its various phases successfully.
-> [!NOTE]
-> In the following tables, items marked with an asterisk (*) represent third-party tools.
+> [!NOTE]
+> In the following tables, items marked with an asterisk (`*`) represent third-party tools.
## Business justification phase
-| Source | Target | Discover /<br/>Inventory | Target and SKU<br/>recommendation | TCO/ROI and<br/>Business case |
+| Source | Target | Discover /<br />Inventory | Target and SKU<br />recommendation | TCO/ROI and<br />Business case |
| | | | | |
-| SQL Server | Azure SQL DB | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/> [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
- SQL Server | Azure SQL MI | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/> [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| SQL Server | Azure SQL VM | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/> [DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| SQL Server | Azure Synapse Analytics | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br/>[Cloudamize*](https://www.cloudamize.com/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| Amazon RDS for SQL Server | Azure SQL DB, MI, VM | | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/> [DMA](/sql/dma/dma-overview) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| Oracle | Azure SQL DB, MI, VM | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[MigVisor*](https://www.migvisor.com/) | |
-| Oracle | Azure Synapse Analytics | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
-| Oracle | Azure DB for PostgreSQL -<br/>Single server | [MAP Toolkit](/previous-versions//bb977556(v=technet.10))<br/>[Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | |
-| MongoDB | Azure Cosmos DB | [Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/) | |
-| Cassandra | Azure Cosmos DB | | | |
-| MySQL | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| MySQL | Azure DB for MySQL | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| Amazon RDS for MySQL | Azure DB for MySQL | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
-| DB2 | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
-| Access | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
-| Sybase - SAP IQ | Azure SQL DB, MI, VM | | | |
-| | | | | |
+| SQL Server | Azure SQL Database | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br />[Cloudamize*](https://cloudamize.com/) | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[DMA](/sql/dma/dma-overview)<br />[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br />[Cloudamize*](https://cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| SQL Server | Azure SQL Managed Instance | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br />[Cloudamize*](https://cloudamize.com/) | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[DMA](/sql/dma/dma-overview)<br />[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br />[Cloudamize*](https://cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| SQL Server | SQL Server on Azure VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br />[Cloudamize*](https://cloudamize.com/) | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[DMA](/sql/dma/dma-overview)<br />[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br />[Cloudamize*](https://cloudamize.com/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| SQL Server | Azure Synapse Analytics | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/)<br />[Cloudamize*](https://cloudamize.com/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| Amazon RDS for SQL Server | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[DMA](/sql/dma/dma-overview) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| Oracle | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[MigVisor*](https://solutionshub.epam.com/solution/migvisor-by-epam) | |
+| Oracle | Azure Synapse Analytics | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Oracle | Azure Database for PostgreSQL -<br />Single server | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | |
+| MongoDB | Azure Cosmos DB | [Cloudamize*](https://cloudamize.com/) | [Cloudamize*](https://cloudamize.com/) | |
+| Cassandra | Azure Cosmos DB | | | |
+| MySQL | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| MySQL | Azure Database for MySQL | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| Amazon RDS for MySQL | Azure Database for MySQL | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| PostgreSQL | Azure Database for PostgreSQL -<br />Single server | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| Amazon RDS for PostgreSQL | Azure Database for PostgreSQL -<br />Single server | | | [TCO Calculator](https://azure.microsoft.com/pricing/tco/calculator/) |
+| DB2 | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Access | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Sybase - SAP ASE | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [Azure Migrate](https://azure.microsoft.com/services/azure-migrate/) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Sybase - SAP IQ | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | | | |
-## Pre-migration phase
+## Premigration phase
-| Source | Target | App Data Access<br/>Layer Assessment | Database<br/>Assessment | Performance<br/>Assessment |
+| Source | Target | App Data Access<br />Layer Assessment | Database<br />Assessment | Performance<br />Assessment |
| | | | | |
-| SQL Server | Azure SQL DB | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL MI | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) | | |
-| Amazon RDS for SQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
-| Oracle | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
-| Oracle | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
-| Oracle | Azure DB for PostgreSQL -<br/>Single server | | [Ora2Pg*](http://ora2pg.darold.net/start.html) | |
-| Oracle | Azure DB for PostgreSQL -<br/>Flexible server | | [Ora2Pg*](http://ora2pg.darold.net/start.html) | |
-| MongoDB | Azure Cosmos DB | | [Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/) |
-| Cassandra | Azure Cosmos DB | | | |
-| MySQL | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | |
-| MySQL | Azure DB for MySQL | | | |
-| Amazon RDS for MySQL | Azure DB for MySQL | | | |
-| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | |
-| PostgreSQL | Azure DB for PostgreSQL -<br/>Flexible server | | | |
-| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | | | |
-| DB2 | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
-| Access | Azure SQL DB, MI, VM | | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
-| Sybase - SAP IQ | Azure SQL DB, MI, VM | | | |
-| | | | | |
+| SQL Server | Azure SQL Database | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br />[DMA](/sql/dma/dma-overview)<br />[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br />[Cloudamize*](https://cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br />[Cloudamize*](https://cloudamize.com/) |
+| SQL Server | Azure SQL Managed Instance | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br />[DMA](/sql/dma/dma-overview)<br />[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br />[Cloudamize*](https://cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br />[Cloudamize*](https://cloudamize.com/) |
+| SQL Server | SQL Server on Azure VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br />[DMA](/sql/dma/dma-overview)<br />[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br />[Cloudamize*](https://cloudamize.com/) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090)<br />[Cloudamize*](https://cloudamize.com/) |
+| SQL Server | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) | | |
+| Amazon RDS for SQL | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [DMA](/sql/dm)<br />[DMA](/sql/dma/dma-overview) | [DEA](https://www.microsoft.com/download/details.aspx?id=54090) |
+| Oracle | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Oracle | Azure Synapse Analytics | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Oracle | Azure Database for PostgreSQL -<br />Single server | | [Ora2Pg*](https://ora2pg.darold.net/start.html) | |
+| Oracle | Azure Database for PostgreSQL -<br />Flexible server | | [Ora2Pg*](https://ora2pg.darold.net/start.html) | |
+| MongoDB | Azure Cosmos DB | | [Cloudamize*](https://cloudamize.com/) | [Cloudamize*](https://cloudamize.com/) |
+| Cassandra | Azure Cosmos DB | | | |
+| MySQL | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/) | |
+| MySQL | Azure Database for MySQL | | | |
+| Amazon RDS for MySQL | Azure Database for MySQL | | | |
+| PostgreSQL | Azure Database for PostgreSQL -<br />Single server | | | |
+| PostgreSQL | Azure Database for PostgreSQL -<br />Flexible server | | | |
+| Amazon RDS for PostgreSQL | Azure Database for PostgreSQL -<br />Single server | | | |
+| DB2 | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Access | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Sybase - SAP ASE | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [DAMT](https://marketplace.visualstudio.com/items?itemName=ms-databasemigration.data-access-migration-toolkit) / [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | |
+| Sybase - SAP IQ | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | | | |
## Migration phase
-| Source | Target | Schema | Data<br/>(Offline) | Data<br/>(Online) |
+| Source | Target | Schema | Data<br />(Offline) | Data<br />(Online) |
| | | | | |
-| SQL Server | Azure SQL DB | [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| SQL Server | Azure SQL MI | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| SQL Server | Azure Synapse Analytics | | | |
-| Amazon RDS for SQL Server| Azure SQL DB | [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension)<br/>[DMA](/sql/dm)<br/>[DMA](/sql/dma/dma-overview)| [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Amazon RDS for SQL | Azure SQL MI |[Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md) | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Amazon RDS for SQL Server | Azure SQL VM | [Azure SQL Migration extension](./migration-using-azure-data-studio.md)<br/>[DMA](/sql/dm)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Oracle | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Oracle | Azure DB for PostgreSQL -<br/>Single server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Oracle | Azure DB for PostgreSQL -<br/>Flexible server | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](http://ora2pg.darold.net/start.html)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | <br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| MongoDB | Azure Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Cloudamize*](https://www.cloudamize.com/)<br/>[Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Cassandra | Azure Cosmos DB | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) |
-| MySQL | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Amazon RDS for MySQL | Azure DB for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/static/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br/>[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| DB2 | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Access | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br/>[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br/>[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
-| Sybase - SAP IQ | Azure SQL DB, MI, VM | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | |
-| | | | | |
+| SQL Server | Azure SQL Database | [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension)<br />[DMA](/sql/dma/dma-overview) | [DMA](/sql/dma/dma-overview)<br />[Cloudamize*](https://cloudamize.com/) | [Cloudamize*](https://cloudamize.com/)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| SQL Server | Azure SQL Managed Instance | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[Cloudamize*](https://cloudamize.com/) | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[Cloudamize*](https://cloudamize.com/) | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[Cloudamize*](https://cloudamize.com/)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| SQL Server | SQL Server on Azure VM | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[DMA](/sql/dma/dma-overview)<br />[Cloudamize*](https://cloudamize.com/)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| SQL Server | Azure Synapse Analytics | | | |
+| Amazon RDS for SQL Server | Azure SQL Database | [SQL Database Projects extension](/azure-data-studio/extensions/sql-database-project-extension)<br />[DMA](/sql/dma/dma-overview) | [DMA](/sql/dma/dma-overview) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for SQL Server | Azure SQL Managed Instance | [Azure SQL Migration extension](migration-using-azure-data-studio.md) | [Azure SQL Migration extension](migration-using-azure-data-studio.md) | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for SQL Server | SQL Server on Azure VM | [Azure SQL Migration extension](migration-using-azure-data-studio.md)<br />[DMA](/sql/dma/dma-overview)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[SharePlex*](https://www.quest.com/products/shareplex/)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[SharePlex*](https://www.quest.com/products/shareplex/)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure Synapse Analytics | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SharePlex*](https://www.quest.com/products/shareplex/)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure Database for PostgreSQL -<br />Single server | [Ora2Pg*](https://ora2pg.darold.net/start.html)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](https://ora2pg.darold.net/start.html)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Oracle | Azure Database for PostgreSQL -<br />Flexible server | [Ora2Pg*](https://ora2pg.darold.net/start.html)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ora2Pg*](https://ora2pg.darold.net/start.html)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| MongoDB | Azure Cosmos DB | [DMS](https://azure.microsoft.com/services/database-migration/)<br />[Cloudamize*](https://cloudamize.com/)<br />[Imanis Data*](https://azuremarketplace.microsoft.com/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br />[Cloudamize*](https://cloudamize.com/)<br />[Imanis Data*](https://azuremarketplace.microsoft.com/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [DMS](https://azure.microsoft.com/services/database-migration/)<br />[Cloudamize*](https://cloudamize.com/)<br />[Imanis Data*](https://azuremarketplace.microsoft.com/marketplace/apps/talena-inc.talena-solution-template?tab=Overview)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Cassandra | Azure Cosmos DB | [Imanis Data*](https://azuremarketplace.microsoft.com/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) | [Imanis Data*](https://azuremarketplace.microsoft.com/marketplace/apps/talena-inc.talena-solution-template?tab=Overview) |
+| MySQL | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| MySQL | Azure Database for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for MySQL | Azure Database for MySQL | [MySQL dump*](https://dev.mysql.com/doc/refman/5.7/en/mysqldump.html) | [DMS](https://azure.microsoft.com/services/database-migration/) | [MyDumper/MyLoader*](https://centminmod.com/mydumper.html) with [data-in replication](../mysql/concepts-data-in-replication.md)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| PostgreSQL | Azure Database for PostgreSQL -<br />Single server | [PG dump*](https://www.postgresql.org/docs/current/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Amazon RDS for PostgreSQL | Azure Database for PostgreSQL -<br />Single server | [PG dump*](https://www.postgresql.org/docs/current/app-pgdump.html) | [PG dump*](https://www.postgresql.org/docs/current/app-pgdump.html) | [DMS](https://azure.microsoft.com/services/database-migration/)<br />[Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| DB2 | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Access | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) | [SSMA](/sql/ssma/sql-server-migration-assistant) |
+| Sybase - SAP ASE | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [SSMA](/sql/ssma/sql-server-migration-assistant)<br />[Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Qlik*](https://www.qlik.com/us/products/qlik-replicate)<br />[Striim*](https://www.striim.com/partners/striim-and-microsoft-azure/) |
+| Sybase - SAP IQ | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | [Ispirer*](https://www.ispirer.com/blog/migration-to-the-microsoft-technology-stack) | |
## Post-migration phase

| Source | Target | Optimize |
| --- | --- | --- |
-| SQL Server | Azure SQL DB | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL MI | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure SQL VM | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br/>[Cloudamize*](https://www.cloudamize.com/) |
-| SQL Server | Azure Synapse Analytics | |
-| RDS SQL | Azure SQL DB, MI, VM | |
-| Oracle | Azure SQL DB, MI, VM | |
-| Oracle | Azure Synapse Analytics | |
-| Oracle | Azure DB for PostgreSQL -<br/>Single server | |
-| MongoDB | Azure Cosmos DB | [Cloudamize*](https://www.cloudamize.com/) |
-| Cassandra | Azure Cosmos DB | |
-| MySQL | Azure SQL DB, MI, VM | |
-| MySQL | Azure DB for MySQL | |
-| Amazon RDS for MySQL | Azure DB for MySQL | |
-| PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | |
-| PostgreSQL | Azure DB for PostgreSQL -<br/>Flexible server | |
-| Amazon RDS for PostgreSQL | Azure DB for PostgreSQL -<br/>Single server | |
-| DB2 | Azure SQL DB, MI, VM | |
-| Access | Azure SQL DB, MI, VM | |
-| Sybase - SAP ASE | Azure SQL DB, MI, VM | |
-| Sybase - SAP IQ | Azure SQL DB, MI, VM | |
-| | | |
+| SQL Server | Azure SQL Database | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br />[Cloudamize*](https://cloudamize.com/) |
+| SQL Server | Azure SQL Managed Instance | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br />[Cloudamize*](https://cloudamize.com/) |
+| SQL Server | SQL Server on Azure VM | [Cloud Atlas*](https://www.unifycloud.com/cloud-migration-tool/)<br />[Cloudamize*](https://cloudamize.com/) |
+| SQL Server | Azure Synapse Analytics | |
+| RDS SQL | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | |
+| Oracle | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | |
+| Oracle | Azure Synapse Analytics | |
+| Oracle | Azure Database for PostgreSQL -<br />Single server | |
+| MongoDB | Azure Cosmos DB | [Cloudamize*](https://cloudamize.com/) |
+| Cassandra | Azure Cosmos DB | |
+| MySQL | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | |
+| MySQL | Azure Database for MySQL | |
+| Amazon RDS for MySQL | Azure Database for MySQL | |
+| PostgreSQL | Azure Database for PostgreSQL -<br />Single server | |
+| PostgreSQL | Azure Database for PostgreSQL -<br />Flexible server | |
+| Amazon RDS for PostgreSQL | Azure Database for PostgreSQL -<br />Single server | |
+| DB2 | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | |
+| Access | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | |
+| Sybase - SAP ASE | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | |
+| Sybase - SAP IQ | Azure SQL Database, Azure SQL Managed Instance, SQL Server on Azure VM | |
-## Next steps
+## Related content
-For an overview of the Azure Database Migration Service, see the article [What is the Azure Database Migration Service](dms-overview.md).
+- [What is the Azure Database Migration Service](dms-overview.md)
expressroute Metro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/metro.md
You can create an ExpressRoute Metro circuit in the Azure portal in any of the t
:::image type="content" source="./media/metro/generate-letter-of-authorization.png" alt-text="Screenshot of generating letter of authorization.":::
+## Migrate from an existing ExpressRoute circuit to a Metro circuit
+
+If you want to migrate from an existing ExpressRoute circuit, create a new ExpressRoute Metro circuit. Then, follow the steps for [circuit migration](circuit-migration.md) to transition from the existing standard ExpressRoute circuit to the ExpressRoute Metro circuit.
+
## Next steps

* Review [ExpressRoute partners and peering locations](expressroute-locations.md) to understand the available ExpressRoute partners and peering locations.
external-attack-surface-management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/overview.md
Microsoft's proprietary discovery technology recursively searches for infrastruc
Defender EASM includes the discovery of the following kinds of assets:

- Domains
-- Hostnames
-- Web Pages
- IP Blocks
-- IP Addresses
+- Hosts
+- Email Contacts
- ASNs
-- SSL Certificates
-- WHOIS Contacts
+- WHOIS Organizations
![Screenshot of Discovery View](media/Overview-2.png)
external-attack-surface-management Understanding Inventory Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/understanding-inventory-assets.md
Microsoft's proprietary discovery technology recursively searches for infrastruc
Defender EASM includes the discovery of the following kinds of assets:

- Domains
-- Hosts
-- Pages
- IP Blocks
-- IP Addresses
-- Autonomous System Numbers (ASNs)
-- SSL Certificates
-- WHOIS Contacts
+- Hosts
+- Email Contacts
+- ASNs
+- WHOIS Organizations
These asset types comprise your attack surface inventory in Defender EASM. This solution discovers externally facing assets that are exposed to the open internet outside of traditional firewall protection; they need to be monitored and maintained to minimize risk and improve an organization's security posture. Microsoft Defender External Attack Surface Management (Defender EASM) actively discovers and monitors these assets, then surfaces key insights that help customers efficiently address any vulnerabilities in their organization.
external-attack-surface-management What Is Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/external-attack-surface-management/what-is-discovery.md
Through this process, Microsoft enables organizations to proactively monitor the
To create a comprehensive mapping of your organization's attack surface, the system first intakes known assets (that is, "seeds") that are recursively scanned to discover additional entities through their connections to a seed. An initial seed may be any of the following kinds of web infrastructure indexed by Microsoft:

-- Organization Names
- Domains
- IP Blocks
- Hosts
firewall Rule Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/rule-processing.md
With Firewall Policy, rules are organized inside Rule Collections and Rule Colle
Rules are processed based on Rule Collection Group priority and Rule Collection priority. Priority is any number from 100 (highest priority) to 65,000 (lowest priority). The highest priority Rule Collection Groups are processed first. Inside a rule collection group, Rule Collections with the highest priority (lowest number) are processed first.
-If a Firewall Policy is inherited from a parent policy, Rule Collection Groups in the parent policy always takes precedence regardless of the priority of a child policy.
+If a Firewall Policy is inherited from a parent policy, Rule Collection Groups in the parent policy always take precedence, regardless of the priority of a child policy.
+
+
> [!NOTE]
> Application rules are always processed after Network rules, which are processed after DNAT rules regardless of Rule collection group or Rule collection priority and policy inheritance.
+So, to summarize:
+
+Parent policy always takes precedence.
+
+1. Rule collection groups are processed in priority order.
+1. Rule collections are processed in priority order.
+1. DNAT rules are processed first, then Network rules, then Application rules.
+
+
Here's an example policy: Assuming BaseRCG1 is a rule collection group with priority 200 that contains the rule collections DNATRC1, DNATRC3, and NetworkRC1.\
kubernetes-fleet Cluster Resource Override https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/cluster-resource-override.md
rules:
`jsonPatchOverrides` apply a JSON patch on the selected resources following [RFC 6902](https://datatracker.ietf.org/doc/html/rfc6902).
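A minimal sketch of an override rule that uses `jsonPatchOverrides` is shown below. The `apiVersion`, the `secret-reader` selector, and the patched path are illustrative assumptions; match them to the full `ClusterResourceOverride` example earlier in this article.

```yaml
apiVersion: placement.kubernetes-fleet.io/v1alpha1
kind: ClusterResourceOverride
metadata:
  name: example-cro
spec:
  clusterResourceSelectors:
    # Assumed selector: the ClusterRole used elsewhere in this article.
    - group: rbac.authorization.k8s.io
      kind: ClusterRole
      version: v1
      name: secret-reader
  policy:
    overrideRules:
      - clusterSelector:
          clusterSelectorTerms:
            - labelSelector:
                matchLabels:
                  env: prod
        # RFC 6902 operation: remove the third verb from the first rule of the ClusterRole.
        jsonPatchOverrides:
          - op: remove
            path: /rules/0/verbs/2
```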
-## Apply the cluster resource override
+## Apply the cluster resource placement
+
+### [Azure CLI](#tab/azure-cli)
1. Create a `ClusterResourcePlacement` resource to specify the placement rules for distributing the cluster resource overrides across the cluster infrastructure, as shown in the following example. Make sure you select the appropriate resource.
rules:
Type: Overridden ... ```-
+
The `ClusterResourcePlacementOverridden` condition indicates whether the resource override was successfully applied to the selected resources in the clusters. Each cluster maintains its own `Applicable Cluster Resource Overrides` list, which contains the cluster resource override snapshot if relevant. Individual status messages for each cluster indicate whether the override rules were successfully applied.
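A quick way to inspect that condition from the hub cluster is to dump the placement object and read its status. This sketch assumes the placement is named `crp`, as in the examples in this article.

```bash
# Print the full object, including status conditions such as ClusterResourcePlacementOverridden.
kubectl get clusterresourceplacement crp -o yaml

# Or show a condensed, human-readable view of events and per-cluster status.
kubectl describe clusterresourceplacement crp
```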
+### [Portal](#tab/azure-portal)
+
+1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**.
+
+1. Select **Create**.
+
+1. Create a `ClusterResourcePlacement` resource to specify the placement rules for distributing the cluster resource overrides across the cluster infrastructure, as shown in the following example. Make sure you select the appropriate resource. Replace the default template with the YAML example below, and select **Add**.
+
+ :::image type="content" source="./media/cluster-resource-override/crp-create-inline.png" lightbox="./media/cluster-resource-override/crp-create.png" alt-text="A screenshot of the Azure portal page for creating a resource placement, showing the YAML template with placeholder values.":::
+
+ ```yaml
+ apiVersion: placement.kubernetes-fleet.io/v1beta1
+ kind: ClusterResourcePlacement
+ metadata:
+ name: crp
+ spec:
+ resourceSelectors:
+ - group: rbac.authorization.k8s.io
+ kind: ClusterRole
+ version: v1
+ name: secret-reader
+ policy:
+ placementType: PickAll
+ affinity:
+ clusterAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ clusterSelectorTerms:
+ - labelSelector:
+ matchLabels:
+ env: prod
+ ```
+
+ This example distributes resources across all clusters labeled with `env: prod`. As the changes are implemented, the corresponding `ClusterResourceOverride` configurations will be applied to the designated clusters, triggered by the selection of matching cluster role resource, `secret-reader`.
++
+1. Verify that the cluster resource placement is created successfully.
+
+ :::image type="content" source="./media/cluster-resource-override/crp-success-inline.png" lightbox="./media/cluster-resource-override/crp-success.png" alt-text="A screenshot of the Azure portal page for cluster resource placements, showing a successfully created cluster resource placement.":::
+
+1. Verify the cluster resource placement applied to the selected resources by selecting the resource from the list and checking the status.
+++

## Next steps

To learn more about Fleet, see the following resources:
kubernetes-fleet Intelligent Resource Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/intelligent-resource-placement.md
Kubernetes Fleet provides resource placement capability that can make scheduling
[!INCLUDE [preview-callout](./includes/preview/preview-callout.md)]
+This article discusses creating cluster resource placements, which you can do with the Azure CLI or in the Azure portal. For more information, see [Propagate resources from a Fleet hub cluster to member clusters](./quickstart-resource-propagation.md).
+
## Prerequisites

* Read the [resource propagation conceptual overview](./concepts-resource-propagation.md) to understand the concepts and terminology used in this quickstart.
spec:
In this example, a cluster would only receive extra weight if it has the label `env=prod`. If it satisfies that label-based constraint, then the cluster is given proportional weight based on the amount of total CPU in that member cluster.
-
## Clean up resources
-If you no longer wish to use the `ClusterResourcePlacement` objects created in this article, you can delete them using the `kubectl delete` command. For example:
+### [Azure CLI](#tab/azure-cli)
+
+If you no longer wish to use the `ClusterResourcePlacement` object, you can delete it using the `kubectl delete` command. The following example deletes the `ClusterResourcePlacement` object named `crp`:
```bash
-kubectl delete clusterresourceplacement <name-of-the-crp-resource>
+kubectl delete clusterresourceplacement crp
```
+### [Portal](#tab/azure-portal)
+
+If you no longer wish to use your cluster resource placement, you can delete it from the Azure portal:
+
+1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**.
+
+1. Select the cluster resource placement objects you want to delete, then select **Delete**.
+
+1. In the **Delete** tab, verify the correct objects are chosen. Once you're ready, select **Confirm delete** and **Delete**.
+++

## Next steps

To learn more about resource propagation, see the following resources:
kubernetes-fleet Quickstart Resource Propagation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/quickstart-resource-propagation.md
The `ClusterResourcePlacement` API object is used to propagate resources from a
For more information, see [Kubernetes resource propagation from hub cluster to member clusters (Preview)](./concepts-resource-propagation.md) and the [upstream Fleet documentation](https://github.com/Azure/fleet/blob/main/docs/concepts/ClusterResourcePlacement/README.md).
+### [Azure CLI](#tab/azure-cli)
+ 1. Create a namespace to place onto the member clusters using the `kubectl create namespace` command. The following example creates a namespace named `my-namespace`: ```bash
For more information, see [Kubernetes resource propagation from hub cluster to m
Normal PlacementRolloutCompleted 103s cluster-resource-placement-controller Resources have been applied to the selected clusters ````
+### [Portal](#tab/azure-portal)
+
+1. Sign in to the Azure portal.
+
+1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**.
+
+1. Select **Create**.
+
+1. Replace the placeholder values with the following YAML, and select **Add**.
+
+ :::image type="content" source="./media/quickstart-resource-propagation/create-crp-inline.png" lightbox="./media/quickstart-resource-propagation/create-crp.png" alt-text="A screenshot of the Azure portal page for creating a resource placement, showing the YAML template with placeholder values.":::
+
+ ```yml
+ apiVersion: placement.kubernetes-fleet.io/v1beta1
+ kind: ClusterResourcePlacement
+ metadata:
+ name: crp
+ spec:
+ resourceSelectors:
+ - group: ""
+ kind: Namespace
+ version: v1
+ name: my-namespace
+ policy:
+ placementType: PickAll
+ ```
+
+
+1. Verify that the cluster resource placement is created successfully.
+
+ :::image type="content" source="./media/quickstart-resource-propagation/crp-success-inline.png" lightbox="./media/quickstart-resource-propagation/crp-success.png" alt-text="A screenshot of the Azure portal page for cluster resource placements, showing a successfully created cluster resource placement.":::
+
+1. To see more details on an individual cluster resource placement, select it from the list.
+
+ :::image type="content" source="./media/quickstart-resource-propagation/crp-details-inline.png" lightbox="./media/quickstart-resource-propagation/crp-details.png" alt-text="A screenshot of the Azure portal overview page for an individual cluster resource placement, showing events and details.":::
+
+1. You can view additional details on the cluster resource placement's snapshots, bindings, works, and scheduling policy snapshots using the individual tabs. For example, select the **Cluster Resources Snapshots** tab.
+
+ :::image type="content" source="./media/quickstart-resource-propagation/crp-snapshot-inline.png" lightbox="./media/quickstart-resource-propagation/crp-snapshot.png" alt-text="A screenshot of the Azure portal page for a cluster resource placement, with the cluster resources snapshots tab selected.":::
+++

## Clean up resources
+### [Azure CLI](#tab/azure-cli)
+ If you no longer wish to use the `ClusterResourcePlacement` object, you can delete it using the `kubectl delete` command. The following example deletes the `ClusterResourcePlacement` object named `crp`: ```bash kubectl delete clusterresourceplacement crp ```
+### [Portal](#tab/azure-portal)
+
+If you no longer wish to use your cluster resource placement, you can delete it from the Azure portal:
+
+1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**.
+
+1. Select the cluster resource placement objects you want to delete, then select **Delete**.
+
+1. In the **Delete** tab, verify the correct objects are chosen. Once you're ready, select **Confirm delete** and **Delete**.
+++

## Next steps

To learn more about resource propagation, see the following resources:
kubernetes-fleet Resource Override https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/resource-override.md
spec:
This example replaces the container image in the `Deployment` with the `nginx:1.20.0` image for clusters with the `env: prod` label and the `nginx:latest` image for clusters with the `env: test` label.
-## Apply the resource override
+## Apply the cluster resource placement
+
+### [Azure CLI](#tab/azure-cli)
1. Create a `ClusterResourcePlacement` resource to specify the placement rules for distributing the resource overrides across the cluster infrastructure, as shown in the following example. Make sure you select the appropriate namespaces.
This example replaces the container image in the `Deployment` with the `nginx:1.
The `ClusterResourcePlacementOverridden` condition indicates whether the resource override was successfully applied to the selected resources. Each cluster maintains its own `Applicable Resource Overrides` list, which contains the resource override snapshot if relevant. Individual status messages for each cluster indicate whether the override rules were successfully applied.
+### [Portal](#tab/azure-portal)
+
+1. On the Azure portal overview page for your Fleet resource, in the **Fleet Resources** section, select **Resource placements**.
+
+1. Select **Create**.
+
+1. Create a `ClusterResourcePlacement` resource to specify the placement rules for distributing the resource overrides across the cluster infrastructure, as shown in the following example. Make sure you select the appropriate namespaces. When you're ready, select **Add**.
+
+ ```yaml
+ apiVersion: placement.kubernetes-fleet.io/v1beta1
+ kind: ClusterResourcePlacement
+ metadata:
+ name: crp-example
+ spec:
+ resourceSelectors:
+ - group: ""
+ kind: Namespace
+ name: test-namespace
+ version: v1
+ policy:
+ placementType: PickAll
+ affinity:
+ clusterAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ clusterSelectorTerms:
+ - labelSelector:
+ matchLabels:
+ env: prod
+ - labelSelector:
+ matchLabels:
+ env: test
+ ```
+
+ This example distributes resources within the `test-namespace` across all clusters labeled with `env:prod` and `env:test`. As the changes are implemented, the corresponding `ResourceOverride` configurations will be applied to the designated resources, triggered by the selection of matching deployment resource, `my-deployment`.
+
+ :::image type="content" source="./media/quickstart-resource-propagation/create-resource-propagation-inline.png" lightbox="./media/quickstart-resource-propagation/create-resource-propagation.png" alt-text="A screenshot of the Azure portal page for creating a resource placement, showing the YAML template with placeholder values.":::
+
+1. Verify that the cluster resource placement is created successfully.
+
+ :::image type="content" source="./media/quickstart-resource-propagation/overview-cluster-resource-inline.png" lightbox="./media/quickstart-resource-propagation/overview-cluster-resource.png" alt-text="A screenshot of the Azure portal page for cluster resource placements, showing a successfully created cluster resource placement.":::
+
+1. Verify the cluster resource placement applied to the selected resources by selecting the resource from the list and checking the status.
+++

## Next steps

To learn more about Fleet, see the following resources:
machine-learning Concept Endpoints Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online.md
For more information, see [Network isolation with managed online endpoints](conc
Monitoring for Azure Machine Learning endpoints is possible via integration with [Azure Monitor](monitor-azure-machine-learning.md#what-is-azure-monitor). This integration allows you to view metrics in charts, configure alerts, query from log tables, use Application Insights to analyze events from user containers, and so on.
-* **Metrics**: Use Azure Monitor to track various endpoint metrics, such as request latency, and drill down to deployment or status level. You can also track deployment-level metrics, such as CPU/GPU utilization and drill down to instance level. Azure Monitor allows you to track these metrics in charts and set up dashboards and alerts for further analysis.
-
-* **Logs**: Send metrics to the Log Analytics Workspace where you can query logs using the Kusto query syntax. You can also send metrics to Storage Account and/or Event Hubs for further processing. In addition, you can use dedicated Log tables for online endpoint related events, traffic, and container logs. Kusto query allows complex analysis joining multiple tables.
-
-* **Application insights**: Curated environments include the integration with Application Insights, and you can enable/disable it when you create an online deployment. Built-in metrics and logs are sent to Application insights, and you can use its built-in features such as Live metrics, Transaction search, Failures, and Performance for further analysis.
-
For more information on monitoring, see [Monitor online endpoints](how-to-monitor-online-endpoints.md).

### Secret injection in online deployments (preview)
machine-learning Concept Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-catalog.md
Model | Managed compute | Serverless API (pay-as-you-go)
--|--|-- Llama family models | Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat <br> Llama-3-8B-Instruct <br> Llama-3-70B-Instruct <br> Llama-3-8B <br> Llama-3-70B | Llama-3-70B-Instruct <br> Llama-3-8B-Instruct <br> Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large (2402) <br> Mistral-large (2407) <br> Mistral-small <br> Mistral-Nemo
-Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual
+Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual <br> Cohere-rerank-3-english <br> Cohere-rerank-3-multilingual
JAIS | Not available | jais-30b-chat Phi3 family models | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct Nixtla | Not available | TimeGEN-1
Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-de
Mistral Large (2402) <br> Mistral Large (2407) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available Mistral Nemo | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Japan | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
+Cohere-rerank-3-english <br> Cohere-rerank-3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
TimeGEN-1 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available jais-30b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available Phi-3-mini-4k-instruct <br> Phi-3-mini-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
machine-learning How To Access Data Interactive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-interactive.md
Previously updated : 09/05/2023 Last updated : 07/24/2024 #Customer intent: As a professional data scientist, I want to know how to build and deploy a model with Azure Machine Learning by using Python in a Jupyter Notebook.
[!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
-A machine learning project typically starts with exploratory data analysis (EDA), data-preprocessing (cleaning, feature engineering), and includes building prototypes of ML models to validate hypotheses. This *prototyping* project phase is highly interactive in nature, and it lends itself to development in a Jupyter notebook, or an IDE with a *Python interactive console*. In this article you'll learn how to:
+A machine learning project typically starts with exploratory data analysis (EDA), data-preprocessing (cleaning, feature engineering), and it includes building ML model prototypes to validate hypotheses. This *prototyping* project phase is highly interactive in nature, and it lends itself to development in a Jupyter notebook, or in an IDE with a *Python interactive console*. In this article, learn how to:
> [!div class="checklist"]
-> * Access data from a Azure Machine Learning Datastores URI as if it were a file system.
-> * Materialize data into Pandas using `mltable` Python library.
-> * Materialize Azure Machine Learning data assets into Pandas using `mltable` Python library.
+> * Access data from an Azure Machine Learning Datastore URI as if it were a file system.
+> * Materialize data into Pandas using the `mltable` Python library.
+> * Materialize Azure Machine Learning data assets into Pandas using the `mltable` Python library.
> * Materialize data through an explicit download with the `azcopy` utility.

## Prerequisites
-* An Azure Machine Learning workspace. For more information, see [Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)](how-to-manage-workspace.md).
-* An Azure Machine Learning Datastore. For more information, see [Create datastores](how-to-datastore.md).
+* An Azure Machine Learning workspace. For more information, visit [Manage Azure Machine Learning workspaces in the portal or with the Python SDK (v2)](how-to-manage-workspace.md).
+* An Azure Machine Learning Datastore. For more information, visit [Create datastores](how-to-datastore.md).
> [!TIP]
-> The guidance in this article describes data access during interactive development. It applies to any host that can run a Python session. This can include your local machine, a cloud VM, a GitHub Codespace, etc. We recommend use of an Azure Machine Learning compute instance - a fully managed and pre-configured cloud workstation. For more information, see [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md).
+> The guidance in this article describes data access during interactive development. It applies to any host that can run a Python session. This can include your local machine, a cloud VM, a GitHub Codespace, etc. We recommend use of an Azure Machine Learning compute instance - a fully managed and pre-configured cloud workstation. For more information, visit [Create an Azure Machine Learning compute instance](how-to-create-compute-instance.md).
> [!IMPORTANT]
-> Ensure you have the latest `azure-fsspec` and `mltable` python libraries installed in your python environment:
+> Ensure you have the latest `azureml-fsspec` and `mltable` Python libraries installed in your Python environment:
> > ```bash > pip install -U azureml-fsspec mltable
path_on_datastore = '<path>'
uri = f'azureml://subscriptions/{subscription}/resourcegroups/{resource_group}/workspaces/{workspace}/datastores/{datastore_name}/paths/{path_on_datastore}'
-These Datastore URIs are a known implementation of the [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html) (`fsspec`): a unified pythonic interface to local, remote and embedded file systems and bytes storage.
-You can pip install the `azureml-fsspec` package and its dependency `azureml-dataprep` package. Then, you can use the Azure Machine Learning Datastore `fsspec` implementation.
+These Datastore URIs are a known implementation of the [Filesystem spec](https://filesystem-spec.readthedocs.io/en/latest/https://docsupdatetracker.net/index.html) (`fsspec`): a unified pythonic interface to local, remote, and embedded file systems and bytes storage. First, pip install the `azureml-fsspec` package and its dependency `azureml-dataprep` package. Then, you can use the Azure Machine Learning Datastore `fsspec` implementation.
-The Azure Machine Learning Datastore `fsspec` implementation automatically handles the credential/identity passthrough that the Azure Machine Learning datastore uses. You can avoid both account key exposure in your scripts, and additional sign-in procedures, on a compute instance.
+The Azure Machine Learning Datastore `fsspec` implementation automatically handles the credential/identity passthrough that the Azure Machine Learning datastore uses. You can avoid both account key exposure in your scripts, and extra sign-in procedures, on a compute instance.
For example, you can directly use Datastore URIs in Pandas. This example shows how to read a CSV file:
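A minimal sketch of that pattern, with placeholder subscription, resource group, workspace, datastore, and path values:

```python
import pandas as pd

# Placeholder values; replace them with your own workspace details.
uri = (
    "azureml://subscriptions/<subid>/resourcegroups/<rgname>/workspaces/<workspace_name>"
    "/datastores/<datastore_name>/paths/<folder>/<filename>.csv"
)

# With azureml-fsspec installed, pandas resolves the azureml:// scheme directly.
df = pd.read_csv(uri)
df.head()
```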
df.head()
``` > [!TIP]
-> Rather than remember the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI with these steps:
+> To avoid remembering the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI with these steps:
> 1. Select **Data** from the left-hand menu, then select the **Datastores** tab. > 1. Select your datastore name, and then **Browse**. > 1. Find the file/folder you want to read into Pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
fs.upload(lpath='data/upload_files/crime-spring.csv', rpath='data/fsspec', recur
fs.upload(lpath='data/upload_folder/', rpath='data/fsspec_folder', recursive=True, **{'overwrite': 'MERGE_WITH_OVERWRITE'}) ``` `lpath` is the local path, and `rpath` is the remote path.
-If the folders you specify in `rpath` do not exist yet, we will create the folders for you.
+If the folders you specify in `rpath` don't yet exist, we create the folders for you.
We support three 'overwrite' modes:
- APPEND: if a file with the same name exists in the destination path, this keeps the original file
- FAIL_ON_FILE_CONFLICT: if a file with the same name exists in the destination path, this throws an error
- MERGE_WITH_OVERWRITE: if a file with the same name exists in the destination path, this overwrites that existing file with the new file
+- APPEND: if a file with the same name exists in the destination path, APPEND keeps the original file
+- FAIL_ON_FILE_CONFLICT: if a file with the same name exists in the destination path, FAIL_ON_FILE_CONFLICT throws an error
+- MERGE_WITH_OVERWRITE: if a file with the same name exists in the destination path, MERGE_WITH_OVERWRITE overwrites that existing file with the new file
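For example, to fail fast instead of keeping or replacing an existing remote file, you can pass `FAIL_ON_FILE_CONFLICT` to the same `upload` call shown above. This is a minimal sketch; the paths are placeholders, and `fs` is assumed to be the `AzureMachineLearningFileSystem` instance created earlier in this article.

```python
# Upload a single local file, raising an error if it already exists on the datastore.
fs.upload(
    lpath='data/upload_files/crime-spring.csv',  # local path
    rpath='data/fsspec',                         # remote folder on the datastore
    recursive=True,
    **{'overwrite': 'FAIL_ON_FILE_CONFLICT'}
)
```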
### Download files via AzureMachineLearningFileSystem ```python
df = pd.read_csv("azureml://subscriptions/<subid>/resourcegroups/<rgname>/worksp
#### Read a folder of CSV files into Pandas
-The Pandas `read_csv()` method doesn't support reading a folder of CSV files. You must glob csv paths, and concatenate them to a data frame with the Pandas `concat()` method. The next code sample shows how to achieve this concatenation with the Azure Machine Learning filesystem:
+The Pandas `read_csv()` method doesn't support reading a folder of CSV files. To handle this, glob the csv paths, and concatenate them to a data frame with the Pandas `concat()` method. The next code sample shows how to achieve this concatenation with the Azure Machine Learning filesystem:
```python import pandas as pd
df.head()
#### Read a folder of parquet files into Pandas As part of an ETL process, Parquet files are typically written to a folder, which can then emit files relevant to the ETL such as progress, commits, etc. This example shows files created from an ETL process (files beginning with `_`) which then produce a parquet file of data.
-In these scenarios, you'll only read the parquet files in the folder, and ignore the ETL process files. This code sample shows how glob patterns can read only parquet files in a folder:
+In these scenarios, you only read the parquet files in the folder, and ignore the ETL process files. This code sample shows how glob patterns can read only parquet files in a folder:
```python import pandas as pd
df.head()
Filesystem spec (`fsspec`) has a range of [known implementations](https://filesystem-spec.readthedocs.io/en/stable/_modules/https://docsupdatetracker.net/index.html), including the Databricks Filesystem (`dbfs`).
-To access data from `dbfs` you need:
+To access data from the `dbfs` resource, you need:
- **Instance name**, in the form of `adb-<some-number>.<two digits>.azuredatabricks.net`. You can find this value in the URL of your Azure Databricks workspace.-- **Personal Access Token (PAT)**; for more information about PAT creation, see [Authentication using Azure Databricks personal access tokens](/azure/databricks/dev-tools/api/latest/authentication)
+- **Personal Access Token (PAT)**; for more information about PAT creation, visit [Authentication using Azure Databricks personal access tokens](/azure/databricks/dev-tools/api/latest/authentication)
-With these values, you must create an environment variable on your compute instance for the PAT token:
+With these values, you must create an environment variable for the PAT token on your compute instance:
```bash export ADB_PAT=<pat_token> ```
-You can then access data in Pandas as shown in this example:
+You can then access data in Pandas, as shown in this example:
```python import os
class CustomImageDataset(Dataset):
return image, label ```
-You can then instantiate the dataset as shown here:
+You can then instantiate the dataset, as shown here:
```python from azureml.fsspec import AzureMachineLearningFileSystem
The `mltable` library supports reading of tabular data from different path types
> [!NOTE] > `mltable` does user credential passthrough for paths on Azure Storage and Azure Machine Learning datastores. If you do not have permission to access the data on the underlying storage, you cannot access the data.
-### Files, folders and globs
+### Files, folders, and globs
`mltable` supports reading from:
df.head()
``` > [!TIP]
-> Instead of remembering the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI with these steps:
+> To avoid remembering the datastore URI format, you can copy-and-paste the datastore URI from the Studio UI with these steps:
> 1. Select **Data** from the left-hand menu, then select the **Datastores** tab. > 1. Select your datastore name, and then **Browse**. > 1. Find the file/folder you want to read into Pandas, and select the ellipsis (**...**) next to it. Select **Copy URI** from the menu. You can select the **Datastore URI** to copy into your notebook/script.
df.head()
## A note on reading and processing large data volumes with Pandas > [!TIP]
-> Pandas is not designed to handle large datasets - Pandas can only process data that can fit into the memory of the compute instance.
+> Pandas is not designed to handle large datasets. Pandas can only process data that can fit into the memory of the compute instance.
> > For large datasets, we recommend use of Azure Machine Learning managed Spark. This provides the [PySpark Pandas API](https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/https://docsupdatetracker.net/index.html).
You can also take subsets of large data with these operations:
## Downloading data using the `azcopy` utility
-Use the `azcopy` utility to download the data to the local SSD of your host (local machine, cloud VM, Azure Machine Learning Compute Instance), into the local filesystem. The `azcopy` utility, which is pre-installed on an Azure Machine Learning compute instance, will handle this. If you **don't** use an Azure Machine Learning compute instance or a Data Science Virtual Machine (DSVM), you may need to install `azcopy`. See [azcopy](../storage/common/storage-ref-azcopy.md) for more information.
+Use the `azcopy` utility to download the data to the local SSD of your host (local machine, cloud VM, Azure Machine Learning Compute Instance, etc.), into the local filesystem. The `azcopy` utility, which is preinstalled on an Azure Machine Learning compute instance, handles the data download. If you **don't** use an Azure Machine Learning compute instance or a Data Science Virtual Machine (DSVM), you might need to install `azcopy`. For more information, visit [azcopy](../storage/common/storage-ref-azcopy.md).
> [!CAUTION] > We don't recommend data downloads into the `/home/azureuser/cloudfiles/code` location on a compute instance. This location is designed to store notebook and code artifacts, **not** data. Reading data from this location will incur significant performance overhead when training. Instead, we recommend data storage in the `home/azureuser`, which is the local SSD of the compute node.
machine-learning How To Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connection.md
Previously updated : 06/19/2023 Last updated : 07/24/2024 # Customer intent: As an experienced data scientist with Python skills, I have data located in external sources outside of Azure. I need to make that data available to the Azure Machine Learning platform, to train my machine learning models.
[!INCLUDE [dev v2](includes/machine-learning-dev-v2.md)]
-In this article, you'll learn how to connect to data sources located outside of Azure, to make that data available to Azure Machine Learning services. Azure connections serve as key vault proxies, and interactions with connections are actually direct interactions with an Azure key vault. Azure Machine Learning connections store username and password data resources securely, as secrets, in a key vault. The key vault RBAC controls access to these data resources. For this data availability, Azure supports connections to these external sources:
+In this article, learn how to connect to data sources located outside of Azure, to make that data available to Azure Machine Learning services. Azure connections serve as key vault proxies, and interactions with connections are direct interactions with an Azure key vault. An Azure Machine Learning connection securely stores username and password data resources, as secrets, in a key vault. The key vault RBAC controls access to these data resources. For this data availability, Azure supports connections to these external sources:
+
- Snowflake DB
- Amazon S3
- Azure SQL DB
In this article, you'll learn how to connect to data sources located outside of
- An Azure Machine Learning workspace. > [!IMPORTANT]
-> An Azure Machine Learning connection securely stores the credentials passed during connection creation in the Workspace Azure Key Vault. A connection references the credentials from the key vault storage location for further use. You won't need to directly deal with the credentials after they are stored in the key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you **avoid** credential storage in a YAML file, because a security breach could lead to a credential leak.
+> An Azure Machine Learning connection securely stores the credentials passed during connection creation in the Workspace Azure Key Vault. A connection references the credentials from the key vault storage location for further use. You don't need to directly deal with the credentials after they are stored in the key vault. You have the option to store the credentials in the YAML file. A CLI command or SDK can override them. We recommend that you **avoid** credential storage in a YAML file, because a security breach could lead to a credential leak.
> [!NOTE]
-> For a successful data import, please verify that you have installed the latest azure-ai-ml package (version 1.5.0 or later) for SDK, and the ml extension (version 2.15.1 or later).
+> For a successful data import, please verify that you installed the latest **azure-ai-ml** package (version 1.5.0 or later) for SDK, and the ml extension (version 2.15.1 or later).
> > If you have an older SDK package or CLI extension, please remove the old one and install the new one with the code shown in the tab section. Follow the instructions for SDK and CLI as shown here:
from azure.ai.ml import MLClient, load_workspace_connection
ml_client = MLClient.from_config() - wps_connection = load_workspace_connection(source="./my_s3_connection.yaml") ml_client.connections.create_or_update(workspace_connection=wps_connection)
ml_client.connections.create_or_update(workspace_connection=wps_connection)
## Non-data connections
-The following connection types can be used to connect to Git, Python feed, Azure Container Registry, and a connection that uses an API key. These connections are not data connections, but are used to connect to external services for use in your code.
+You can use these connection types to connect to:
+
+- Git
+- Python feed
+- Azure Container Registry
+- a service that uses an API key
+
+These connections aren't data connections, but are used to connect to external services for use in your code.
### Git
az ml connection create --file connection.yaml
# [Python SDK](#tab/python)
-The following example creates a Git connection to a GitHub repo. This connection is authenticated with a Personal Access Token (PAT):
+The following example creates a Git connection to a GitHub repo. A Personal Access Token (PAT) authenticates the connection:
```python from azure.ai.ml.entities import WorkspaceConnection
You can't create a Git connection in studio.
# [Azure CLI](#tab/cli)
-Create a connection to a Python feed with one of following YAML file. Be sure to update the appropriate values:
+Create a connection to a Python feed with one of the following YAML files. Be sure to update the appropriate values:
* Connect using a personal access token (PAT):
az ml connection create --file connection.yaml
# [Python SDK](#tab/python)
-The following example creates a Python feed connection. This connection is authenticated with a personal access token (PAT) or a username and password:
+The following example creates a Python feed connection. A Personal Access Token (PAT), or a user name and password, authenticates the connection:
```python from azure.ai.ml.entities import WorkspaceConnection
You can't create a Python feed connection in studio.
# [Azure CLI](#tab/cli)
-Create a connection to an Azure Container Registry with one of following YAML file. Be sure to update the appropriate values:
+Create a connection to an Azure Container Registry with one of the following YAML files. Be sure to update the appropriate values:
* Connect using Microsoft Entra ID authentication:
az ml connection create --file connection.yaml
# [Python SDK](#tab/python)
-The following example creates an Azure Container Registry connection. This connection is authenticated using a managed identity:
+The following example creates an Azure Container Registry connection. A managed identity authenticates this connection:
```python
from azure.ai.ml.entities import WorkspaceConnection
ml_client.connections.create_or_update(workspace_connection=wps_connection)
```
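Because the snippet above is truncated, here's a rough sketch of an Azure Container Registry connection that authenticates with a user-assigned managed identity; the registry URL, identity IDs, and the `container_registry` type string are illustrative assumptions:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import WorkspaceConnection, ManagedIdentityConfiguration

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Hypothetical values: use your registry login server and managed identity IDs.
acr_connection = WorkspaceConnection(
    name="my-acr-connection",
    type="container_registry",
    target="https://contoso.azurecr.io",
    credentials=ManagedIdentityConfiguration(
        client_id="<managed-identity-client-id>",
        resource_id="<managed-identity-resource-id>",
    ),
)

ml_client.connections.create_or_update(workspace_connection=acr_connection)
```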
## Related content
-If you are using a data connection (Snowflake DB, Amazon S3, or Azure SQL DB), see these articles for more information:
+If you use a data connection (Snowflake DB, Amazon S3, or Azure SQL DB), these articles offer more information:
- [Import data assets](how-to-import-data-assets.md)
- [Schedule data import jobs](how-to-schedule-data-import.md)
machine-learning How To Deploy Models Cohere Rerank https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-cohere-rerank.md
+
+ Title: How to deploy Cohere Rerank models as serverless APIs
+
+description: Learn to deploy and use Cohere Rerank models with Azure Machine Learning studio.
++++ Last updated : 07/24/2024+++++
+#This functionality is also available in Azure AI Studio: /azure/ai-studio/how-to/deploy-models-cohere.md
++
+# How to deploy Cohere Rerank models with Azure Machine Learning studio
+
+In this article, you learn about the Cohere Rerank models, how to use Azure Machine Learning studio to deploy them as serverless APIs with pay-as-you-go token-based billing, and how to work with the deployed models.
+
+## Cohere Rerank models
+
+Cohere offers two Rerank models in Azure Machine Learning studio. These models are available in the model catalog for deployment as serverless APIs:
+
+* Cohere Rerank 3 - English
+* Cohere Rerank 3 - Multilingual
++
+You can browse the Cohere family of models in the [Model Catalog](concept-model-catalog.md) by filtering on the Cohere collection.
+
+### Cohere Rerank 3 - English
+
+Cohere Rerank English is a reranking model used for semantic search and retrieval-augmented generation (RAG). Rerank enables you to significantly improve search quality by augmenting traditional keyword-based search systems with a semantic-based reranking system that can contextualize the meaning of a user's query beyond keyword relevance. Cohere's Rerank delivers higher quality results than embedding-based search, lexical search, and even hybrid search, and it requires only adding a single line of code into your application.
+
+Use Rerank as a ranker after initial retrieval; that is, after an initial search system finds the top 100 most relevant documents from a larger corpus of documents.
+
+Rerank supports JSON objects as documents where users can specify, at query time, the fields (keys) to use for semantic search. Some other attributes of Rerank include:
+
+* The model's context window is 4,096 tokens
+* The maximum query length is 2,048 tokens
+
+Rerank English works well for code retrieval, semi-structured data retrieval, and long context.
+
+### Cohere Rerank 3 - Multilingual
+
+Cohere Rerank Multilingual is a reranking model used for semantic search and retrieval-augmented generation (RAG). Rerank Multilingual supports more than 100 languages and can be used to search within a language (for example, to search with a French query on French documents) and across languages (for example, to search with an English query on Chinese documents). Rerank enables you to significantly improve search quality by augmenting traditional keyword-based search systems with a semantic-based reranking system that can contextualize the meaning of a user's query beyond keyword relevance. Cohere's Rerank delivers higher quality results than embedding-based search, lexical search, and even hybrid search, and it requires only adding a single line of code into your application.
+
+Use Rerank as a ranker after initial retrieval; that is, after an initial search system finds the top 100 most relevant documents from a larger corpus of documents.
+
+Rerank supports JSON objects as documents where users can specify, at query time, the fields (keys) to use for semantic search. Some other attributes of Rerank Multilingual include:
+
+* The model's context window is 4,096 tokens
+* The maximum query length is 2,048 tokens
+
+Rerank Multilingual performs well on multilingual benchmarks such as MIRACL.
++
+## Deploy Cohere Rerank models as serverless APIs
+
+Certain models in the model catalog can be deployed as a serverless API with pay-as-you-go billing. This kind of deployment provides a way to consume models as an API without hosting them on your subscription, while keeping the enterprise security and compliance that organizations need. This deployment option doesn't require quota from your subscription.
+
+You can deploy the previously mentioned Cohere models as a service with pay-as-you-go billing. Cohere offers these models through the Microsoft Azure Marketplace and can change or update the terms of use and pricing of these models.
+
+### Prerequisites
+
+- An Azure subscription with a valid payment method. Free or trial Azure subscriptions won't work. If you don't have an Azure subscription, create a [paid Azure account](https://azure.microsoft.com/pricing/purchase-options/pay-as-you-go) to begin.
+- An Azure Machine Learning workspace. If you don't have one, use the steps in the [Quickstart: Create workspace resources](quickstart-create-resources.md) article to create one. The serverless API model deployment offering for Cohere Rerank is only available with workspaces created in these regions:
+
+ * East US
+ * East US 2
+ * North Central US
+ * South Central US
+ * West US
+ * West US 3
+ * Sweden Central
+
+ For a list of regions that are available for each of the models supporting serverless API endpoint deployments, see [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md).
+
+- Azure role-based access control (Azure RBAC) is used to grant access to operations. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the Resource Group.
+
+ For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+
+### Create a new deployment
+
+The following steps demonstrate the deployment of Cohere Rerank 3 - English, but you can use the same steps to deploy Cohere Rerank 3 - Multilingual by replacing the model name.
+
+To create a deployment:
+
+1. Go to [Azure Machine Learning studio](https://ml.azure.com/home).
+1. Select the workspace in which you want to deploy your models.
+1. Choose the model you want to deploy from the [model catalog](https://ml.azure.com/model/catalog).
+
+ Alternatively, you can initiate deployment by going to your workspace and selecting **Endpoints** > **Serverless endpoints** > **Create**.
+
+1. On the model's overview page in the model catalog, select **Deploy**.
+
+1. In the deployment wizard, select the link to **Azure Marketplace Terms** to learn more about the terms of use.
+1. You can also select the **Marketplace offer details** tab to learn about pricing for the selected model.
+1. If this is your first time deploying the model in the workspace, you have to subscribe your workspace for the particular offering of the model. This step requires that your account has the **Azure AI Developer role** permissions on the Resource Group, as listed in the prerequisites. Each workspace has its own subscription to the particular Azure Marketplace offering, which allows you to control and monitor spending. Select **Subscribe and Deploy**. Currently you can have only one deployment for each model within a workspace.
+
+1. Once you subscribe the workspace for the particular Azure Marketplace offering, subsequent deployments of the _same_ offering in the _same_ workspace don't require subscribing again. If this scenario applies to you, there's a **Continue to deploy** option to select.
+
+1. Give the deployment a name. This name becomes part of the deployment API URL. This URL must be unique in each Azure region.
+
+1. Select **Deploy**. Wait until the deployment is finished and you're redirected to the serverless endpoints page.
+1. Select the endpoint to open its Details page.
+1. You can always find the endpoint's details, URL, and access keys by navigating to **Workspace** > **Endpoints** > **Serverless endpoints**.
+1. Take note of the **Target** URL and the **Secret Key**. For more information on using the APIs, see the [reference](#rerank-api-reference-for-cohere-rerank-models-deployed-as-a-serverless-api) section.
+
+To learn about billing for models deployed with pay-as-you-go, see [Cost and quota considerations for Cohere models deployed as a service](#cost-and-quota-considerations-for-models-deployed-as-a-service).
+
+### Consume the Cohere Rerank models as a service
+
+Cohere Rerank models deployed as serverless APIs can be consumed using the Rerank API.
+
+1. In the **workspace**, select **Endpoints** > **Serverless endpoints**.
+1. Find and select the deployment you created.
+1. Copy the **Target** URL and the **Key** token values.
+1. Cohere currently exposes `v1/rerank` for inference with the Rerank 3 - English and Rerank 3 - Multilingual models schema. For more information on using the APIs, see the [reference](#rerank-api-reference-for-cohere-rerank-models-deployed-as-a-serverless-api) section.
+
+## Rerank API reference for Cohere Rerank models deployed as a serverless API
+
+Cohere Rerank 3 - English and Rerank 3 - Multilingual accept the native Cohere Rerank API on `v1/rerank`. This section contains details about the Cohere Rerank API.
+
+#### v1/rerank request
+
+```json
+ POST /v1/rerank HTTP/1.1
+ Host: <DEPLOYMENT_URI>
+ Authorization: Bearer <TOKEN>
+ Content-type: application/json
+```
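As an illustrative sketch (not part of the article), you can send this request from Python with the `requests` library; the endpoint URL, key, and documents are placeholders that you replace with your deployment's Target URL, Key, and data:

```python
import requests

# Placeholders: use the Target URL and Key shown on your serverless endpoint's details page.
endpoint_url = "https://<DEPLOYMENT_URI>/v1/rerank"
api_key = "<TOKEN>"

payload = {
    "query": "What is the capital of the United States?",
    "documents": [
        "Carson City is the capital city of the American state of Nevada.",
        "Washington, D.C. is the capital of the United States.",
    ],
    "top_n": 2,
}

response = requests.post(
    endpoint_url,
    headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()

# Each result includes the index of the document in the request and its relevance score.
for result in response.json()["results"]:
    print(result["index"], result["relevance_score"])
```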
+
+#### v1/rerank request schema
+
+Cohere Rerank 3 - English and Rerank 3 - Multilingual accept the following parameters for a `v1/rerank` API call:
+
+| Property | Type | Default | Description |
+| --- | --- | --- | --- |
+|`query` |`string` |Required |The search query |
+|`documents` |`array` |None |A list of document objects or strings to rerank. |
+|`top_n` |`integer` |Length of `documents` |The number of most relevant documents or indices to return. |
+|`return_documents` |`boolean` |`FALSE` |If `FALSE`, returns results without the doc text - the API returns a list of {`index`, `relevance_score`} where index is inferred from the list passed into the request. </br>If `TRUE`, returns results with the doc text passed in - the API returns an ordered list of {`index`, `text`, `relevance_score`} where index + text refers to the list passed into the request. |
+|`max_chunks_per_doc` |`integer` |None |The maximum number of chunks to produce internally from a document.|
+|`rank_fields` |`array of strings` |None |If a JSON object is provided, you can specify which keys to consider for reranking. The model reranks based on the order of the fields passed in (for example, `rank_fields=['title','author','text']` reranks using the values in `title`, `author`, and `text` in that sequence. If the combined length of title, author, and text exceeds the context length of the model, the chunking won't reconsider earlier fields).<br> If not provided, the model uses the default text field for ranking. |
+
+#### v1/rerank response schema
+
+Response fields are fully documented on [Cohere's Rerank API reference](https://docs.cohere.com/reference/rerank). The response payload is a dictionary with the following fields:
+
+| Key | Type | Description |
+| --- | --- | --- |
+| `id` | `string` |An identifier for the response. |
+| `results` | `array of objects`|An ordered list of ranked documents, where each document is described by an object that includes `index` and `relevance_score` and, optionally, `text`. |
+| `meta` | `array of objects` | An optional meta object containing a list of warning strings. |
+
+<br>
+
+Each object in the `results` array is a dictionary with the following fields:
+
+| Key | Type | Description |
+| --- | --- | --- |
+| `document` | `object` |The document objects or strings that were reranked. |
+| `index` | `integer` |The `index` in the original list of documents to which the ranked document belongs. For example, if the first value in the `results` object has an index value of 3, it means in the list of documents passed in, the document at `index=3` had the highest relevance.|
+| `relevance_score` | `float` |Relevance scores are normalized to be in the range `[0, 1]`. Scores close to one indicate a high relevance to the query, and scores close to zero indicate low relevance. A score of `0.9` _doesn't_ necessarily mean that a document is twice as relevant as another with a score of `0.45`. |
++
+## Examples
+
+#### Request example
+
+```json
+    {
+        "query": "What is the capital of the United States?",
+        "rank_fields": ["Title", "Content"],
+        "documents": [
+            {"Title": "Details about Washington D.C", "Content": "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America."},
+            {"Title": "Facts about Carson City", "Content": "Carson City is the capital city of the American state of Nevada. "},
+            {"Title": "North Dakota", "Content" : "North Dakota is a state in the United States. 672,591 people lived in North Dakota in the year 2010. The capital and seat of government is Bismarck."},
+            {"Title": "Micronesia", "Content" : "Micronesia, officially the Federated States of Micronesia, is an island nation in the Pacific Ocean, northeast of Papua New Guinea. The country is a sovereign state in free association with the United States. The capital city of Federated States of Micronesia is Palikir."}
+        ],
+        "top_n": 4
+    }
+```
+
+#### Response example
+
+```json
+ {
+ "id": "571e6744-3074-457f-8935-08646a3352fb",
+ "results": [
+ {
+ "document": {
+ "Content": "Washington, D.C. (also known as simply Washington or D.C., and officially as the District of Columbia) is the capital of the United States. It is a federal district. The President of the USA and many major national government offices are in the territory. This makes it the political center of the United States of America.",
+ "Title": "Details about Washington D.C"
+ },
+ "index": 0,
+ "relevance_score": 0.98347044
+ },
+ {
+ "document": {
+ "Content": "Carson City is the capital city of the American state of Nevada. ",
+ "Title": "Facts about Carson City"
+ },
+ "index": 1,
+ "relevance_score": 0.07172112
+ },
+ {
+ "document": {
+ "Content": "Micronesia, officially the Federated States of Micronesia, is an island nation in the Pacific Ocean, northeast of Papua New Guinea. The country is a sovereign state in free association with the United States. The capital city of Federated States of Micronesia is Palikir.",
+ "Title": "Micronesia"
+ },
+ "index": 3,
+ "relevance_score": 0.05281402
+ },
+ {
+ "document": {
+ "Content": "North Dakota is a state in the United States. 672,591 people lived in North Dakota in the year 2010. The capital and seat of government is Bismarck.",
+ "Title": "North Dakota"
+ },
+ "index": 2,
+ "relevance_score": 0.03138043
+ }
+ ]
+ }
+```
+
+#### More inference examples
+
+| Package | Sample Notebook |
+| --- | --- |
+|CLI using CURL and Python web requests| [cohere-rerank.ipynb](https://aka.ms/samples/cohere-rerank/webrequests)|
+|LangChain|[langchain.ipynb](https://aka.ms/samples/cohere-rerank/langchain)|
+|Cohere SDK|[cohere-sdk.ipynb](https://aka.ms/samples/cohere-rerank/cohere-python-sdk)|
+
+## Cost and quota considerations for models deployed as a service
+
+Quota is managed per deployment. Each deployment has a rate limit of 200,000 tokens per minute and 1,000 API requests per minute. However, we currently limit one deployment per model per workspace. Contact Microsoft Azure Support if the current rate limits aren't sufficient for your scenarios.
+
+Cohere models deployed as serverless APIs with pay-as-you-go billing are offered by Cohere through the Azure Marketplace and integrated with Azure Machine Learning studio for use. You can find the Azure Marketplace pricing when deploying the model.
+
+Each time a workspace subscribes to a given model offering from Azure Marketplace, a new resource is created to track the costs associated with its consumption. The same resource is used to track costs associated with inference; however, multiple meters are available to track each scenario independently.
+
+For more information on how to track costs, see [Monitor costs for models offered through the Azure Marketplace](../ai-studio/how-to/costs-plan-manage.md#monitor-costs-for-models-offered-through-the-azure-marketplace).
++
+## Related content
+
+- [Model Catalog and Collections](concept-model-catalog.md)
+- [Deploy and score a machine learning model by using an online endpoint](how-to-deploy-online-endpoints.md)
+- [Plan and manage costs for Azure AI Studio](concept-plan-manage-cost.md)
+- [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md)
machine-learning How To Manage Inputs Outputs Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-inputs-outputs-pipeline.md
Last updated 08/27/2023 -+ # Manage inputs and outputs of component and pipeline
machine-learning How To Manage Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models.md
Last updated 06/12/2024 -+ # Work with registered models in Azure Machine Learning
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
Depending on the resource that you select, the metrics that you see will be diff
#### Metrics at endpoint scope -- __Traffic__-
-| Metric ID | Unit | Description | Aggregate Method | Splittable By | Example Metric Alerts |
-| - | | | | | |
-| RequestsPerMinute | Count | The number of requests sent to Endpoint within a minute | Average | Deployment, ModelStatusCode, StatusCode, StatusCodeClass | Alert me when I have <= 0 transactions in the system |
-| RequestLatency | Milliseconds | The complete interval of time taken for a request to be responded | Average | Deployment | Alert me when average latency > 2 sec |
-| RequestLatency_P50 | Milliseconds | The request latency at the 50th percentile aggregated by all request latency values collected over a period of 60 seconds | Average | Deployment | Alert me when average latency > 2 sec |
-| RequestLatency_P90 | Milliseconds | The request latency at the 90th percentile aggregated by all request latency values collected over a period of 60 seconds | Average | Deployment | Alert me when average latency > 2 sec |
-| RequestLatency_P95 | Milliseconds | The request latency at the 95th percentile aggregated by all request latency values collected over a period of 60 seconds | Average | Deployment | Alert me when average latency > 2 sec |
-| RequestLatency_P99 | Milliseconds | The request latency at the 99th percentile aggregated by all request latency values collected over a period of 60 seconds | Average | Deployment | Alert me when average latency > 2 sec |
--- __Network__-
-| Metric ID | Unit | Description | Aggregate Method | Splittable By | Example Metric Alerts |
-| - | | | | | |
-| NetworkBytes | Bytes per second | The bytes per second served for the endpoint | Average | - | - |
-| ConnectionsActive | Count | The total number of concurrent TCP connections active from clients | Average | - | - |
-| NewConnectionsPerSecond | Count | The average number of new TCP connections per second established from clients | Average | - | - |
--- __Model Data Collection__-
-| Metric ID | Unit | Description | Aggregate Method | Splittable By | Example Metric Alerts |
-| - | | | | | |
-| DataCollectionEventsPerMinute | Count | The number of data collection events processed per minute | Average | Deployment, Type | - |
-| DataCollectionErrorsPerMinute | Count | The number of data collection events dropped per minute | Average | Deployment, Type, Reason | - |
-
-For example, you can split along the deployment dimension to compare the request latency of different deployments under an endpoint.
**Bandwidth throttling** Bandwidth will be throttled if the quota limits are exceeded for _managed_ online endpoints. For more information on limits, see the article on [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). To determine if requests are throttled: - Monitor the "Network bytes" metric - The response trailers will have the fields: `ms-azureml-bandwidth-request-delay-ms` and `ms-azureml-bandwidth-response-delay-ms`. The values of the fields are the delays, in milliseconds, of the bandwidth throttling.+ For more information, see [Bandwidth limit issues](how-to-troubleshoot-online-endpoints.md#bandwidth-limit-issues). #### Metrics at deployment scope -- __Saturation__-
-| Metric ID | Unit | Description | Aggregate Method | Splittable By | Example Metric Alerts |
-| - | | | | | |
-| CpuUtilizationPercentage | Percent | How much percentage of CPU was utilized | Minimum, Maximum, Average | InstanceId | Alert me when % Capacity Used > 75% |
-| CpuMemoryUtilizationPercentage | Percent | How much percent of Memory was utilized | Minimum, Maximum, Average | InstanceId | |
-| DiskUtilization | Percent | How much disk space was utilized | Minimum, Maximum, Average | InstanceId, Disk | |
-| GpuUtilizationPercentage | Percent | Percentage of GPU utilization on an instance - Utilization is reported at one minute intervals | Minimum, Maximum, Average | InstanceId | |
-| GpuMemoryUtilizationPercentage | Percent | Percentage of GPU memory utilization on an instance - Utilization is reported at one minute intervals | Minimum, Maximum, Average | InstanceId | |
-| GpuEnergyJoules | Joule | Interval energy in Joules on a GPU node - Energy is reported at one minute intervals | Minimum, Maximum, Average | InstanceId | |
--- __Availability__-
-| Metric ID | Unit | Description | Aggregate Method | Splittable By | Example Metric Alerts |
-| - | | | | | |
-| DeploymentCapacity | Count | The number of instances in the deployment | Minimum, Maximum, Average | InstanceId, State | Alert me when the % Availability of my service drops below 100% |
--- __Traffic__-
-| Metric ID | Unit | Description | Aggregate Method | Splittable By | Example Metric Alerts |
-| - | | | | | |
-| RequestsPerMinute | Count | The number of requests sent to online deployment within a minute | Average | StatusCode | Alert me when I have <= 0 transactions in the system |
-| RequestLatency_P50 | Milliseconds | The average P50 request latency aggregated by all request latency values collected over the selected time period | Average | - | Alert me when average latency > 2 sec |
-| RequestLatency_P90 | Milliseconds | The average P90 request latency aggregated by all request latency values collected over the selected time period | Average | - | Alert me when average latency > 2 sec |
-| RequestLatency_P95 | Milliseconds | The average P95 request latency aggregated by all request latency values collected over the selected time period | Average | - | Alert me when average latency > 2 sec |
-| RequestLatency_P99 | Milliseconds | The average P99 request latency aggregated by all request latency values collected over the selected time period | Average | - | Alert me when average latency > 2 sec |
--- __Model Data Collection__-
-| Metric ID | Unit | Description | Aggregate Method | Splittable By | Example Metric Alerts |
-| - | | | | | |
-| DataCollectionEventsPerMinute | Count | The number of data collection events processed per minute | Average | InstanceId, Type | - |
-| DataCollectionErrorsPerMinute | Count | The number of data collection events dropped per minute | Average | InstanceId, Type, Reason | - |
-
-For instance, you can compare CPU and/or memory utilization between different instances for an online deployment.
### Create dashboards and alerts
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-monitor-analyze-runs.md
To cancel a job in the studio:
## Monitor job status by email notification
-1. In the [Azure portal](https://portal.azure.com), in the left navigation bar, select the **Monitor** tab.
+You can use diagnostic settings to trigger email notifications. To learn how to create diagnostic settings, see [Create diagnostic settings in Azure Monitor](/azure/azure-monitor/essentials/create-diagnostic-settings).
-1. Select **Diagnostic settings**, then choose **+ Add diagnostic setting**.
-
- :::image type="content" source="media/how-to-track-monitor-analyze-runs/diagnostic-setting.png" alt-text="Screenshot of diagnostic settings for email notification.":::
-
-1. Under **Category details**, select **AmlRunStatusChangedEvent**. Under **Destination details**, select **Send to Log Analytics workspace** and specify the **Subscription** and **Log Analytics workspace**.
-
- :::image type="content" source="media/how-to-track-monitor-analyze-runs/log-location.png" alt-text="Screenshot of where to save email notification.":::
-
- > [!NOTE]
- > The **Azure Log Analytics Workspace** is a different type of Azure resource than the **Azure Machine Learning service workspace**. If there are no options in that list, you can [create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
-
-1. In the **Logs** tab, select **New alert rule**.
-
- :::image type="content" source="media/how-to-track-monitor-analyze-runs/new-alert-rule.png" alt-text="Screenshot of button to add new alert rule.":::
-
-1. To learn how to create and manage log alerts using Azure Monitor, see [Create or edit a log search alert rule](../azure-monitor/alerts/alerts-log.md).
+To learn how to create and manage log alerts using Azure Monitor, see [Create or edit a log search alert rule](/azure/azure-monitor/alerts/alerts-create-log-alert-rule).
## Related content
machine-learning How To Train Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-model.md
- sdkv2 - build-2023 - ignite-2023
- - update-code1
+ - update-code2
# Train models with Azure Machine Learning CLI, SDK, and REST API
machine-learning How To Use Batch Pipeline Deployments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-batch-pipeline-deployments.md
- devplatv2 - event-tier1-build-2023 - ignite-2023
- - update-code1
+ - update-code2
# How to deploy pipelines with batch endpoints
machine-learning How To Use Sweep In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-sweep-in-pipeline.md
Last updated 05/26/2022-+ # How to do hyperparameter tuning in pipeline (v2)
machine-learning Monitor Azure Machine Learning Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-azure-machine-learning-reference.md
The metrics categories are **Model**, **Quota**, **Resource**, **Run**, and **Tr
The following table lists the metrics available for the Microsoft.MachineLearningServices/workspaces resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] ### Supported metrics for Microsoft.MachineLearningServices/workspaces/onlineEndpoints The following table lists the metrics available for the Microsoft.MachineLearningServices/workspaces/onlineEndpoints resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] ### Supported metrics for Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments The following table lists the metrics available for the Microsoft.MachineLearningServices/workspaces/onlineEndpoints/deployments resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] [!INCLUDE [horz-monitor-ref-metrics-dimensions-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions-intro.md)] [!INCLUDE [horz-monitor-ref-metrics-dimensions](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions.md)]
The valid values for the RunType dimension are:
[!INCLUDE [horz-monitor-ref-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-resource-logs.md)] ### Supported resource logs for Microsoft.MachineLearningServices/registries ### Supported resource logs for Microsoft.MachineLearningServices/workspaces ### Supported resource logs for Microsoft.MachineLearningServices/workspaces/onlineEndpoints [!INCLUDE [horz-monitor-ref-logs-tables](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-logs-tables.md)] ### Machine Learning
machine-learning How To End To End Llmops With Prompt Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-end-to-end-llmops-with-prompt-flow.md
Last updated 01/02/2024
# LLMOps with prompt flow and GitHub
-Large Language Operations, or **LLMOps**, has become the cornerstone of efficient prompt engineering and LLM-infused application development and deployment. As the demand for LLM-infused applications continues to soar, organizations find themselves in need of a cohesive and streamlined process to manage their end-to-end lifecycle.
+Large Language Model Operations, or **LLMOps**, has become the cornerstone of efficient prompt engineering and LLM-infused application development and deployment. As the demand for LLM-infused applications continues to soar, organizations find themselves in need of a cohesive and streamlined process to manage their end-to-end lifecycle.
Azure Machine Learning allows you to integrate with GitHub to automate the LLM-infused application development lifecycle with prompt flow.
machine-learning Quickstart Create Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/quickstart-create-resources.md
Previously updated : 08/17/2023 Last updated : 07/22/2024 adobe-target: true #Customer intent: As a data scientist, I want to create a workspace so that I can start to use Azure Machine Learning.
The workspace is the top-level resource for your machine learning activities, pr
If you already have a workspace, skip this section and continue to [Create a compute instance](#create-a-compute-instance).
-If you don't yet have a workspace, create one now:
+If you don't yet have a workspace, create one now:
1. Sign in to [Azure Machine Learning studio](https://ml.azure.com) 1. Select **Create workspace**
If you don't yet have a workspace, create one now:
You'll use the *compute instance* to run Jupyter notebooks and Python scripts in the rest of the tutorials. If you don't yet have a compute instance, create one now:
-1. On the left navigation, select **Notebooks**.
-1. Select **Create compute** in the middle of the page.
-
- :::image type="content" source="media/quickstart-create-resources/create-compute.png" alt-text="Screenshot shows create compute in the middle of the screen.":::
+1. Select your workspace.
+1. On the top right, select **New**.
+1. Select **Compute instance** in the list.
- > [!TIP]
- > You'll only see this option if you don't yet have a compute instance in your workspace.
+ :::image type="content" source="media/quickstart-create-resources/create-compute.png" alt-text="Screenshot shows create compute in the New list.":::
-1. Supply a name. Keep all the defaults on the first page.
+1. Supply a name.
1. Keep the default values for the rest of the page.
+1. Select **Review + Create**.
1. Select **Create**. ## Quick tour of the studio
machine-learning Reference Yaml Job Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-job-command.md
Previously updated : 11/28/2022 Last updated : 07/25/2024
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| | - | -- | -- | - | | `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | | | `type` | const | The type of job. | `command` | `command` |
-| `name` | string | Name of the job. Must be unique across all jobs in the workspace. If omitted, Azure Machine Learning will autogenerate a GUID for the name. | | |
-| `display_name` | string | Display name of the job in the studio UI. Can be non-unique within the workspace. If omitted, Azure Machine Learning will autogenerate a human-readable adjective-noun identifier for the display name. | | |
-| `experiment_name` | string | Experiment name to organize the job under. Each job's run record will be organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, Azure Machine Learning will default it to the name of the working directory where the job was created. | | |
+| `name` | string | Name of the job. Must be unique across all jobs in the workspace. If omitted, Azure Machine Learning autogenerates a GUID for the name. | | |
+| `display_name` | string | Display name of the job in the studio UI. Can be nonunique within the workspace. If omitted, Azure Machine Learning autogenerates a human-readable adjective-noun identifier for the display name. | | |
+| `experiment_name` | string | Experiment name to organize the job under. Each job's run record is organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, Azure Machine Learning defaults it to the name of the working directory where the job was created. | | |
| `description` | string | Description of the job. | | | | `tags` | object | Dictionary of tags for the job. | | | | `command` | string | **Required (if not using `component` field).** The command to execute. | | | | `code` | string | Local path to the source code directory to be uploaded and used for the job. | | |
-| `environment` | string or object | **Required (if not using `component` field).** The environment to use for the job. This can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment use the `azureml:<environment_name>:<environment_version>` syntax or `azureml:<environment_name>@latest` (to reference the latest version of an environment). <br><br> To define an environment inline please follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties as they are not supported for inline environments. | | |
+| `environment` | string or object | **Required (if not using `component` field).** The environment to use for the job. Can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment_name>:<environment_version>` syntax or `azureml:<environment_name>@latest` (to reference the latest version of an environment). <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties as they aren't supported for inline environments. | | |
| `environment_variables` | object | Dictionary of environment variable key-value pairs to set on the process where the command is executed. | | | | `distribution` | object | The distribution configuration for distributed training scenarios. One of [MpiConfiguration](#mpiconfiguration), [PyTorchConfiguration](#pytorchconfiguration), or [TensorFlowConfiguration](#tensorflowconfiguration). | | |
-| `compute` | string | Name of the compute target to execute the job on. This can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. **Note:** jobs in pipeline didn't support `local` as `compute` | | `local` |
+| `compute` | string | Name of the compute target to execute the job on. Can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. **Note:** jobs in a pipeline don't support `local` as `compute`. | | `local` |
| `resources.instance_count` | integer | The number of nodes to use for the job. | | `1` |
-| `resources.instance_type` | string | The instance type to use for the job. Applicable for jobs running on Azure Arc-enabled Kubernetes compute (where the compute target specified in the `compute` field is of `type: kubernentes`). If omitted, this will default to the default instance type for the Kubernetes cluster. For more information, see [Create and select Kubernetes instance types](how-to-attach-kubernetes-anywhere.md). | | |
-| `resources.shm_size` | string | The size of the docker container's shared memory block. This should be in the format of `<number><unit>` where number has to be greater than 0 and the unit can be one of `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes). | | `2g` |
-| `limits.timeout` | integer | The maximum time in seconds the job is allowed to run. Once this limit is reached the system will cancel the job. | | |
+| `resources.instance_type` | string | The instance type to use for the job. Applicable for jobs running on Azure Arc-enabled Kubernetes compute (where the compute target specified in the `compute` field is of `type: kubernetes`). If omitted, defaults to the default instance type for the Kubernetes cluster. For more information, see [Create and select Kubernetes instance types](how-to-attach-kubernetes-anywhere.md). | | |
+| `resources.shm_size` | string | The size of the docker container's shared memory block. Should be in the format of `<number><unit>` where number has to be greater than 0 and the unit can be one of `b` (bytes), `k` (kilobytes), `m` (megabytes), or `g` (gigabytes). | | `2g` |
+| `limits.timeout` | integer | The maximum time in seconds the job is allowed to run. When this limit is reached, the system cancels the job. | | |
| `inputs` | object | Dictionary of inputs to the job. The key is a name for the input within the context of the job and the value is the input value. <br><br> Inputs can be referenced in the `command` using the `${{ inputs.<input_name> }}` expression. | | |
-| `inputs.<input_name>` | number, integer, boolean, string or object | One of a literal value (of type number, integer, boolean, or string) or an object containing a [job input data specification](#job-inputs). | | |
+| `inputs.<input_name>` | number, integer, boolean, string, or object | One of a literal value (of type number, integer, boolean, or string) or an object containing a [job input data specification](#job-inputs). | | |
| `outputs` | object | Dictionary of output configurations of the job. The key is a name for the output within the context of the job and the value is the output configuration. <br><br> Outputs can be referenced in the `command` using the `${{ outputs.<output_name> }}` expression. | |
-| `outputs.<output_name>` | object | You can leave the object empty, in which case by default the output will be of type `uri_folder` and Azure Machine Learning will system-generate an output location for the output. File(s) to the output directory will be written via read-write mount. If you want to specify a different mode for the output, provide an object containing the [job output specification](#job-outputs). | |
-| `identity` | object | The identity is used for data accessing. It can be [UserIdentityConfiguration](#useridentityconfiguration), [ManagedIdentityConfiguration](#managedidentityconfiguration) or None. If it's UserIdentityConfiguration the identity of job submitter will be used to access input data and write result to output folder, otherwise, the managed identity of the compute target will be used. | |
+| `outputs.<output_name>` | object | You can leave the object empty, in which case by default the output is of type `uri_folder` and Azure Machine Learning generates an output location for the output. Files to the output directory are written via read-write mount. If you want to specify a different mode for the output, provide an object containing the [job output specification](#job-outputs). | |
+| `identity` | object | The identity used for data access. It can be [UserIdentityConfiguration](#useridentityconfiguration), [ManagedIdentityConfiguration](#managedidentityconfiguration), or None. If UserIdentityConfiguration, the identity of the job submitter is used to access input data and write the result to the output folder. Otherwise, the managed identity of the compute target is used. | |
### Distribution configurations
The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
| Key | Type | Description | Allowed values | Default value |
| --- | --- | --- | --- | --- |
| `type` | string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. | `uri_file`, `uri_folder`, `mlflow_model`, `custom_model` | `uri_folder` |
-| `path` | string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, e.g. `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. <br><br> - An existing registered Azure Machine Learning data asset to use as the input. To reference a registered data asset use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), e.g. `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
-| `mode` | string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. Azure Machine Learning will resolve the input to the mount path. <br><br> For `download` mode the data will be downloaded to the compute target. Azure Machine Learning will resolve the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s) rather than mounting or downloading the data itself, you can use the `direct` mode. This will pass in the URL of the storage location as the job input. Note that in this case you are fully responsible for handling credentials to access the storage. <br><br> The `eval_mount` and `eval_download` modes are unique to MLTable, and either mounts the data as a path or downloads the data to the compute target. <br><br> For more information on modes, see [Access data in a job](how-to-read-write-data-v2.md?tabs=cli#modes) | `ro_mount`, `download`, `direct`, `eval_download`, `eval_mount` | `ro_mount` |
+| `path` | string | The path to the data to use as input. Can be specified in a few ways: <br><br> - A local path to the data source file or folder, for example, `path: ./iris.csv`. The data gets uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. <br><br> - An existing registered Azure Machine Learning data asset to use as the input. To reference a registered data asset, use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), for example, `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. | | |
+| `mode` | string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data is consumed as a mount path. A folder is mounted as a folder and a file is mounted as a file. Azure Machine Learning resolves the input to the mount path. <br><br> For `download` mode, the data is downloaded to the compute target. Azure Machine Learning resolves the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifacts rather than mounting or downloading the data itself, you can use the `direct` mode. This mode passes in the URL of the storage location as the job input. In this case, you're fully responsible for handling credentials to access the storage. <br><br> The `eval_mount` and `eval_download` modes are unique to MLTable, and either mounts the data as a path or downloads the data to the compute target. <br><br> For more information on modes, see [Access data in a job](how-to-read-write-data-v2.md?tabs=cli#modes) | `ro_mount`, `download`, `direct`, `eval_download`, `eval_mount` | `ro_mount` |
### Job outputs

| Key | Type | Description | Allowed values | Default value |
| --- | --- | --- | --- | --- |
-| `type` | string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. | `uri_folder` , `mlflow_model`, `custom_model`| `uri_folder` |
-| `mode` | string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode (`rw_mount`) the output directory will be a mounted directory. For upload mode the file(s) written will get uploaded at the end of the job. | `rw_mount`, `upload` | `rw_mount` |
+| `type` | string | The type of job output. For the default `uri_folder` type, the output corresponds to a folder. | `uri_folder` , `mlflow_model`, `custom_model`| `uri_folder` |
+| `mode` | string | Mode of how output files get delivered to the destination storage. For read-write mount mode (`rw_mount`), the output directory is a mounted directory. For upload mode, the files written get uploaded at the end of the job. | `rw_mount`, `upload` | `rw_mount` |
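As a rough companion to this YAML reference, a comparable command job with the same kinds of inputs and outputs can be authored with the Python SDK; in this sketch the script, data asset, environment, and compute names are hypothetical:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, command, Input, Output

ml_client = MLClient.from_config(credential=DefaultAzureCredential())

# Hypothetical names: replace the script, data asset, environment, and compute with your own.
job = command(
    code="./src",
    command="python train.py --data ${{inputs.training_data}} --out ${{outputs.model_dir}}",
    inputs={
        "training_data": Input(type="uri_folder", path="azureml:cifar10-data@latest", mode="ro_mount"),
    },
    outputs={
        "model_dir": Output(type="uri_folder", mode="rw_mount"),
    },
    environment="my-training-env@latest",
    compute="cpu-cluster",
)

returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.name)
```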
### Identity configurations
The `az ml job` command can be used for managing Azure Machine Learning jobs.
## Examples
-Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/jobs). Several are shown below.
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/jobs). The following sections show some of the examples.
## YAML: hello world
Examples are available in the [examples GitHub repository](https://github.com/Az
## YAML: distributed TensorFlow

## YAML: distributed MPI

## Next steps
network-watcher Nsg Flow Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-portal.md
Previously updated : 07/24/2024 Last updated : 07/25/2024 #CustomerIntent: As an Azure administrator, I want to log my virtual network IP traffic using Network Watcher NSG flow logs so that I can analyze it later.
Create a flow log for your network security group. This NSG flow log is saved in
| Storage accounts | Select the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. | | Retention (days) | Enter a retention time for the logs (this option is only available with [Standard general-purpose v2](../storage/common/storage-account-overview.md?toc=/azure/network-watcher/toc.json#types-of-storage-accounts) storage accounts). Enter *0* if you want to retain the flow logs data in the storage account forever (until you delete it from the storage account). For information about pricing, see [Azure Storage pricing](https://azure.microsoft.com/pricing/details/storage/). |
- :::image type="content" source="./media/nsg-flow-logs-portal/create-nsg-flow-log.png" alt-text="Screenshot of creating an NSG flow log in the Azure portal.":::
+ :::image type="content" source="./media/nsg-flow-logs-portal/create-nsg-flow-log-basics.png" alt-text="Screenshot of creating an NSG flow log in the Azure portal.":::
> [!NOTE] > If the storage account is in a different subscription, the network security group and storage account must be associated with the same Microsoft Entra tenant. The account you use for each subscription must have the [necessary permissions](required-rbac-permissions.md).
To disable traffic analytics for a flow log, take the previous steps 1-3, then u
:::image type="content" source="./media/nsg-flow-logs-portal/disable-traffic-analytics.png" alt-text="Screenshot that shows how to disable traffic analytics for an existing flow log in the Azure portal." lightbox="./media/nsg-flow-logs-portal/disable-traffic-analytics.png":::
-## Change a flow log
+## Change flow log settings
-You can change the properties of a flow log after you create it. For example, you can change the flow log version or disable traffic analytics.
+You can change the settings of a flow log after you create it. For example, you can change the flow log version or disable traffic analytics.
1. In the search box at the top of the portal, enter *network watcher*. Select **Network Watcher** from the search results.
You can change the properties of a flow log after you create it. For example, yo
| Log analytics workspace | Change the Log Analytics workspace that you want to save the flow logs to (if traffic analytics is enabled). | | Traffic logging interval | Change the processing interval of traffic analytics (if traffic analytics is enabled). Available options are: one hour and 10 minutes. The default processing interval is every one hour. For more information, see [Traffic Analytics](traffic-analytics.md). |
+ :::image type="content" source="./media/nsg-flow-logs-portal/change-flow-log.png" alt-text="Screenshot that shows how to edit flow log's settings in the Azure portal where you can change some virtual network flow log settings." lightbox="./media/nsg-flow-logs-portal/change-flow-log.png":::
+
+1. Select **Save** to apply the changes or **Cancel** to exit without saving them.
+ ## List all flow logs You can list all flow logs in a subscription or a group of subscriptions. You can also list all flow logs in a region.
You can view the details of a flow log in a subscription or a group of subscript
:::image type="content" source="./media/nsg-flow-logs-portal/flow-log-settings.png" alt-text="Screenshot of Flow logs settings page in the Azure portal." lightbox="./media/nsg-flow-logs-portal/flow-log-settings.png":::
+1. Select **Cancel** to close the settings page without making changes.
+ ## Download a flow log The storage location of a flow log is defined at creation. To access and download flow logs from your storage account, you can use Azure Storage Explorer. For more information, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
You can temporarily disable an NSG flow log without deleting it. Disabling a flo
:::image type="content" source="./media/nsg-flow-logs-portal/disable-flow-log.png" alt-text="Screenshot shows how to disable a flow log in the Azure portal." lightbox="./media/nsg-flow-logs-portal/disable-flow-log.png"::: > [!NOTE]
-> If traffic analytics is enabled for a flow log, it must disabled before you can disable the flow log. To disable traffic analytics, see [Change a flow log](#change-a-flow-log).
+> If traffic analytics is enabled for a flow log, it must be disabled before you can disable the flow log. To disable traffic analytics, see [Change a flow log](#change-flow-log-settings).
## Delete a flow log
network-watcher Vnet Flow Logs Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/vnet-flow-logs-portal.md
To disable traffic analytics for a flow log, take the previous steps 1-3, then u
:::image type="content" source="./media/vnet-flow-logs-portal/disable-traffic-analytics.png" alt-text="Screenshot that shows how to disable traffic analytics for an existing flow log in the Azure portal." lightbox="./media/vnet-flow-logs-portal/disable-traffic-analytics.png":::
-## Change a flow log
+## Change flow log settings
You can configure and change a flow log after you create it. For example, you can change the storage account or Log Analytics workspace.
You can configure and change a flow log after you create it. For example, you ca
1. In **Flow logs settings**, you can change any of the following settings:
- - **Storage Account**: Change the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. You can also choose a storage account from a different subscription. If the storage account is in a different subscription, the resource that you're logging (virtual network, subnet, or network interface) and the storage account must be associated with the same Microsoft Entra tenant.
- - **Retention (days)**: Change the retention time in the storage account (this option is only available with [Standard general-purpose v2](../storage/common/storage-account-overview.md#types-of-storage-accounts) storage accounts). Enter *0* if you want to retain the flow logs data in the storage account forever (until you manually delete the data from the storage account).
- - **Traffic analytics**: Enable or disable traffic analytics for your flow log. For more information, see [Traffic analytics](traffic-analytics.md).
- - **Traffic analytics processing interval**: Change the processing interval of traffic analytics (if traffic analytics is enabled). Available options are: one hour and 10 minutes. The default processing interval is every one hour. For more information, see [Traffic analytics](traffic-analytics.md).
- - **Log Analytics Workspace**: Change the Log Analytics workspace that you want to save the flow logs to (if traffic analytics is enabled). For more information, see [Log Analytics workspace overview](../azure-monitor/logs/log-analytics-workspace-overview.md). You can also choose a Log Analytics Workspace from a different subscription.
+ | Setting | Value |
+ | - | -- |
+ | **Storage account** | |
+ | Subscription | Change the Azure subscription of the storage account that you want to use. |
+ | Storage account | Change the storage account that you want to save the flow logs to. If you want to create a new storage account, select **Create a new storage account**. |
+ | Retention (days) | Change the retention time in the storage account. Enter *0* if you want to retain the flow logs data in the storage account forever (until you manually delete the data from the storage account). |
+ | **Traffic analytics** | |
+ | Enable traffic analytics | Enable or disable traffic analytics by checking or unchecking the checkbox. |
+ | Subscription | Change the Azure subscription of the Log Analytics workspace that you want to use. |
+ | Log analytics workspace | Change the Log Analytics workspace that you want to save the flow logs to (if traffic analytics is enabled). |
+ | Traffic logging interval | Change the processing interval of traffic analytics (if traffic analytics is enabled). Available options are: one hour and 10 minutes. The default processing interval is every one hour. For more information, see [Traffic Analytics](traffic-analytics.md). |
:::image type="content" source="./media/vnet-flow-logs-portal/change-flow-log.png" alt-text="Screenshot that shows how to edit flow log's settings in the Azure portal where you can change some virtual network flow log settings." lightbox="./media/vnet-flow-logs-portal/change-flow-log.png":::
notification-hubs Monitor Notification Hubs Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/monitor-notification-hubs-reference.md
See [Monitor Notification Hubs](monitor-notification-hubs.md) for details on the
### Supported metrics for Microsoft.NotificationHubs/namespaces/notificationHubs The following table lists the metrics available for the Microsoft.NotificationHubs/namespaces/notificationHubs resource type. [!INCLUDE [horz-monitor-ref-metrics-tableheader](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-tableheader.md)] [!INCLUDE [horz-monitor-ref-metrics-dimensions-intro](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-metrics-dimensions-intro.md)]
The following table lists the metrics available for the Microsoft.NotificationHu
[!INCLUDE [horz-monitor-ref-resource-logs](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-resource-logs.md)] ### Supported resource logs for Microsoft.NotificationHubs/namespaces ### Supported resource logs for Microsoft.NotificationHubs/namespaces/notificationHubs [!INCLUDE [horz-monitor-ref-logs-tables](~/reusable-content/ce-skilling/azure/includes/azure-monitor/horizontals/horz-monitor-ref-logs-tables.md)] <!-- No table(s) at https://learn.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype. -->
notification-hubs Notification Hubs Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/notification-hubs-diagnostic-logs.md
All logs are stored in JavaScript Object Notation (JSON) format in the following
- **AzureActivity**: Displays logs from operations and actions that are conducted against your namespace in the Azure portal or through Azure Resource Manager template deployments. - **AzureDiagnostics**: Displays logs from operations and actions that are conducted against your namespace by using the API, or through management clients on the language SDK.
-Diagnostic log JSON strings include the elements listed in the following table:
-
-| Name | Description |
-| - | - |
-| time | UTC timestamp of the log |
-| resourceId | Relative path to the Azure resource |
-| operationName | Name of the management operation |
-| category | Log category. Valid values: `OperationalLogs` |
-| callerIdentity | Identity of the caller who initiated the management operation |
-| resultType | Status of the management operation. Valid values: `Succeeded` or `Failed` |
-| resultDescription | Description of the management operation |
-| correlationId | Correlation ID of the management operation (if specified) |
-| callerIpAddress | The caller IP address. Empty for calls that originated from the Azure portal |
+For a list of elements that are included in diagnostic log strings, see [Azure Monitor Logs tables](monitor-notification-hubs-reference.md#azure-monitor-logs-tables).
Here's an example of an operational log JSON string:
operational-excellence Overview Relocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview-relocation.md
The following tables provide links to each Azure service relocation document. Th
[Azure API Management](../api-management/api-management-howto-migrate.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ | [Azure Application Gateway and Web Application Firewall](relocation-app-gateway.md)| ✅ | ❌| ❌ | [Azure App Service](../app-service/manage-move-across-regions.md?toc=/azure/operational-excellence/toc.json)|✅ | ❌| ❌ |
-[Azure Backup (Recovery Services vault)](../backup/azure-backup-move-vaults-across-regions.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Azure Backup](relocation-backup.md)| ✅ | ❌| ❌ |
[Azure Batch](../batch/account-move.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ | [Azure Cache for Redis](../azure-cache-for-redis/cache-moving-resources.md?toc=/azure/operational-excellence/toc.json)| ✅ | ❌| ❌ | [Azure Container Registry](../container-registry/manual-regional-move.md)|✅ | ✅| ❌ |
The following tables provide links to each Azure service relocation document. Th
[Azure Event Grid domains](relocation-event-grid-domains.md)| ✅ | ❌| ❌ | [Azure Event Grid custom topics](relocation-event-grid-custom-topics.md)| ✅ | ❌| ❌ | [Azure Event Grid system topics](relocation-event-grid-system-topics.md)| ✅ | ❌| ❌ |
+[Azure Firewall](./relocation-firewall.md)|❌ | ✅| ❌ |
[Azure Functions](../azure-functions/functions-move-across-regions.md?toc=/azure/operational-excellence/toc.json)|✅ |❌ | ❌ | [Azure Logic apps](../logic-apps/move-logic-app-resources.md?toc=/azure/operational-excellence/toc.json)| ✅| ❌ | ❌ | [Azure Monitor - Log Analytics](./relocation-log-analytics.md)| ✅| ❌ | ❌ |
operational-excellence Relocation Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-backup.md
+
+ Title: Relocate Azure Backup to another region
+description: This article offers guidance on relocating Azure Backup to another region.
+++ Last updated : 06/13/2024++
+# Customer intent: As an administrator, I want to relocate Azure Backup to another region.
++
+<!-- remove ../backup/azure-backup-move-vaults-across-regions.md?toc=/azure/operational-excellence/toc.json -->
+
+# Relocate Azure Backup to another region
+
+This article covers relocation guidance for [Azure Backup](../backup/backup-overview.md) across regions.
+
+Azure Backup doesn't support the relocation of backup data from one Recovery Services vault to another. In order to continue to protect your resources, you must register and back them up to a Recovery Services vault in the new region.
+
+After you relocate your resources to the new region, you can choose to either keep or delete the backup data in the Recovery Services vaults in the old region.
+
+>[!NOTE]
+>If you choose to keep the backup data in the old region, you continue to incur backup charges.
++
+## Prerequisites
+
+- Copy the internal resources or settings of the source Recovery Services vault:
+  - Network firewall configuration
+  - Alert notifications
+  - Workbooks, if configured
+  - Diagnostic settings
+- List all resources that depend on the Recovery Services vault. The most common dependencies are:
+ - Azure Virtual Machine (VM)
+ - Public IP address
+ - Azure Virtual Network
+ - Azure Recovery Service Vault
+- Whether the VM is moved with the vault or not, you can always restore the VM from the retained backup history in the vault.
+- Copy the backup VM configuration metadata to validate once the relocation is complete.
+- Confirm that all services and features in use by the source Recovery Services vault are supported in the target region.
++
+## Prepare
+
+Azure Backup currently doesn't support the movement of backup data from one Recovery Services vault to another across regions.
+
+Instead, you must redeploy the Recovery Service vault and reconfigure the backup for resources to a Recovery Service vault in the new region.
++
+**To prepare for redeployment and configuration:**
+
+1. Export a Resource Manager template. This template contains settings that describe your Recovery Vault.
+
+ 1. Sign in to the [Azure portal](https://portal.azure.com).
+ 2. Select **All resources** and then select your Recovery Vault resource.
+ 3. Select **Export template**.
+ 4. Choose **Download** in the **Export template** page.
+ 5. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice.
+
+    This zip file contains the .json files that include the template and scripts to deploy the template. (A PowerShell sketch of this export follows this procedure.)
+
+1. Validate all the associated resource details in the downloaded template, such as private endpoint, backup policy, and security settings.
+
+1. Update the parameters of the Recovery Services vault by changing the value properties under `parameters`, such as the vault name, replication type, SKU, and target location.
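+
+As an alternative to the portal export, a minimal Azure PowerShell sketch, assuming hypothetical resource names:
+
+```azurepowershell
+# Export the Recovery Services vault definition as an ARM template (template.json).
+$vault = Get-AzResource `
+    -ResourceGroupName "rg-source" `
+    -ResourceName "rsv-source" `
+    -ResourceType "Microsoft.RecoveryServices/vaults"
+
+Export-AzResourceGroup `
+    -ResourceGroupName "rg-source" `
+    -Resource $vault.ResourceId
+```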
++
+## Redeploy
+
+[Create and reconfigure the Recovery Service vault](/azure/backup/backup-create-recovery-services-vault) in the target region.
+
+Make sure to reconfigure all associated settings that were captured from the source Recovery service vault:
+
+- (Optional) Private endpoint - Follow the procedure to relocate the [virtual network](/technical-delivery-playbook/azure-services/networking/virtual-network/) and then re-create the private endpoint.
+- Network firewall reconfiguration
+- Alert notifications
+- Workbooks, if configured
+- Diagnostic settings reconfiguration
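+
+To illustrate the vault creation, a minimal sketch with Azure PowerShell, assuming the `Az.RecoveryServices` module and hypothetical names:
+
+```azurepowershell
+# Create the new Recovery Services vault in the target region.
+New-AzRecoveryServicesVault -ResourceGroupName "rg-target" -Name "rsv-target" -Location "centralus"
+
+# Optionally, set the backup storage redundancy before protecting any items.
+$vault = Get-AzRecoveryServicesVault -ResourceGroupName "rg-target" -Name "rsv-target"
+Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy GeoRedundant
+```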
+
+## Backup resources
+
+In order to continue to protect your resources, you must register and back them up to a Recovery Services vault in the new region. This section shows you how to back up the following:
+
+- [Azure Virtual Machines](#back-up-azure-virtual-machine)
+- [Azure File Share](#back-up-azure-file-share)
+- [SQL Server/SAP HANA in Azure VM](#back-up-sql-serversap-hana-in-azure-vm)
+- [On-premises resources](#back-up-services-for-on-premises-resources)
+
+### Back up Azure Virtual Machine
+
+When an Azure Virtual Machine (VM) protected by a Recovery Services vault is moved from one region to another, it can no longer be backed up to the older vault. The backups in the old vault may start failing with the errors **BCMV2VMNotFound** or [**ResourceNotFound**](../backup/backup-azure-vms-troubleshoot.md#320001-resourcenotfoundcould-not-perform-the-operation-as-vm-no-longer-exists--400094-bcmv2vmnotfoundthe-virtual-machine-doesnt-exist--an-azure-virtual-machine-wasnt-found).
+
+You can also choose to write a customized script for bulk VM protection. Each protected item corresponds to the following REST API endpoint:
+
+```http
+
+https://management.azure.com/Subscriptions/{subscriptionId}/resourceGroups/{vaultresourceGroupName}/providers/Microsoft.RecoveryServices/vaults/{vaultName}/backupFabrics/{fabricName}/protectionContainers/{containerName}/protectedItems/{protectedItemName}?api-version=2019-05-13
+
+```
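+
+For illustration, a minimal Azure PowerShell sketch of re-enabling protection for several relocated VMs in the new region's vault, assuming the `Az.RecoveryServices` module and hypothetical resource names:
+
+```azurepowershell
+# Enable backup for a set of VMs in the target-region Recovery Services vault.
+$vault  = Get-AzRecoveryServicesVault -ResourceGroupName "rg-target" -Name "rsv-target"
+$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy" -VaultId $vault.ID
+
+$vms = @(
+    @{ Name = "vm-app-01"; ResourceGroupName = "rg-app" },
+    @{ Name = "vm-app-02"; ResourceGroupName = "rg-app" }
+)
+
+foreach ($vm in $vms) {
+    # Registers the VM with the target vault and applies the backup policy.
+    Enable-AzRecoveryServicesBackupProtection -Policy $policy `
+        -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -VaultId $vault.ID
+}
+```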
+
+1. Prepare Azure Virtual Machines (VMs) for relocation:
+
+ 1. See the [prerequisites associated with VM relocation](../resource-mover/tutorial-move-region-virtual-machines.md#prerequisites) and ensure that the VM is eligible for relocation.
+    1. [Select the VM on the **Backup Items** tab](../backup/backup-azure-delete-vault.md#delete-protected-items-in-the-cloud) of the existing vault's dashboard and select **Stop protection** followed by retain/delete data as per your requirement. When the backup data for a VM is stopped with retain data, the recovery points remain forever and don't adhere to any policy.
+ >[!Note]
+ >Retaining data in the older vault will incur backup charges. If you no longer wish to retain data to avoid billing, you need to delete the retained backup data using the [Delete data option](../backup/backup-azure-manage-vms.md#delete-backup-data).
+    1. Ensure that the VMs are turned on and that all VM disks that need to be available in the destination region are attached and initialized in the VMs.
+ 1. Ensure that VMs have the latest trusted root certificates, and an updated certificate revocation list (CRL). To do so:
+ - On Windows VMs, install the latest Windows updates.
+ - On Linux VMs, refer to distributor guidance to ensure that machines have the latest certificates and CRL.
+ 1. Allow outbound connectivity from VMs:
+ - If you're using a URL-based firewall proxy to control outbound connectivity, allow access to [these URLs](../resource-mover/support-matrix-move-region-azure-vm.md#url-access).
+ - If you're using network security group (NSG) rules to control outbound connectivity, create [these service tag rules](../resource-mover/support-matrix-move-region-azure-vm.md#nsg-rules).
++
+1. Redeploy Azure VMs by using [Azure Resource Mover](../resource-mover/tutorial-move-region-virtual-machines.md) to relocate your VM to the new region.
++
+### Back up Azure File Share
+
+1. [Back up Azure file shares with Azure CLI](../backup/backup-afs-cli.md).
+1. Satisfy the [prerequisites to relocate the Storage Account](../storage/common/storage-account-move.md?tabs=azure-portal#prerequisites).
+1. Export and modify a Resource Move template. For more information, see [Prepare Storage Account for region relocation](../storage/common/storage-account-move.md?tabs=azure-portal#prepare).
+1. [Relocate the Azure Storage account to another region](../storage/common/storage-account-move.md).
+1. When Azure File Share is copied across regions, its associated snapshots don't relocate along with it. To relocate the snapshot data to the new region, you need to relocate the individual files and directories of the snapshots to the storage account in the new region by using [AzCopy](../storage/common/storage-use-azcopy-files.md#copy-all-file-shares-directories-and-files-to-another-storage-account). (An AzCopy sketch follows this procedure.)
+1. Choose whether you want to retain or delete the snapshots (and the corresponding recovery points) of the original Azure File Share by selecting your file share on the [Backup Items tab](../backup/backup-azure-delete-vault.md#delete-protected-items-in-the-cloud) of the original vault's dashboard. When the backup data for Azure File Share is stopped with retain data, the recovery points remain forever and don't adhere to any policy.
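+
+For illustration, a minimal sketch of copying one snapshot's content with AzCopy (run from PowerShell), assuming hypothetical account, share, and SAS token values and an AzCopy version that supports share snapshot source URLs:
+
+```azurepowershell
+# Copy the directories and files of a single share snapshot into the target-region storage account.
+azcopy copy "https://srcaccount.file.core.windows.net/myshare?sharesnapshot=2024-06-01T00:00:00.0000000Z&<SAS>" `
+            "https://dstaccount.file.core.windows.net/myshare/restored-2024-06-01?<SAS>" `
+            --recursive
+```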
++
+>[!NOTE]
+>While configuring backup for the file share, if the Recovery Services vault isn't available, check whether the storage account is already associated with another Recovery Services vault.
+
+### Back up SQL Server/SAP HANA in Azure VM
+
+When you relocate a VM that runs SQL or SAP HANA servers, you will no longer be able to back up the SQL and SAP HANA databases in the vault of the earlier region.
+
+**To protect the SQL and SAP HANA servers that are running in the new region:**
+
+1. Before you relocate SQL Server/SAP HANA running in a VM to a new region, ensure the following prerequisites are met:
+
+    1. See the [prerequisites associated with VM relocation](../resource-mover/tutorial-move-region-virtual-machines.md#prerequisites) and ensure that the VM is eligible for relocation.
+    1. Select the VM on the [Backup Items tab](../backup/backup-azure-delete-vault.md#delete-protected-items-in-the-cloud) of the existing vault's dashboard and select _the databases_ for which backup needs to be stopped. Select **Stop protection** followed by retain/delete data as per your requirement. When the backup data is stopped with retain data, the recovery points remain forever and don't adhere to any policy.
+ >[!Note]
+ >Retaining data in the older vault will incur backup charges. If you no longer wish to retain data to avoid billing, you need to delete the retained backup data using [Delete data option](../backup/backup-azure-manage-vms.md#delete-backup-data).
+    1. Ensure that the VMs to be moved are turned on and that all VM disks that need to be available in the destination region are attached and initialized in the VMs.
+ 1. Ensure that VMs have the latest trusted root certificates, and an updated certificate revocation list (CRL). To do so:
+ - On Windows VMs, install the latest Windows updates.
+ - On Linux VMs, refer to the distributor guidance and ensure that machines have the latest certificates and CRL.
+ 1. Allow outbound connectivity from VMs:
+ - If you're using a URL-based firewall proxy to control outbound connectivity, allow access to [these URLs](../resource-mover/support-matrix-move-region-azure-vm.md#url-access).
+ - If you're using network security group (NSG) rules to control outbound connectivity, create [these service tag rules](../resource-mover/support-matrix-move-region-azure-vm.md#nsg-rules).
+
+1. Relocate your VM to the new region using [Azure Resource Mover](../resource-mover/tutorial-move-region-virtual-machines.md).
+++
+### Back up services for on-premises resources
+
+1. To back up files, folders, and system state for VMs (Hyper-V & VMware) and other on-premises workloads, see [About the Microsoft Azure Recovery Services (MARS) agent](../backup/backup-azure-about-mars.md).
+1. Download vault credentials to register the server in the vault.
+ :::image type="content" source="media\relocation\backup\mars-agent-credential-download.png" alt-text="Screenshot showing how to download vault credentials to register the server in the vault.":::
+1. Reconfigure the backup agent on the on-premises virtual machine.
+    :::image type="content" source="media\relocation\backup\mars-register-to-target-rsv.png" alt-text="Screenshot showing how to reconfigure an on-premises virtual machine.":::
++
operational-excellence Relocation Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-firewall.md
+
+ Title: Relocation guidance for Azure Firewall
+description: Learn how to relocate Azure Firewall to a new region
+++ Last updated : 07/23/2024+++
+ - subject-relocation
+++
+# Relocate Azure Firewall to another region
+
+This article shows you how to relocate an Azure Firewall that protects an Azure Virtual Network.
++
+## Prerequisites
+
+- We highly recommend that you use the Premium SKU. If you're on the Standard SKU, consider [migrating from an existing Standard SKU Azure Firewall to Premium SKU](/azure/firewall-manager/migrate-to-policy) before you begin relocation.
+
+- The following information must be collected in order to properly plan and execute an Azure Firewall relocation:
+
+ - **Deployment model.** *Classic Firewall Rules* or *Firewall policy*.
+ - **Firewall policy name.** (If *Firewall policy* deployment model is used).
+ - **Diagnostic setting at the firewall instance level.** (If Log Analytics workspace is used).
+    - **TLS (Transport Layer Security) Inspection configuration.** (If Azure Key Vault, a certificate, and a managed identity are used.)
+ - **Public IP control.** Assess that any external identity relying on Azure Firewall public IP remains fixed and trusted.
++
+- Azure Firewall Standard and Premium tiers have the following dependencies that you may need to deploy in the target region:
+
+ - [Azure Virtual Network](./relocation-virtual-network.md)
+ - (If used) [Log Analytics Workspace](./relocation-log-analytics.md)
+
+
+- If you're using the TLS Inspection feature of Azure Firewall Premium tier, the following dependencies also need to be deployed in the target region:
+
+ - [Azure Key Vault](./relocation-key-vault.md)
+ - [Azure Managed Identity](./relocation-managed-identity.md)
++
+## Downtime
+
+To understand the possible downtimes involved, see [Cloud Adoption Framework for Azure: Select a relocation method](/azure/cloud-adoption-framework/relocate/select#select-a-relocation-method).
+
+
+## Prepare
+
+To prepare for relocation, you need to first export and modify the template from the source region. To view a sample ARM template for Azure Firewall, see [review the template](../firewall-manager/quick-firewall-policy.md#review-the-template).
+
+### Export template
++
+# [Azure portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+2. Select **All resources** and then select your Azure Firewall resource.
+3. On the **Azure Firewall** page, select **Export template** under **Automation** in the left menu.
+4. Choose **Download** in the **Export template** page.
+
+5. Locate the .zip file that you downloaded from the portal, and unzip that file to a folder of your choice.
+
+ This zip file contains the .json files that include the template and scripts to deploy the template.
++
+# [PowerShell](#tab/azure-powershell)
+
+1. Sign in to your Azure subscription with the `Connect-AzAccount` command and follow the on-screen directions:
+
+```azurepowershell
+
+Connect-AzAccount
+```
+
+2. If your identity is associated with more than one subscription, set your active subscription to the subscription of the Azure Firewall resource that you want to move.
+
+```azurepowershell
+
+$context = Get-AzSubscription -SubscriptionId <subscription-id>
+Set-AzContext $context
+
+```
+
+3. Export the template of your source Azure Firewall resource by running the following commands:
+
+```azurepowershell
+
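+# For Azure Firewall, the resource type is Microsoft.Network/azureFirewalls.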
+$resource = Get-AzResource `
+ -ResourceGroupName <resource-group-name> `
+ -ResourceName <resource-name> `
+ -ResourceType <resource-type>
+
+Export-AzResourceGroup `
+ -ResourceGroupName <resource-group-name> `
+ -Resource $resource.ResourceId
+
+```
+
+4. Locate the `template.json` in your current directory.
++++
+### Modify template
+
+In this section, you learn how to modify the template that you generated in the previous section.
+
+If you're running classic firewall rules without Firewall policy, migrate to Firewall policy before proceeding with the steps in this section. To learn how to migrate from classic firewall rules to Firewall policy, see [Migrate Azure Firewall configuration to Azure Firewall policy using PowerShell](/azure/firewall-manager/migrate-to-policy).
++
+# [Azure portal](#tab/azure-portal)
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. If you're using the Premium SKU with TLS Inspection enabled:
+ 1. [Relocate the key vault](./relocation-key-vault.md) that's used for TLS inspection into the new target region. Then, follow [the procedures](../application-gateway/key-vault-certs.md) to move certificates or generate new certificates for TLS inspection into the new key vault in the target region.
+ 1. [Relocate managed identity](./relocation-managed-identity.md) into the new target region. Reassign the corresponding roles for the key vault in the target region and subscription.
+
+1. In the Azure portal, select **Create a resource**.
+
+1. In **Search the Marketplace**, type `template deployment`, and then press **Enter**.
+
+1. Select **Template deployment** and then select **Create**.
+
+1. Select **Build your own template in the editor**.
+
+1. Select **Load file**, and then follow the instructions to load the `template.json` file that you downloaded in the previous section.
+
+1. In the `template.json` file, replace:
+ - `firewallName` with the default value of your Azure Firewall name.
+ - `azureFirewallPublicIpId` with the ID of your public IP address in the target region.
+ - `virtualNetworkName` with the name of the virtual network in the target region.
+ - `firewallPolicy.id` with your policy ID.
+
+1. [Create a new firewall policy](/azure/firewall-manager/create-policy-powershell) using the configuration of the source region, reflecting the changes introduced by the new target region (IP address ranges, public IP, rule collections).
+
+1. If you're using Premium SKU and you want to enable TLS Inspection, update the newly created firewall policy and enable TLS inspection by following [the instructions here](https://techcommunity.microsoft.com/t5/azure-network-security-blog/building-a-poc-for-tls-inspection-in-azure-firewall/ba-p/3676723).
+
+1. Review and update the configuration for the topics below to reflect the changes required for the target region.
+    - **IP Groups.** Review your *IP Groups* to include IP addresses from the target region if they differ from the source, and modify the IP addresses included in the groups accordingly.
+    - **Zones.** Configure the [availability zones (AZ)](../reliability/availability-zones-overview.md) in the target region.
+    - **Forced Tunneling.** [Ensure that you've relocated the virtual network](./relocation-virtual-network.md) and that the firewall *Management Subnet* is present before the Azure Firewall is relocated. In the user-defined route (UDR), update the IP address of the network virtual appliance (NVA) in the target region to which the Azure Firewall should redirect traffic.
+    - **DNS.** Review the IP addresses of your custom *DNS Servers* to reflect your target region. If the *DNS Proxy* feature is enabled, be sure to configure your virtual network DNS server settings and set the Azure Firewall's private IP address as a *Custom DNS server*.
+    - **Private IP ranges (SNAT).** If custom ranges are defined for SNAT, review them and, if necessary, adjust them to include the target region address space.
+    - **Tags.** Verify and, if necessary, update any tags that refer to the firewall location.
+    - **Diagnostic Settings.** When recreating the Azure Firewall in the target region, be sure to review the *Diagnostic Settings* and configure them to reflect the target region (Log Analytics workspace, storage account, Event Hub, or third-party partner solution).
+
+1. Edit the `location` property in the `template.json` file to the target region. The following example sets the target region to `centralus`:
+
+```json
+  "resources": [
+    {
+      "type": "Microsoft.Network/azureFirewalls",
+      "apiVersion": "2023-09-01",
+      "name": "[parameters('azureFirewalls_fw_name')]",
+      "location": "centralus"
+    }
+  ]
+```
+
+To find the location code for your target region, see [Data residency in Azure](https://azure.microsoft.com/explore/global-infrastructure/data-residency/#overview).
+
+1. Save the `template.json` file.
+
+# [PowerShell](#tab/azure-powershell)
+++
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. If you're using the Premium SKU with TLS Inspection enabled:
+ 1. [Relocate the key vault](./relocation-key-vault.md) used for TLS inspection into the new target region and follow the procedures to move certificates or generate new certificates for TLS inspection in the new key vault in the target region.
+ 1. [Relocate managed identity](./relocation-managed-identity.md) into the new target region and reassign the corresponding roles for the key vault in the target region and subscription.
++
+1. In the `template.json` file, replace:
+ - `firewallName` with the default value of your Azure Firewall name.
+ - `azureFirewallPublicIpId` with the ID of your public IP address in the target region.
+ - `virtualNetworkName` with the name of the virtual network in the target region.
+ - `firewallPolicy.id` with your policy ID.
+
+1. [Create a new firewall policy](/azure/firewall-manager/create-policy-powershell) using the configuration of the source region, reflecting the changes introduced by the new target region (IP address ranges, public IP, rule collections). A minimal PowerShell sketch of this step appears at the end of this tab.
+
+1. Review and update the configuration for the topics below to reflect the changes required for the target region.
+    - **IP Groups.** Review your *IP Groups* to include IP addresses from the target region if they differ from the source, and modify the IP addresses included in the groups accordingly.
+    - **Zones.** Configure the [availability zones (AZ)](../reliability/availability-zones-overview.md) in the target region.
+    - **Forced Tunneling.** [Ensure that you've relocated the virtual network](./relocation-virtual-network.md) and that the firewall *Management Subnet* is present before the Azure Firewall is relocated. In the user-defined route (UDR), update the IP address of the network virtual appliance (NVA) in the target region to which the Azure Firewall should redirect traffic.
+    - **DNS.** Review the IP addresses of your custom *DNS Servers* to reflect your target region. If the *DNS Proxy* feature is enabled, be sure to configure your virtual network DNS server settings and set the Azure Firewall's private IP address as a *Custom DNS server*.
+    - **Private IP ranges (SNAT).** If custom ranges are defined for SNAT, review them and, if necessary, adjust them to include the target region address space.
+    - **Tags.** Verify and, if necessary, update any tags that refer to the firewall location.
+    - **Diagnostic Settings.** When recreating the Azure Firewall in the target region, be sure to review the *Diagnostic Settings* and configure them to reflect the target region (Log Analytics workspace, storage account, Event Hub, or third-party partner solution).
+
+1. Edit the `location` property in the `template.json` file to the target region. The following example sets the target region to `centralus`:
+
+```json
+  "resources": [
+    {
+      "type": "Microsoft.Network/azureFirewalls",
+      "apiVersion": "2023-09-01",
+      "name": "[parameters('azureFirewalls_fw_name')]",
+      "location": "centralus"
+    }
+  ]
+```
+
+To find the location code for your target region, see [Data residency in Azure](https://azure.microsoft.com/explore/global-infrastructure/data-residency/#overview).
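+
+For the policy-creation step in this tab, a minimal sketch, assuming the `Az.Network` module and hypothetical names (rule collection groups and rules still need to be recreated to match the source policy):
+
+```azurepowershell
+# Create the firewall policy shell in the target region.
+New-AzFirewallPolicy -Name "fwpolicy-target" `
+    -ResourceGroupName "rg-target" `
+    -Location "centralus" `
+    -SkuTier "Premium"
+```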
++++
+## Redeploy
+
+Deploy the template to create a new Azure Firewall in the target region.
++
+# [Azure portal](#tab/azure-portal)
+
+1. Enter or select the property values:
+
+ - Subscription: Select an Azure subscription.
+
+ - Resource group: Select Create new and give the resource group a name.
+
+ - Location: Select an Azure location.
+
+1. The Azure Firewall is now deployed with the adapted configuration to reflect the needed changes in the target region.
+1. Verify configuration and functionality.
++
+# [PowerShell](#tab/azure-powershell)
+
+1. Obtain the ID of the subscription where you want to deploy the target Azure Firewall by running the following command:
+
+```azurepowershell
+
+Get-AzSubscription
+
+```
+
+2. Run the following commands to deploy your template:
+
+```azurepowershell
+
+$resourceGroupName = Read-Host -Prompt "Enter the Resource Group name"
+$location = Read-Host -Prompt "Enter the location (i.e. eastus)"
+
+New-AzResourceGroup -Name $resourceGroupName -Location "$location"
+New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile "<path to your local template.json file>"
+
+```
+
+3. The Azure Firewall is now deployed with the adapted configuration to reflect the needed changes in the target region.
+4. Verify configuration and functionality.
+
+## Related content
+
+- [Move resources to a new resource group or subscription](../azure-resource-manager/management/move-resource-group-and-subscription.md)
+- [Move Azure VMs to another region](../site-recovery/azure-to-azure-tutorial-migrate.md)
operator-nexus Reference Nexus Platform Runtime Upgrades https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-nexus-platform-runtime-upgrades.md
Title: Operator Nexus Platform Cluster runtime upgrades
-description: Detail the cadence and process Nexus uses to release new runtime versions to customers
+description: Detail the cadence and process Nexus uses to release new runtime versions to customers.
Previously updated : 12/29/2023 Last updated : 07/24/2024
This document details how Operator Nexus releases, manages, and supports various platform runtime upgrades for near edge customers.
-Operator Nexus will release platform cluster runtime versions with three minor versions per year and monthly patch versions in between.
+Operator Nexus releases three minor platform cluster runtime versions per year, with monthly patch versions in between.
Operator Nexus supports n-2 platform cluster runtime releases for customers, providing approximately one year of support upon release.
In addition to minor releases, Operator Nexus releases patch platform cluster re
## Patch Platform Cluster runtime releases
-Platform Cluster patch releases will be scheduled monthly to provide customers with an updated version of Azure Linux. These releases will be applied to the latest minor release.
+Platform Cluster patch releases are scheduled monthly to provide customers with an updated version of Azure Linux, starting in Nexus release 2408.1. These patch releases are applied to the latest minor release.
-Operator Nexus will also release patch platform cluster runtime releases addressing critical functional or high severity security issues.
+The contents of these releases are primarily scoped to updating current versions of the HostOS and Kubernetes patch releases. Operator Nexus will also release patch platform cluster runtime releases addressing critical functional or high severity security issues. These patch runtime releases are released alongside a new management bundle release to enable the deployment of this new runtime.
+
+The patch releases are optional and not required to be applied. Operator Nexus certifies the various supported runtime releases to ensure there's a path to upgrade, regardless of the patch runtime release, to the next minor runtime version.
## Platform Cluster runtime releases out of support
-When a customer is on a release that has moved out of support, Microsoft will attempt to mitigate the customer tickets but it may not be possible to address. When a runtime minor release has dropped support, it will no longer be an option to deploy to a new instance.
+When a customer is on a release that is out of support, Microsoft attempts to mitigate customer tickets, but it may not be possible to address the issues. When a runtime minor release drops out of support, it's no longer an option for deploying a new instance.
When an instance is running an n-3 version: -- The cluster will continue to run; however, normal operations may start to degrade as older versions of software aren't validated -- Support tickets raised will continue to get support, but the issues may not be able to be mitigated. -- The n-3 release will no longer be available to customers to deploy a new instance. -- There's no upgrade path supported (more details below), requiring customers to repave instances. -- Platform Cluster runtime versions past support may continue to run but Microsoft doesn't guarantee all functionality to be compatible with the newest version of software in the Cluster Manager. An upgrade path will be supported for customers on supported releases. Upgrading from an n-3 version or greater is not supported and will require a repave of the site. Customers need to execute a platform cluster runtime upgrade before a site gets to n-3.-- There's currently a requirement for the customer to update their platform cluster runtime within a year of the most recent platform cluster runtime upgrade or first deployment to ensure their Nexus instance can connect to Azure. After a year without any runtime upgrade, the Operator Nexus instance, having lost connection to Azure, will require a new deployment
+- The cluster continues to run; however, normal operations may start to degrade as older versions of software aren't validated
+- Support tickets raised continue to receive support, but it may not be possible to mitigate the issues.
+- The n-3 release isn't available to customers to deploy a new instance.
+- An upgrade path is no longer supported, requiring customers to repave instances.
+- Platform Cluster runtime versions past support may continue to run but Microsoft doesn't guarantee all functionality to be compatible with the newest version of software in the Cluster Manager. An upgrade path is provided for customers on supported releases. Upgrading from an n-3 version or greater isn't supported and requires a redeployment of the site. Customers need to execute a platform cluster runtime upgrade before a site gets to n-3.
+- Customers must update their platform cluster runtime within a year of the most recent platform cluster runtime upgrade or first deployment to ensure their Nexus instance can connect to Azure. After a year without any runtime upgrade, the Operator Nexus instance loses its connection to Azure and requires a new deployment.
## Skipping minor releases
partner-solutions Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/create.md
After you've selected the offer for Elastic, you're ready to set up the applicat
- Resources with _Exclude_ tags don't send logs to Elastic. - If there's a conflict between inclusion and exclusion rules, exclusion takes priority.
+ Select **Next: Azure OpenAI configuration** to create and configure an Azure OpenAI connector that can be used within Elastic's AI Assistant.
+
+1. On **Azure OpenAI configuration**, specify the Azure OpenAI resource and the deployment that are required to configure the connector. The details of the deployment (URL, API keys, and so on) are passed on to Elastic to prepare the connector for use with Elastic's AI Assistant.
+
+ :::image type="content" source="media/create/configure-aoai-connector.png" alt-text="Screenshot of how to configure Azure OpenAI Connector.":::
+
+ >[!Note]
+ >Only deployments of text/chat completion models (like gpt4) are supported currently. Learn more about Elastic Connectors [here](https://www.elastic.co/guide/en/kibana/current/openai-action-type.html).
+ Select **Next: Tags** to set up tags for the new Elastic resource. 1. In **Tags**, add custom tags for the new Elastic resource. Each tag consists of a name and value. When you've finished adding tags, select **Next: Review+Create** to navigate to the final step for resource creation.
After you've selected the offer for Elastic, you're ready to set up the applicat
> [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Elastic%2Fmonitors) > [!div class="nextstepaction"]
- > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/elastic.ec-azure-pp?tab=Overview)
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/elastic.ec-azure-pp?tab=Overview)
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/manage.md
In the destination details section, check the option to send to partner solution
:::image type="content" source="media/manage/diagnostic-settings.png" alt-text="Screenshot of configure diagnostic settings.":::
+## Configure Azure OpenAI Connector
+
+If not configured already while creating the resource, you can navigate to the **Azure OpenAI configuration** blade under the Elastic deployment configuration section. Select **Add** to choose the Azure OpenAI resource and a deployment of a text/chat completion model (like gpt4). This makes it seamless to have your connector ready without switching contexts between the Azure OpenAI resource (in the Azure portal) and the Connectors page in the Elastic portal, and avoids copying and pasting URLs and keys.
++
+Select **Create**.
+
+Once the Connector is created, navigate to Kibana and search for Connectors under Stack Management. The newly created Azure OpenAI Connector should be visible there. This connector can be used within Elastic's Observability AI Assistant to help provide contextual responses to your natural language prompts on your observability data by invoking the Azure OpenAI deployment. Learn more about Elastic OpenAI Connectors [here](https://www.elastic.co/guide/en/kibana/current/openai-action-type.html).
+ ## Private link management You can limit network access to a [private link](../../private-link/private-link-overview.md). To enable private link access, select **Configuration** in the left navigation. Under **Networking**, select **Private Link** and the name of the private link.
partner-solutions How To Set Up Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/split-experimentation/how-to-set-up-data-access.md
Register a new app or use an existing Microsoft Entra application registration t
To register a new app:
-1. Go to **Identity** > **Applications** > **App registrations**.
+1. In the Microsoft Entra admin center, go to **Identity** > **Applications** > **App registrations**.
:::image type="content" source="media/data-access/app-registration.png" alt-text="Screenshot of the Microsoft Entra admin center showing the App registrations page.":::
To register a new app:
Configure the application ID URI to allow the Entra application to be used as global audience/scope when requesting an authentication token.
-1. Open your app in the Azure portal and under **Overview**, get the **Application ID URI**.
+1. In the Microsoft Entra admin center, in **Identity** > **Applications** > **App registrations**, open your application by selecting its **Display name**. In the pane that opens, under **Overview**, copy the **Application ID URI**. If instead of the application ID URI you see **Add an Application ID URI**, select this option, then select **Add** and **Save**.
:::image type="content" source="media/data-access/get-application-id-uri.png" alt-text="Screenshot of the app in the Azure portal.":::
-1. Back in the Microsoft Entra admin center, in **Identity** > **Applications** > **App registrations**, open your application by selecting its **Display name**.
-1. In the pane that opens, select **Expose an API** and Ensure the **Application ID URI** value is: `api://<Entra application ID>` where `Entra application ID` must be the same Microsoft Entra application ID.
+1. Then select **Expose an API** in the app's left menu. Ensure the **Application ID URI** value is: `api://<Entra application ID>` where `Entra application ID` must be the same Microsoft Entra application ID.
:::image type="content" source="media/data-access/app-registration.png" alt-text="Screenshot of the Microsoft Entra admin center showing the App registrations page.":::
In the Microsoft Entra admin center, go to your app and open the **Expose an API
Split Experimentation workspace supports well-known roles to scope access control. Add the following roles in the Entra application.
-1. Go to the **App roles** menu and select **Create app role**.
-1. Select or enter the following information in the pane that opens to create a first role:
+1. Go to the **App roles** menu of your app and select **Create app role**.
+1. Select or enter the following information in the pane that opens to create an *ExperimentationDataOwner* role. This role gives the app full access to execute all operations on the Split Experimentation resource.
- **Display name**: enter *ExperimentationDataOwner* - **Allowed member types**: select **Both (Users/Groups + Applications)**
Split Experimentation workspace supports well-known roles to scope access contro
:::image type="content" source="media/data-access/create-app-role.png" alt-text="Screenshot of the Microsoft Entra admin center showing how to create an app role.":::
-1. Create a second role:
+1. Create an *ExperimentationDataReader* role. This role gives the app read access on the Split Experimentation resource, but doesn't allow it to make any changes.
- **Display name**: enter *ExperimentationDataReader* - **Allowed member types**: select **Both (Users/Groups + Applications)**
Split Experimentation workspace supports well-known roles to scope access contro
#### Choose an assignment requirement option
-1. Go to the **Overview** menu and select the link under to **Managed application in local directory**
-1. Open **Manage** > **Properties** and select your preferred option for the **Assignment required** setting.
+1. Go to the **Overview** menu of your app and select the link under **Managed application in local directory**. This opens your app in the Microsoft Entra admin center, under **Identity** > **Enterprise applications**.
+1. Open **Manage** > **Properties** on the left and select your preferred option for the **Assignment required** setting.
- **Yes**: means that only the entries explicitly defined under **Users and Groups** in the enterprise application can obtain a token and therefore access the associated Split Experimentation Workspace. This is the recommended option. - **No**: means that everyone in the same Entra tenant can obtain tokens and therefore may be allowed, via the Split Experimentation control plane opt-in setting, to access the associated Split Experimentation Workspace.
Split Experimentation workspace supports well-known roles to scope access contro
#### Assign users and groups
-1. Go to the **Users and groups** menu and select **Add user/group**
+1. Go back to the **Users and groups** menu and select **Add user/group**
:::image type="content" source="media/data-access/assign-users.png" alt-text="Screenshot of the Microsoft Entra admin center showing how to assign roles to users."::: 1. Select a user or a group and select one of the roles you created for the Split Experimentation Workspace.
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md
description: This article describes the Query Store feature in Azure Database fo
Previously updated : 05/14/2024 Last updated : 07/25/2024
Here are some examples of how you can gain more insights into your workload usin
## Configuration options
-When Query Store is enabled, it saves data in aggregation windows of length determined by the `pg_qs.interval_length_minutes` server parameter (defaults to 15 minutes). For each window, it stores the 500 distinct queries per window.
+When Query Store is enabled, it saves data in aggregation windows of length determined by the `pg_qs.interval_length_minutes` server parameter (defaults to 15 minutes). For each window, it stores up to 500 distinct queries (distinct combinations of userid, dbid, and queryid). If the number of distinct queries reaches 500 during an interval, the 5% with the lowest usage are deallocated to make room for more.
+ The following options are available for configuring Query Store parameters: | **Parameter** | **Description** | **Default** | **Range** |
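+
+For example, a minimal sketch of changing the aggregation window with Azure PowerShell, assuming the `Az.PostgreSql` module and hypothetical server names:
+
+```azurepowershell
+# Set the Query Store aggregation window to 30 minutes on a flexible server.
+Update-AzPostgreSqlFlexibleServerConfiguration -Name "pg_qs.interval_length_minutes" `
+    -ResourceGroupName "rg-data" -ServerName "pg-flex-prod" -Value "30"
+```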
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-portal.md
If your source server is configured with geo-redundant backup, you can restore t
:::image type="content" source="./media/how-to-restore-server-portal/geo-restore-click.png" alt-text="Screenshot that shows the Restore button.":::
-3. Under **Source details**, for **Geo-redundant restore (preview)**, select the **Restore to paired region** checkbox.
+3. Under **Source details**, for **Geo-redundant restore**, select the **Restore to paired region** checkbox.
:::image type="content" source="./media/how-to-restore-server-portal/geo-restore-choose-checkbox.png" alt-text="Screenshot that shows the option for restoring to a paired region for geo-redundant restore.":::
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
Azure App Service is an HTTP-based service for hosting web applications, REST AP
To explore how Azure App Service can bolster the reliability and resiliency of your application workload, see [Why use App Service?](../app-service/overview.md#why-use-app-service)
-## Reliability recommendations
-
-### Reliability recommendations summary
-
-| Category | Priority |Recommendation |
-||--||
-| [**High Availability**](#high-availability) |:::image type="icon" source="media/icon-recommendation-high.svg":::| [ASP-1 - Deploy zone-redundant App Service plans](#-asp-1deploy-zone-redundant-app-service-plans) |
-|[**Resiliency**](#resiliency)|:::image type="icon" source="media/icon-recommendation-high.svg"::: |[ASP-2 -Use an App Service plan that supports availability zones](#-asp-2--use-an-app-service-plan-that-supports-availability-zones) |
-||:::image type="icon" source="media/icon-recommendation-high.svg"::: |[ASP-4 - Create separate App Service plans for production and test](#-asp-4create-separate-app-service-plans-for-production-and-test) |
-|[**Scalability**](#scalability)|:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[ASP-3 - Avoid frequently scaling up or down](#-asp-3avoid-frequently-scaling-up-or-down) |
-||:::image type="icon" source="media/icon-recommendation-medium.svg"::: |[ASP-5 - Enable Autoscale/Automatic scaling to ensure adequate resources are available to service requests](#-asp-5enable-autoscaleautomatic-scaling-to-ensure-that-adequate-resources-are-available-to-service-requests) |
--
-### High availability
-
-#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **ASP-1 - Deploy zone-redundant App Service plans**
-To enhance the resiliency and reliability of your business-critical workloads, it's recommended that you deploy your new App Service Plans with zone-redundancy. Follow the steps to [redeploy to availability zone support](#create-a-resource-with-availability-zone-enabled), configure your pipelines to redeploy your WebApp on the new App Services Plan, and then use a [Blue-Green deployment](../spring-apps/enterprise/concepts-blue-green-deployment-strategies.md) approach to failover to the new site.
-
-By distributing your applications across multiple availability zones, you can ensure their continued operation even in the event of a datacenter-level failure. For more information on availability zone support in Azure App Service, see [Availability zone support](#availability-zone-support).
-
-# [Azure Resource Graph](#tab/graph)
-----
-### Resiliency
-
-#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **ASP-2 -Use an App Service plan that supports availability zones**
-
-Availability zone support is only available on certain App Service plans. To see which plan you need in order to use availability zones, see [Availability zone prerequisites](#prerequisites).
-
-# [Azure Resource Graph](#tab/graph)
-----
-#### :::image type="icon" source="media/icon-recommendation-high.svg"::: **ASP-4 - Create separate App Service plans for production and test**
-
-To enhance the resiliency and reliability of your business-critical workloads, you should migrate your existing App Service plans and App Service Environments to availability zone support. By distributing your applications across multiple availability zones, you can ensure their continued operation even in the event of a datacenter-level failure. For more information on availability zone support in Azure App Service, see [Availability zone support](#availability-zone-support).
--
-### Scalability
-
-#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **ASP-3 - Avoid frequently scaling up or down**
-
-It's recommended that you avoid frequently scaling up or down your Azure App Service instances. Instead, choose an appropriate tier and instance size that can handle your typical workload, and scale out the instances to accommodate changes in traffic volume. Scaling up or down can potentially trigger an application restart, which may result in service disruptions.
--
-#### :::image type="icon" source="media/icon-recommendation-medium.svg"::: **ASP-5 - Enable Autoscale/Automatic scaling to ensure that adequate resources are available to service requests**
-
-It's recommended that you enable autoscale/automatic scaling for your Azure App Service to ensure that sufficient resources are available to handle incoming requests. Autoscaling is rule based scaling, while automatic scaling performs automatic in and out scaling based on HTTP traffic. For more information see, [automatic scaling in Azure App Service](/azure/app-service/manage-automatic-scaling) or [get started with autoscale in Azure](/azure/azure-monitor/autoscale/autoscale-get-started).
-
-# [Azure Resource Graph](#tab/graph)
---- ## Availability zone support
reliability Reliability Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-microsoft-purview.md
Microsoft Purview makes commercially reasonable efforts to provide availability
|Australia East|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::| |West US 2||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| |Canada Central|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg"::: |:::image type="icon" source="media/yes-icon.svg":::|
-|Central India||:::image type="icon" source="media/yes-icon.svg":::|||
+|Central India||:::image type="icon" source="media/yes-icon.svg":::||:::image type="icon" source="media/yes-icon.svg":::|
|East US 2||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| |France Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::| |Germany West Central||:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|:::image type="icon" source="media/yes-icon.svg":::|
sap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md
In the SAP workload documentation space, you can find the following areas:
## Change Log -- July 19, 2024: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to add a statement around clusters spanning Virtual networks(VNets)/subnets
+- July 24, 2024: Release of SBD STONITH support using iSCSI target server or Azure shared disk in [Configuring Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md).
+- July 19, 2024: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to add a statement around clusters spanning Virtual networks (VNets)/subnets.
- July 18, 2024: Add note about metadata heavy workload to Azure Premium Files in [Azure Storage types for SAP workload](./planning-guide-storage.md) - June 26, 2024: Adapt [Azure Storage types for SAP workload](./planning-guide-storage.md) to latest features, like snapshot capabilities for Premium SSD v2 and Ultra disk. Adapt ANF to support of mix of NFS and block storage between /hana/data and /hana/log - June 26, 2024: Fix wrong memory stated for some VMs in [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md) and [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md)-- June 19, 2024: Update the SAP high availability guides to lift the restriction of using floating IP on the NIC secondary IP address in load-balancing scenarios
+- June 19, 2024: Update the SAP high availability guides to lift the restriction of using floating IP on the NIC secondary IP address in load-balancing scenarios.
- May 21, 2024: Update timeouts and added start delay for pacemaker scheduled events in [Set up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) and [Set up Pacemaker on SUSE Linux Enterprise Server (SLES) in Azure](./high-availability-guide-suse-pacemaker.md). - April 1, 2024: Reference the considerations section for sizing HANA shared file system in [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md), [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md), [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md), and [Azure Files NFS for SAP](planning-guide-storage-azure-files.md) - March 18, 2024: Added considerations for sizing the HANA shared file system in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
sap High Availability Guide Suse Multi Sid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-multi-sid.md
Previously updated : 06/19/2024 Last updated : 07/25/2024
This documentation assumes that:
sudo ssh slesmsscl1 "cat /usr/sap/sapservices" | grep ERS22 | sudo tee -a /usr/sap/sapservices ```
-8. **[1]** Create the SAP cluster resources for the newly installed SAP system.
+8. **[A]** Disable the `systemd` services of the ASCS and ERS SAP instances. This step is only applicable if the SAP startup framework is managed by systemd as per SAP Note [3115048](https://me.sap.com/notes/3115048).
+
+ > [!NOTE]
+   > When managing SAP instances like SAP ASCS and SAP ERS using SLES cluster configuration, you need to make additional modifications to integrate the cluster with the native systemd-based SAP start framework. This ensures that maintenance procedures don't compromise cluster stability. After installing or switching the SAP startup framework to a systemd-enabled setup as per SAP Note [3115048](https://me.sap.com/notes/3115048), you should disable the `systemd` services for the ASCS and ERS SAP instances.
+
+ ```bash
+   # Stop the ASCS and ERS instances of both SAP systems, running sapcontrol as the respective <sid>adm (nw2adm and nw3adm)
+   sapcontrol -nr 10 -function Stop
+   sapcontrol -nr 10 -function StopService
+
+   sapcontrol -nr 12 -function Stop
+   sapcontrol -nr 12 -function StopService
+
+   sapcontrol -nr 20 -function Stop
+   sapcontrol -nr 20 -function StopService
+
+   sapcontrol -nr 22 -function Stop
+   sapcontrol -nr 22 -function StopService
+
+ # Execute below command on VM where you have performed ASCS instance installation for each SAP system (e.g. slesmsscl1)
+ sudo systemctl disable SAPNW2_10
+ sudo systemctl disable SAPNW3_20
+ # Execute below command on VM where you have performed ERS instance installation for each SAP system (e.g. slesmsscl2)
+ sudo systemctl disable SAPNW2_12
+   sudo systemctl disable SAPNW3_22
+ ```
+
+9. **[1]** Create the SAP cluster resources for the newly installed SAP system.
Depending on whether you are running an ENSA1 or ENSA2 system, select respective tab to define the resources for **NW2** and **NW3** systems. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
sap High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-netapp-files.md
Previously updated : 06/19/2024 Last updated : 07/25/2024
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo ssh anftstsapcl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices ```
-9. **[1]** Create the SAP cluster resources.
+9. **[A]** Disable the `systemd` services of the ASCS and ERS SAP instances. This step is only applicable if the SAP startup framework is managed by systemd as per SAP Note [3115048](https://me.sap.com/notes/3115048).
- Depending on whether you are running an ENSA1 or ENSA2 system, select respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
-
- #### [ENSA1](#tab/ensa1)
+ > [!NOTE]
+ > When you manage SAP instances like SAP ASCS and SAP ERS in a SLES cluster configuration, you need to make additional modifications to integrate the cluster with the native systemd-based SAP start framework. This ensures that maintenance procedures don't compromise cluster stability. After you install or switch the SAP startup framework to the systemd-enabled setup as per SAP Note [3115048](https://me.sap.com/notes/3115048), disable the `systemd` services for the ASCS and ERS SAP instances.
```bash
- sudo crm configure property maintenance-mode="true"
+ # Stop ASCS and ERS instances using <sid>adm
+ sapcontrol -nr 00 -function Stop
+ sapcontrol -nr 00 -function StopService
+
+ sapcontrol -nr 01 -function Stop
+ sapcontrol -nr 01 -function StopService
+
+ # Execute below command on VM where you have performed ASCS instance installation (e.g. anftstsapcl1)
+ sudo systemctl disable SAPQAS_00
+ # Execute below command on VM where you have performed ERS instance installation (e.g. anftstsapcl2)
+ sudo systemctl disable SAPQAS_01
+ ```
+
+10. **[1]** Create the SAP cluster resources.
+
+ Depending on whether you are running an ENSA1 or ENSA2 system, select respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+
+ #### [ENSA1](#tab/ensa1)
+
+ ```bash
+ sudo crm configure property maintenance-mode="true"
- # If using NFSv3
- sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
+ # If using NFSv3
+ sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
operations \$id=rsc_sap_QAS_ASCS00-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
- # If using NFSv4.1
- sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
+ # If using NFSv4.1
+ sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
operations \$id=rsc_sap_QAS_ASCS00-operations \ op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 failure-timeout=105 migration-threshold=1 priority=10
- # If using NFSv3
- sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
+ # If using NFSv3
+ sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
operations \$id=rsc_sap_QAS_ERS01-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" AUTOMATIC_RECOVER=false IS_ERS=true \ meta priority=1000
- # If using NFSv4.1
- sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
+ # If using NFSv4.1
+ sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
operations \$id=rsc_sap_QAS_ERS01-operations \ op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" AUTOMATIC_RECOVER=false IS_ERS=true \ meta priority=1000
- sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00
- sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01
+ sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00
+ sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01
- sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS
- sudo crm configure location loc_sap_QAS_failover_to_ers rsc_sap_QAS_ASCS00 rule 2000: runs_ers_QAS eq 1
- sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start rsc_sap_QAS_ERS01:stop symmetrical=false
+ sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS
+ sudo crm configure location loc_sap_QAS_failover_to_ers rsc_sap_QAS_ASCS00 rule 2000: runs_ers_QAS eq 1
+ sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start rsc_sap_QAS_ERS01:stop symmetrical=false
- sudo crm_attribute --delete --name priority-fencing-delay
+ sudo crm_attribute --delete --name priority-fencing-delay
- sudo crm node online anftstsapcl1
- sudo crm configure property maintenance-mode="false"
- ```
+ sudo crm node online anftstsapcl1
+ sudo crm configure property maintenance-mode="false"
+ ```
- #### [ENSA2](#tab/ensa2)
+ #### [ENSA2](#tab/ensa2)
- > [!NOTE]
- > If you have a two-node cluster running ENSA2, you have the option to configure priority-fencing-delay cluster property. This property introduces additional delay in fencing a node that has higher total resoure priority when a split-brain scenario occurs. For more information, see [SUSE Linux Enteprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
- >
- > The property priority-fencing-delay is only applicable for ENSA2 running on two-node cluster.
+ > [!NOTE]
+ > If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
+ >
+ > The property priority-fencing-delay is only applicable for ENSA2 running on a two-node cluster.
- ```bash
- sudo crm configure property maintenance-mode="true"
+ ```bash
+ sudo crm configure property maintenance-mode="true"
- sudo crm configure property priority-fencing-delay=30
+ sudo crm configure property priority-fencing-delay=30
- # If using NFSv3
- sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
+ # If using NFSv3
+ sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
operations \$id=rsc_sap_QAS_ASCS00-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 priority=100
- # If using NFSv4.1
- sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
+ # If using NFSv4.1
+ sudo crm configure primitive rsc_sap_QAS_ASCS00 SAPInstance \
operations \$id=rsc_sap_QAS_ASCS00-operations \ op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=QAS_ASCS00_anftstsapvh START_PROFILE="/sapmnt/QAS/profile/QAS_ASCS00_anftstsapvh" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 priority=100
- # If using NFSv3
- sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
+ # If using NFSv3
+ sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
operations \$id=rsc_sap_QAS_ERS01-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" AUTOMATIC_RECOVER=false IS_ERS=true
- # If using NFSv4.1
- sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
+ # If using NFSv4.1
+ sudo crm configure primitive rsc_sap_QAS_ERS01 SAPInstance \
operations \$id=rsc_sap_QAS_ERS01-operations \ op monitor interval=11 timeout=105 on-fail=restart \ params InstanceName=QAS_ERS01_anftstsapers START_PROFILE="/sapmnt/QAS/profile/QAS_ERS01_anftstsapers" AUTOMATIC_RECOVER=false IS_ERS=true
- sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00
- sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01
+ sudo crm configure modgroup g-QAS_ASCS add rsc_sap_QAS_ASCS00
+ sudo crm configure modgroup g-QAS_ERS add rsc_sap_QAS_ERS01
- sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS
- sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start rsc_sap_QAS_ERS01:stop symmetrical=false
+ sudo crm configure colocation col_sap_QAS_no_both -5000: g-QAS_ERS g-QAS_ASCS
+ sudo crm configure order ord_sap_QAS_first_start_ascs Optional: rsc_sap_QAS_ASCS00:start rsc_sap_QAS_ERS01:stop symmetrical=false
- sudo crm node online anftstsapcl1
- sudo crm configure property maintenance-mode="false"
- ```
+ sudo crm node online anftstsapcl1
+ sudo crm configure property maintenance-mode="false"
+ ```
-
+
If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
sap High Availability Guide Suse Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-azure-files.md
Previously updated : 06/19/2024 Last updated : 07/25/2024
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo ssh sap-cl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices ```
-9. **[1]** Create the SAP cluster resources
+9. **[A]** Disable the `systemd` services of the ASCS and ERS SAP instances. This step is only applicable if the SAP startup framework is managed by systemd, as described in SAP Note [3115048](https://me.sap.com/notes/3115048).
- Depending on whether you are running an ENSA1 or ENSA2 system, select respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
-
- #### [ENSA1](#tab/ensa1)
+ > [!NOTE]
+ > When you manage SAP instances like SAP ASCS and SAP ERS in a SLES cluster configuration, you need to make additional modifications to integrate the cluster with the native systemd-based SAP start framework. This ensures that maintenance procedures don't compromise cluster stability. After you install or switch the SAP startup framework to the systemd-enabled setup as per SAP Note [3115048](https://me.sap.com/notes/3115048), disable the `systemd` services for the ASCS and ERS SAP instances.
```bash
- sudo crm configure property maintenance-mode="true"
+ # Stop ASCS and ERS instances using <sid>adm
+ sapcontrol -nr 00 -function Stop
+ sapcontrol -nr 00 -function StopService
+
+ sapcontrol -nr 01 -function Stop
+ sapcontrol -nr 01 -function StopService
+
+ # Execute below command on VM where you have performed ASCS instance installation (e.g. sap-cl1)
+ sudo systemctl disable SAPNW1_00
+ # Execute below command on VM where you have performed ERS instance installation (e.g. sap-cl2)
+ sudo systemctl disable SAPNW1_01
+ ```
+
+10. **[1]** Create the SAP cluster resources
+
+ Depending on whether you are running an ENSA1 or ENSA2 system, select respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
+
+ #### [ENSA1](#tab/ensa1)
+
+ ```bash
+ sudo crm configure property maintenance-mode="true"
- sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
+ sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
operations \$id=rsc_sap_NW1_ASCS00-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
- sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
+ sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
operations \$id=rsc_sap_NW1_ERS01-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" AUTOMATIC_RECOVER=false IS_ERS=true \ meta priority=1000
- sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
- sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
+ sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
+ sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
- sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
- sudo crm configure location loc_sap_NW1_failover_to_ers rsc_sap_NW1_ASCS00 rule 2000: runs_ers_NW1 eq 1
- sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false
+ sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
+ sudo crm configure location loc_sap_NW1_failover_to_ers rsc_sap_NW1_ASCS00 rule 2000: runs_ers_NW1 eq 1
+ sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false
- sudo crm_attribute --delete --name priority-fencing-delay
+ sudo crm_attribute --delete --name priority-fencing-delay
- sudo crm node online sap-cl1
- sudo crm configure property maintenance-mode="false"
- ```
+ sudo crm node online sap-cl1
+ sudo crm configure property maintenance-mode="false"
+ ```
- #### [ENSA2](#tab/ensa2)
+ #### [ENSA2](#tab/ensa2)
- > [!NOTE]
- > If you have a two-node cluster running ENSA2, you have the option to configure priority-fencing-delay cluster property. This property introduces additional delay in fencing a node that has higher total resoure priority when a split-brain scenario occurs. For more information, see [SUSE Linux Enteprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
- >
- > The property priority-fencing-delay is only applicable for ENSA2 running on two-node cluster.
+ > [!NOTE]
+ > If you have a two-node cluster running ENSA2, you have the option to configure the priority-fencing-delay cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [SUSE Linux Enterprise Server high availability extension administration guide](https://documentation.suse.com/sle-ha/15-SP3/single-html/SLE-HA-administration/#pro-ha-storage-protect-fencing).
+ >
+ > The property priority-fencing-delay is only applicable for ENSA2 running on a two-node cluster.
- ```bash
- sudo crm configure property maintenance-mode="true"
+ ```bash
+ sudo crm configure property maintenance-mode="true"
- sudo crm configure property priority-fencing-delay=30
-
- sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
+ sudo crm configure property priority-fencing-delay=30
+
+ sudo crm configure primitive rsc_sap_NW1_ASCS00 SAPInstance \
operations \$id=rsc_sap_NW1_ASCS00-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ASCS00_sapascs START_PROFILE="/sapmnt/NW1/profile/NW1_ASCS00_sapascs" \ AUTOMATIC_RECOVER=false \ meta resource-stickiness=5000 priority=100
-
- sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
+
+ sudo crm configure primitive rsc_sap_NW1_ERS01 SAPInstance \
operations \$id=rsc_sap_NW1_ERS01-operations \ op monitor interval=11 timeout=60 on-fail=restart \ params InstanceName=NW1_ERS01_sapers START_PROFILE="/sapmnt/NW1/profile/NW1_ERS01_sapers" AUTOMATIC_RECOVER=false IS_ERS=true
- sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
- sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
+ sudo crm configure modgroup g-NW1_ASCS add rsc_sap_NW1_ASCS00
+ sudo crm configure modgroup g-NW1_ERS add rsc_sap_NW1_ERS01
- sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
- sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false
+ sudo crm configure colocation col_sap_NW1_no_both -5000: g-NW1_ERS g-NW1_ASCS
+ sudo crm configure order ord_sap_NW1_first_start_ascs Optional: rsc_sap_NW1_ASCS00:start rsc_sap_NW1_ERS01:stop symmetrical=false
- sudo crm node online sap-cl1
- sudo crm configure property maintenance-mode="false"
- ```
+ sudo crm node online sap-cl1
+ sudo crm configure property maintenance-mode="false"
+ ```
-
+
If you are upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
sap High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-simple-mount.md
Previously updated : 06/19/2024 Last updated : 07/25/2024
The instructions in this section are applicable only if you're using Azure NetAp
sudo ssh sap-cl2 "cat /usr/sap/sapservices" | grep ERS01 | sudo tee -a /usr/sap/sapservices ```
-9. **[A]** Enable `sapping` and `sappong`. The `sapping` agent runs before `sapinit` to hide the `/usr/sap/sapservices` file. The `sappong` agent runs after `sapinit` to unhide the `sapservices` file during VM boot. `SAPStartSrv` isn't started automatically for an SAP instance at boot time, because the Pacemaker cluster manages it.
+9. **[A]** Disable the `systemd` services of the ASCS and ERS SAP instances. This step is only applicable if the SAP startup framework is managed by systemd, as described in SAP Note [3115048](https://me.sap.com/notes/3115048).
+
+ > [!NOTE]
+ > When you manage SAP instances like SAP ASCS and SAP ERS in a SLES cluster configuration, you need to make additional modifications to integrate the cluster with the native systemd-based SAP start framework. This ensures that maintenance procedures don't compromise cluster stability. After you install or switch the SAP startup framework to the systemd-enabled setup as per SAP Note [3115048](https://me.sap.com/notes/3115048), disable the `systemd` services for the ASCS and ERS SAP instances.
+
+ ```bash
+ # Stop ASCS and ERS instances using <sid>adm
+ sapcontrol -nr 00 -function Stop
+ sapcontrol -nr 00 -function StopService
+
+ sapcontrol -nr 01 -function Stop
+ sapcontrol -nr 01 -function StopService
+
+ # Execute below command on VM where you have performed ASCS instance installation (e.g. sap-cl1)
+ sudo systemctl disable SAPNW1_00
+ # Execute below command on VM where you have performed ERS instance installation (e.g. sap-cl2)
+ sudo systemctl disable SAPNW1_01
+ ```
+
+10. **[A]** Enable `sapping` and `sappong`. The `sapping` agent runs before `sapinit` to hide the `/usr/sap/sapservices` file. The `sappong` agent runs after `sapinit` to unhide the `sapservices` file during VM boot. `SAPStartSrv` isn't started automatically for an SAP instance at boot time, because the Pacemaker cluster manages it.
```bash sudo systemctl enable sapping sudo systemctl enable sappong ```
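    To confirm that both agents are enabled for the next boot, you can run a quick check:

    ```bash
    # Expected output is "enabled" for each service
    systemctl is-enabled sapping sappong
    ```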
-10. **[1]** Create `SAPStartSrv` resource for ASCS and ERS by creating a file and then load the file.
+11. **[1]** Create `SAPStartSrv` resource for ASCS and ERS by creating a file and then load the file.
```bash vi crm_sapstartsrv.txt
The instructions in this section are applicable only if you're using Azure NetAp
> [!NOTE] > If you’ve set up a SAPStartSrv resource using the "crm configure primitive…" command on crmsh version 4.4.0+20220708.6ed6b56f-150400.3.3.1 or later, it’s important to review the configuration of the SAPStartSrv resource primitives. If a monitor operation is present, it should be removed. SUSE also suggests removing the start and stop operations, but these aren't as crucial as the monitor operation. For more information, see [recent changes to crmsh package can result in unsupported configuration of SAPStartSrv resource Agent in a SAP NetWeaver HA cluster](https://www.suse.com/support/kb/doc/?id=000021423).
-11. **[1]** Create the SAP cluster resources.
+12. **[1]** Create the SAP cluster resources.
Depending on whether you are running an ENSA1 or ENSA2 system, select respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
sap High Availability Guide Suse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse.md
Previously updated : 06/19/2024 Last updated : 07/25/2024
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo ssh nw1-cl-1 "cat /usr/sap/sapservices" | grep ERS02 | sudo tee -a /usr/sap/sapservices ```
+1. **[A]** Disable the `systemd` services of the ASCS and ERS SAP instances. This step is only applicable if the SAP startup framework is managed by systemd, as described in SAP Note [3115048](https://me.sap.com/notes/3115048).
+
+ > [!NOTE]
+ > When you manage SAP instances like SAP ASCS and SAP ERS in a SLES cluster configuration, you need to make additional modifications to integrate the cluster with the native systemd-based SAP start framework. This ensures that maintenance procedures don't compromise cluster stability. After you install or switch the SAP startup framework to the systemd-enabled setup as per SAP Note [3115048](https://me.sap.com/notes/3115048), disable the `systemd` services for the ASCS and ERS SAP instances.
+
+ ```bash
+ # Stop ASCS and ERS instances using <sid>adm
+ sapcontrol -nr 00 -function Stop
+ sapcontrol -nr 00 -function StopService
+
+ sapcontrol -nr 01 -function Stop
+ sapcontrol -nr 01 -function StopService
+
+ # Execute below command on VM where you have performed ASCS instance installation (e.g. nw1-cl-0)
+ sudo systemctl disable SAPNW1_00
+ # Execute below command on VM where you have performed ERS instance installation (e.g. nw1-cl-1)
+ sudo systemctl disable SAPNW1_01
+ ```
+ 1. **[1]** Create the SAP cluster resources Depending on whether you are running an ENSA1 or ENSA2 system, select respective tab to define the resources. SAP introduced support for [ENSA2](https://help.sap.com/docs/ABAP_PLATFORM_NEW/cff8531bc1d9416d91bb6781e628d4e0/6d655c383abf4c129b0e5c8683e7ecd8.html), including replication, in SAP NetWeaver 7.52. Starting with ABAP Platform 1809, ENSA2 is installed by default. For ENSA2 support, see SAP Note [2630416](https://launchpad.support.sap.com/#/notes/2630416).
The following items are prefixed with either **[A]** - applicable to all nodes,
sudo crm configure property maintenance-mode="false" ```
-
+
If you're upgrading from an older version and switching to enqueue server 2, see SAP note [2641019](https://launchpad.support.sap.com/#/notes/2641019).
sap Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage.md
ms.assetid: d7c59cc1-b2d0-4d90-9126-628f9c7a5538
Previously updated : 07/17/2024 Last updated : 07/25/2024
The capability matrix for SAP workload looks like:
## Azure NetApp Files
-[Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure native, first-party, enterprise-class, high-performance file storage service certified for use with SAP HANA. It provides _Volumes as a service_ for which you can create NetApp accounts, capacity pools, and volumes. You can select service and performance levels and manage data protection. You can create and manage high-performance, highly available, and scalable file shares by using the same protocols and tools that you're familiar with and rely on on-premises.
+[Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-introduction.md) is an Azure native, first-party, enterprise-class, high-performance file storage service certified for use with SAP HANA. It provides _Volumes as a service_ for which you create NetApp accounts, capacity pools, and volumes. With Azure NetApp Files, you select service and performance levels and manage data protection to create and manage high-performance, highly available, and scalable file shares by using the same protocols and tools that you're familiar with and rely on on-premises.
+The following types of SAP workload are supported on Azure NetApp Files volumes:
-For information about service levels, see [Service levels for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). For the different types of SAP workload the following service levels are highly recommended:
+- SAP DBMS workload
+- SAPMNT share
+- Global transport directory
-- SAP DB-- SAPMNT share: Performance, ideally Ultra-- Global transport directory: Performance, ideally Ultra
+Azure NetApp Files is available in three service levels, each with their own throughput and pricing specifications. Which one is right for your deployment depends on the size of the deployment. Customized sizing recommendations are available in the [SAP on Azure NetApp Files TCO Estimator](https://aka.ms/anfsapcalc).
-Sizing recommendations are available in the [SAP on Azure NetApp Files TCO Estimator](https://aka.ms/anfsapcalc).
+For information about service levels, see [Service levels for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md).
+
+### Deploying volumes
+
+For optimal results, use [Application volume group for SAP HANA](../../azure-netapp-files/application-volume-group-introduction.md) to deploy the volumes. Application volume group places volumes in optimal locations in the Azure infrastructure using affinity and anti-affinity rules to reduce contention and to allow for the best throughput and lowest latency.
> [!NOTE]
-> The minimum provisioning size is a 1 TiB unit that is called capacity pool. You then create volumes out of this capacity pool. Whereas the smallest volume you can build is 100 GiB. You can expand a capacity pool in 1 TiB see. For pricing, see [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/)
+> Capacity pools are a basic provisioning unit for Azure NetApp Files. Capacity pools are offered beginning at 1 TiB in size; you can expand a capacity pool in 1-TiB increments. Capacity pools are the parent unit for volumes; the smallest volume size is 100 GiB. For pricing, see [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/)
-Azure NetApp Files is currently supported for several SAP workload scenarios:
+Azure NetApp Files is supported for several SAP workload scenarios:
+- SAP HANA deployments using NFS shares for /han)
- Providing SMB or NFS shares for SAP's global transport directory - The share sapmnt in high availability scenarios as documented in: - [High availability for SAP NetWeaver on Azure VMs on Windows with Azure NetApp Files(SMB) for SAP applications](./high-availability-guide-windows-netapp-files-smb.md) - [High availability for SAP NetWeaver on Azure VMs on SUSE Linux Enterprise Server with Azure NetApp Files for SAP applications](./high-availability-guide-suse-netapp-files.md) - [Azure Virtual Machines high availability for SAP NetWeaver on Red Hat Enterprise Linux with Azure NetApp Files for SAP applications](./high-availability-guide-rhel-netapp-files.md)-- SAP HANA deployments using NFS v4.1 shares for /han)-- IBM Db2 in Suse or Red Hat Linux guest OS-- Oracle deployments in Oracle Linux guest OS using [dNFS](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntdbi/creating-an-oracle-database-on-direct-nfs.html#GUID-2A0CCBAB-9335-45A8-B8E3-7E8C4B889DEA) for Oracle data and redo log volumes. Some more details can be found in the article [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)-- SAP ASE in Suse or Red Hat Linux guest OS
+- IBM Db2 in Suse or Red Hat Linux-based Azure VM
+- SAP on Oracle deployments in Oracle Linux guest OS using [dNFS](https://docs.oracle.com/en/database/oracle/oracle-database/19/ntdbi/creating-an-oracle-database-on-direct-nfs.html#GUID-2A0CCBAB-9335-45A8-B8E3-7E8C4B889DEA) for Oracle data and redo log volumes. Some more details can be found in the article [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md)
+- SAP on ASE in Suse or Red Hat Linux guest OS
+- SAP on MAXDB in Suse or Red Hat Linux guest OS
+- SAP on Microsoft SQL Server with SMB volumes
> [!NOTE]
-> So far no DBMS workloads are supported on SMB based on Azure NetApp Files.
+> For DBMS workloads on Linux, use NFS-based volumes on Azure NetApp Files.
+
+### Decoupling throughput from volume size
Storage for database applications typically has throughput requirements that don't scale linearly with the size of the volumes. For example, log volumes are relatively small but require high levels of throughput.
Here's an example:
- A volume for database files requires 500 MiB/s throughput and 39 TiB capacity - A volume for log files requires 2000 MiB/s throughput and 1 TiB capacity
-You can create a manual QoS capacity pool for this scenario and allocate throughput independently of the volume sizes. The total capacity required is 40 TiB, and the total throughput is 2500 MiB/s. A capacity pool in the Premium service level (64 MiB/s per allocated TiB) accommodates both performance and capacity requirements (40 TiB * 64 TiB/s/TiB = 2560 TiB).
+You can create a manual QoS capacity pool for this scenario and allocate throughput independently of the volume sizes. The total capacity required is 40 TiB, and the total throughput budget is 2500 MiB/s. A capacity pool in the Premium service level (64 MiB/s per allocated TiB) accommodates both performance and capacity requirements (40 TiB * 64 MiB/s per TiB = 2560 MiB/s).
-Linear performance scaling would require considerable overprovisioning of the log volume to achieve the throughput requirement. To achieve the 2000 MiB/s throughput for the log volume, you'd need to deploy a capacity pool in the Ultra tier (128 MiB/s per allocated TiB) of 16 TiB, resulting in a wasted capacity of 15 TiB.
+Linear performance scaling would require considerable overprovisioning of the log volume to achieve the throughput requirement. To achieve the 2000 MiB/s throughput for the log volume, you'd need to deploy a capacity pool in the Ultra tier (128 MiB/s per allocated TiB) of 16 TiB, resulting in overprovisioning and therefore 15 TiB of wasted capacity.
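As a rough sketch of the sizing arithmetic, using the example numbers above, the following snippet compares the throughput budget of a 40-TiB Premium manual QoS capacity pool with the combined volume requirements:

```bash
# Premium service level provides 64 MiB/s per allocated TiB
POOL_SIZE_TIB=40
PREMIUM_MIBS_PER_TIB=64
POOL_THROUGHPUT=$((POOL_SIZE_TIB * PREMIUM_MIBS_PER_TIB))   # 2560 MiB/s budget for the pool

# With manual QoS, throughput is assigned per volume, independent of the volume size
DATA_VOLUME_MIBS=500     # 39-TiB data volume
LOG_VOLUME_MIBS=2000     # 1-TiB log volume
echo "Required: $((DATA_VOLUME_MIBS + LOG_VOLUME_MIBS)) MiB/s, pool budget: ${POOL_THROUGHPUT} MiB/s"
```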
Use the [Azure NetApp Files Performance Calculator](https://aka.ms/anfcalc) to get an estimate for your scenario.
-The capability matrix for SAP workload looks like:
+The capability matrix for SAP workload on Azure NetApp Files looks like:
| Capability| Comment| Notes/Links | | | | |
-| OS base VHD | Doesn't work | - |
-| Data disk | Suitable | SAP HANA, Oracle on Oracle Linux, Db2 and SAP ASE on SLES/RHEL |
-| SAP global transport directory | Yes | SMB and NFS |
-| SAP sapmnt | Suitable | All systems SMB (Windows only) or NFS (Linux only) |
-| Backup storage | Suitable | - |
-| Shares/shared disk | Yes | SMB 3.0, NFS v3, and NFS v4.1 |
-| Resiliency | LRS and GRS | [GRS available](../../azure-netapp-files/cross-region-replication-introduction.md) |
+| OS base VHD | Use managed disk | - |
+| Data disk | Suitable | SAP HANA, Oracle on Oracle Linux, Db2 and SAP ASE on SLES/RHEL, MAXDB, SQL Server |
+| SAP global transport directory | Yes | SMB (Windows only) and NFS (Linux only) |
+| SAP sapmnt | Suitable |SMB (Windows only) or NFS (Linux only) |
+| Backup storage | Suitable | Use snapshots and/or Azure NetApp Files backup; log backup for HANA can also be used as file based backup destination |
+| Shares/shared disk | Yes | SMB, NFS |
+| Resiliency | LRS and GRS | [GRS with cross-region replication](../../azure-netapp-files/cross-region-replication-introduction.md); [ZRS with cross-zone replication](../../azure-netapp-files/cross-zone-replication-introduction.md) |
| Latency | Very low | Typically less than 1 ms | | IOPS SLA | Yes | - |
-| IOPS linear to capacity | Linear with auto QoS; independent with Manual QoS | Three [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) available |
-| Throughput SLA | Yes | Sizing recommendations are available in the SAP on Azure NetApp FIles TCO Estimator |
-| Throughput linear to capacity | Linear with auto QoS; independent with Manual QoS | Three [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) available |
-| HANA certified | Yes | - |
-| Disk snapshots possible | Yes | - |
-| Azure Backup VM snapshots possible | No | Use [AzAcSnap](../../azure-netapp-files/azacsnap-introduction.md) or [SnapCenter](https://docs.netapp.com/us-en/snapcenter/concept/concept_snapcenter_overview.html) |
-| Costs | Competitive when including benefits of snapshots and integrated backup | - |
+| IOPS linear to capacity | Linear with auto QoS; independently configurable with Manual QoS | Three [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) available |
+| Throughput SLA | Yes | Sizing recommendations are available in the [SAP on Azure NetApp Files TCO Estimator](https://aka.ms/anfsapcalc) |
+| Throughput linear to capacity | Linear with auto QoS; independently configurable with Manual QoS | Three [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md) available |
+| HANA certified | [Yes](https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/#/solutions?filters=v:deCertified;ve:24&sort=Latest%20Certification&sortDesc=true) | - |
+| Disk snapshots possible | Yes | See [How Azure NetApp Files snapshots work](../../azure-netapp-files/snapshots-introduction.md) |
+| Application consistent snapshot and backup orchestration | No | Use [AzAcSnap](../../azure-netapp-files/azacsnap-introduction.md) or [SnapCenter](https://docs.netapp.com/us-en/snapcenter/concept/concept_snapcenter_overview.html) |
+| Costs | Use TCO estimation tools | Use the [SAP on Azure NetApp Files TCO Estimator](https://aka.ms/anfsapcalc) and enter the size of the landscape |
Other built-in functionality of Azure NetApp Files storage: -- Capability to perform snapshots of volume-- Cloning of Azure NetApp Files volumes from snapshots-- Restore volumes from snapshots (snap-revert)-- [Application consistent Snapshot backup for SAP HANA and Oracle](../../azure-netapp-files/azacsnap-introduction.md)
+- Capability to perform application-consistent [snapshots](../../azure-netapp-files/snapshots-introduction.md) of volumes using [AzAcSnap](../../azure-netapp-files/azacsnap-introduction.md)
+- Cloning of Azure NetApp Files [volumes from snapshots](../../azure-netapp-files/snapshots-restore-new-volume.md) for testing and development
+- Restoring [volumes from snapshots (snap-revert)](../../azure-netapp-files/snapshots-revert-volume.md) for rapid recovery from corruption and errors
> [!IMPORTANT] > Specifically for database deployments you want to achieve low latencies for at least your redo logs. Especially for SAP HANA, SAP requires a latency of less than 1 millisecond for HANA redo log writes of smaller sizes. To get to such latencies, see the possibilities below. > [!IMPORTANT]
-> Even for non-DBMS usage, you should use the functionality that allows you to create the NFS share in the same Azure Availability Zones as you placed your VM(s) that should mount the NFS shares into. This functionality is documented in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). The motivation to have this type of Availability Zone alignment is the reduction of risk surface by having the NFS shares yet in another AvZone where you don't run VMs in.
---- You go for the closest proximity between VM and NFS share that can be arranged by using [Application Volume Groups](../../azure-netapp-files/application-volume-group-introduction.md). The advantage of Application Volume Groups, besides allocating best proximity and with that creating lowest latency, is that your different NFS shares for SAP HANA deployments are distributed across different controllers in the Azure NetApp Files backend clusters. Disadvantage of this method is that you need to go through a pinning process again. A process that ends restricting your VM deployment to a single datacenter. Instead of an Availability Zones as the first method introduced. This means less flexibility in changing VM sizes and VM families of the VMs that have the NFS volumes mounted.-- Current process of not using Availability Placement Groups. Which so far are available for SAP HANA only. This process also uses the same manual pinning process as this is the case with Availability Volume groups. This method is the method used for the last three years. It has the same flexibility restrictions as the process has with Availability Volume Groups.-
+> When deploying Azure NetApp Files volumes, take note of the zone in which the virtual machines are or will be deployed, and select the same zone for the volumes. This functionality is documented in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). Application volume group for SAP HANA uses the same functionality to deploy the volumes in the closest possible proximity to the application VMs.
-As preferences for allocating NFS volumes based on Azure NetApp Files for database specific usage, you should attempt to allocate the NFS volume in the same zone as your VM first. Especially for non-HANA databases. Only if latency proves to be insufficient you should go through a manual pinning process. For smaller HANA workload or nonproduction HANA workload, you should follow a zonal allocation method as well. Only in cases where performance and latency aren't sufficient you should use Application Volume Groups.
+Aligning availability zones in this way reduces the risk surface by keeping the NFS shares in the same availability zone as the application VMs.
+* Deploy Azure NetApp Files volumes for your SAP HANA deployment using [application volume group for SAP HANA](../../azure-netapp-files/application-volume-group-introduction.md). The advantage of Application Volume Group is that data volumes are deployed over multiple storage endpoints, reducing network contention and improving performance.
-**Summary**: Azure NetApp Files is a HANA certified low latency storage that allows to deploy NFS and SMB volumes or shares. The storage comes with three different service levels that provide different throughput and IOPS in a linear manner per GiB capacity of the volume. The Azure NetApp Files storage is enabling to deploy SAP HANA scale-out scenarios with a standby node. The storage is suitable for providing file shares as needed for /sapmnt or SAP global transport directory. Azure NetApp Files storage come with functionality availability that is available as native NetApp functionality.
+**Summary**: Azure NetApp Files is a certified low latency storage solution for SAP HANA. The service provides volumes carved out of one or more capacity pools. Capacity pools are available in three service levels which define the total capacity and throughput allocated. The volumes can be resized, and allocated throughput can be adjusted without service interruption to cater for changing requirements and to control cost. The service provides functionality to replicate volumes to other regions or zones for disaster recovery and business continuance purposes.
## Azure Premium Files [Azure Premium Files](../../storage/files/storage-files-planning.md) is a shared storage service that offers SMB and NFS at a moderate price and with latency that is sufficient for shares of the SAP application layer. In addition, Azure Premium Files offers synchronous zonal replication of the shares: if one replica fails, another replica in another zone can take over. In contrast to Azure NetApp Files, there are no performance tiers and no capacity pool is required. Charging is based on the provisioned capacity of the individual shares. Azure Premium Files hasn't been tested as DBMS storage for SAP workload. Instead, the usage scenario for SAP workload focuses on all types of SMB and NFS shares as they're used on the SAP application layer. Azure Premium Files is also suited for **/hana/shared**.
search Cognitive Search Custom Skill Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-custom-skill-web-api.md
- ignite-2023 Previously updated : 07/22/2024 Last updated : 07/25/2024 # Custom Web API skill in an Azure AI Search enrichment pipeline
Parameters are case-sensitive.
| Parameter name | Description | |--|-| | `uri` | The URI of the Web API to which the JSON payload is sent. Only the **https** URI scheme is allowed. |
-| `authResourceId` | (Optional) A string that if set, indicates that this skill should use a managed identity on the connection to the function or app hosting the code. This property takes an application (client) ID or app's registration in Microsoft Entra ID, in any of these formats: `api://<appId>`, `<appId>/.default`, `api://<appId>/.default`. This value is used to scope the authentication token retrieved by the indexer, and is sent along with the custom Web skill API request to the function or app. Setting this property requires that your search service is [configured for managed identity](search-howto-managed-identities-data-sources.md) and your Azure function app is [configured for a Microsoft Entra sign in](../app-service/configure-authentication-provider-aad.md). To use this parameter, call the API with `api-version=2023-10-01-Preview`. |
+| `authResourceId` | (Optional) A string that if set, indicates that this skill should use a system managed identity on the connection to the function or app hosting the code. This property takes an application (client) ID or app's registration in Microsoft Entra ID, in any of these formats: `api://<appId>`, `<appId>/.default`, `api://<appId>/.default`. This value is used to scope the authentication token retrieved by the indexer, and is sent along with the custom Web skill API request to the function or app. Setting this property requires that your search service is [configured for managed identity](search-howto-managed-identities-data-sources.md) and your Azure function app is [configured for a Microsoft Entra sign in](../app-service/configure-authentication-provider-aad.md). To use this parameter, call the API with `api-version=2023-10-01-Preview`. |
+| `authIdentity` | (Optional) A user-managed identity used by the search service for connecting to the function or app hosting the code. You can use either a [system or user managed identity](search-howto-managed-identities-data-sources.md). To use a system managed identity, leave `authIdentity` blank. |
| `httpMethod` | The method to use while sending the payload. Allowed methods are `PUT` or `POST` | | `httpHeaders` | A collection of key-value pairs where the keys represent header names and values represent header values that are sent to your Web API along with the payload. The following headers are prohibited from being in this collection: `Accept`, `Accept-Charset`, `Accept-Encoding`, `Content-Length`, `Content-Type`, `Cookie`, `Host`, `TE`, `Upgrade`, `Via`. | | `timeout` | (Optional) When specified, indicates the timeout for the http client making the API call. It must be formatted as an XSD "dayTimeDuration" value (a restricted subset of an [ISO 8601 duration](https://www.w3.org/TR/xmlschema11-2/#dayTimeDuration) value). For example, `PT60S` for 60 seconds. If not set, a default value of 30 seconds is chosen. The timeout can be set to a maximum of 230 seconds and a minimum of 1 second. |
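For illustration, a skill definition that uses these parameters might look like the following sketch. The function URI and the application ID in `authResourceId` are placeholders, and the inputs and outputs are simplified:

```json
{
  "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
  "description": "Calls a custom enrichment endpoint",
  "uri": "https://contoso-enrichment.azurewebsites.net/api/enrich",
  "httpMethod": "POST",
  "timeout": "PT60S",
  "authResourceId": "api://00000000-0000-0000-0000-000000000000",
  "context": "/document",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "customEntities", "targetName": "customEntities" }
  ]
}
```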
search Cognitive Search Defining Skillset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-defining-skillset.md
Context also determines where outputs are produced in the [enrichment tree](cogn
Skills read from and write to an enriched document. Skill inputs specify the origin of the incoming data. It's often the root node of the enriched document. For blobs, a typical skill input is the document's content property.
-[Skill reference documentation](cognitive-search-predefined-skills.md) for each skill describes the inputs it can consume. Each input has a "name" that identifies a specific input, and a "source" that specifies the location fo the data in the enriched document. The following example is from the Entity Recognition skill:
+[Skill reference documentation](cognitive-search-predefined-skills.md) for each skill describes the inputs it can consume. Each input has a "name" that identifies a specific input, and a "source" that specifies the location of the data in the enriched document. The following example is from the Entity Recognition skill:
```json "inputs": [
search Cognitive Search Skill Azure Openai Embedding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-azure-openai-embedding.md
- ignite-2023 - build-2024 Previously updated : 05/28/2024 Last updated : 07/25/2024 # Azure OpenAI Embedding skill
Parameters are case-sensitive.
| Inputs | Description | ||-|
-| `resourceUri` | The URI of a model provider, such as an Azure OpenAI resource or an OpenAI URL. |
+| `resourceUri` | The URI of the model provider, in this case, an Azure OpenAI resource. This parameter only supports URLs with domain `openai.azure.com`, such as `https://<resourcename>.openai.azure.com`. If the Azure OpenAI endpoint has a URL with domain `cognitiveservices.azure.com`, like `https://<resourcename>.cognitiveservices.azure.com`, a [custom subdomain](/azure/ai-services/openai/how-to/use-your-data-securely#enabled-custom-subdomain) with `openai.azure.com` must be created first for the Azure OpenAI resource and use `https://<resourcename>.openai.azure.com` instead. |
| `apiKey` | The secret key used to access the model. If you provide a key, leave `authIdentity` empty. If you set both the `apiKey` and `authIdentity`, the `apiKey` is used on the connection. | | `deploymentId` | The name of the deployed Azure OpenAI embedding model. The model should be an embedding model, such as text-embedding-ada-002. See the [List of Azure OpenAI models](/azure/ai-services/openai/concepts/models) for supported models.| | `authIdentity` | A user-managed identity used by the search service for connecting to Azure OpenAI. You can use either a [system or user managed identity](search-howto-managed-identities-data-sources.md). To use a system managed identity, leave `apiKey` and `authIdentity` blank. The system-managed identity is used automatically. A managed identity must have [Cognitive Services OpenAI User](/azure/ai-services/openai/how-to/role-based-access-control#azure-openai-roles) permissions to send text to Azure OpenAI. |
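As an illustrative sketch, a skill definition that relies on the system-assigned managed identity (so `apiKey` and `authIdentity` are omitted) might look like the following. The resource name, deployment, and output field name are placeholders:

```json
{
  "@odata.type": "#Microsoft.Skills.Text.AzureOpenAIEmbeddingSkill",
  "context": "/document",
  "resourceUri": "https://my-aoai-resource.openai.azure.com",
  "deploymentId": "text-embedding-ada-002",
  "inputs": [
    { "name": "text", "source": "/document/content" }
  ],
  "outputs": [
    { "name": "embedding", "targetName": "contentVector" }
  ]
}
```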
search Search Get Started Rag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-rag.md
You can also start a new file on your local system and create requests manually
Requests to the search endpoint must be authenticated and authorized. You can use API keys or roles for this task. Keys are easier to start with, but roles are more secure. This quickstart assumes roles.
+1. Sign in to the [Azure portal](https://portal.azure.com).
+ 1. Configure Azure OpenAI to use a system-assigned managed identity: 1. In the Azure portal, find your Azure OpenAI resource.
Requests to the search endpoint must be authenticated and authorized. You can us
1. On the left menu, select **Settings** > **Keys**, and then select either **Role-based access control** or **Both**.
- 1. On the left menu, select **Access control (IAM)**.
- 1. Assign roles:
+ 1. On the left menu, select **Access control (IAM)**.
+ 1. On Azure AI Search, add two role assignments for the Azure OpenAI managed identity: - **Search Index Data Reader** - **Search Service Contributor**
- 1. On Azure OpenAI, assign yourself to a role. The code for this quickstart runs locally. Requests to Azure OpenAI originate from your system:
+ 1. On Azure OpenAI, select **Access control (IAM)** to assign yourself to a role. The code for this quickstart runs locally. Requests to Azure OpenAI originate from your system:
- **Cognitive Services OpenAI User**
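    If you prefer scripting over the portal, role assignments like these can also be created with the Azure CLI. In this sketch, the principal ID of the Azure OpenAI managed identity and the search service resource ID are placeholders:

    ```bash
    # Assign the Azure OpenAI managed identity to roles on the search service
    az role assignment create \
        --assignee "<azure-openai-principal-id>" \
        --role "Search Index Data Reader" \
        --scope "<search-service-resource-id>"

    az role assignment create \
        --assignee "<azure-openai-principal-id>" \
        --role "Search Service Contributor" \
        --scope "<search-service-resource-id>"
    ```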
We recommend the hotels-sample-index, which can be created in minutes and runs o
1. Run the following query to test your index: `hotels near the ocean with beach access and good views`.
+ Output should look similar to the following example. Results that are returned directly from the search engine consist of fields and their verbatim values, along with metadata like a search score and a semantic ranking score and caption if you use semantic ranking.
+
+ ```
+ "@search.score": 5.600783,
+ "@search.rerankerScore": 2.4191176891326904,
+ "@search.captions": [
+ {
+ "text": "Contoso Ocean Motel. Budget. pool\r\nair conditioning\r\nbar. Oceanfront hotel overlooking the beach features rooms with a private balcony and 2 indoor and outdoor pools. Various shops and art entertainment are on the boardwalk, just steps away..",
+ "highlights": "Contoso Ocean Motel. Budget.<em> pool\r\nair conditioning\r\nbar. O</em>ceanfront hotel overlooking the beach features rooms with a private balcony and 2 indoor and outdoor pools. Various shops and art entertainment are on the boardwalk, just steps away."
+ }
+ ],
+ "HotelId": "41",
+ "HotelName": "Contoso Ocean Motel",
+ "Description": "Oceanfront hotel overlooking the beach features rooms with a private balcony and 2 indoor and outdoor pools. Various shops and art entertainment are on the boardwalk, just steps away.",
+ "Category": "Budget",
+ "Tags": [
+ "pool",
+ "air conditioning",
+ "bar"
+ ],
+ ```
+ ## Get service endpoints
+In the remaining sections, you set up API calls to Azure OpenAI and Azure AI Search. Get the service endpoints so that you can provide them as variables in your code.
+ 1. Sign in to the [Azure portal](https://portal.azure.com). 1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
We recommend the hotels-sample-index, which can be created in minutes and runs o
## Set up the query and chat thread
-This section uses Visual Studio Code and Python to call the chat APIs on Azure OpenAI.
+This section uses Visual Studio Code and Python to call the chat completion APIs on Azure OpenAI.
+
+1. Start Visual Studio Code and [open the .ipynb file](https://github.com/Azure-Samples/azure-search-python-samples/tree/main/Quickstart-RAG) or create a new Python file.
1. Install the following Python packages.
search Search Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-region-support.md
Previously updated : 07/09/2024 Last updated : 07/25/2024
None of these regions support Azure [role-based access for data plane operations
| Region | AI enrichment | Semantic ranking | Availability zones | |--|--|--|--|
-| Arizona | ✅ | | |
+| Arizona | ✅ | ✅ | |
| Texas | | | |
-| Virginia | ✅ | | ✅ |
+| Virginia | ✅ | ✅ | ✅ |
## Azure operated by 21Vianet
search Vector Search Vectorizer Azure Open Ai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/vector-search-vectorizer-azure-open-ai.md
- build-2024 Previously updated : 05/28/2024 Last updated : 07/25/2024 # Azure OpenAI vectorizer
Parameters are case-sensitive.
| Parameter name | Description | |--|-|
-| `resourceUri` | The URI of a model provider, such as an Azure OpenAI resource or an OpenAI URL. |
+| `resourceUri` | The URI of the model provider, in this case, an Azure OpenAI resource. This parameter only supports URLs with domain `openai.azure.com`, such as `https://<resourcename>.openai.azure.com`. If the Azure OpenAI endpoint has a URL with domain `cognitiveservices.azure.com`, like `https://<resourcename>.cognitiveservices.azure.com`, a [custom subdomain](/azure/ai-services/openai/how-to/use-your-data-securely#enabled-custom-subdomain) with `openai.azure.com` must be created first for the Azure OpenAI resource and use `https://<resourcename>.openai.azure.com` instead. |
| `apiKey` | The secret key used to access the model. If you provide a key, leave `authIdentity` empty. If you set both the `apiKey` and `authIdentity`, the `apiKey` is used on the connection. | | `deploymentId` | The name of the deployed Azure OpenAI embedding model. The model should be an embedding model, such as text-embedding-ada-002. See the [List of Azure OpenAI models](/azure/ai-services/openai/concepts/models) for supported models.| | `authIdentity` | A user-managed identity used by the search service for connecting to Azure OpenAI. You can use either a [system or user managed identity](search-howto-managed-identities-data-sources.md). To use a system managed identity, leave `apiKey` and `authIdentity` blank. The system-managed identity is used automatically. A managed identity must have [Cognitive Services OpenAI User](/azure/ai-services/openai/how-to/role-based-access-control#azure-openai-roles) permissions to send text to Azure OpenAI. |
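For illustration, a vectorizer entry in the index's `vectorSearch` configuration might look like the following sketch, with `apiKey` and `authIdentity` omitted so that the system-assigned managed identity is used. The vectorizer name, resource name, and deployment are placeholders:

```json
"vectorizers": [
  {
    "name": "my-openai-vectorizer",
    "kind": "azureOpenAI",
    "azureOpenAIParameters": {
      "resourceUri": "https://my-aoai-resource.openai.azure.com",
      "deploymentId": "text-embedding-ada-002"
    }
  }
]
```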
The expected field dimensions for a field configured with an Azure OpenAI vector
+ [Integrated vectorization](vector-search-integrated-vectorization.md) + [How to configure a vectorizer in a search index](vector-search-how-to-configure-vectorizer.md)
-+ [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md)
++ [Azure OpenAI Embedding skill](cognitive-search-skill-azure-openai-embedding.md)
sentinel Cef Syslog Ama Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cef-syslog-ama-overview.md
Previously updated : 06/27/2024 Last updated : 07/12/2024 #Customer intent: As a security operator, I want to understand how Microsoft Sentinel collects Syslog and CEF messages with the Azure Monitor Agent so that I can determine if this solution fits my organization's needs.
Using the same facility for both Syslog and CEF messages might result in data in
To avoid this scenario, use one of these methods: -- **If the source device enables configuration of the target facility**: On each source machine that sends logs to the log forwarder in CEF format, edit the Syslog configuration file to remove the facilities used to send CEF messages. This way, the facilities sent in CEF aren't also be sent in Syslog. Make sure that each DCR you configure in the next steps uses the relevant facility for CEF or Syslog respectively.
+- **If the source device enables configuration of the target facility**: On each source machine that sends logs to the log forwarder in CEF format, edit the Syslog configuration file to remove the facilities used to send CEF messages. This way, the facilities sent in CEF aren't also sent in Syslog. Make sure that each DCR you configure uses the relevant facility for CEF or Syslog, respectively.
To see an example of how to arrange a DCR to ingest both Syslog and CEF messages from the same agent, go to [Syslog and CEF streams in the same DCR](connect-cef-syslog-ama.md?tabs=api#syslog-and-cef-streams-in-the-same-dcr). -- **If changing the facility for the source appliance isn't applicable**: Use an ingest time transformation to filter out CEF messages from the Syslog stream to avoid duplication, as shown in the following query example.
+- **If changing the facility for the source appliance isn't applicable**: After you create the DCR, add an ingestion-time transformation to filter out CEF messages from the Syslog stream and avoid duplication. See [Tutorial: Edit a data collection rule (DCR)](../azure-monitor/essentials/data-collection-rule-edit.md). Add a KQL transformation similar to the following example (the unescaped query is shown after the JSON snippet):
- ```kusto
- source |
- where ProcessName !contains "CEF"
+ ```json
+ "transformKql": " source\n | where ProcessName !contains \"CEF\"\n"
```-
+
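  For readability, the escaped `transformKql` value above corresponds to this KQL query:

  ```kusto
  source
  | where ProcessName !contains "CEF"
  ```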
## Next steps > [!div class="nextstepaction"]
sentinel Deploy Side By Side https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/deploy-side-by-side.md
Title: Deploy Microsoft Sentinel side-by-side to an existing SIEM.
+ Title: Deploying Microsoft Sentinel side-by-side to an existing SIEM.
description: Learn how to deploy Microsoft Sentinel side-by-side to an existing SIEM.-+ Previously updated : 05/30/2022- Last updated : 07/24/2024+
-# Deploy Microsoft Sentinel side-by-side to an existing SIEM
+# Deploying Microsoft Sentinel side-by-side to an existing SIEM
Your security operations center (SOC) team uses centralized security information and event management (SIEM) and security orchestration, automation, and response (SOAR) solutions to protect your increasingly decentralized digital estate.
-This article describes how to deploy Microsoft Sentinel in a side-by-side configuration together with your existing SIEM.
+This article describes the approach and methods to consider when deploying Microsoft Sentinel in a side-by-side configuration together with your existing SIEM.
-## Select a side-by-side approach and method
+## Side-by-side approach
-Use a side-by-side architecture either as a short-term, transitional phase that leads to a completely cloud-hosted SIEM, or as a medium- to long-term operational model, depending on the SIEM needs of your organization.
+Use a side-by-side architecture either as a short-term, transitional phase that leads to a cloud-hosted SIEM, or as a medium- to long-term operational model, depending on the SIEM needs of your organization.
-For example, while the recommended architecture is to use a side-by-side architecture just long enough to complete a migration to Microsoft Sentinel, your organization may want to stay with your side-by-side configuration for longer, such as if you aren't ready to move away from your legacy SIEM. Typically, organizations who use a long-term, side-by-side configuration use Microsoft Sentinel to analyze only their cloud data.
+For example, while the recommended architecture is to use a side-by-side architecture just long enough to complete a migration to Microsoft Sentinel, your organization might want to stay with your side-by-side configuration for longer, such as if you aren't ready to move away from your legacy SIEM. Typically, organizations that use a long-term, side-by-side configuration use Microsoft Sentinel to analyze only their cloud data. Many organizations avoid running multiple on-premises analytics solutions because of cost and complexity.
+
+Microsoft Sentinel provides [pay-as-you-go pricing](billing.md) and flexible infrastructure, giving SOC teams time to adapt to the change. Deploy and test your content at a pace that works best for your organization, and learn about how to [fully migrate to Microsoft Sentinel](migration.md).
Consider the pros and cons for each approach when deciding which one to use.
-> [!NOTE]
-> Many organizations avoid running multiple on-premises analytics solutions because of cost and complexity.
->
-> Microsoft Sentinel provides [pay-as-you-go pricing](billing.md) and flexible infrastructure, giving SOC teams time to adapt to the change. Deploy and test your content at a pace that works best for your organization, and learn about how to [fully migrate to Microsoft Sentinel](migration.md).
->
### Short-term approach
+The following table describes the pros and cons of using a side-by-side architecture for a relatively short period of time.
+ |**Pros** |**Cons** | ||| |• Gives SOC staff time to adapt to new processes as you deploy workloads and analytics.<br><br>• Gains deep correlation across all data sources for hunting scenarios.<br><br>• Eliminates having to do analytics between SIEMs, create forwarding rules, and close investigations in two places.<br><br>• Enables your SOC team to quickly downgrade legacy SIEM solutions, eliminating infrastructure and licensing costs. |• Can require a steep learning curve for SOC staff. | ### Medium- to long-term approach
+The following table describes the pros and cons of using a side-by-side architecture for a medium to long period of time.
+ |**Pros** |**Cons** | ||| |• Lets you use key Microsoft Sentinel benefits, like AI, ML, and investigation capabilities, without moving completely away from your legacy SIEM.<br><br>• Saves money compared to your legacy SIEM, by analyzing cloud or Microsoft data in Microsoft Sentinel. |• Increases complexity by separating analytics across different databases.<br><br>• Splits case management and investigations for multi-environment incidents.<br><br>• Incurs greater staff and infrastructure costs.<br><br>• Requires SOC staff to be knowledgeable about two different SIEM solutions. |
-### Send alerts from a legacy SIEM to Microsoft Sentinel (Recommended)
+## Side-by-side method
+
+Determine how you'll configure and use Microsoft Sentinel side-by-side with your legacy SIEM.
+
+### Method 1: Send alerts from a legacy SIEM to Microsoft Sentinel (Recommended)
Send alerts, or indicators of anomalous activity, from your legacy SIEM to Microsoft Sentinel.
Send alerts, or indicators of anomalous activity, from your legacy SIEM to Micro
- Use your legacy SIEM to analyze on-premises data and generate alerts. - Forward the alerts from your on-premises SIEM into Microsoft Sentinel to establish a single interface.
-For example, forward alerts using [Logstash](connect-logstash.md), [APIs](/rest/api/securityinsights/), or [Syslog](connect-syslog.md), and store them in [JSON](https://techcommunity.microsoft.com/t5/azure-sentinel/tip-easily-use-json-fields-in-sentinel/ba-p/768747) format in your Microsoft Sentinel [Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
+For example, forward alerts using [Logstash](connect-logstash-data-connection-rules.md), [APIs](/rest/api/securityinsights/), or [Syslog](connect-cef-syslog-ama.md), and store them in [JSON](https://techcommunity.microsoft.com/t5/azure-sentinel/tip-easily-use-json-fields-in-sentinel/ba-p/768747) format in your Microsoft Sentinel Log Analytics workspace.
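As a hedged illustration of the API path, the sketch below uses the Azure Monitor Logs Ingestion API from Python to push one legacy SIEM alert, as JSON, into a Log Analytics custom table. The data collection endpoint, DCR immutable ID, and stream name are placeholder assumptions, and the `azure-monitor-ingestion` and `azure-identity` packages are required.

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.ingestion import LogsIngestionClient

# Placeholder values: data collection endpoint, DCR immutable ID, and custom stream name.
endpoint = "https://<your-dce>.ingest.monitor.azure.com"
rule_id = "dcr-00000000000000000000000000000000"
stream_name = "Custom-LegacySiemAlerts_CL"

client = LogsIngestionClient(endpoint=endpoint, credential=DefaultAzureCredential())

# One alert from the legacy SIEM, already shaped for the target table's schema.
alert = {
    "TimeGenerated": "2024-07-24T12:00:00Z",
    "AlertName": "Suspicious sign-in",
    "Severity": "Medium",
    "SourceSystem": "LegacySIEM",
}

client.upload(rule_id=rule_id, stream_name=stream_name, logs=[alert])
```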
By sending alerts from your legacy SIEM to Microsoft Sentinel, your team can cross-correlate and investigate those alerts in Microsoft Sentinel. The team can still access the legacy SIEM for deeper investigation if needed. Meanwhile, you can continue deploying data sources over an extended transition period.
For more information, see:
If you want to fully migrate to Microsoft Sentinel, review the full [migration guide](migration.md).
-### Send alerts and enriched incidents from Microsoft Sentinel to a legacy SIEM
+### Method 2: Send alerts and enriched incidents from Microsoft Sentinel to a legacy SIEM
Analyze some data in Microsoft Sentinel, such as cloud data, and then send the generated alerts to a legacy SIEM. Use the *legacy* SIEM as your single interface to do cross-correlation with the alerts that Microsoft Sentinel generated. You can still use Microsoft Sentinel for deeper investigation of the Microsoft Sentinel-generated alerts.
The following table describes side-by-side configurations that are *not* recomme
|**Send logs from a legacy SIEM to Microsoft Sentinel** | While this method provides you with the full functionality of Microsoft Sentinel, your organization still pays for two different data ingestion sources. Besides adding architectural complexity, this model can result in higher costs. | |**Use Microsoft Sentinel and your legacy SIEM as two fully separate solutions** | You could use Microsoft Sentinel to analyze some data sources, like your cloud data, and continue to use your on-premises SIEM for other sources. This setup allows for clear boundaries for when to use each solution, and avoids duplication of costs. <br><br>However, cross-correlation becomes difficult, and you can't fully diagnose attacks that cross both sets of data sources. In today's landscape, where threats often move laterally across an organization, such visibility gaps can pose significant security risks. |
-## Use automation to streamline processes
+## Streamline processes by using automation
Use automated workflows to group and prioritize alerts into a common incident, and modify its priority. For more information, see: -- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md).
+- [Automation in Microsoft Sentinel: Security orchestration, automation, and response (SOAR)](automation/automation.md)
- [Automate threat response with playbooks in Microsoft Sentinel](automate-responses-with-playbooks.md) - [Automate incident handling in Microsoft Sentinel with automation rules](automate-incident-handling-with-automation-rules.md)
-## Next steps
+## Related content
Explore Microsoft's Microsoft Sentinel resources to expand your skills and get the most out of Microsoft Sentinel.
-Also consider increasing your threat protection by using Microsoft Sentinel alongside [Microsoft Defender XDR](./microsoft-365-defender-sentinel-integration.md) and [Microsoft Defender for Cloud](../security-center/azure-defender.md) for [integrated threat protection](https://www.microsoft.com/security/business/threat-protection). Benefit from the breadth of visibility that Microsoft Sentinel delivers, while diving deeper into detailed threat analysis.
+Consider increasing your threat protection by using Microsoft Sentinel alongside [Microsoft Defender XDR](./microsoft-365-defender-sentinel-integration.md) and [Microsoft Defender for Cloud](../security-center/azure-defender.md) for [integrated threat protection](https://www.microsoft.com/security/business/threat-protection). Benefit from the breadth of visibility that Microsoft Sentinel delivers, while diving deeper into detailed threat analysis.
For more information, see: - [Rule migration best practices](https://techcommunity.microsoft.com/t5/azure-sentinel/best-practices-for-migrating-detection-rules-from-arcsight/ba-p/2216417) - [Webinar: Best Practices for Converting Detection Rules](https://www.youtube.com/watch?v=njXK1h9lfR4)-- [Security Orchestration, Automation, and Response (SOAR) in Microsoft Sentinel](automation.md) - [Manage your SOC better with incident metrics](manage-soc-with-incident-metrics.md) - [Microsoft Sentinel learning path](/training/paths/security-ops-sentinel/) - [SC-200 Microsoft Security Operations Analyst certification](/certifications/exams/sc-200)
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## July 2024
+- [SOC optimizations now generally available](#soc-optimizations-now-generally-available)
- [SAP Business Technology Platform (BTP) connector now generally available](#sap-business-technology-platform-btp-connector-now-generally-available-ga) - [Microsoft unified security platform now generally available](#microsoft-unified-security-platform-now-generally-available)
+### SOC optimizations now generally available
+
+The SOC optimization experience in both the Azure and Defender portals is now generally available for all Microsoft Sentinel customers, including both data value and threat-based recommendations.
+
+- **Use data value recommendations** to improve how you use ingested billable logs, gain visibility into underused logs, and discover the right detections for those logs or the right adjustments to your log tier or ingestion.
+
+- **Use threat-based recommendations** to help identify gaps in coverage against specific attacks based on Microsoft research and mitigate them by ingesting the recommended logs and adding recommended detections.
+
+The [`recommendations`](soc-optimization/soc-optimization-api.md) API is still in Preview.
+
+For more information, see:
+
+- [Optimize your security operations](soc-optimization/soc-optimization-access.md)
+- [SOC optimization reference of recommendations](soc-optimization/soc-optimization-reference.md)
+ ### SAP Business Technology Platform (BTP) connector now generally available (GA) The Microsoft Sentinel Solution for SAP BTP is now generally available (GA). This solution provides visibility into your SAP BTP environment, and helps you detect and respond to threats and suspicious activities.
service-bus-messaging Advanced Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/advanced-features-overview.md
Title: Azure Service Bus messaging - advanced features
-description: This article provides a high-level overview of advanced features in Azure Service Bus.
- Previously updated : 06/08/2023
+description: This article provides a high-level overview of advanced features in Azure Service Bus, such as sessions, scheduled delivery, and autodelete on idle.
+ Last updated : 07/25/2024
+#customer intent: As a developer of messaging applications, I want to know which features Azure Service Bus supports so that I can make informed decisions.
# Azure Service Bus - advanced features
Messages in the dead-letter queue are annotated with the reason why they've been
You can submit messages to a queue or a topic for delayed processing, setting a time when the message becomes available for consumption. Scheduled messages can also be canceled. For more information, see [Scheduled messages](message-sequencing.md#scheduled-messages). ## Message deferral
-A queue or subscription client can defer retrieval of a received message until a later time. The message may have been posted out of an expected order and the client wants to wait until it receives another message. Deferred messages remain in the queue or subscription and must be reactivated explicitly using their service-assigned sequence number. For more information, see [Message deferral](message-deferral.md).
+A queue or subscription client can defer retrieval of a received message until a later time. The message might have been posted out of an expected order and the client wants to wait until it receives another message. Deferred messages remain in the queue or subscription and must be reactivated explicitly using their service-assigned sequence number. For more information, see [Message deferral](message-deferral.md).
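A minimal sketch of the defer-then-reactivate flow, using the Python SDK as one of several client libraries and a hypothetical queue name and connection string:

```python
from azure.servicebus import ServiceBusClient

client = ServiceBusClient.from_connection_string("<connection-string>")  # placeholder

with client:
    receiver = client.get_queue_receiver("myqueue")  # hypothetical queue name
    with receiver:
        deferred_sequence_numbers = []
        for msg in receiver.receive_messages(max_message_count=5, max_wait_time=5):
            # Not ready to process this one yet: defer it and remember its sequence number.
            deferred_sequence_numbers.append(msg.sequence_number)
            receiver.defer_message(msg)

        # Later: deferred messages must be reactivated explicitly by sequence number.
        if deferred_sequence_numbers:
            for msg in receiver.receive_deferred_messages(deferred_sequence_numbers):
                receiver.complete_message(msg)
```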
## Transactions A transaction groups two or more operations together into an execution scope. Service Bus allows you to group operations against multiple messaging entities within the scope of a single transaction. A message entity can be a queue, topic, or subscription. For more information, see [Overview of Service Bus transaction processing](service-bus-transactions.md). ## Autodelete on idle
-Autodelete on idle enables you to specify an idle interval after which a queue or topic subscription is automatically deleted. The interval is reset when a message is added to or removed from the subscription. The minimum duration is 5 minutes. For an overview on what is considered as idleness for entities, please check [Idleness](message-expiration.md#idleness).
+Autodelete on idle enables you to specify an idle interval after which a queue or topic subscription is automatically deleted. The interval is reset when a message is added to or removed from the subscription. The minimum duration is 5 minutes. For an overview on what is considered as idleness for entities, see [Idleness](message-expiration.md#idleness).
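As a sketch of how the interval is set at creation time, the following uses the Python administration client with a hypothetical queue name and connection string; the exact property name can vary slightly across SDKs.

```python
from datetime import timedelta
from azure.servicebus.management import ServiceBusAdministrationClient

admin_client = ServiceBusAdministrationClient.from_connection_string("<connection-string>")  # placeholder

with admin_client:
    # The queue is deleted automatically after 10 idle minutes (the minimum interval is 5 minutes).
    admin_client.create_queue("expiring-queue", auto_delete_on_idle=timedelta(minutes=10))
```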
## Duplicate detection The duplicate detection feature enables the sender to resend the same message again and for the broker to drop a potential duplicate. For more information, see [Duplicate detection](duplicate-detection.md).
-## Batch delete of Messages
-Azure Service Bus supports deletion of messages in batches. This is useful in scenarios when messages within queues or subscriptions have become expired , or no longer relevant, necessitating a cleanup. For more information, see [Batch delete](batch-delete.md).
+## Batch deletion of messages
+Azure Service Bus supports deletion of messages in batches. Batch deletion is useful when messages in queues or subscriptions have expired or are no longer relevant and need to be cleaned up. For more information, see [Batch delete](batch-delete.md).
## Support ordering The **Support ordering** feature allows you to specify whether messages that are sent to a topic are forwarded to the subscription in the same order in which they were sent. This feature doesn't support partitioned topics. For more information, see [TopicProperties.SupportOrdering](/dotnet/api/azure.messaging.servicebus.administration.topicproperties.supportordering) in .NET or [TopicProperties.setOrderingSupported](/java/api/com.azure.messaging.servicebus.administration.models.topicproperties.setorderingsupported) in Java. ## Geo-disaster recovery
-When an Azure region experiences downtime, the disaster recovery feature enables message processing to continue operating in a different region or data center. The feature keeps a structural mirror of a namespace available in the secondary region and allows the namespace identity to switch to the secondary namespace. Already posted messages remain in the former primary namespace for recovery once the availability episode subsides. For more information, see [Azure Service Bus Geo-disaster recovery](service-bus-geo-dr.md).
+When an Azure region experiences downtime, the disaster recovery feature enables message processing to continue operating in a different region or data center. The feature keeps a structural mirror of a namespace available in the secondary region and allows the namespace identity to switch to the secondary namespace. Already posted messages remain in the former primary namespace for recovery once the availability episode subsides. For more information, see [Azure Service Bus Geo-disaster recovery](service-bus-geo-dr.md). This feature replicates only metadata (entities, configuration, properties) of Service Bus entities, not the data in them.
+
+## Geo replication
+The Service Bus Geo-Replication feature is one of the options to [insulate Azure Service Bus applications against outages and disasters](service-bus-outages-disasters.md), providing replication of both metadata (entities, configuration, properties) and data (message data and message property / state changes).
## Security
-Service Bus supports standard [AMQP 1.0](service-bus-amqp-overview.md) and [HTTP or REST](/rest/api/servicebus/) protocols and their respective security facilities, including transport-level security (TLS). Clients can be authorized for access using [Shared Access Signature](service-bus-sas.md) or [Microsoft Entra ID](service-bus-authentication-and-authorization.md) role-based security.
+Service Bus supports standard [Advanced Message Queuing Protocol (AMQP) 1.0](service-bus-amqp-overview.md) and [HTTP or REST](/rest/api/servicebus/) protocols and their respective security facilities, including Transport Layer Security (TLS). Clients can be authorized for access using [Shared Access Signature](service-bus-sas.md) or [Microsoft Entra ID](service-bus-authentication-and-authorization.md) role-based security.
For protection against unwanted traffic, Service Bus provides [security features](network-security.md) such as IP firewall and integration with virtual networks.
-## Next steps
+## Related content
See [Service Bus messaging samples](service-bus-samples.md) that show how to use these Service Bus features.
service-bus-messaging Disable Local Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/disable-local-authentication.md
Title: Disable local authentication with Azure Service Bus description: This article explains how to disable local or Shared Access Signature key authentication for a Service Bus namespace. Previously updated : 02/01/2022 Last updated : 07/25/2024
+#customer intent: As a developer or IT administrator, I want to know how to disable shared access key authentication and use only Microsoft Entra ID authentication for higher security.
# Disable local or shared access key authentication with Azure Service Bus
-There are two ways to authenticate to Azure Service Bus resources: Microsoft Entra ID and Shared Access Signatures (SAS). Microsoft Entra ID provides superior security and ease of use over shared access signatures (SAS). With Microsoft Entra ID, thereΓÇÖs no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Microsoft Entra ID with your Azure Service Bus applications when possible.
+There are two ways to authenticate to Azure Service Bus resources:
-This article explains how to disable SAS key authentication and use only Microsoft Entra ID for authentication.
+- Microsoft Entra ID
+- Shared Access Signatures (SAS)
+
+Microsoft Entra ID provides superior security and ease of use over shared access signatures (SAS). With Microsoft Entra ID, there's no need to store the tokens in your code and risk potential security vulnerabilities. We recommend that you use Microsoft Entra ID with your Azure Service Bus applications when possible.
+
+This article explains how to disable SAS key authentication (or local authentication) and use only Microsoft Entra ID for authentication.
## Use portal to disable local auth In this section, you learn how to use the Azure portal to disable local authentication.
In this section, you learn how to use the Azure portal to disable local authenti
1. Navigate to your Service Bus namespace in the [Azure portal](https://portal.azure.com). 1. In the **Essentials** section of the **Overview** page, select **Enabled**, for **Local Authentication**.
- :::image type="content" source="./media/disable-local-authentication/portal-overview-enabled.png" alt-text="Image showing the Overview page of a Service Bus namespace with Local Authentication set to Enabled.":::
+ :::image type="content" source="./media/disable-local-authentication/portal-overview-enabled.png" alt-text="Screenshot that shows the Overview page of a Service Bus namespace with Local Authentication set to Enabled." lightbox="./media/disable-local-authentication/portal-overview-enabled.png":::
1. On the **Local Authentication** page, select **Disabled**, and select **OK**.
- :::image type="content" source="./media/disable-local-authentication/select-disabled.png" alt-text="Disable location.":::
+ :::image type="content" source="./media/disable-local-authentication/select-disabled.png" alt-text="Screenshot that shows the selection of Disabled option on the Local Authentication page.":::
## Use Resource Manager template to disable local auth You can disable local authentication for a Service Bus namespace by setting `disableLocalAuth` property to `true` as shown in the following Azure Resource Manager template.
You can disable local authentication for a Service Bus namespace by setting `dis
``` ## Azure policy
-You can assign the [disable local auth](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcfb11c26-f069-4c14-8e36-56c394dae5af) Azure policy to an Azure subscription or a resource group to enforce disabling of local authentication for all Service Bus namespaces in the subscription or the resource group.
+You can assign the [disable local auth](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcfb11c26-f069-4c14-8e36-56c394dae5af) Azure policy to an Azure subscription or a resource group to enforce disabling of local authentication for all Service Bus namespaces in the subscription or the resource group.
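After local authentication is disabled, client applications must authenticate with Microsoft Entra ID instead of SAS keys. A minimal data-plane sketch, assuming the `azure-servicebus` and `azure-identity` Python packages, a hypothetical namespace and queue, and a caller that holds an appropriate role such as Azure Service Bus Data Sender:

```python
from azure.identity import DefaultAzureCredential
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholder namespace; no connection string or SAS key is used anywhere.
fully_qualified_namespace = "<your-namespace>.servicebus.windows.net"

client = ServiceBusClient(fully_qualified_namespace, credential=DefaultAzureCredential())

with client:
    sender = client.get_queue_sender("myqueue")  # hypothetical queue name
    with sender:
        sender.send_messages(ServiceBusMessage("sent with Microsoft Entra ID auth"))
```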
-## Next steps
+## Related content
See the following to learn about Microsoft Entra ID and SAS authentication. - [Authentication with SAS](service-bus-sas.md)
service-bus-messaging Message Expiration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-expiration.md
For development and test environments in which queues and topics are often used
The expiration for any individual message can be controlled by setting the **time-to-live** system property, which specifies a relative duration. The expiration becomes an absolute instant when the message is enqueued into the entity. At that time, the **expires-at-utc** property takes on the value **enqueued-time-utc** + **time-to-live**. The time-to-live (TTL) setting on a brokered message isn't enforced when there are no clients actively listening.
+> [!NOTE]
+> Messages that have expired might not be removed immediately by the broker. The broker can opt to expire these messages lazily, based on whether the entity is in active use at the time a message expires. Consequently, customers might observe an incorrect message count when using message expiration, and might even see these messages during a peek operation. However, expired messages aren't included when messages are received.
Past the **expires-at-utc** instant, messages become ineligible for retrieval. The expiration doesn't affect messages that are currently locked for delivery. Those messages are still handled normally. If the lock expires or the message is abandoned, the expiration takes immediate effect. While the message is under lock, the application might be in possession of a message that has expired. Whether the application is willing to go ahead with processing or chooses to abandon the message is up to the implementer. An extremely low TTL, on the order of milliseconds or seconds, might cause messages to expire before receiver applications receive them. Consider the highest TTL that works for your application.
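As a small sketch of the relative duration in practice, the Python snippet below sets a per-message time-to-live, assuming a hypothetical queue and connection string; the broker then computes **expires-at-utc** as **enqueued-time-utc** + 5 minutes.

```python
from datetime import timedelta
from azure.servicebus import ServiceBusClient, ServiceBusMessage

client = ServiceBusClient.from_connection_string("<connection-string>")  # placeholder

with client:
    sender = client.get_queue_sender("myqueue")  # hypothetical queue name
    with sender:
        # expires-at-utc = enqueued-time-utc + 5 minutes for this message.
        sender.send_messages(ServiceBusMessage("perishable payload", time_to_live=timedelta(minutes=5)))
```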
service-bus-messaging Message Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/message-sequencing.md
Title: Azure Service Bus message sequencing and timestamps | Microsoft Docs
+ Title: Azure Service Bus message sequencing and timestamps
description: This article explains how to preserve sequencing and ordering (with timestamps) of Azure Service Bus messages.- Previously updated : 06/06/2023+ Last updated : 07/24/2024
+#customer intent: As an architect or a developer, I want to know how the messages are sequenced (stamped with sequence number) and time-stamped in a queue or a topic in Azure Service Bus.
# Message sequencing and timestamps
Sequencing and timestamping are two features that are always enabled on all Serv
For those cases in which absolute order of messages is significant and/or in which a consumer needs a trustworthy unique identifier for messages, the broker stamps messages with a gap-free, increasing sequence number relative to the queue or topic. For partitioned entities, the sequence number is issued relative to the partition. ## Sequence number
-The **SequenceNumber** value is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its internal identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier. Sequence numbers roll over to zero when the 48/64-bit range is exhausted.
+The `SequenceNumber` value is a unique 64-bit integer assigned to a message as it is accepted and stored by the broker and functions as its internal identifier. For partitioned entities, the topmost 16 bits reflect the partition identifier. Sequence numbers roll over to zero when the 64-bit range is exhausted (or the 48-bit range for partitioned entities, which excludes the 16 bits used for the partition identifier).
-The sequence number can be trusted as a unique identifier since it's assigned by a central and neutral authority and not by clients. It also represents the true order of arrival, and is more precise than a time stamp as an order criterion, because time stamps may not have a high enough resolution at extreme message rates and may be subject to (however minimal) clock skew in situations where the broker ownership transitions between nodes.
+The sequence number can be trusted as a unique identifier since it's assigned by a central and neutral authority and not by clients. It also represents the true order of arrival, and is more precise than a time stamp as an order criterion, because time stamps might not have a high enough resolution at extreme message rates and might be subject to (however minimal) clock skew in situations where the broker ownership transitions between nodes.
The absolute arrival order matters, for example, in business scenarios in which a limited number of offered goods are served on a first-come-first-served basis while supplies last; concert ticket sales are an example. ## Timestamp
-The time-stamping capability acts as a neutral and trustworthy authority that accurately captures the UTC time of arrival of a message, reflected in the **EnqueuedTimeUtc** property. The value is useful if a business scenario depends on deadlines, such as whether a work item was submitted on a certain date before midnight, but the processing is far behind the queue backlog.
+The time-stamping capability acts as a neutral and trustworthy authority that accurately captures the UTC time of arrival of a message, reflected in the `EnqueuedTimeUtc` property. The value is useful if a business scenario depends on deadlines, such as whether a work item was submitted on a certain date before midnight, but the processing is far behind the queue backlog.
> [!NOTE] > Sequence number on its own guarantees the queuing order and the extractor order of messages, but not the processing order, which requires [sessions](message-sessions.md).
Scheduled messages don't materialize in the queue until the defined enqueue time
You can schedule messages using any of our clients in two ways: - Use the regular send API, but set the `ScheduledEnqueueTimeUtc` property on the message before sending.-- Use the schedule message API, pass both the normal message and the scheduled time. The API returns the scheduled message's **SequenceNumber**, which you can later use to cancel the scheduled message if needed.
+- Use the schedule message API, passing both the normal message and the scheduled time. The API returns the scheduled message's `SequenceNumber`, which you can later use to cancel the scheduled message if needed, as sketched after this list.
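A minimal sketch of the schedule-then-cancel round trip, using the Python SDK as one of several client libraries and a hypothetical queue and connection string:

```python
from datetime import datetime, timedelta, timezone
from azure.servicebus import ServiceBusClient, ServiceBusMessage

client = ServiceBusClient.from_connection_string("<connection-string>")  # placeholder

with client:
    sender = client.get_queue_sender("myqueue")  # hypothetical queue name
    with sender:
        enqueue_at = datetime.now(timezone.utc) + timedelta(hours=1)

        # The service returns the sequence numbers assigned to the scheduled messages.
        sequence_numbers = sender.schedule_messages(ServiceBusMessage("process me later"), enqueue_at)

        # The same sequence numbers can cancel the schedule before the message activates.
        sender.cancel_scheduled_messages(sequence_numbers)
```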
Scheduled messages and their sequence numbers can also be discovered using [message browsing](message-browsing.md).
-The **SequenceNumber** for a scheduled message is only valid while the message is in this state. As the message transitions to the active state, the message is appended to the queue as if it had been enqueued at the current instant, which includes assigning a new **SequenceNumber**.
+The `SequenceNumber` for a scheduled message is only valid while the message is in the scheduled state. As the message transitions to the active state, the message is appended to the queue as if it had been enqueued at the current instant, which includes assigning a new `SequenceNumber`.
Because the feature is anchored on individual messages and messages can only be enqueued once, Service Bus doesn't support recurring schedules for messages. > [!NOTE]
-> Message enqueuing time doesn't mean that the message will be sent at the same time. It will get enqueued, but the actual sending time depends on the queue's workload and its state.
-
-> [!NOTE]
-> Due to performance considerations, the activation and cancellation of scheduled messages are independent operations without mutual locking. If a message is in the process of being activated and is simultaneously cancelled, the activation process will not be reversed and the message will still be activated. Moreover, this can potentially lead to a negative count of scheduled messages. To minimize this race condition, it is recommended to avoid scheduling activation and cancellation operations in close succession.
+> - Message enqueuing time doesn't mean that the message will be sent at the same time. It will get enqueued, but the actual sending time depends on the queue's workload and its state.
+> - Due to performance considerations, the activation and cancellation of scheduled messages are independent operations without mutual locking. If a message is in the process of being activated and is simultaneously canceled, the activation process will not be reversed and the message will still be activated. This situation can also lead to a negative count of scheduled messages. To minimize this race condition, we recommend that you avoid scheduling activation and cancellation operations in close succession.
### Using scheduled messages with workflows
-It is common to see longer-running business workflows that have an explicit time component to them, like 5-minute timeouts for 2-factor authentication, hour-long timeouts for users confirming their email address, and multi-day, week, or month long time components in domains like banking and insurance.
+It's common to see longer-running business workflows that have an explicit time component to them, like 5-minute timeouts for 2-factor authentication, hour-long timeouts for users confirming their email address, and multi-day, week, or month long time components in domains like banking and insurance.
These workflows are often kicked off by the processing of some message, which then stores some state, and then schedules a message to continue the process at a later time. Frameworks like [NServiceBus](https://docs.particular.net/tutorials/nservicebus-sagas/2-timeouts/) and [MassTransit](https://masstransit.io/documentation/configuration/sagas/overview) make it easier to integrate all of these elements together.
-## Next steps
+## Related content
To learn more about Service Bus messaging, see the following topics: * [Service Bus queues, topics, and subscriptions](service-bus-queues-topics-subscriptions.md)
-* [Get started with Service Bus queues](service-bus-dotnet-get-started-with-queues.md)
-* [How to use Service Bus topics and subscriptions](service-bus-dotnet-how-to-use-topics-subscriptions.md)
-
-## Additional resource
- * [A blog post that describes techniques for reordering messages that arrive out of order](https://particular.net/blog/you-dont-need-ordered-delivery)
service-bus-messaging Service Bus Amqp Protocol Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-amqp-protocol-guide.md
Unlike earlier expired draft versions from the AMQP working group that are still
The protocol can be used for symmetric peer-to-peer communication, for interaction with message brokers that support queues and publish/subscribe entities, as Azure Service Bus does. It can also be used for interaction with messaging infrastructure where the interaction patterns are different from regular queues, as is the case with Azure Event Hubs. An event hub acts like a queue when events are sent to it, but acts more like a serial storage service when events are read from it; it somewhat resembles a tape drive. The client picks an offset into the available data stream and is then served all events from that offset to the latest available.
-The AMQP 1.0 protocol is designed to be extensible, enabling further specifications to enhance its capabilities. The three extension specifications discussed in this document illustrate this. For communication over existing HTTPS/WebSockets infrastructure, configuring the native AMQP TCP ports might be difficult. A binding specification defines how to layer AMQP over WebSockets. For interacting with the messaging infrastructure in a request/response fashion for management purposes or to provide advanced functionality, the AMQP management specification defines the required basic interaction primitives. For federated authorization model integration, the AMQP claims-based-security specification defines how to associate and renew authorization tokens associated with links.
+The AMQP 1.0 protocol is designed to be extensible, enabling further specifications to enhance its capabilities. The three extension specifications discussed in this document illustrate this extensibility. For communication over existing HTTPS/WebSockets infrastructure, configuring the native AMQP TCP ports might be difficult. A binding specification defines how to layer AMQP over WebSockets. For interacting with the messaging infrastructure in a request/response fashion for management purposes or to provide advanced functionality, the AMQP management specification defines the required basic interaction primitives. For federated authorization model integration, the AMQP claims-based-security specification defines how to associate and renew authorization tokens associated with links.
## Basic AMQP scenarios This section explains the basic usage of AMQP 1.0 with Azure Service Bus, which includes creating connections, sessions, and links, and transferring messages to and from Service Bus entities such as queues, topics, and subscriptions.
-The most authoritative source to learn about how AMQP works is the [AMQP 1.0 specification](http://docs.oasis-open.org/amqp/core/v1.0/amqp-core-overview-v1.0.html), but the specification was written to precisely guide implementation and not to teach the protocol. This section focuses on introducing as much terminology as needed for describing how Service Bus uses AMQP 1.0. For a more comprehensive introduction to AMQP, as well as a broader discussion of AMQP 1.0, you can review [this video course][this video course].
+The most authoritative source to learn about how AMQP works is the [AMQP 1.0 specification](http://docs.oasis-open.org/amqp/core/v1.0/amqp-core-overview-v1.0.html), but the specification was written to precisely guide implementation and not to teach the protocol. This section focuses on introducing as much terminology as needed for describing how Service Bus uses AMQP 1.0. For a more comprehensive introduction to AMQP and a broader discussion of AMQP 1.0, review [this video course][this video course].
### Connections and sessions
AMQP calls the communicating programs *containers*; those contain *nodes*, which
![Diagram showing Sessions and Connections between containers.][1]
-The network connection is thus anchored on the container. It's initiated by the container in the client role making an outbound TCP socket connection to a container in the receiver role, which listens for and accepts inbound TCP connections. The connection handshake includes negotiating the protocol version, declaring or negotiating the use of Transport Level Security (TLS/SSL), and an authentication/authorization handshake at the connection scope that is based on SASL.
+The network connection is thus anchored on the container. It's initiated by the container in the client role making an outbound TCP socket connection to a container in the receiver role, which listens for and accepts inbound TCP connections. The connection handshake includes negotiating the protocol version, declaring or negotiating the use of Transport Layer Security (TLS)/Secure Sockets Layer (SSL), and an authentication/authorization handshake at the connection scope that is based on SASL.
Azure Service Bus or Azure Event Hubs requires the use of TLS at all times. It supports connections over TCP port 5671, whereby the TCP connection is first overlaid with TLS before entering the AMQP protocol handshake, and also supports connections over TCP port 5672 whereby the server immediately offers a mandatory upgrade of connection to TLS using the AMQP-prescribed model. The AMQP WebSockets binding creates a tunnel over TCP port 443 that is then equivalent to AMQP 5671 connections.
In addition to the session-level flow control model that previously discussed, e
![Screenshot of a log showing Source, Destination, Source Port, Destination Port, and Protocol Name. In the first row the Destination Port 10401 (0x28 A 1) is outlined in black.][4]
-On a link, transfers can only happen when the sender has enough *link credit*. Link credit is a counter set by the receiver using the *flow* performative, which is scoped to a link. When the sender is assigned link credit, it attempts to use up that credit by delivering messages. Each message delivery decrements the remaining link credit by 1. When the link credit is used up, deliveries stop.
+On a link, transfers can only happen when the sender has enough *link credit*. Link credit is a counter set by the receiver using the *flow* performative, which is scoped to a link. When the sender is assigned link credit, it attempts to use up that credit by delivering messages. Each message delivery decrements the remaining link credit by 1. When the link credit is exhausted, deliveries stop.
When Service Bus is in the receiver role, it instantly provides the sender with ample link credit, so that messages can be sent immediately. As link credit is used, Service Bus occasionally sends a *flow* performative to the sender to update the link credit balance.
The following sections explain which properties from the standard AMQP message s
Any property that the application needs to define should be mapped to AMQP's `application-properties` map.
-#### header
+#### Header
| Field Name | Usage | API name | | | | |
Any property that application needs to define should be mapped to AMQP's `applic
| first-acquirer |- |- | | delivery-count |- |[DeliveryCount](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage) |
-#### properties
+#### Properties
| Field Name | Usage | API name | | | | |
-| message-id |Application-defined, free-form identifier for this message. Used for duplicate detection. |[MessageId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.messageid) |
-| user-id |Application-defined user identifier, not interpreted by Service Bus. |Not accessible through the Service Bus API. |
-| to |Application-defined destination identifier, not interpreted by Service Bus. |[To](/dotnet/api/azure.messaging.servicebus.servicebusmessage.to) |
-| subject |Application-defined message purpose identifier, not interpreted by Service Bus. |[Subject](/dotnet/api/azure.messaging.servicebus.servicebusmessage.subject) |
-| reply-to |Application-defined reply-path indicator, not interpreted by Service Bus. |[ReplyTo](/dotnet/api/azure.messaging.servicebus.servicebusmessage.replyto) |
-| correlation-id |Application-defined correlation identifier, not interpreted by Service Bus. |[CorrelationId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.correlationid) |
-| content-type |Application-defined content-type indicator for the body, not interpreted by Service Bus. |[ContentType](/dotnet/api/azure.messaging.servicebus.servicebusmessage.contenttype) |
-| content-encoding |Application-defined content-encoding indicator for the body, not interpreted by Service Bus. |Not accessible through the Service Bus API. |
-| absolute-expiry-time |Declares at which absolute instant the message expires. Ignored on input (header TTL is observed), authoritative on output. |Not accessible through the Service Bus API. |
-| creation-time |Declares at which time the message was created. Not used by Service Bus |Not accessible through the Service Bus API. |
-| group-id |Application-defined identifier for a related set of messages. Used for Service Bus sessions. |[SessionId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.sessionid) |
-| group-sequence |Counter identifying the relative sequence number of the message inside a session. Ignored by Service Bus. |Not accessible through the Service Bus API. |
-| reply-to-group-id |- |[ReplyToSessionId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.replytosessionid) |
+| `message-id` |Application-defined, free-form identifier for this message. Used for duplicate detection. |[MessageId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.messageid) |
+| `user-id` |Application-defined user identifier, not interpreted by Service Bus. |Not accessible through the Service Bus API. |
+| `to` |Application-defined destination identifier, not interpreted by Service Bus. |[To](/dotnet/api/azure.messaging.servicebus.servicebusmessage.to) |
+| `subject` |Application-defined message purpose identifier, not interpreted by Service Bus. |[Subject](/dotnet/api/azure.messaging.servicebus.servicebusmessage.subject) |
+| `reply-to` |Application-defined reply-path indicator, not interpreted by Service Bus. |[ReplyTo](/dotnet/api/azure.messaging.servicebus.servicebusmessage.replyto) |
+| `correlation-id` |Application-defined correlation identifier, not interpreted by Service Bus. |[CorrelationId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.correlationid) |
+| `content-type` |Application-defined content-type indicator for the body, not interpreted by Service Bus. |[ContentType](/dotnet/api/azure.messaging.servicebus.servicebusmessage.contenttype) |
+| `content-encoding` |Application-defined content-encoding indicator for the body, not interpreted by Service Bus. |Not accessible through the Service Bus API. |
+| `absolute-expiry-time` |Declares at which absolute instant the message expires. Ignored on input (header TTL is observed), authoritative on output. |Not accessible through the Service Bus API. |
+| `creation-time` |Declares at which time the message was created. Not used by Service Bus. |Not accessible through the Service Bus API. |
+| `group-id` |Application-defined identifier for a related set of messages. Used for Service Bus sessions. |[SessionId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.sessionid) |
+| `group-sequence` |Counter identifying the relative sequence number of the message inside a session. Ignored by Service Bus. |Not accessible through the Service Bus API. |
+| `reply-to-group-id` |- |[ReplyToSessionId](/dotnet/api/azure.messaging.servicebus.servicebusmessage.replytosessionid) |
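For illustration only, the sketch below uses the Python SDK (one of several client libraries; property names differ slightly across SDKs) to set message fields that surface as the AMQP properties listed above.

```python
from azure.servicebus import ServiceBusMessage

# Each keyword argument surfaces one of the AMQP properties in the preceding table.
message = ServiceBusMessage(
    "order accepted",
    message_id="order-42",        # AMQP message-id, used for duplicate detection
    subject="order-status",       # AMQP subject
    reply_to="replies",           # AMQP reply-to
    correlation_id="42",          # AMQP correlation-id
    content_type="text/plain",    # AMQP content-type
    session_id="customer-7",      # AMQP group-id (Service Bus session)
    application_properties={"region": "EU"},  # AMQP application-properties map
)
```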
#### Message annotations
There are few other service bus message properties, which aren't part of AMQP me
| Annotation Map Key | Usage | API name | | | | |
-| x-opt-scheduled-enqueue-time | Declares at which time the message should appear on the entity |[ScheduledEnqueueTime](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.scheduledenqueuetime) |
-| x-opt-partition-key | Application-defined key that dictates which partition the message should land in. | [PartitionKey](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.partitionkey) |
-| x-opt-via-partition-key | Application-defined partition-key value when a transaction is to be used to send messages via a transfer queue. | [TransactionPartitionKey](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.transactionpartitionkey) |
-| x-opt-enqueued-time | Service-defined UTC time representing the actual time of enqueuing the message. Ignored on input. | [EnqueuedTime](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.enqueuedtime) |
-| x-opt-sequence-number | Service-defined unique number assigned to a message. | [SequenceNumber](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.sequencenumber) |
-| x-opt-offset | Service-defined enqueued sequence number of the message. | [EnqueuedSequenceNumber](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.enqueuedsequencenumber) |
-| x-opt-locked-until | Service-defined. The date and time until which the message will be locked in the queue/subscription. | [LockedUntil](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.lockeduntil) |
-| x-opt-deadletter-source | Service-Defined. If the message is received from dead letter queue, it represents the source of the original message. | [DeadLetterSource](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.deadlettersource) |
+| `x-opt-scheduled-enqueue-time` | Declares at which time the message should appear on the entity |[ScheduledEnqueueTime](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.scheduledenqueuetime) |
+| `x-opt-partition-key` | Application-defined key that dictates which partition the message should land in. | [PartitionKey](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.partitionkey) |
+| `x-opt-via-partition-key` | Application-defined partition-key value when a transaction is to be used to send messages via a transfer queue. | [TransactionPartitionKey](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.transactionpartitionkey) |
+| `x-opt-enqueued-time` | Service-defined UTC time representing the actual time of enqueuing the message. Ignored on input. | [EnqueuedTime](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.enqueuedtime) |
+| `x-opt-sequence-number` | Service-defined unique number assigned to a message. | [SequenceNumber](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.sequencenumber) |
+| `x-opt-offset` | Service-defined enqueued sequence number of the message. | [EnqueuedSequenceNumber](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.enqueuedsequencenumber) |
+| `x-opt-locked-until` | Service-defined. The date and time until which the message will be locked in the queue/subscription. | [LockedUntil](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.lockeduntil) |
+| `x-opt-deadletter-source` | Service-Defined. If the message is received from dead letter queue, it represents the source of the original message. | [DeadLetterSource](/dotnet/api/azure.messaging.servicebus.servicebusreceivedmessage.deadlettersource) |
### Transaction capability
Every connection has to initiate its own control link to be able to start and en
#### Starting a transaction
-To begin transactional work. the controller must obtain a `txn-id` from the coordinator. It does this by sending a `declare` type message. If the declaration is successful, the coordinator responds with a disposition outcome, which carries the assigned `txn-id`.
+To begin transactional work, the controller must obtain a `txn-id` from the coordinator. It does this by sending a `declare` type message. If the declaration is successful, the coordinator responds with a disposition outcome, which carries the assigned `txn-id`.
| Client (Controller) | Direction | Service Bus (Coordinator) | | : | :: | : |
service-bus-messaging Service Bus Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-service-endpoints.md
Title: Configure virtual network service endpoints for Azure Service Bus description: This article provides information on how to add a Microsoft.ServiceBus service endpoint to a virtual network. Previously updated : 02/16/2023 Last updated : 07/24/2024 # Allow access to Azure Service Bus namespace from specific virtual networks
-The integration of Service Bus with [Virtual Network (VNet) service endpoints][vnet-sep] enables secure access to messaging capabilities from workloads like virtual machines that are bound to virtual networks, with the network traffic path being secured on both ends.
+The integration of Service Bus with [Virtual Network service endpoints][vnet-sep] enables secure access to messaging capabilities from workloads like virtual machines that are bound to virtual networks, with the network traffic path being secured on both ends.
-Once configured to be bound to at least one virtual network subnet service endpoint, the respective Service Bus namespace will no longer accept traffic from anywhere but authorized virtual network(s) and, optionally, specific internet IP addresses. From the virtual network perspective, binding a Service Bus namespace to a service endpoint configures an isolated networking tunnel from the virtual network subnet to the messaging service.
+Once configured to be bound to at least one virtual network subnet service endpoint, the respective Service Bus namespace will no longer accept traffic from anywhere but the authorized virtual networks and, optionally, specific internet IP addresses. From the virtual network perspective, binding a Service Bus namespace to a service endpoint configures an isolated networking tunnel from the virtual network subnet to the messaging service.
The result is a private and isolated relationship between the workloads bound to the subnet and the respective Service Bus namespace, in spite of the observable network address of the messaging service endpoint being in a public IP range. ## Important points-- Virtual Networks are supported only in [Premium tier](service-bus-premium-messaging.md) Service Bus namespaces. When using VNet service endpoints with Service Bus, you shouldn't enable these endpoints in applications that mix standard and premium tier Service Bus namespaces. Because the standard tier doesn't support VNets. The endpoint is restricted to Premium tier namespaces only.
+- Virtual networks are supported only in [Premium tier](service-bus-premium-messaging.md) Service Bus namespaces. When using virtual network service endpoints with Service Bus, you shouldn't enable these endpoints in applications that mix standard and premium tier Service Bus namespaces, because the standard tier doesn't support virtual networks. The endpoint is restricted to Premium tier namespaces only.
- Implementing Virtual Networks integration can prevent other Azure services from interacting with Service Bus. As an exception, you can allow access to Service Bus resources from certain **trusted services** even when network service endpoints are enabled. For a list of trusted services, see [Trusted services](#trusted-microsoft-services). The following Microsoft services are required to be on a virtual network
The result is a private and isolated relationship between the workloads bound to
- Azure Functions - Specify **at least one IP rule or virtual network rule** for the namespace to allow traffic only from the specified IP addresses or subnet of a virtual network. If there are no IP and virtual network rules, the namespace can be accessed over the public internet (using the access key).
-## Advanced security scenarios enabled by VNet integration
+## Advanced security scenarios enabled by virtual network integration
Solutions that require tight and compartmentalized security, and where virtual network subnets provide the segmentation between the compartmentalized services, generally still need communication paths between services residing in those compartments.
Any immediate IP route between the compartments, including those carrying HTTPS
That means your security sensitive cloud solutions not only gain access to Azure industry-leading reliable and scalable asynchronous messaging capabilities, but they can now use messaging to create communication paths between secure solution compartments that are inherently more secure than what is achievable with any peer-to-peer communication mode, including HTTPS and other TLS-secured socket protocols.
-## Binding Service Bus to Virtual Networks
+## Binding Service Bus to virtual networks
*Virtual network rules* are the firewall security feature that controls whether your Azure Service Bus server accepts connections from a particular virtual network subnet.
This section shows you how to use Azure portal to add a virtual network service
> If you choose **Selected networks**, add at least one IP firewall rule or a virtual network that will have access to the namespace. Choose **Disabled** if you want to restrict all traffic to this namespace over [private endpoints](private-link-service.md) only. - **All networks** (default). This option enables public access from all networks using an access key. If you select the **All networks** option, Service Bus accepts connections from any IP address (using the access key). This setting is equivalent to a rule that accepts the 0.0.0.0/0 IP address range. 2. To restrict access to specific virtual networks, select the **Selected networks** option if it isn't already selected.
-1. In the **Virtual Network** section of the page, select **+Add existing virtual network**. Select **+ Create new virtual network** if you want to create a new VNet.
+1. In the **Virtual Network** section of the page, select **+Add existing virtual network**. Select **+ Create new virtual network** if you want to create a new virtual network.
:::image type="content" source="./media/service-endpoints/add-vnet-menu.png" lightbox="./media/service-endpoints/add-vnet-menu.png" alt-text="Image showing the selection of Add existing virtual network button on the toolbar.":::
This section shows you how to use Azure portal to add a virtual network service
> If you select the **Selected networks** option and don't add at least one IP firewall rule or a virtual network on this page, the namespace can be accessed over public internet (using the access key). 3. Select the virtual network from the list of virtual networks, and then pick the **subnet**. You have to enable the service endpoint before adding the virtual network to the list. If the service endpoint isn't enabled, the portal prompts you to enable it.
- :::image type="content" source="./media/service-endpoints/select-subnet.png" alt-text="Image showing the selection of VNet and subnet.":::
+ :::image type="content" source="./media/service-endpoints/select-subnet.png" alt-text="Screenshot showing the selection of virtual network and subnet.":::
4. You should see the following successful message after the service endpoint for the subnet is enabled for **Microsoft.ServiceBus**. Select **Add** at the bottom of the page to add the network. :::image type="content" source="./media/service-endpoints/subnet-service-endpoint-enabled.png" alt-text="Image showing the success message of enabling the service endpoint.":::
Use the following Azure PowerShell commands to add, list, remove, update, and de
- [`New-AzServiceBusVirtualNetworkRuleConfig`](/powershell/module/az.servicebus/new-azservicebusvirtualnetworkruleconfig) and [`Set-AzServiceBusNetworkRuleSet`](/powershell/module/az.servicebus/set-azservicebusnetworkruleset) together to add a virtual network rule.
-## default action and public network access
+## Default action and public network access
### REST API
-The default value of the `defaultAction` property was `Deny` for API version **2021-01-01-preview and earlier**. However, the deny rule isn't enforced unless you set IP filters or virtual network (VNet) rules. That is, if you didn't have any IP filters or VNet rules, it's treated as `Allow`.
+The default value of the `defaultAction` property was `Deny` for API version **2021-01-01-preview and earlier**. However, the deny rule isn't enforced unless you set IP filters or virtual network rules. That is, if you didn't have any IP filters or virtual network rules, it's treated as `Allow`.
-From API version **2021-06-01-preview onwards**, the default value of the `defaultAction` property is `Allow`, to accurately reflect the service-side enforcement. If the default action is set to `Deny`, IP filters and VNet rules are enforced. If the default action is set to `Allow`, IP filters and VNet rules aren't enforced. The service remembers the rules when you turn them off and then back on again.
+From API version **2021-06-01-preview onwards**, the default value of the `defaultAction` property is `Allow`, to accurately reflect the service-side enforcement. If the default action is set to `Deny`, IP filters and virtual network rules are enforced. If the default action is set to `Allow`, IP filters and virtual network rules aren't enforced. The service remembers the rules when you turn them off and then back on again.
The API version **2021-06-01-preview onwards** also introduces a new property named `publicNetworkAccess`. If it's set to `Disabled`, operations are restricted to private links only. If it's set to `Enabled`, operations are allowed over the public internet.
For more information about these properties, see [Create or Update Network Rule
### Azure portal
-Azure portal always uses the latest API version to get and set properties. If you had previously configured your namespace using **2021-01-01-preview and earlier** with `defaultAction` set to `Deny`, and specified zero IP filters and VNet rules, the portal would have previously checked **Selected Networks** on the **Networking** page of your namespace. Now, it checks the **All networks** option.
+Azure portal always uses the latest API version to get and set properties. If you had previously configured your namespace using **2021-01-01-preview and earlier** with `defaultAction` set to `Deny`, and specified zero IP filters and virtual network rules, the portal would have previously checked **Selected Networks** on the **Networking** page of your namespace. Now, it checks the **All networks** option.
:::image type="content" source="./media/service-bus-ip-filtering/firewall-all-networks-selected.png" alt-text="Screenshot of the Azure portal Networking page. The option to allow access from All networks is selected on the Firewalls and virtual networks tab.":::
spring-apps How To Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-staging-environment.md
Use the following steps to view deployed apps.
:::image type="content" source="media/how-to-staging-environment/running-staging-app.png" lightbox="media/how-to-staging-environment/running-staging-app.png" alt-text="Screenshot that shows the URL of the staging app."::: >[!TIP]
-> Confirm that your test endpoint ends with a slash (/) to ensure that the CSS file is loaded correctly. If your browser requires you to enter login credentials to view the page, use [URL decode](https://www.urldecoder.org/) to decode your test endpoint. URL decode returns a URL in the format `https://\<username>:\<password>@\<cluster-name>.test.azuremicroservices.io/demo/green`. Use this format to access your endpoint.
+> Confirm that your test endpoint ends with a slash (/) to ensure that the CSS file is loaded correctly. If your browser requires you to enter login credentials to view the page, use [URL decode](https://www.urldecoder.org/) to decode your test endpoint. URL decode returns a URL in the format `https://\<username>:\<password>@\<cluster-name>.test.azuremicroservices.io/demo/green`. Use this format to access your endpoint. If you want to disable basic authentication for your test endpoint, run the following Azure CLI command: `az spring app update --resource-group <resource-group-name> --service <Azure-Spring-Apps-instance-name> --name demo --disable-test-endpoint-auth true`
>[!NOTE] > Configuration server settings apply to both your staging environment and your production environment. For example, if you set the context path (*server.servlet.context-path*) for your app demo in the configuration server as *somepath*, the path to your green deployment changes to `https://\<username>:\<password>@\<cluster-name>.test.azuremicroservices.io/demo/green/somepath/...`.
storage Storage Files Quick Create Use Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-quick-create-use-windows.md
description: This tutorial covers how to create an SMB Azure file share using th
Previously updated : 05/13/2024 Last updated : 07/24/2024 #Customer intent: As an IT admin new to Azure Files, I want to try out Azure file shares so I can determine whether I want to subscribe to the service.
Now that you've created the VM, connect to it so you can mount your file share.
:::image type="content" source="media/storage-files-quick-create-use-windows/local-host2.png" alt-text="Screenshot of the VM log in prompt, more choices is highlighted.":::
-1. You may receive a certificate warning during the sign-in process. Select **Yes** or **Continue** to create the connection.
+1. You might receive a certificate warning during the sign-in process. Select **Yes** or **Continue** to create the connection.
### Map the Azure file share to a Windows drive 1. In the Azure portal, navigate to the *qsfileshare* fileshare and select **Connect**. 1. Select a drive letter and then **Show script**.
-1. Copy the script and paste it in **Notepad**.
+1. Copy the script from the Azure portal and paste it into **Notepad**, as in the following example.
- :::image type="content" source="medilet-resize.png":::
+ :::image type="content" source="medilet-resize.png":::
1. In the VM, open **PowerShell**, paste in the contents from Notepad, and then press Enter to run the command. It should map the drive.
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document covers the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1. > [!CAUTION]
-> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.1
-> * End of Support for Azure Synapse Runtime for Apache Spark 3.1 announced January 26, 2023.
-> * Effective January 26, 2024, the Azure Synapse has stopped official support for Spark 3.1 Runtimes.
-> * Post January 26, 2024, we will not be addressing any support tickets related to Spark 3.1. There will be no release pipeline in place for bug or security fixes for Spark 3.1. Utilizing Spark 3.1 post the support cutoff date is undertaken at one's own risk. We strongly discourage its continued use due to potential security and functionality concerns.
-> * Recognizing that certain customers may need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 3.1, but we will not provide any official support for it.
-> * **We strongly advise proactively upgrading workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md))**.
--
+> Deprecation and disablement notification for Azure Synapse Runtime for Apache Spark 3.1
+> * Effective August 29, 2024, jobs running on Azure Synapse Runtime for Apache Spark 3.1 will be **disabled**. Migrate to a higher runtime version **immediately**; otherwise, your jobs will stop executing.
+> * **All Spark jobs running on Azure Synapse Runtime for Apache Spark 3.1 will be disabled as of August 29, 2024.**
+ * End of Support for Azure Synapse Runtime for Apache Spark 3.1 announced January 26, 2023.
+ * Effective January 26, 2024, Azure Synapse stopped official support for Spark 3.1 runtimes.
+ * After January 26, 2024, we will not address any support tickets related to Spark 3.1, and there is no release pipeline in place for bug or security fixes for Spark 3.1. Using Spark 3.1 after the support cutoff date is at your own risk. We strongly discourage its continued use due to potential security and functionality concerns.
+ * Recognizing that certain customers may need additional time to transition to a higher runtime version, we are temporarily extending the usage option for Spark 3.1, but we will not provide any official support for it.
+ * **We strongly advise proactively upgrading workloads to a more recent version of the runtime (e.g., [Azure Synapse Runtime for Apache Spark 3.4 (GA)](./apache-spark-34-runtime.md))**.
## Component versions | Component | Version |
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
description: Learn about limitations and known issues with Azure Synapse Link fo
Previously updated : 05/02/2024 Last updated : 07/24/2024
The following sections list limitations for Azure Synapse Link for SQL.
- System tables can't be replicated. - The security configuration from the source database will **NOT** be reflected in the target dedicated SQL pool. - Enabling Azure Synapse Link for SQL creates a new schema named `changefeed`. Don't use this schema, as it is reserved for system use.-- Source tables with collations that are unsupported by dedicated SQL pools, such as UTF-8 and certain Japanese collations, can't be replicated. Here's the [supported collations in Synapse SQL Pool](../sql/reference-collation-types.md).
- * Additionally, Azure Synapse Link for SQL does not support some Thai language collations:
- * **Thai100CaseInsensitiveAccentInsensitiveKanaSensitive**
- * **Thai100CaseInsensitiveAccentSensitiveSupplementaryCharacters**
- * **Thai100CaseSensitiveAccentInsensitiveKanaSensitive**
- * **Thai100CaseSensitiveAccentInsensitiveKanaSensitiveWidthSensitiveSupplementaryCharacters**
- * **Thai100CaseSensitiveAccentSensitiveKanaSensitive**
- * **Thai100CaseSensitiveAccentSensitiveSupplementaryCharacters**
- * **ThaiCaseSensitiveAccentInsensitiveWidthSensitive**
- * Currently, the collation **Latin1_General_BIN2** isn't supported as there's a known issue where the link can't be stopped nor underlying tables could be removed from replication.
+- Source tables with collations that are unsupported by dedicated SQL pools, such as UTF-8 and certain Japanese collations, can't be replicated. See [supported collations in Synapse SQL pools](../sql/reference-collation-types.md).
+ * Additionally, Azure Synapse Link for SQL does not support some Thai language collations:
+ * `Thai100CaseInsensitiveAccentInsensitiveKanaSensitive`
+ * `Thai100CaseInsensitiveAccentSensitiveSupplementaryCharacters`
+ * `Thai100CaseSensitiveAccentInsensitiveKanaSensitive`
+ * `Thai100CaseSensitiveAccentInsensitiveKanaSensitiveWidthSensitiveSupplementaryCharacters`
+ * `Thai100CaseSensitiveAccentSensitiveKanaSensitive`
+ * `Thai100CaseSensitiveAccentSensitiveSupplementaryCharacters`
+ * `ThaiCaseSensitiveAccentInsensitiveWidthSensitive`
- Single row updates (including off-page storage) of > 370 MB are not supported.-- When Azure Synapse Link for SQL on Azure SQL Database or SQL Server 2022 is enabled, the aggressive log truncation feature of Accelerated Database Recovery (ADR) is automatically disabled. This is because Azure Synapse Link for SQL accesses the database transaction log. This behavior is similar to changed data capture (CDC). Active transactions continue to hold the transaction log truncation until the transaction commits and Azure Synapse Link for SQL catches up, or transaction aborts. This might result in the transaction log filling up more than usual and should be monitored so that the transaction log does not fill.
+- When Azure Synapse Link for SQL on Azure SQL Database or SQL Server 2022 is enabled, the aggressive log truncation feature of Accelerated Database Recovery (ADR) is automatically disabled. This is necessary because Azure Synapse Link for SQL accesses the database transaction log. This behavior is similar to changed data capture (CDC). Active transactions continue to hold the transaction log truncation until the transaction commits and Azure Synapse Link for SQL catches up, or the transaction aborts. This might result in the transaction log filling up more than usual and should be monitored so that the transaction log doesn't fill up.
### Azure SQL Database only - Azure Synapse Link for SQL isn't supported on Free, Basic, or Standard tier with fewer than 100 DTUs.
update-manager Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md
The following table lists the operating systems supported on [Azure Arc-enabled
| Debian 10 and 11| | Rocky Linux 8|
-# [Windows IoT on Arc enabled IaaS VMs](#tab/winio-arc)
+# [Windows IoT Enterprise on Arc enabled IaaS VMs (preview)](#tab/winio-arc)
- Windows 10 IoT Enterprise LTSC 2021 - Windows 10 IoT Enterprise LTSC 2019
update-manager Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md
Last updated 07/24/2024
[Azure Update Manager](overview.md) helps you manage and govern updates for all your machines. You can monitor Windows and Linux update compliance across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. This article summarizes new releases and features in Azure Update Manager.
+## July 2024
+
+### Support for Windows IoT Enterprise on Arc enabled IaaS VMs
+
+Public preview: Azure Update Manager now supports Windows IoT Enterprise on Arc enabled IaaS VMs. For more information, see [supported Windows IoT Enterprise releases](https://learn.microsoft.com/azure/update-manager/support-matrix?tabs=winio-arc%2Cpublic%2Cthird-party-win#support-for-check-for-updatesone-time-updateperiodic-assessment-and-scheduled-patching).
## June 2024
virtual-desktop Client Device Redirection Intune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/client-device-redirection-intune.md
Now that you configure Intune to manage device redirection on personal devices,
Configuring redirection settings for Windows App and the Remote Desktop app on a client device using Microsoft Intune has the following limitation: -- When you configure client device redirection for the Remote Desktop or Windows App on iOS and iPadOS, multifactor authentication (MFA) requests might get stuck in a loop. A common scenario of this issue happens when the Remote Desktop or Windows App is being run on an Intune enrolled iPhone and the same iPhone is being used to receive MFA requests from the Microsoft Authenticator app when signing into the Remote Desktop or Windows App. To work around this issue, use the Remote Desktop or Windows App on a different device (such as an iPad) from the device being used to receive MFA requests (an iPhone).
+- When you configure client device redirection for the Remote Desktop app or Windows App on iOS and iPadOS, multifactor authentication (MFA) requests might get stuck in a loop. A common scenario of this issue happens when the Remote Desktop app or Windows App is being run on an Intune enrolled iPhone and the same iPhone is being used to receive MFA requests from the Microsoft Authenticator app when signing into the Remote Desktop app or Windows App. To work around this issue, use the Remote Desktop app or Windows App on a different device (such as an iPad) from the device being used to receive MFA requests (such as an iPhone).
virtual-desktop Clipboard Transfer Direction Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/clipboard-transfer-direction-data-types.md
description: Learn how to configure the clipboard in Azure Virtual Desktop to fu
Previously updated : 03/19/2024 Last updated : 07/18/2024 # Configure the clipboard transfer direction and types of data that can be copied in Azure Virtual Desktop
Here's how to configure the clipboard transfer direction and the types of data t
# [Intune](#tab/intune)
-To configure the clipboard using Intune, follow these steps. This process [deploys an OMA-URI to target a CSP](/troubleshoot/mem/intune/device-configuration/deploy-oma-uris-to-target-csp-via-intune).
+To configure the clipboard using Intune, follow these steps. This process creates an Intune [settings catalog](/mem/intune/configuration/settings-catalog) policy.
-1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+1. Sign in to the [Microsoft Intune admin center](https://intune.microsoft.com/).
-1. [Create a profile with custom settings](/mem/intune/configuration/custom-settings-configure) for Windows 10 and later devices, with the **Templates** profile type and the **Custom** profile template name.
+1. Select **Devices** > **Manage devices** > **Configuration** > **Create** > **New policy**.
-1. For the **Basics** tab, enter a name and optional description for the profile, and then select **Next**.
+1. Enter the following properties:
-1. For the **Configuration settings** tab, select **Add** to show the **Add row** pane.
+ - **Platform**: Select **Windows 10 and later**.
+ - **Profile type**: Select **Settings catalog**.
-1. In the **Add row** pane, enter one of the following sets of settings, depending on whether you want to configure the clipboard from session host to client, or client to session host.
+1. Select **Create**.
+1. In **Basics**, enter the following properties:
- - To configure the clipboard from **session host to client**:
- - **Name**: (*example*) Session host to client
- - **Description**: *Optional*
- - **OMA-URI**: `./Vendor/MSFT/Policy/Config/RemoteDesktopServices/LimitServerToClientClipboardRedirection`
- - **Data type**: `String`
- - **Value**: Enter a value from the following table:
-
- | Value | Description |
- |--|--|
- | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="0"/>]]>` | Disable clipboard transfers from session host to client. |
- | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="1"/>]]>` | Allow plain text. |
- | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="2"/>]]>` | Allow plain text and images. |
- | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="3"/>]]>` | Allow plain text, images, and Rich Text Format. |
- | `<![CDATA[<enabled/><data id="TS_SC_CLIPBOARD_RESTRICTION_Text" value="4"/>]]>` | Allow plain text, images, Rich Text Format, and HTML. |
-
- - To configure the clipboard from **client to session host**:
- - **Name**: (*example*) Client to session host
- - **Description**: *Optional*
- - **OMA-URI**: `./Vendor/MSFT/Policy/Config/RemoteDesktopServices/LimitClientToServerClipboardRedirection`
- - **Data type**: `String`
- - **Value**: Enter a value from the following table:
-
- | Value | Description |
- |--|--|
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="0"/>]]>` | Disable clipboard transfers from session host to client. |
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="1"/>]]>` | Allow plain text. |
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="2"/>]]>` | Allow plain text and images. |
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="3"/>]]>` | Allow plain text, images, and Rich Text Format. |
- | `<![CDATA[<enabled/><data id="TS_CS_CLIPBOARD_RESTRICTION_Text" value="4"/>]]>` | Allow plain text, images, Rich Text Format, and HTML. |
+ - **Name**: Enter a descriptive name for the profile. Name your profile so you can easily identify it later.
+ - **Description**: Enter a description for the profile. This setting is optional, but recommended.
-1. Select **Save** to add the row. Repeat the previous two steps to configure the clipboard in the other direction, if necessary, then once you configure the settings you want, select **Next**.
+1. Select **Next**.
-1. For the **Assignments** tab, select the users, devices, or groups to receive the profile, then select **Next**. For more information on assigning profiles, see [Assign user and device profiles](/mem/intune/configuration/device-profile-assign).
+1. In **Configuration settings**, select **Add settings**. Then:
+
+ 1. In the settings picker, expand **Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Device and Resource Redirection**.
+
+ 1. Select the following settings, making sure you choose the ones with the correct scope. The `(User)` settings apply to the user scope; the other settings apply to the device scope. To determine which scope is correct for your scenario, go to [Settings catalog - Device scope vs. user scope settings](/mem/intune/configuration/settings-catalog#device-scope-vs-user-scope-settings):
+
+ - Restrict clipboard transfer from server to client
+ - Restrict clipboard transfer from client to server
+
+ **OR**
+
+ - Restrict clipboard transfer from server to client (User)
+ - Restrict clipboard transfer from client to server (User)
+
+ 1. Close the settings picker.
-1. For the **Applicability Rules** tab, select **Next**.
+1. Configure the settings:
+
+ - **Restrict clipboard transfer from server to client**: Select **Enabled**.
+ - **Restrict clipboard transfer from server to client**: Select the type of clipboard data you want to prevent or allow. Your options:
+
+ - Disable clipboard transfers from server to client
+ - Allow plain text
+ - Allow plain text and images
+ - Allow plain text, images, and Rich Text Format
+ - Allow plain text, images, Rich Text Format, and HTML
+
+ - **Restrict clipboard transfer from client to server**: Select **Enabled**.
+ - **Restrict clipboard transfer from client to server**: Select the type of clipboard data you want to prevent or allow. Your options:
+
+ - Disable clipboard transfers from client to server
+ - Allow plain text
+ - Allow plain text and images
+ - Allow plain text, images, and Rich Text Format
+ - Allow plain text, images, Rich Text Format, and HTML
+
+1. Select **Next**.
+
+1. On the **Scope tags** tab (optional), you can skip this step. For more information about scope tags in Intune, see [Use RBAC roles and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+ Select **Next**.
+
+1. For the **Assignments** tab, select the users, devices, or groups to receive the profile, then select **Next**. For more information on assigning profiles, see [Assign user and device profiles](/mem/intune/configuration/device-profile-assign).
1. On the **Review + create** tab, review the configuration information, then select **Create**.
virtual-desktop Troubleshoot App Attach https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-app-attach.md
To validate that your session hosts have the necessary access to a file share co
```powershell # Install the CimDiskImage PowerShell module, if it's not already installed. If (!(Get-Module -ListAvailable | ? Name -eq CimDiskImage)) {
- Install-Module CimDiskImage -WhatIf
+ Install-Module CimDiskImage
} # Import the CimDiskImage PowerShell module.
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
Customers can set up the following using action groups:
## Investigate and Resolve Auto Upgrade Errors
-The platform can return errors on VMs while performing Automatic Image Upgrade with Rolling Upgrade policy. The [Get Instance View](/rest/api/compute/virtual-machine-scale-sets/get-instance-view) of a VM contains the detailed error message to investigate and resolve an error. The [Rolling Upgrades - Get Latest](/rest/api/compute/virtual-machine-scale-sets/get) can provide more details on rolling upgrade configuration and status. The [Get OS Upgrade History](/rest/api/compute/virtual-machine-scale-sets/get) provides details on the last image upgrade operation on the scale set. Below are the topmost errors that can result in Rolling Upgrades.
+The platform can return errors on VMs while performing Automatic Image Upgrade with Rolling Upgrade policy. The [Get Instance View](/rest/api/compute/virtual-machine-scale-sets/get-instance-view) of a VM contains the detailed error message to investigate and resolve an error. The [Rolling Upgrades - Get Latest](/rest/api/compute/virtual-machine-scale-sets/get) can provide more details on rolling upgrade configuration and status. The [Get OS Upgrade History](/rest/api/compute/virtual-machine-scale-sets/get-os-upgrade-history) provides details on the last image upgrade operation on the scale set. The following are the most common errors that can cause a rolling upgrade to fail.
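+
+If you prefer the Azure CLI, the same information is surfaced by the commands below. This is a minimal sketch; the resource group, scale set name, and instance ID are placeholders.
+
+```azurecli
+# Latest rolling upgrade status for the scale set
+az vmss rolling-upgrade get-latest --resource-group myResourceGroup --name myScaleSet
+
+# History of OS image upgrade operations on the scale set
+az vmss get-os-upgrade-history --resource-group myResourceGroup --name myScaleSet
+
+# Instance view of a specific VM instance, including detailed error messages
+az vmss get-instance-view --resource-group myResourceGroup --name myScaleSet --instance-id 0
+```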
**RollingUpgradeInProgressWithFailedUpgradedVMs** - Error is triggered for a VM failure.
virtual-machines Disks Convert Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-convert-types.md
Title: Convert managed disks storage between different disk types
description: How to convert Azure managed disks between the different disks types by using Azure PowerShell, Azure CLI, or the Azure portal. -+ Previously updated : 04/15/2024 Last updated : 07/25/2024
-# Change the disk type of an Azure managed disk
+# Convert the disk type of an Azure managed disk
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows
-There are five disk types of Azure managed disks: Azure Ultra Disks, Premium SSD v2, premium SSD, Standard SSD, and Standard HDD. You can easily switch between Premium SSD, Standard SSD, and Standard HDD based on your performance needs. Premium SSD and Standard SSD are also available with [Zone-redundant storage](disks-redundancy.md#zone-redundant-storage-for-managed-disks). You can't yet switch from or to an Ultra Disk or a Premium SSD v2, you must deploy a new one with a snapshot of an existing disk. See [Migrate to Premium SSD v2 or Ultra Disk](#migrate-to-premium-ssd-v2-or-ultra-disk) for details.
+There are five disk types of Azure managed disks: Azure Ultra Disks, Premium SSD v2, Premium SSD, Standard SSD, and Standard HDD. You can easily switch between Premium SSD, Standard SSD, and Standard HDD based on your performance needs. Premium SSD and Standard SSD are also available with [Zone-redundant storage](disks-redundancy.md#zone-redundant-storage-for-managed-disks). In most cases, you can't yet switch to or from an Ultra Disk or a Premium SSD v2; instead, you must deploy a new disk by using a snapshot of an existing disk. However, you can sign up for a preview that lets you switch existing disks to Premium SSD v2. See [Migrate to Premium SSD v2 or Ultra Disk using snapshots](#migrate-to-premium-ssd-v2-or-ultra-disk-using-snapshots) for details.
This functionality isn't supported for unmanaged disks. But you can easily convert an unmanaged disk to a managed disk with [CLI](linux/convert-unmanaged-to-managed-disks.md) or [PowerShell](windows/convert-unmanaged-to-managed-disks.md) to be able to switch between disk types.
Because conversion requires a restart of the virtual machine (VM), schedule the
- You can only change disk type twice per day. - You can only change the disk type of managed disks. If your disk is unmanaged, convert it to a managed disk with [CLI](linux/convert-unmanaged-to-managed-disks.md) or [PowerShell](windows/convert-unmanaged-to-managed-disks.md) to switch between disk types.
+## Convert Premium SSD v2 disks (preview)
+As a public preview, you can switch existing disks to Premium SSD v2 disks the same way you do for other disk types. Use [this survey](https://aka.ms/SeamlessMigrationCustomerSurvey) to sign up for the preview. Premium SSD v2 disks have some limitations; see the [Premium SSD v2 limitations](disks-deploy-premium-v2.md#limitations) section of their article to learn more.
+
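+A minimal sketch of the direct switch, assuming your subscription is enrolled in the preview, the disk is a data disk that is detached (or its VM is deallocated), and `PremiumV2_LRS` as the target SKU; the resource names are placeholders:
+
+```azurecli
+# Switch an existing data disk to Premium SSD v2
+az disk update \
+    --resource-group yourResourceGroup \
+    --name yourDataDiskName \
+    --sku PremiumV2_LRS
+```
+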
+The preview allowing direct switching to Premium SSD v2 disks has some additional limitations and regional restrictions:
+
+- You can't switch an OS disk to a Premium SSD v2 disk.
+- Existing disks can only be directly switched to Premium SSD v2 disks with a 512-byte sector size.
+- You can only perform 40 conversions at the same time per subscription per region.
+- If your existing disk is a shared disk, you must detach it from all VMs before changing to Premium SSD v2.
+- If your existing disk is using host caching, you must [set it to none](#disable-host-caching) before changing to Premium SSD v2.
+- If your existing disk is using bursting, you must [disable it](#disable-bursting) before changing to Premium SSD v2.
+- If your existing disk is using double encryption, you must [switch to one of the single encryption options](#disable-double-encryption) before changing to Premium SSD v2.
+- You can't directly switch from a Premium SSD v2 to another disk type. If you want to change a Premium SSD v2 to another disk type, you must migrate using [snapshots](#migrate-to-premium-ssd-v2-or-ultra-disk-using-snapshots).
+- You can't directly switch from Ultra Disks to Premium SSD v2 disks; you must migrate using [snapshots](#migrate-to-premium-ssd-v2-or-ultra-disk-using-snapshots).
+- If you're using the REST API, you must use API version `2020-12-01` or newer for both the Compute Resource Provider and the Disk Resource Provider.
+
+This preview is currently only available in the following regions:
+
+- Central US
+- East US
+- East US 2
+- West US
+- West Europe
+- North Europe
+- West US 2
+- East Asia
+- Southeast Asia
+- Central India
+- France Central
+
+### Disable host caching
+
+If your disk is using host caching, you must disable it before converting to Premium SSD v2. You can use the following CLI script to identify your disk's LUN and disable host caching. Replace `yourResourceGroup` and `nameOfYourVM` with your own values, then run the script.
+
+```azurecli
+myRG="yourResourceGroup"
+myVM="nameOfYourVM"
+
+# Get the LUN of the data disk (assumes a single data disk; adjust the query index for more disks)
+lun=$(az vm show -g $myRG -n $myVM --query "storageProfile.dataDisks[0].lun" --output tsv)
+
+# Set host caching to None for that LUN
+az vm update --resource-group $myRG --name $myVM --disk-caching $lun=None
+```
+
+### Disable bursting
+
+If your disk is using bursting, you must disable it before converting to Premium SSD v2. If bursting was enabled on the disk within the last 12 hours, you must wait until at least 12 hours have passed since it was enabled before you can disable it.
+
+You can use the following command to disable disk bursting: `az disk update --name "yourDiskNameHere" --resource-group "yourRGNameHere" --enable-bursting false`
+
+### Disable double encryption
+
+If your disk is using double encryption, you must disable it before converting to Premium SSD v2. You can use the following command to change your disk from double encryption to encryption at rest with customer-managed keys:
+
+```azurecli
+az disk-encryption-set update --name "nameOfYourDiskEncryptionSetHere" --resource-group "yourRGNameHere" --key-url yourKeyURL --source-vault "yourKeyVaultName" --encryption-type EncryptionAtRestWithCustomerKey
+```
+ ## Switch all managed disks of a VM from one account to another This example shows how to convert all of a VM's disks to premium storage. However, by changing the $storageType variable in this example, you can convert the VM's disks type to standard SSD or standard HDD. To use Premium managed disks, your VM must use a [VM size](sizes.md) that supports Premium storage. This example also switches to a size that supports premium storage:
Start-AzVM -ResourceGroupName $rgName -Name $vmName
```azurecli #resource group that contains the virtual machine
-$rgName='yourResourceGroup'
+rgName='yourResourceGroup'
#Name of the virtual machine vmName='yourVM'
For your dev/test workload, you might want a mix of Standard and Premium disks t
# [Azure PowerShell](#tab/azure-powershell) + ```azurepowershell-interactive $diskName = 'yourDiskName'
Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name
# [Azure CLI](#tab/azure-cli) + ```azurecli #resource group that contains the managed disk
-$rgName='yourResourceGroup'
+rgName='yourResourceGroup'
#Name of your managed disk diskName='yourManagedDiskName'
az vm start --ids $vmId
# [Portal](#tab/azure-portal) + Follow these steps: 1. Sign in to the [Azure portal](https://portal.azure.com).
The disk type conversion is instantaneous. You can start your VM after the conve
-## Migrate to Premium SSD v2 or Ultra Disk
+## Migrate to Premium SSD v2 or Ultra Disk using snapshots
+ Currently, you can only migrate an existing disk to either a Premium SSD v2 or an Ultra Disk through snapshots stored on Standard Storage (Incremental Standard HDD Snapshot). Migration with snapshots stored on Premium storage and other options isn't supported. Migration via snapshot from Premium SSD v2 or Ultra Disk to Premium SSD v1, Standard SSD and Standard HDD is not supported.
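+
+A hedged sketch of the snapshot path, assuming placeholder names, a source disk in the same resource group, and a zonal target disk; review the sector-size and regional notes earlier in this article before running it:
+
+```azurecli
+# Create an incremental snapshot of the source disk on Standard storage
+az snapshot create \
+    --resource-group yourResourceGroup \
+    --name yourSnapshotName \
+    --source yourSourceDiskName \
+    --incremental true \
+    --sku Standard_LRS
+
+# Create a Premium SSD v2 disk from that snapshot
+az disk create \
+    --resource-group yourResourceGroup \
+    --name yourNewDiskName \
+    --source yourSnapshotName \
+    --sku PremiumV2_LRS \
+    --zone 1
+```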
virtual-machines Disks High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-high-availability.md
Title: Best practices for high availability with Azure VMs and managed disks
description: Learn the steps you can take to get the best availability with your Azure virtual machines and managed disks. Previously updated : 05/21/2024 Last updated : 07/24/2024
Single VMs using only [Premium SSD disks](disks-types.md#premium-ssds) as the OS
### Use zone-redundant storage disks
-Zone-redundant storage (ZRS) disks synchronously replicate data across three availability zones, which are separated groups of data centers in a region that have independent power, cooling, and networking infrastructure. With ZRS disks, you can [force detach](https://learn.microsoft.com/rest/api/compute/virtual-machines/attach-detach-data-disks?view=rest-compute-2024-03-01&tabs=HTTP#diskdetachoptiontypes&preserve-view=true)(in preview) your ZRS data disks even in the event of a zonal outage. ZRS disks have limitations, see [Zone-redundant storage for managed disks](disks-redundancy.md#zone-redundant-storage-for-managed-disks) for details.
+Zone-redundant storage (ZRS) disks synchronously replicate data across three availability zones, which are separated groups of data centers in a region that have independent power, cooling, and networking infrastructure. With ZRS disks, your data is accessible even in the event of a zonal outage. Also, ZRS data disks allow you to [forcibly detach](/rest/api/compute/virtual-machines/attach-detach-data-disks?view=rest-compute-2024-03-01&tabs=HTTP#diskdetachoptiontypes) (preview) them from VMs experiencing issues. ZRS disks have limitations, see [Zone-redundant storage for managed disks](disks-redundancy.md#zone-redundant-storage-for-managed-disks) for details.
## Recommendations for applications running on multiple VMs
virtual-machines Disks Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-redundancy.md
Title: Redundancy options for Azure managed disks
description: Learn about zone-redundant storage and locally redundant storage for Azure managed disks. Previously updated : 04/23/2024 Last updated : 07/24/2024
If your workflow doesn't support application-level synchronous writes across zon
Zone-redundant storage (ZRS) synchronously replicates your Azure managed disk across three Azure availability zones in the region you select. Each availability zone is a separate physical location with independent power, cooling, and networking. ZRS disks provide at least 99.9999999999% (12 9's) of durability over a given year.
-A ZRS disk lets you recover from failures in availability zones. If a zone went down and your virtual machine (VM) wasn't affected, then your workloads continue running. But if your VM was affected by an outage and you want to recover before it's resolved, you can [force detach](https://learn.microsoft.com/rest/api/compute/virtual-machines/attach-detach-data-disks?view=rest-compute-2024-03-01&tabs=HTTP#diskdetachoptiontypes&preserve-view=true) (in preview) the ZRS data disks from the impacted VM and attach them to another VM.
+A ZRS disk lets you recover from failures in availability zones. If a zone went down and your virtual machine (VM) wasn't affected, then your workloads continue running. But if your VM was affected by an outage and you want to recover before it's resolved, you can either take a snapshot or make a copy of your ZRS disks. Once you've created new disks, attach them to another VM. Alternatively, if you're using the [force detach](/rest/api/compute/virtual-machines/attach-detach-data-disks?view=rest-compute-2024-03-01&tabs=HTTP#diskdetachoptiontypes) (preview) feature, you can forcibly detach ZRS data disks from failed VMs and attach them to another VM.
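+
+A minimal sketch of the force-detach path, assuming your Azure CLI version supports the `--force-detach` parameter on `az vm disk detach`; the VM and disk names are placeholders:
+
+```azurecli
+# Forcibly detach the ZRS data disk from the impacted VM
+az vm disk detach \
+    --resource-group yourResourceGroup \
+    --vm-name impactedVM \
+    --name yourZrsDataDisk \
+    --force-detach
+
+# Attach the same disk to a healthy VM
+az vm disk attach \
+    --resource-group yourResourceGroup \
+    --vm-name healthyVM \
+    --name yourZrsDataDisk
+```
+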
+ ZRS disks can also be shared between VMs for improved availability with clustered or distributed applications like SQL FCI, SAP ASCS/SCS, or GFS2. A shared ZRS disk can be attached to primary and secondary VMs in different zones to take advantage of both ZRS and [availability zones](../availability-zones/az-overview.md). If your primary zone fails, you can quickly fail over to the secondary VM using [SCSI persistent reservation](disks-shared-enable.md#supported-scsi-pr-commands). For more information on ZRS disks, see [Zone Redundant Storage (ZRS) option for Azure Disks for high availability](https://youtu.be/RSHmhmdHXcY).
For more information on ZRS disks, see [Zone Redundant Storage (ZRS) option for
[!INCLUDE [disk-storage-zrs-limitations](../../includes/disk-storage-zrs-limitations.md)]
-Force detach (in preview) is supported for ZRS data disks but not supported for ZRS OS disk.
- ### Regional availability [!INCLUDE [disk-storage-zrs-regions](../../includes/disk-storage-zrs-regions.md)]
Except for more write latency, disks using ZRS are identical to disks using LRS,
## Next steps - To learn how to create a ZRS disk, see [Deploy a ZRS managed disk](disks-deploy-zrs.md).-- To convert an LRS disk to ZRS, see [Convert a disk from LRS to ZRS](disks-migrate-lrs-zrs.md).-- More about [force detach](https://learn.microsoft.com/rest/api/compute/virtual-machines/attach-detach-data-disks?view=rest-compute-2024-03-01&tabs=HTTP#diskdetachoptiontypes&preserve-view=true)
+- To convert an LRS disk to ZRS, see [Convert a disk from LRS to ZRS](disks-migrate-lrs-zrs.md).
virtual-machines Guest Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/guest-configuration.md
To deploy the extension for Linux:
"type": "ConfigurationForLinux", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": true,
- "enableAutomaticUpgrade": true, 
+ "enableAutomaticUpgrade": true,
"settings": {}, "protectedSettings": {} }
To deploy the extension for Windows:
"type": "ConfigurationforWindows", "typeHandlerVersion": "1.0", "autoUpgradeMinorVersion": true,
- "enableAutomaticUpgrade": true, 
+ "enableAutomaticUpgrade": true,
"settings": {}, "protectedSettings": {} }
resource windowsVMGuestConfigExtension 'Microsoft.Compute/virtualMachines/extens
type: 'ConfigurationForLinux' typeHandlerVersion: '1.0' autoUpgradeMinorVersion: true
- enableAutomaticUpgrade: true
+ enableAutomaticUpgrade: true
settings: {} protectedSettings: {} }
resource windowsVMGuestConfigExtension 'Microsoft.Compute/virtualMachines/extens
type: 'ConfigurationforWindows' typeHandlerVersion: '1.0' autoUpgradeMinorVersion: true
- enableAutomaticUpgrade: true
+ enableAutomaticUpgrade: true
settings: {} protectedSettings: {} }
virtual-machines Restore Point Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/restore-point-troubleshooting.md
If you are creating restore points for a VM that has encrypted disks, you must e
Restore points are supported only with API version 2022-03-01 or later. If you are using REST APIs to create and manage restore points, use the specified API version when calling the restore point API.
-### InternalError / InternalExecutionError / InternalOperationError - An internal execution error occurred. Please retry later.
+### InternalError / InternalExecutionError / InternalOperationError / InternalDiskRestorePointError - An internal execution error occurred. Please retry later.
-**Error code**: InternalError / InternalExecutionError / InternalOperationError
+**Error code**: InternalError / InternalExecutionError / InternalOperationError / InternalDiskRestorePointError
**Error message**: An internal execution error occurred. Please retry later.
virtual-machines Spot Vms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/spot-vms.md
The amount of available capacity can vary based on size, region, time of day, an
## Eviction policy
-VMs can be evicted based on capacity or the max price you set. When creating an Azure Spot Virtual Machine, you can set the eviction policy to *Deallocate* (default) or *Delete*.
+Spot VMs can be stopped if Azure needs capacity for other pay-as-you-go workloads or when the price of the spot instance exceeds the maximum price that you have set. When creating an Azure Spot Virtual Machine, you can set the eviction policy to *Deallocate* (default) or *Delete*.
The *Deallocate* policy moves your VM to the stopped-deallocated state, allowing you to redeploy it later. However, there's no guarantee that the allocation will succeed. The deallocated VMs will count against your quota and you'll be charged storage costs for the underlying disks. If you would like your VM to be deleted when it's evicted, you can set the eviction policy to *delete*. The evicted VMs are deleted together with their underlying disks, so you'll not continue to be charged for the storage.
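
For example, here's a minimal sketch of creating a Spot VM with the *Deallocate* eviction policy and no price cap; the image alias, names, and credentials are placeholders:

```azurecli
az vm create \
    --resource-group myResourceGroup \
    --name mySpotVM \
    --image Ubuntu2204 \
    --priority Spot \
    --eviction-policy Deallocate \
    --max-price -1 \
    --admin-username azureuser \
    --generate-ssh-keys
```
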
-You can opt in to receive in-VM notifications through [Azure Scheduled Events](./linux/scheduled-events.md). This will notify you if your VMs are being evicted and you will have 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction.
+You can opt in to receive in-VM notifications through [Azure Scheduled Events](./linux/scheduled-events.md). These notifications are delivered on a best-effort basis up to 30 seconds before the eviction.
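+
+For example, from inside the VM you can poll the Azure Instance Metadata Service Scheduled Events endpoint; a `Preempt` event type indicates a pending eviction. This is a sketch using the documented non-routable metadata address:
+
+```bash
+# Query Scheduled Events from within the VM (no authentication; the Metadata header is required)
+curl -s -H Metadata:true "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01"
+```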
| Option | Outcome |
You can opt in to receive in-VM notifications through [Azure Scheduled Events](.
| Max price is set to < the current price. | The VM isn't deployed. You'll get an error message that the max price needs to be >= current price. | | Restarting a stopped/deallocated VM if the max price is >= the current price | If there's capacity and quota, then the VM is deployed. | | Restarting a stopped/deallocated VM if the max price is < the current price | You'll get an error message that the max price needs to be >= current price. |
-| Price for the VM has gone up and is now > the max price. | The VM gets evicted. You get a 30s notification before actual eviction. |
+| Price for the VM has gone up and is now > the max price. | The VM gets evicted. Azure will attempt scheduled event delivery up to 30 seconds before actual eviction. |
| After eviction, the price for the VM goes back to being < the max price. | The VM won't be automatically restarted. You can restart the VM yourself, and it will be charged at the current price. | | If the max price is set to `-1` | The VM won't be evicted for pricing reasons. The max price will be the current price, up to the price for standard VMs. You'll never be charged above the standard price.| | Changing the max price | You need to deallocate the VM to change the max price. Deallocate the VM, set a new max price, then update the VM. |
virtual-machines Trusted Launch Existing Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch-existing-vm.md
Azure Virtual Machines supports enabling Azure Trusted launch on existing [Azure Generation 2](generation-2.md) virtual machines (VMs) by upgrading to the [Trusted launch](trusted-launch.md) security type.
-[Trusted launch](trusted-launch.md) is a way to enable foundational compute security on [Azure Generation 2 VMs](generation-2.md) VMs and protects against advanced and persistent attack techniques like boot kits and rootkits. It does so by combining infrastructure technologies like Secure Boot, virtual Trusted Platform Module (vTPM), and boot integrity monitoring on your VM.
+[Trusted launch](trusted-launch.md) is a way to enable foundational compute security on [Azure Generation 2 VMs](generation-2.md) and protects against advanced and persistent attack techniques like boot kits and rootkits. It does so by combining infrastructure technologies like Secure Boot, virtual Trusted Platform Module (vTPM), and boot integrity monitoring on your VM.
> [!IMPORTANT] > Support for *enabling Trusted launch on existing Azure Generation 1 VMs* is currently in private preview. You can gain access to preview by using the [registration form](https://aka.ms/Gen1ToTLUpgrade).
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
sudo apt show powershell | grep Depends
2. Check the output of the line **Depends** which lists the following packages: ```output
-Depends: libc6, libgcc1, libgssapi-krb5-2, libstdc++6, zlib1g, libicu72|libicu71|libicu70|libicu69|libicu68|libicu67|libicu66|libicu65|libicu63|libicu60|libicu57|libicu55|libicu52, libssl3|libssl1.1|libssl1.0.2|libssl1.
+Depends: libc6, lib32gcc-s1, libgssapi-krb5-2, libstdc++6, zlib1g, libicu72|libicu71|libicu70|libicu69|libicu68|libicu67|libicu66|libicu65|libicu63|libicu60|libicu57|libicu55|libicu52, libssl3|libssl1.1|libssl1.0.2|libssl1.
``` 3. Download each of these files using `sudo apt-get download <package_name>` and create a tar compressed archive with all files.
Depends: libc6, libgcc1, libgssapi-krb5-2, libstdc++6, zlib1g, libicu72|libicu71
mkdir /tmp/powershell cd /tmp/powershell sudo apt-get download libc6
-sudo apt-get download libgcc1
+sudo apt-get download lib32gcc-s1
sudo apt-get download libgssapi-krb5-2 sudo apt-get download libstdc++6 sudo apt-get download zlib1g
sudo tar -cvzf powershell.tar.gz *.deb
mkdir /tmp/powershell cd /tmp/powershell sudo apt-get download libc6
-sudo apt-get download libgcc1
+sudo apt-get download lib32gcc-s1
sudo apt-get download libgssapi-krb5-2 sudo apt-get download libstdc++6 sudo apt-get download zlib1g
sudo tar -cvzf powershell.tar.gz *.deb
mkdir /tmp/powershell cd /tmp/powershell sudo apt-get download libc6
-sudo apt-get download libgcc1
+sudo apt-get download lib32gcc-s1
sudo apt-get download libgssapi-krb5-2 sudo apt-get download libstdc++6 sudo apt-get download zlib1g
If I want to create a VM application package for `myApp.exe`, which ships as an
"move .\\myApp .\\myApp.exe & myApp.exe /S -config myApp_config" ```
-If the installer executable file doesn't support an uninstall parameter, you can sometimes look up the registry on a test machine to know here the uninstaller is located.
+If the installer executable file doesn't support an uninstall parameter, you can sometimes look up the registry on a test machine to know where the uninstaller is located.
In the registry, the uninstall string is stored in `Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\<installed application name>\UninstallString` so I would use the contents as my remove command:
virtual-machines Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/quick-create-cli.md
**Applies to:** :heavy_check_mark: Windows VMs
-The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the Azure CLI to deploy a virtual machine (VM) in Azure that runs Windows Server 2019. To see your VM in action, you then RDP to the VM and install the IIS web server.
+The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the Azure CLI to deploy a virtual machine (VM) in Azure that runs Windows Server 2022. To see your VM in action, you then RDP to the VM and install the IIS web server.
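+
+As a rough sketch of the core deployment step (the resource names, image alias, and location are placeholders; the quickstart walks through the full commands):
+
+```azurecli
+# Create a resource group, then a Windows Server 2022 VM
+az group create --name myResourceGroup --location eastus
+
+az vm create \
+    --resource-group myResourceGroup \
+    --name myVM \
+    --image Win2022Datacenter \
+    --public-ip-sku Standard \
+    --admin-username azureuser
+```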
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
Extended Update Support (EUS) repositories are available to customers who might
> [!NOTE] > EUS is not supported on RHEL Extras. This means that if you install a package that is usually available from the RHEL Extras channel, you can't install while on EUS. For more information, see [Red Hat Enterprise Linux Extras Product Life Cycle](https://access.redhat.com/support/policy/updates/extras/).
-Support for EUS RHEL7 ended in August 30, 2021. For more information, see [Red Hat Enterprise Linux Extended Maintenance](https://access.redhat.com/support/policy/updates/errata/#Long_Support).
+Support for EUS RHEL7 ends on June 30, 2028. For more information, see [Red Hat Enterprise Linux Extended Maintenance](https://access.redhat.com/support/policy/updates/errata/#Long_Support).
- RHEL 7.4 EUS support ended August 31, 2019 - RHEL 7.5 EUS support ended April 30, 2020 - RHEL 7.6 EUS support ended May 31, 2021 - RHEL 7.7 EUS support ended August 30, 2021
+- RHEL 7.9 EUS support ends June 30, 2028
- RHEL 8.4 EUS support ended May 31, 2023 - RHEL 8.6 EUS support ends May 31, 2024 - RHEL 9.0 EUS support ends May 31, 2024
Support for EUS RHEL7 ended in August 30, 2021. For more information, see [Red H
#### [Switching to EUS repositories on RHEL7](#tab/rhel7) >[!NOTE]
->Support for RHEL7 EUS ended in August 30, 2021. It is not recommended to switch to EUS repositories in RHEL7 anymore.
+>Support for RHEL7 EUS ends on June 30, 2028. It's no longer recommended to switch to EUS repositories on RHEL7.
#### [Switching to EUS repositories on RHEL8](#tab/rhel8)
virtual-network Routing Preference Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-cli.md
- Title: Configure routing preference for a public IP address using Azure CLI-
-description: Learn how to create a public IP with an Internet traffic routing preference by using the Azure CLI.
- Previously updated : 08/24/2023-------
-# Configure routing preference for a public IP address using Azure CLI
-
-This article shows you how to configure routing preference via ISP network (**Internet** option) for a public IP address using Azure CLI. After creating the public IP address, you can associate it with the following Azure resources for inbound and outbound traffic to the internet:
-
-* Virtual machine
-* Virtual machine scale set
-* Azure Kubernetes Service (AKS)
-* Internet-facing load balancer
-* Application Gateway
-* Azure Firewall
-
-By default, the routing preference for public IP address is set to the Microsoft global network for all Azure services and can be associated with any Azure service.
----- This article requires version 2.0.49 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.-
-## Create a resource group
-Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group in the **East US** Azure region:
-
-```azurecli
- az group create --name myResourceGroup --location eastus
-```
-## Create a public IP address
-
-Create a Public IP Address with routing preference of **Internet** type using command [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create), with the format as shown below.
-
-The following command creates a new public IP with **Internet** routing preference in the **East US** Azure region.
-
-```azurecli
-az network public-ip create \
name MyRoutingPrefIP \resource-group MyResourceGroup \location eastus \ip-tags 'RoutingPreference=Internet' \sku STANDARD \allocation-method static \version IPv4
-```
-
-> [!NOTE]
-> Currently, routing preference only supports IPV4 public IP addresses.
-
-You can associate the above created public IP address with a [Windows](../../virtual-machines/windows/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../../virtual-machines/linux/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. Use the CLI section on the tutorial page: [Associate a public IP address to a virtual machine](./associate-public-ip-address-vm.md) to associate the Public IP to your VM. You can also associate the public IP address created above with an [Azure Load Balancer](../../load-balancer/load-balancer-overview.md), by assigning it to the load balancer **frontend** configuration. The public IP address serves as a load-balanced virtual IP address (VIP).
-
-## Next steps
--- Learn more about [routing preference in public IP addresses](routing-preference-overview.md). -- [Configure routing preference for a VM using the Azure CLI](./configure-routing-preference-virtual-machine-cli.md).
virtual-network Routing Preference Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-portal.md
Title: Configure routing preference for a public IP address - Azure portal
-description: Learn how to create a public IP with an Internet traffic routing preference
+ Title: Configure routing preference for a public IP address
+description: Learn how to create a public IP with an Internet traffic routing preference using the Azure portal, Azure PowerShell, or Azure CLI.
Previously updated : 08/24/2023 Last updated : 07/25/2024
-# Configure routing preference for a public IP address using the Azure portal
+# Configure routing preference for a public IP address
-This article shows you how to configure [routing preference](routing-preference-overview.md) via ISP network (**Internet** option) for a public IP address. After creating the public IP address, you can associate it with the following Azure resources for inbound and outbound traffic to the internet:
+This article shows you how to configure [routing preference](routing-preference-overview.md) via ISP network (**Internet** option) for a public IP address using the Azure portal, Azure PowerShell, or Azure CLI. After creating the public IP address, you can associate it with the following Azure resources for inbound and outbound traffic to the internet:
* Virtual machine * Virtual machine scale set
This article shows you how to configure [routing preference](routing-preference-
By default, the routing preference for public IP address is set to the Microsoft global network for all Azure services and can be associated with any Azure service.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
+## Prerequisites
+
+# [Azure portal](#tab/azureportal)
++
+# [Azure CLI](#tab/azurecli/)
+++
+- This article requires version 2.0.49 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+
+# [Azure PowerShell](#tab/azurepowershell/)
++
+If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
++
## Create a public IP address with a routing preference
+
+# [Azure portal](#tab/azureportal)
+
1. Sign in to the [Azure portal](https://portal.azure.com/).
2. Select **Create a resource**.
3. In the search box, type *Public IP address*.
If you don't have an Azure subscription, create a [free account](https://azure.m
You can associate the above created public IP address with a [Windows](../../virtual-machines/windows/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../../virtual-machines/linux/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. Use the CLI section on the tutorial page: [Associate a public IP address to a virtual machine](./associate-public-ip-address-vm.md) to associate the public IP to your VM. You can also associate the public IP address created above with an [Azure Load Balancer](../../load-balancer/load-balancer-overview.md), by assigning it to the load balancer **frontend** configuration. The public IP address serves as a load-balanced virtual IP address (VIP).
+# [Azure CLI](#tab/azurecli/)
+
+Create a resource group with the [az group create](/cli/azure/group#az-group-create) command. The following example creates a resource group in the **East US** Azure region:
+
+```azurecli
+ az group create --name myResourceGroup --location eastus
+```
+## Create a public IP address
+
+Create a public IP address with the **Internet** routing preference by using the [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) command.
+
+The following command creates a new public IP with **Internet** routing preference in the **East US** Azure region.
+
+```azurecli
+az network public-ip create \
+--name MyRoutingPrefIP \
+--resource-group MyResourceGroup \
+--location eastus \
+--ip-tags 'RoutingPreference=Internet' \
+--sku STANDARD \
+--allocation-method static \
+--version IPv4
+```
+
+> [!NOTE]
+> Currently, routing preference only supports IPv4 public IP addresses.
+
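To confirm that the tag was applied, you can inspect the IP tags on the new address. A minimal sketch, assuming the resource names used in the preceding command:

```azurecli
# Confirm the routing preference tag on the new public IP
# (the ipTags output should include RoutingPreference=Internet)
az network public-ip show \
--name MyRoutingPrefIP \
--resource-group MyResourceGroup \
--query ipTags
```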
+You can associate the above created public IP address with a [Windows](../../virtual-machines/windows/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../../virtual-machines/linux/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. Use the CLI section on the tutorial page: [Associate a public IP address to a virtual machine](./associate-public-ip-address-vm.md) to associate the Public IP to your VM. You can also associate the public IP address created above with an [Azure Load Balancer](../../load-balancer/load-balancer-overview.md), by assigning it to the load balancer **frontend** configuration. The public IP address serves as a load-balanced virtual IP address (VIP).
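As one example of the association described above, the following minimal sketch attaches the public IP to a VM's network interface; `MyVMNic` and `ipconfig1` are placeholder names for your NIC and its IP configuration:

```azurecli
# Attach the routing-preference public IP to an existing VM NIC
# ("MyVMNic" and "ipconfig1" are placeholders for your NIC and IP configuration names)
az network nic ip-config update \
--resource-group MyResourceGroup \
--nic-name MyVMNic \
--name ipconfig1 \
--public-ip-address MyRoutingPrefIP
```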
+
+# [Azure PowerShell](#tab/azurepowershell/)
+
+The following command creates a new public IP with the routing preference type set to *Internet* in the *East US* Azure region:
+
+```azurepowershell
+$iptagtype="RoutingPreference"
+$tagName = "Internet"
+$ipTag = New-AzPublicIpTag -IpTagType $iptagtype -Tag $tagName
+# attach the tag
+$publicIp = New-AzPublicIpAddress `
+-Name "MyPublicIP" `
+-ResourceGroupName $rg.ResourceGroupName `
+-Location $rg.Location `
+-IpTag $ipTag `
+-AllocationMethod Static `
+-Sku Standard `
+-IpAddressVersion IPv4
+```
+
+You can associate the above created public IP address with a [Windows](../../virtual-machines/windows/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../../virtual-machines/linux/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. Use the CLI section on the tutorial page: [Associate a public IP address to a virtual machine](./associate-public-ip-address-vm.md) to associate the Public IP to your VM. You can also associate the public IP address created above with an [Azure Load Balancer](../../load-balancer/load-balancer-overview.md), by assigning it to the load balancer **frontend** configuration. The public IP address serves as a load-balanced virtual IP address (VIP).
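As a minimal sketch of the load balancer association, assuming `$publicIp` from the preceding block and a resource group object `$rg` (for example, from an earlier `New-AzResourceGroup` call); the frontend and load balancer names are placeholders:

```azurepowershell
# Build a frontend IP configuration that uses the routing-preference public IP.
# Assumes $publicIp from the previous block and $rg from an earlier New-AzResourceGroup call.
$frontend = New-AzLoadBalancerFrontendIpConfig -Name "MyFrontend" -PublicIpAddress $publicIp

# Create a Standard load balancer that uses the frontend configuration
New-AzLoadBalancer -ResourceGroupName $rg.ResourceGroupName -Name "MyLoadBalancer" -Location $rg.Location -Sku Standard -FrontendIpConfiguration $frontend
```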
++++
## Next steps

- Learn more about [public IP with routing preference](routing-preference-overview.md).
- [Configure routing preference for a VM](./tutorial-routing-preference-virtual-machine-portal.md).
-- [Configure routing preference for a public IP address using the PowerShell](routing-preference-powershell.md).
-- Learn more about [public IP addresses](public-ip-addresses.md#public-ip-addresses) in Azure.
-- Learn more about all [public IP address settings](virtual-network-public-ip-address.md#create-a-public-ip-address).
+- [Configure routing preference for a VM using the Azure CLI](./configure-routing-preference-virtual-machine-cli.md).
+- [Configure routing preference for a VM using the Azure PowerShell](./configure-routing-preference-virtual-machine-powershell.md).
virtual-network Routing Preference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-powershell.md
- Title: Configure routing preference for a public IP address - Azure PowerShell-
-description: Learn how to Configure routing preference for a public IP address using Azure PowerShell.
- Previously updated : 08/24/2023-------
-# Configure routing preference for a public IP address using Azure PowerShell
-
-This article shows you how to configure routing preference via ISP network (**Internet** option) for a public IP address using Azure PowerShell. After creating the public IP address, you can associate it with the following Azure resources for inbound and outbound traffic to the internet:
-
-* Virtual machine
-* Virtual machine scale set
-* Azure Kubernetes Service (AKS)
-* Internet-facing load balancer
-* Application Gateway
-* Azure Firewall
-
-By default, the routing preference for public IP address is set to the Microsoft global network for all Azure services and can be associated with any Azure service.
-
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
-
-If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 6.9.0 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-azure-powershell). If you are running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
-
-## Create a resource group
-
-Create a resource group with [New-AzResourceGroup](/powershell/module/az.Resources/New-azResourceGroup). This example creates a resource group named *myResourceGroup* in the *eastus* location:
-
-```azurepowershell
-$rg = New-AzResourceGroup -Name myResourceGroup -Location EastUS
-```
-
-## Create a Public IP with Internet routing preference
-
-The following command creates a new public IP with a routing preference type as *Internet* in the *East US* Azure region:
-
-```azurepowershell
-$iptagtype="RoutingPreference"
-$tagName = "Internet"
-$ipTag = New-AzPublicIpTag -IpTagType $iptagtype -Tag $tagName
-# attach the tag
-$publicIp = New-AzPublicIpAddress `
--Name "MyPublicIP" `--ResourceGroupName $rg.ResourceGroupName `--Location $rg.Location `--IpTag $ipTag `--AllocationMethod Static `--Sku Standard `--IpAddressVersion IPv4
-```
-
-You can associate the above created public IP address with a [Windows](../../virtual-machines/windows/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) or [Linux](../../virtual-machines/linux/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) virtual machine. Use the CLI section on the tutorial page: [Associate a public IP address to a virtual machine](./associate-public-ip-address-vm.md) to associate the Public IP to your VM. You can also associate the public IP address created above with an [Azure Load Balancer](../../load-balancer/load-balancer-overview.md), by assigning it to the load balancer **frontend** configuration. The public IP address serves as a load-balanced virtual IP address (VIP).
-
-## Clean up resources
-
-If no longer needed, you can use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command to remove the resource group, VM, and all related resources.
-
-```azurepowershell
-Remove-AzResourceGroup -Name myResourceGroup
-```
-
-## Next steps
-
-- Learn more about [routing preference in public IP addresses](routing-preference-overview.md).
-- [Configure routing preference for a VM using the Azure PowerShell](./configure-routing-preference-virtual-machine-powershell.md).
virtual-wan Openvpn Azure Ad Client Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-client-mac.md
Title: 'Configure VPN clients for P2S OpenVPN protocol connections: Microsoft Entra authentication: macOS: Preview'
+ Title: 'Configure VPN clients for P2S OpenVPN protocol connections: Microsoft Entra authentication: macOS'
description: 'Preview: Learn how to configure a macOS VPN client to connect to a virtual network using point-to-site VPN and Microsoft Entra authentication.'
If you want to configure multiple computers, you can create a client profile on
## Prerequisites
-Before you can connect and authenticate using Microsoft Entra ID, you must first configure your Microsoft Entra tenant. For more information, see [Configure a Microsoft Entra tenant](openvpn-azure-ad-tenant.md).
+Make sure you have the following prerequisites before you proceed with the steps in this article:
+
+* Before you can connect and authenticate using Microsoft Entra ID, you must first configure your Microsoft Entra tenant. For more information, see [Configure a Microsoft Entra tenant](openvpn-azure-ad-tenant.md).
+ ## <a name="download"></a>To download the Azure VPN client
vpn-gateway Point To Site Entra Gateway Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-entra-gateway-update.md
Title: 'Update Audience for P2S VPN gateway connections - Microsoft Entra ID authentication'
+ Title: 'Migrate manually registered Azure VPN client to Microsoft-registered for P2S Microsoft Entra ID authentication'
description: Learn how to update Audience values for P2S VPN gateway connections that use Microsoft Entra ID authentication. Previously updated : 06/18/2024 Last updated : 07/24/2024 # Customer intent: As a VPN Gateway administrator, I want to update point-to-site Audience values for Microsoft Entra ID authentication.
-# Change Audience value for P2S VPN gateway and VPN clients
+# Migrate a manually registered Azure VPN Client to the Microsoft-registered client
-This article helps you change (update) the Audience value for point-to-site (P2S) VPN Gateway connections that use Microsoft Entra ID authentication. When you update an Audience value, you must make the change on both the P2S VPN gateway, and on any previously configured VPN clients. For more information about Audience values, see [About point-to-site VPN - Microsoft Entra ID authentication](point-to-site-about.md#entra-id).
+This article helps you migrate from a manually registered Azure VPN Client to the Microsoft-registered Azure VPN Client for point-to-site (P2S) Microsoft Entra ID authentication. The Microsoft-registered Azure VPN client uses a different Audience value. When you update an Audience value, you must make the change on both the P2S VPN gateway, and on any previously configured VPN clients.
-The following table shows the available supported Audience values. P2S VPN gateways also support custom Audience.
+For more information about Audience values, see [About point-to-site VPN - Microsoft Entra ID authentication](point-to-site-about.md#entra-id). The examples in this article use the new Audience value for Azure Public.
+The following table shows the supported Audience values.
-In most cases, you'll be changing an older *Azure Public* audience value to the new Azure Public audience value to take advantage of the Microsoft-registered Azure VPN Client new features and supported operating systems. The examples in this article show the new Audience value for Azure Public. However, the process is the same if you want to change to a different supported Audience value, such as a custom value.
## Workflow
When you update audience values on an existing gateway, you incur fewer than 5 m
:::image type="content" source="./media/update-entra-audience/audience.png" alt-text="Screenshot showing settings for Tunnel type, Authentication type, and Microsoft Entra settings." lightbox="././media/update-entra-audience/audience.png":::
-1. Change the **Audience** value. For this example, we changed the Audience value to the Azure Public value for the Microsoft-registered Azure VPN Client; **c632b3df-fb67-4d84-bdcf-b95ad541b5c8**. You can also use a different Audience value, such as a custom value, for this setting.
+1. Change the **Audience** value. For this example, we changed the Audience value to the Azure Public value for the Microsoft-registered Azure VPN Client; **c632b3df-fb67-4d84-bdcf-b95ad541b5c8**.
1. Leave the other settings the same, unless you have changed tenants and need to change the tenant IDs. If you update the Issuer field, take care to include the trailing slash at the end. For more information about each of the fields, see [Microsoft Entra ID](point-to-site-entra-gateway.md#configure-vpn) values.
1. Once you finish configuring settings, click **Save** at the top of the page.
1. The new settings save to the P2S gateway and the gateway updates. This takes about 5 minutes to complete.
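If you prefer to script the audience change rather than use the portal, one possible approach is the `az network vnet-gateway aad assign` command. A minimal sketch, where the gateway name and `<tenant-id>` are placeholders and the Audience value is the Azure Public value shown above:

```azurecli
# Re-assign the Microsoft Entra ID settings on the P2S gateway with the new Audience value.
# "MyVnetGateway" and <tenant-id> are placeholders; keep the trailing slash on the issuer.
az network vnet-gateway aad assign \
--gateway-name MyVnetGateway \
--resource-group MyResourceGroup \
--audience c632b3df-fb67-4d84-bdcf-b95ad541b5c8 \
--tenant https://login.microsoftonline.com/<tenant-id>/ \
--issuer https://sts.windows.net/<tenant-id>/
```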
However, when you update only the Audience or tenant values, you have a couple o
* If the Azure VPN Client is already configured to connect to this P2S gateway, you can [manually update](#manual) the VPN client.
-* If you've updated multiple values on the P2S gateway, or you want easily update the VPN clients by importing the new values, you can generate and download a new P2S VPN client profile configuration package and import it to each client.
+* If you've updated multiple values on the P2S gateway, or you want to easily update the VPN clients by importing the new values, you can generate and download a new P2S VPN [client profile configuration package](#generate) and import it to each client.
+
+### <a name="manual"></a>Update an Azure VPN Client
-### <a name="manual"></a>Manually update an Azure VPN Client
+These steps help you update the Azure VPN Client manually, without using the profile configuration package.
1. Launch the Azure VPN Client app.
1. Select the VPN connection profile that you want to update.
However, when you update only the Audience or tenant values, you have a couple o
1. If you also updated the Tenant ID values, change them on the client. These values must match the P2S gateway values.
1. Click **Save** to save the settings.
-### <a name="generate"></a>Generate a profile configuration package
+### <a name="generate"></a>Update using a profile configuration package
If you want to use the VPN client profile configuration files to configure your Azure VPN Client, you can generate a profile configuration package that contains the new P2S gateway settings.
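One way to generate the package from PowerShell is the `New-AzVpnClientConfiguration` cmdlet. A minimal sketch, where `TestRG` and `VNet1GW` are placeholder names for your resource group and gateway; verify the cmdlet options against your gateway configuration:

```azurepowershell
# Generate a new VPN client profile configuration package after the gateway update.
# "TestRG" and "VNet1GW" are placeholder names for your resource group and gateway.
$vpnProfile = New-AzVpnClientConfiguration -ResourceGroupName "TestRG" -Name "VNet1GW"

# Display the result; the output includes a SAS URL for downloading the profile .zip file
$vpnProfile
```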
vpn-gateway Point To Site Entra Vpn Client Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-entra-vpn-client-mac.md
This article helps you configure your macOS client computer to connect to an Azu
## Prerequisites
-Make sure you have the following prerequistes before you proceed with the steps in this article:
+Make sure you have the following prerequisites before you proceed with the steps in this article:
* Configure your VPN gateway for point-to-site VPN connections that specify Microsoft Entra ID authentication. See [Configure a P2S VPN gateway for Microsoft Entra ID authentication](point-to-site-entra-gateway.md).
-* If your device is running MacOS M1 or MacOS M2, you must install Rosetta software if it's not already installed on the device. For more information, see the [Apple support article](https://support.apple.com/en-us/HT211861).
+
## Workflow
vpn-gateway Vpn Gateway Highlyavailable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-highlyavailable.md
Title: 'About Highly Available gateway configurations'
+ Title: 'Design highly available gateway connectivity'
-description: Learn about highly available configuration options using Azure VPN Gateways.
+description: Learn about highly available configuration options for VPN Gateway.
Previously updated : 07/11/2024 Last updated : 07/24/2024
-# Highly Available cross-premises and VNet-to-VNet connectivity
+# Design highly available gateway connectivity for cross-premises and VNet-to-VNet connections
-This article provides an overview of Highly Available configuration options for your cross-premises and VNet-to-VNet connectivity using Azure VPN gateways.
+This article helps you understand how to design highly available gateway connectivity for cross-premises and VNet-to-VNet connections.
## <a name = "activestandby"></a>About VPN gateway redundancy
To provide better availability for your cross premises connections, there are a
You can use multiple VPN devices from your on-premises network to connect to your Azure VPN gateway, as shown in the following diagram:

This configuration provides multiple active tunnels from the same Azure VPN gateway to your on-premises devices in the same location. There are some requirements and constraints: