Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
ai-services | Provisioned Throughput | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/provisioned-throughput.md | Title: Azure OpenAI Service provisioned throughput description: Learn about provisioned throughput and Azure OpenAI. Previously updated : 11/20/2023 Last updated : 1/16/2024 keywords: # What is provisioned throughput? -The provisioned throughput capability allows you to specify the amount of throughput you require for your application. The service then provisions the necessary compute and ensures it is ready for you. Throughput is defined in terms of provisioned throughput units (PTU) which is a normalized way of representing an amount of throughput for your deployment. Each model-versions pair requires different amounts of PTU to deploy and provide different amounts of throughput per PTU. +The provisioned throughput capability allows you to specify the amount of throughput you require in a deployment. The service then allocates the necessary model processing capacity and ensures it's ready for you. Throughput is defined in terms of provisioned throughput units (PTU), which is a normalized way of representing the throughput for your deployment. Each model-version pair requires different amounts of PTU to deploy and provides different amounts of throughput per PTU. ## What does the provisioned deployment type provide? -- **Predictable performance:** stable max latency and throughput for uniform workloads.+- **Predictable performance:** stable max latency and throughput for uniform workloads. - **Reserved processing capacity:** A deployment configures the amount of throughput. Once deployed, the throughput is available whether used or not.-- **Cost savings:** High throughput workloads may provide cost savings vs token-based consumption.+- **Cost savings:** High throughput workloads might provide cost savings vs token-based consumption. -An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model. A deployment provides customer access to a model for inference and integrates additional features like Content Moderation ([See content moderation documentation](content-filter.md)). +An Azure OpenAI Deployment is a unit of management for a specific OpenAI Model. A deployment provides customer access to a model for inference and integrates more features like Content Moderation ([See content moderation documentation](content-filter.md)). > [!NOTE]-> Provisioned throughput units (PTU) are different from standard quota in Azure OpenAI and are not available by default. To learn more about this offering contact your Microsoft Account Team. +> Provisioned throughput unit (PTU) quota is different from standard quota in Azure OpenAI and is not available by default. To learn more about this offering, contact your Microsoft Account Team. ## What do you get? -|Topic | Provisioned| +| Topic | Provisioned| |||-| What is it? | Provides guaranteed throughput at smaller increments than the existing provisioned offer. Deployments will have a consistent max latency for a given model-version | +| What is it? | Provides guaranteed throughput at smaller increments than the existing provisioned offer. Deployments have a consistent max latency for a given model-version. | | Who is it for? | Customers who want guaranteed throughput with minimal latency variance. 
|-| Quota | Provisioned-managed throughput Units | -| Latency | Max latency constrained | -| Utilization | Provisioned-managed Utilization measure provided in Azure Monitor | -| Estimating size | Provided calculator in the studio & load test script | +| Quota | Provisioned-managed throughput Units for a given model. | +| Latency | Max latency constrained from the model. Overall latency depends on call shape. | +| Utilization | Provisioned-managed Utilization measure provided in Azure Monitor. | +| Estimating size | Provided calculator in the studio & benchmarking script. | ## Key concepts ### Provisioned throughput units -Provisioned throughput Units (PTU) are units of model processing capacity that customers you can reserve and deploy for processing prompts and generating completions. The minimum PTU deployment, increments, and processing capacity associated with each unit varies by model type & version. +Provisioned throughput units (PTU) are units of model processing capacity that you can reserve and deploy for processing prompts and generating completions. The minimum PTU deployment, increments, and processing capacity associated with each unit varies by model type & version. ### Deployment types -We introduced a new deployment type called **ProvisionedManaged** which provides smaller increments of PTU per deployment. Both types have their own quota, and you will only see the options you have been enabled for. +When deploying a model in Azure OpenAI, you need to set the `sku-name` to ProvisionedManaged. The `sku-capacity` specifies the number of PTUs assigned to the deployment. ++```azurecli +az cognitiveservices account deployment create \ +--name <myResourceName> \ +--resource-group <myResourceGroupName> \ +--deployment-name MyDeployment \ +--model-name GPT-4 \ +--model-version 0613 \ +--model-format OpenAI \ +--sku-capacity 100 \ +--sku-name ProvisionedManaged +``` ### Quota -Provisioned throughput quota represents a specific amount of total throughput you can deploy. Quota in the Azure OpenAI Service is managed at the subscription level meaning that it can be consumed by different resources within that subscription. +Provisioned throughput quota represents a specific amount of total throughput you can deploy. Quota in the Azure OpenAI Service is managed at the subscription level. All Azure OpenAI resources within the subscription share this quota. ++Quota is specific to a (deployment type, model, region) triplet and isn't interchangeable, meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. You can raise a support request to move quota across deployment types, models, or regions, but the swap isn't guaranteed. ++While we make every attempt to ensure that quota is deployable, quota doesn't represent a guarantee that the underlying capacity is available. The service assigns capacity during the deployment operation, and if capacity is unavailable, the deployment fails with an out of capacity error. +++### How utilization enforcement works +Provisioned deployments provide you with an allocated amount of model processing capacity to run a given model. The `Provisioned-Managed Utilization` metric in Azure Monitor measures a given deployment's utilization in 1-minute increments. Provisioned-Managed deployments are optimized to ensure that accepted calls are processed with a consistent model processing time (actual end-to-end latency is dependent on a call's characteristics). 
When the workload exceeds the allocated PTU capacity, the service returns a 429 HTTP status code until the utilization drops below 100%. +++#### What should I do when I receive a 429 response? +The 429 response isn't an error, but instead part of the design for telling users that a given deployment is fully utilized at a point in time. By providing a fast-fail response, you have control over how to handle these situations in a way that best fits your application requirements. ++The `retry-after-ms` and `retry-after` headers in the response tell you the time to wait before the next call will be accepted. How you choose to handle this response depends on your application requirements. Here are some considerations: +- You can consider redirecting the traffic to other models, deployments, or experiences. This option is the lowest-latency solution because the action can be taken as soon as you receive the 429 signal. +- If you're okay with longer per-call latencies, implement client-side retry logic. This option gives you the highest amount of throughput per PTU. The Azure OpenAI client libraries include built-in capabilities for handling retries. ++#### How does the service decide when to send a 429? +We use a variation of the leaky bucket algorithm to maintain utilization below 100% while allowing some burstiness in the traffic. The high-level logic is as follows: +1. Each customer has a set amount of capacity they can utilize on a deployment. +2. When a request is made: ++ a. When the current utilization is above 100%, the service returns a 429 code with the `retry-after-ms` header set to the time until utilization is below 100% + + b. Otherwise, the service estimates the incremental change to utilization required to serve the request by combining prompt tokens and the specified max_tokens in the call. ++3. When a request finishes, we now know the actual compute cost for the call. To ensure an accurate accounting, we correct the utilization using the following logic: ++ a. If the actual > estimated, then the difference is added to the deployment's utilization + b. If the actual < estimated, then the difference is subtracted. ++4. The overall utilization is decremented at a continuous rate based on the number of PTUs deployed. ++Since calls are accepted until utilization reaches 100%, you're allowed to burst over 100% utilization when first increasing traffic. For large calls and small deployments, you might then be over 100% utilization for up to several minutes. ++++ -Quota is specific to a (deployment type, model, region) triplet and isn't interchangeable. Meaning you can't use quota for GPT-4 to deploy GPT-35-turbo. Customers can raise a support request to move the quota across deployment types, models, or regions but we can't guarantee that it will be possible. +## Next steps -While we make every attempt to ensure that quota is always deployable, quota does not represent a guarantee that the underlying capacity is available for the customer to use. The service assigns capacity to the customer at deployment time and if capacity is unavailable the deployment will fail with an out of capacity error. +- [Learn about the onboarding steps for provisioned deployments](../how-to/provisioned-throughput-onboarding.md) +- [Provisioned Throughput Units (PTU) getting started guide](../how-to/provisioned-get-started.md) |
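For the client-side retry option described above, here's a minimal sketch in Python that honors the `retry-after-ms` and `retry-after` headers on a 429 response. The endpoint URL, API version, and key are placeholder assumptions; adapt them to your resource.

```python
import time
import requests

# Placeholder values -- substitute your own resource, deployment, and API version.
ENDPOINT = ("https://<myResourceName>.openai.azure.com/openai/deployments/"
            "MyDeployment/chat/completions?api-version=2023-12-01-preview")
HEADERS = {"api-key": "<your-api-key>", "Content-Type": "application/json"}

def call_with_retry(payload: dict, max_retries: int = 5) -> dict:
    """POST to a provisioned deployment, waiting out 429s as the service instructs."""
    for _ in range(max_retries):
        response = requests.post(ENDPOINT, headers=HEADERS, json=payload)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # 429 means the deployment is at 100% utilization; the headers say how
        # long to wait until utilization drops below 100%.
        wait_ms = response.headers.get("retry-after-ms")
        wait_s = int(wait_ms) / 1000 if wait_ms else int(response.headers.get("retry-after", "1"))
        time.sleep(wait_s)
    raise RuntimeError("Deployment still at full utilization after retries")
```

In production, the Azure OpenAI client libraries handle this for you; a hand-rolled loop like this is mainly useful behind a gateway or with raw REST calls.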
ai-services | Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/latency.md | -This article will provide you with background around how latency works with Azure OpenAI and how to optimize your environment to improve performance. +This article provides you with background around how latency and throughput work with Azure OpenAI and how to optimize your environment to improve performance. -## What is latency? +## Understanding throughput vs latency +There are two key concepts to think about when sizing an application: (1) System level throughput and (2) Per-call response times (also known as Latency). -The high level definition of latency in this context is the amount of time it takes to get a response back from the model. For completion and chat completion requests, latency is largely dependent on model type as well as the number of tokens generated and returned. The number of tokens sent to the model as part of the input token limit, has a much smaller overall impact on latency. +### System level throughput +This looks at the overall capacity of your deployment – how many requests per minute and total tokens can be processed. ++For a standard deployment, the quota assigned to your deployment partially determines the amount of throughput you can achieve. However, quota only determines the admission logic for calls to the deployment and doesn't directly enforce throughput. Due to per-call latency variations, you might not be able to achieve throughput as high as your quota. [Learn more on managing quota](./quota.md). ++In a provisioned deployment, a set amount of model processing capacity is allocated to your endpoint. The amount of throughput that you can achieve on the endpoint depends on the input size, output size, call rate, and cache match rate. The number of concurrent calls and total tokens processed can vary based on these values. The following steps walk through how to assess the throughput you can get for a given workload in a provisioned deployment: ++1. Use the Capacity calculator for a sizing estimate. ++2. Benchmark the load using a real traffic workload. Measure the utilization & tokens processed metrics from Azure Monitor. Run for an extended period. The [Azure OpenAI Benchmarking repository](https://aka.ms/aoai/benchmarking) contains code for running the benchmark. Finally, the most accurate approach is to run a test with your own data and workload characteristics. ++Here are a few examples for the GPT-4 0613 model: ++| Prompt Size (tokens) | Generation size (tokens) | Calls per minute | PTUs required | +|--|--|--|--| +| 800 | 150 | 30 | 100 | +| 1000 | 50 | 300 | 700 | +| 5000 | 100 | 50 | 600 | ++The number of PTUs scales roughly linearly with call rate (might be sublinear) when the workload distribution remains constant. +++### Latency: The per-call response times ++The high level definition of latency in this context is the amount of time it takes to get a response back from the model. For completion and chat completion requests, latency is largely dependent on model type, the number of tokens in the prompt, and the number of tokens generated. In general, each prompt token adds little time compared to each incremental token generated. ++Estimating your expected per-call latency can be challenging with these models. 
Latency of a completion request can vary based on four primary factors: (1) the model, (2) the number of tokens in the prompt, (3) the number of tokens generated, and (4) the overall load on the deployment & system. Factors one and three are often the main contributors to the total time. The next section goes into more details on the anatomy of a large language model inference call. ## Improve performance+There are several factors that you can control to improve per-call latency of your application. ### Model selection -Latency varies based on what model you are using. For an identical request, it is expected that different models will have a different latency. If your use case requires the lowest latency models with the fastest response times we recommend the latest models in the [GPT-3.5 Turbo model series](../concepts/models.md#gpt-35-models). +Latency varies based on what model you're using. For an identical request, expect that different models have different latencies for the chat completions call. If your use case requires the lowest latency models with the fastest response times, we recommend the latest models in the [GPT-3.5 Turbo model series](../concepts/models.md#gpt-35-models). -### Max tokens +### Generation size and Max tokens -When you send a completion request to the Azure OpenAI endpoint your input text is converted to tokens which are then sent to your deployed model. The model receives the input tokens and then begins generating a response. It's an iterative sequential process, one token at a time. Another way to think of it is like a for loop with `n tokens = n iterations`. +When you send a completion request to the Azure OpenAI endpoint, your input text is converted to tokens that are then sent to your deployed model. The model receives the input tokens and then begins generating a response. It's an iterative sequential process, one token at a time. Another way to think of it is like a for loop with `n tokens = n iterations`. For most models, generating the response is the slowest step in the process. -So another important factor when evaluating latency is how many tokens are being generated. This is controlled largely via the `max_tokens` parameter. Reducing the number of tokens generated per request will reduce the latency of each request. +At the time of the request, the requested generation size (max_tokens parameter) is used as an initial estimate of the generation size. The compute time for generating the full size is reserved by the model as the request is processed. Once the generation is completed, the remaining quota is released. Ways to reduce the number of tokens: +- Set the `max_tokens` parameter on each call as small as possible. +- Include stop sequences to prevent generating extra content. +- Generate fewer responses: The `best_of` and `n` parameters can greatly increase latency because they generate multiple outputs. For the fastest response, either don't specify these values or set them to 1. ++In summary, reducing the number of tokens generated per request reduces the latency of each request. ### Streaming+Setting `stream: true` in a request makes the service return tokens as soon as they're available, instead of waiting for the full sequence of tokens to be generated. It doesn't change the time to get all the tokens, but it reduces the time for the first response. This approach provides a better user experience since end-users can read the response as it is generated. ++Streaming is also valuable with large calls that take a long time to process. 
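For reference, here's a minimal streaming sketch using the OpenAI Python library (version 1.x); the endpoint, key, API version, and deployment name are placeholder assumptions:

```python
from openai import AzureOpenAI

# Placeholder values -- substitute your own resource details.
client = AzureOpenAI(
    azure_endpoint="https://<myResourceName>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2023-12-01-preview",
)

# stream=True returns tokens in chunks as they're generated, which reduces
# the time to first response without changing total generation time.
stream = client.chat.completions.create(
    model="gpt-4",  # your deployment name
    messages=[{"role": "user", "content": "Write a haiku about latency."}],
    max_tokens=50,  # keep the generation size small to reduce per-call latency
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```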
Many clients and intermediary layers have timeouts on individual calls. Long generation calls might be canceled due to client-side timeouts. By streaming the data back, you can ensure incremental data is received. ++ **Examples of when to use streaming**: Chat bots and conversational interfaces. -Streaming impacts perceived latency. If you have streaming enabled you'll receive tokens back in chunks as soon as they're available. From a user perspective, this often feels like the model is responding faster even though the overall time to complete the request remains the same. +Streaming impacts perceived latency. With streaming enabled you receive tokens back in chunks as soon as they're available. For end-users, this approach often feels like the model is responding faster even though the overall time to complete the request remains the same. **Examples of when streaming is less important**: Sentiment analysis, language translation, content generation. -There are many use cases where you are performing some bulk task where you only care about the finished result, not the real-time response. If streaming is disabled, you won't receive any tokens until the model has finished the entire response. +There are many use cases where you're performing some bulk task where you only care about the finished result, not the real-time response. If streaming is disabled, you won't receive any tokens until the model has finished the entire response. ### Content filtering The addition of content filtering comes with an increase in safety, but also latency. Learn more about requesting modifications to the default [content filtering policies](./content-filters.md). ++### Separation of workloads +Mixing different workloads on the same endpoint can negatively affect latency. This is because (1) they're batched together during inference and short calls can be waiting for longer completions and (2) mixing the calls can reduce your cache hit rate as they're both competing for the same space. When possible, it's recommended to have separate deployments for each workload. ++### Prompt Size +While prompt size has a smaller influence on latency than generation size, it affects the overall time, especially when the size grows large. ++### Batching +If you're sending multiple requests to the same endpoint, you can batch the requests into a single call. This reduces the number of requests you need to make and, depending on the scenario, it might improve overall response time. We recommend testing this method to see if it helps. ++## How to measure your throughput +We recommend measuring your overall throughput on a deployment with two measures: +- Calls per minute: The number of API inference calls you're making per minute. This can be measured in Azure Monitor using the Azure OpenAI Requests metric and splitting by the ModelDeploymentName. +- Total Tokens per minute: The total number of tokens being processed per minute by your deployment. This includes prompt & generated tokens. This is often further split into measuring both for a deeper understanding of deployment performance. This can be measured in Azure Monitor using the Processed Inference tokens metric. ++You can learn more about [Monitoring the Azure OpenAI Service](./monitoring.md). ++## How to measure per-call latency +The time it takes for each call depends on how long it takes to read the prompt, generate the output, and apply content filters. The way you measure the time varies depending on whether you're using streaming. 
We suggest a different set of measures for each case. ++You can learn more about [Monitoring the Azure OpenAI Service](./monitoring.md). ++### Non-Streaming: +- End-to-end Request Time: The total time taken to generate the entire response for non-streaming requests, as measured by the API gateway. This number increases as prompt and generation size increases. ++### Streaming: +- Time to Response: Recommended latency (responsiveness) measure for streaming requests. Applies to PTU and PTU-managed deployments. Calculated as time taken for the first response to appear after a user sends a prompt, as measured by the API gateway. This number increases as the prompt size increases and/or the cache hit rate decreases. +- Average Token Generation Rate: Time from the first token to the last token, divided by the number of generated tokens, as measured by the API gateway. This measures the speed of response generation and increases as the system load increases. Recommended latency measure for streaming requests. +++ ## Summary -* **Model latency**: If model latency is important to you we recommend trying out our latest models in the [GPT-3.5 Turbo model series](../concepts/models.md). +* **Model latency**: If model latency is important to you, we recommend trying out our latest models in the [GPT-3.5 Turbo model series](../concepts/models.md). * **Lower max tokens**: OpenAI has found that even in cases where the total number of tokens generated is similar, the request with the higher value set for the `max_tokens` parameter will have more latency. |
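To put the streaming measures above into practice, here's a minimal client-side sketch (OpenAI Python library, placeholder resource details) that records time to first response and an approximate generation rate. Client-side timing includes network time, so it isn't identical to measurements at the API gateway:

```python
import time
from openai import AzureOpenAI

# Placeholder values -- substitute your own resource details.
client = AzureOpenAI(
    azure_endpoint="https://<myResourceName>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2023-12-01-preview",
)

start = time.monotonic()
first_chunk_at = None
chunk_count = 0  # a rough proxy for generated tokens

stream = client.chat.completions.create(
    model="gpt-4",  # your deployment name
    messages=[{"role": "user", "content": "Summarize why streaming helps."}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_chunk_at is None:
            first_chunk_at = time.monotonic()  # time to first response
        chunk_count += 1

end = time.monotonic()
print(f"Time to first response: {first_chunk_at - start:.2f}s")
if chunk_count > 1 and end > first_chunk_at:
    print(f"Average generation rate: {chunk_count / (end - first_chunk_at):.1f} chunks/s")
```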
ai-services | Provisioned Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-get-started.md | After you purchase a commitment on your quota, you can create a deployment. To c | Select a model| Choose the specific model you wish to deploy. | GPT-4 | | Model version | Choose the version of the model to deploy. | 0613 | | Deployment Name | The deployment name is used in your code to call the model by using the client libraries and the REST APIs. | gpt-4|-| Content filter | Specify the filtering policy to apply to the deployment. Learn more on our [Content Filtering](../concepts/content-filter.md) how-tow | Default | +| Content filter | Specify the filtering policy to apply to the deployment. Learn more on our [Content Filtering](../concepts/content-filter.md) how-to. | Default | | Deployment Type |This impacts the throughput and performance. Choose Provisioned-Managed for your provisioned deployment | Provisioned-Managed | | Provisioned Throughput Units | Choose the amount of throughput you wish to include in the deployment. | 100 | az cognitiveservices account deployment create \ --model-version 0613 \ --model-format OpenAI \ --sku-capacity 100 \ ---sku-name Provisioned-Managed +--sku-name ProvisionedManaged ``` -REST, ARM template, Bicep and Terraform can also be used to create deployments. See the section on automating deployments in the [Managing Quota](https://learn.microsoft.com/azure/ai-services/openai/how-to/quota?tabs=rest#automate-deployment) how-to guide and replace the `sku.name` with "Provisioned-Managed" rather than "Standard." +REST, ARM template, Bicep and Terraform can also be used to create deployments. See the section on automating deployments in the [Managing Quota](quota.md?tabs=rest#automate-deployment) how-to guide and replace the `sku.name` with "ProvisionedManaged" rather than "Standard." ## Make your first calls-The inferencing code for provisioned deployments is the same a standard deployment type. The following code snippet shows a chat completions call to a GPT-4 model. For your first time using these models programmatically, we recommend starting with our [quickstart start guide](../quickstart.md). Our recommendation is to use the OpenAI library with version 1.0 or greater since this includes retry logic within the library. +The inferencing code for provisioned deployments is the same as a standard deployment type. The following code snippet shows a chat completions call to a GPT-4 model. For your first time using these models programmatically, we recommend starting with our [quickstart guide](../quickstart.md). Our recommendation is to use the OpenAI library with version 1.0 or greater since this includes retry logic within the library. ```python A 429 response indicates that the allocated PTUs are fully consumed at the time The 429 signal isn't an unexpected error response when pushing to high utilization but instead part of the design for managing queuing and high load for provisioned deployments. ### Modifying retry logic within the client libraries-The Azure OpenAI SDKs retry 429 responses by default and behind the scenes in the client (up to the maximum retries). The libraries respect the `retry-after` time. You can also modify the retry behavior to better suite your experience. Here's an example with the python library. +The Azure OpenAI SDKs retry 429 responses by default and behind the scenes in the client (up to the maximum retries). The libraries respect the `retry-after` time. 
You can also modify the retry behavior to better suit your experience. Here's an example with the Python library. You can use the `max_retries` option to configure or disable retry settings: We recommend the following workflow: ## Next Steps -* For more information on cloud application best practices, check out [Best practices in cloud applications](https://learn.microsoft.com/azure/architecture/best-practices/index-best-practices) +* For more information on cloud application best practices, check out [Best practices in cloud applications](/azure/architecture/best-practices/index-best-practices) * For more information on provisioned deployments, check out [What is provisioned throughput?](../concepts/provisioned-throughput.md) * For more information on retry logic within each SDK, check out: * [Python reference documentation](https://github.com/openai/openai-python?tab=readme-ov-file#retries)- * [.NET reference documentation](https://learn.microsoft.com/dotnet/api/azure.ai.openai.openaiclientoptions?view=azure-dotnet-preview) - * [Java reference documentation](https://learn.microsoft.com/java/api/com.azure.ai.openai.openaiclientbuilder?view=azure-java-preview#com-azure-ai-openai-openaiclientbuilder-retryoptions(com-azure-core-http-policy-retryoptions)) - * [JavaScript reference documentation](https://learn.microsoft.com/javascript/api/@azure/openai/openaiclientoptions?view=azure-node-preview#@azure-openai-openaiclientoptions-retryoptions) - * [GO reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai#ChatCompletionsOptions) + * [.NET reference documentation](/dotnet/api/azure.ai.openai.openaiclientoptions?view=azure-dotnet-preview&preserve-view=true) + * [Java reference documentation](/java/api/com.azure.ai.openai.openaiclientbuilder?view=azure-java-preview&preserve-view=true#com-azure-ai-openai-openaiclientbuilder-retryoptions(com-azure-core-http-policy-retryoptions)) + * [JavaScript reference documentation](/javascript/api/@azure/openai/openaiclientoptions?view=azure-node-preview&preserve-view=true#@azure-openai-openaiclientoptions-retryoptions) + * [GO reference documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/ai/azopenai#ChatCompletionsOptions) |
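As a sketch of the `max_retries` option mentioned above (OpenAI Python library, version 1.x; resource details are placeholders):

```python
from openai import AzureOpenAI

# Placeholder values -- substitute your own resource details.
client = AzureOpenAI(
    azure_endpoint="https://<myResourceName>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2023-12-01-preview",
    max_retries=5,  # raises the library's default retry count; retries honor retry-after
)

# Or adjust per call; max_retries=0 disables the built-in retry behavior.
no_retry_client = client.with_options(max_retries=0)
```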
ai-services | Provisioned Throughput Onboarding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/provisioned-throughput-onboarding.md | + + Title: Azure OpenAI Service Provisioned Throughput Units (PTU) onboarding +description: Learn about provisioned throughput units onboarding and Azure OpenAI. ++ Last updated : 01/15/2024+++++recommendations: false +keywords: +++# Provisioned throughput units onboarding ++This article walks you through the process of onboarding to [Provisioned Throughput Units (PTU)](../concepts/provisioned-throughput.md). Once you complete the initial onboarding, we recommend referring to the PTU [getting started guide](./provisioned-get-started.md). ++> [!NOTE] +> Provisioned Throughput Units (PTU) are different from standard quota in Azure OpenAI and are not available by default. To learn more about this offering, contact your Microsoft Account Team. ++## Sizing and estimation: provisioned managed only ++Determining the right amount of provisioned throughput, or PTUs, you require for your workload is an essential step to optimizing performance and cost. This section describes how to use the Azure OpenAI capacity planning tool. The tool provides you with an estimate of the required PTU. ++### Estimate provisioned throughput and cost ++To get a quick estimate for your workload, open the capacity planner in the [Azure OpenAI Studio](https://oai.azure.com). The capacity planner is under **Management** > **Quotas** > **Provisioned**. The **Provisioned** option and the capacity planner are only available in certain regions within the Quota pane. If you don't see this option, setting the quota region to *Sweden Central* makes it available. Enter the following parameters based on your workload. ++| Input | Description | +||| +|Model | OpenAI model you plan to use. For example: GPT-4 | +| Version | Version of the model you plan to use, for example, 0613 | +| Prompt tokens | Number of tokens in the prompt for each call | +| Generation tokens | Number of tokens generated by the model on each call | +| Peak calls per minute | Peak concurrent load to the endpoint measured in calls per minute| ++After you fill in the required details, select **Calculate** to view the suggested PTU for your scenario. +++> [!NOTE] +> The capacity planner is an estimate based on simple input criteria. The most accurate way to determine your capacity is to benchmark a deployment with a representative workload for your use case. ++### Understanding the provisioned throughput purchase model ++Unlike Azure services where you're charged based on usage, the Azure OpenAI Provisioned Throughput feature is purchased as a renewable, monthly commitment. This commitment is charged to your subscription upon creation and at each monthly renewal. When you onboard to Provisioned Throughput, you need to create a commitment on each Azure OpenAI resource where you intend to create a provisioned deployment. The PTUs you purchase in this way are available for use when creating deployments on those resources. ++The total number of PTUs you can purchase via commitments is limited to the amount of Provisioned Throughput quota that is assigned to your subscription. The following table compares other characteristics of Provisioned Throughput quota (PTUs) and Provisioned Throughput commitments. 
++|Topic|Quota|Commitments| +|||| +|Purpose| Grants permission to create provisioned deployments, and provides the upper limit on the capacity that can be used|Purchase vehicle for Provisioned Throughput capacity| +|Lifetime| Quota might be removed from your subscription if it isn't purchased via a commitment within five days of being granted|The minimum term is one month, with customer-selectable autorenewal behavior. A commitment isn't cancelable, and can't be moved to a new resource while it's active| +|Scope |Quota is specific to a subscription and region, and is shared across all Azure OpenAI resources | Commitments are an attribute of an Azure OpenAI resource, and are scoped to deployments within that resource. A subscription might contain as many active commitments as there are resources.| +|Granularity| Quota is granted specific to a model family (for example, GPT-4) but is shareable across model versions within the family| Commitments aren't model or version specific. For example, a resource's 1000 PTU commitment can cover deployments of both GPT-4 and GPT-35-Turbo| +|Capacity guarantee| Having quota doesn't guarantee that capacity is available when you create the deployment| Capacity availability to cover committed PTUs is guaranteed as long as the commitment is active.| +|Increases/Decreases| New quota can be requested and approved at any time, independent of your commitment renewal dates | The number of PTUs covered by a commitment can be increased at any time, but can't be decreased except at the time of renewal.| ++Quota and commitments work together to govern the creation of deployments within your subscriptions. To create a provisioned deployment, two criteria must be met: ++- Quota must be available for the desired model within the desired region and subscription. This means you can't exceed your subscription/region-wide limit for the model. +- Committed PTUs must be available on the resource where you create the deployment. (The capacity you assign to the deployment is paid for.) ++### Commitment properties and charging model ++A commitment includes several properties. ++|Property|Description|When Set| +|||| +|Azure OpenAI Resource | The resource hosting the commitment | Commitment creation| +|Committed PTUs| The number of PTUs covered by the commitment. | Initially set at commitment creation, and can be increased at any time, but not decreased.| +|Term| The term of the commitment. A commitment expires one month from its creation date. The renewal policy defines what happens next. | Commitment creation | +|Expiration Date| The expiration date of the commitment. This time of expiration is midnight UTC.| Initially, 30 days from creation. However, the expiration date changes if the commitment is renewed.| +|Renewal Policy| There are three options for what to do upon expiration: <br><br> - Autorenew: A new commitment term begins for another 30 days at the current number of PTUs <br>- Autorenew with different settings: This setting is the same as *Autorenew*, except that the number of PTUs committed upon renewal can be decreased <br>- Don't autorenew: Upon expiration, the commitment ends and isn't renewed.| Initially set at commitment creation, and can be changed at any time.| ++### Commitment charges ++Provisioned Throughput Commitments generate charges against your Azure subscription at the following times: ++- At commitment creation. The charge is computed according to the current monthly PTU rate and the number of PTUs committed. 
You will receive a single up-front charge on your invoice. ++- At commitment renewal. If the renewal policy is set to autorenew, a new monthly charge is generated based on the PTUs committed in the new term. This charge appears as a single up-front charge on your invoice. ++- When new PTUs are added to an existing commitment. The charge is computed based on the number of PTUs added to the commitment, pro-rated hourly to the end of the existing commitment term. For example, if 300 PTUs are added to an existing commitment of 900 PTUs exactly halfway through its term, there is a charge at the time of the addition for the equivalent of 150 PTUs (300 PTUs pro-rated to the commitment expiration date). If the commitment is renewed, the following month's charge will be for the new PTU total of 1,200 PTUs. ++As long as the number of deployed PTUs in a resource is covered by the resource's commitment, then you'll only see the commitment charges. However, if the number of deployed PTUs in a resource becomes greater than the resource's committed PTUs, the excess PTUs will be charged as overage at an hourly rate. Typically, the only way this overage will happen is if a commitment expires or is reduced at its renewal while the resource contains deployments. For example, if a 300 PTU commitment is allowed to expire on a resource that has 300 PTUs deployed, the deployed PTUs are no longer covered by any commitment. Once the expiration date is reached, the subscription is charged an hourly overage fee based on the 300 excess PTUs. ++The hourly rate is higher than the monthly commitment rate and the charges exceed the monthly rate within a few days. There are two ways to end hourly overage charges: ++- Delete or scale down deployments so that they don't use more PTUs than are committed +- Create a new commitment on the resource to cover the deployed PTUs. +++## Purchasing and managing commitments ++### Planning your commitments ++Upon receiving confirmation that Provisioned Throughput Unit (PTU) quota is assigned to a subscription, you must create commitments on the target resources (or extend existing commitments) to make the quota usable for deployments. ++Prior to creating commitments, plan how the provisioned deployments will be used and which Azure OpenAI resources will host them. Commitments have a one month minimum term and can't be decreased in size until the end of the term. They also can't be moved to new resources once created. Finally, the sum of your committed PTUs can't be greater than your quota – PTUs committed on a resource are no longer available to commit to on a different resource until the commitment expires. Having a clear plan on which resources will be used for provisioned deployments and the capacity you intend to apply to them (for at least a month) will help ensure an optimal experience with your provisioned throughput setup. ++For example: ++- Don't create a commitment and deployment on a "temporary" resource for the purpose of validation. You'll be locked into using that resource for at least a month. Instead, if the plan is to ultimately use the PTUs on a production resource, create the commitment and test deployment on that resource right from the start. ++- Calculate the number of PTUs to commit on a resource based on the number, model, and size of the deployments you intend to create, keeping in mind the minimum number of PTUs each model requires to create a deployment. ++ - Example 1: GPT-4-32K requires a minimum of 200 PTUs to deploy. 
If you create a commitment of only 100 PTUs on a resource, you won't have enough committed PTUs to deploy GPT-4-32K there. ++ - Example 2: If you need to create multiple deployments on a resource, sum the PTUs required for each deployment. A production resource hosting deployments for 300 PTUs of GPT-4 and 500 PTUs of GPT-4-32K will require a commitment of at least 800 PTUs to cover both deployments. ++- Distribute or consolidate PTUs as needed. For example, total quota of 1000 PTUs can be distributed across resources as needed to support your deployments. It could be committed on a single resource to support one or more deployments adding up to 1000 PTUs, or distributed across multiple resources (for example, a dev and a prod resource) as long as the total number of committed PTUs is less than or equal to the quota of 1000. ++- Consider operational requirements in your plan. For example: + - Organizationally required resource naming conventions + - Business continuity policies that require multiple deployments of a model per region, perhaps on different Azure OpenAI resources ++### Creating Provisioned Throughput commitments ++With the plan ready, the next step is to create the commitments. Commitments are created manually via Azure OpenAI Studio and require the user creating the commitment to have either the [Contributor or Cognitive Services Contributor role](./role-based-access-control.md) at the subscription level. ++For each new commitment you need to create, follow these steps: ++1. Launch the Provisioned Throughput purchase dialog by selecting **Quotas** > **Provisioned** > **Click here to purchase**. +++2. Select the Azure OpenAI resource and purchase the commitment. ++| Setting | Notes | +||-| +| **Select a resource** | Choose the resource where you will create the provisioned deployment. Once you have purchased the commitment, you will be unable to use the PTUs on another resource until the current commitment expires. | +| **Amount to commit (PTU)** | Choose the number of PTUs you're committing to. This number can be increased later, but can't be decreased | +| **Commitment tier for current period** | The commitment period is set to one month. | +| **Renewal settings** | Select **Purchase**. A confirmation dialog will be displayed. After you confirm, your PTUs will be committed, and you can use them to create a provisioned deployment. | +++### Adding Provisioned Throughput Units to existing commitments ++The steps are the same as in the previous example, but you'll increase the **amount to commit (PTU)** value. +++### Managing commitments ++**Discontinue use of provisioned throughput** ++To end use of provisioned throughput and stop any charges after the current commitments expire, two steps must be taken: ++1. Set the renewal policy on all commitments to *Don't autorenew*. +2. Delete the provisioned deployments using the quota. ++**Move a commitment/deployment to a new resource in the same subscription/region** ++It isn't possible in Azure OpenAI Studio to directly *move* a deployment or a commitment to a new resource. Instead, a new deployment needs to be created on the target resource and traffic moved to it. A commitment needs to be purchased on the new resource to accomplish this. Because commitments are charged up-front for a 30-day period, it's necessary to time this move with the expiration of the original commitment to minimize overlap with the new commitment and "double-billing" during the overlap. 
++There are two approaches that can be taken to implement this transition. ++**Option 1: No-Overlap Switchover** ++This option involves some downtime, but requires no extra quota and generates no extra costs. ++| Steps | Notes | +|-|-| +|Set the renewal policy on the existing commitment to expire| This will prevent the commitment from renewing and generating further charges | +|Before expiration of the existing commitment, delete its deployment | Downtime will start at this point and will last until the new deployment is created and traffic is moved. You'll minimize the duration by timing the deletion to happen as close to the expiration date/time as possible.| +|After expiration of the existing commitment, create the commitment on the new resource|Minimize downtime by executing this and the next step as soon after expiration as possible.| +|Create the deployment on the new resource and move traffic to it|| ++**Option 2: Overlapped Switchover** ++This option avoids downtime by keeping both the existing and new deployments live at the same time. It requires having quota available to create the new deployment, and will generate extra costs for the duration of the overlapped deployments. ++| Steps | Notes | +|-|-| +|Set the renewal policy on the existing commitment to expire| Doing so prevents the commitment from renewing and generating further charges.| +|Before expiration of the existing commitment:<br>1. Create the commitment on the new resource.<br>2. Create the new deployment.<br>3. Switch traffic<br>4. Delete existing deployment| Ensure you leave enough time for all steps before the existing commitment expires; otherwise, overage charges will be generated (see the next section for options). | ++If the final step takes longer than expected and will finish after the existing commitment expires, there are three options to minimize overage charges. ++- **Take downtime**: Delete the original deployment then complete the move. +- **Pay overage**: Keep the original deployment and pay hourly until you have moved traffic off and deleted the deployment. +- **Reset the original commitment** to renew one more time. This will give you time to complete the move with a known cost. ++Both paying for an overage and resetting the original commitment will generate charges beyond the original expiration date. Paying overage charges might be cheaper than a new one-month commitment if you only need a day or two to complete the move. Compare the costs of both options to find the lowest-cost approach. ++### Move the deployment to a new region and/or subscription ++The same approaches apply in moving the commitment and deployment within the region, except that having available quota in the new location will be required in all cases. ++### View and edit an existing resource ++In Azure OpenAI Studio, select **Quota** > **Provisioned** > **Manage Commitment Tiers** and select a resource with an existing commitment to view/change it. ++## Next steps ++- [Provisioned Throughput Units (PTU) getting started guide](./provisioned-get-started.md) +- [Provisioned Throughput Units (PTU) concepts](../concepts/provisioned-throughput.md) |
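To make the pro-rated commitment charge described earlier concrete, here's the arithmetic as a small sketch (an illustration of the stated example, not a billing API):

```python
def prorated_ptu_equivalent(added_ptus: int, fraction_of_term_remaining: float) -> float:
    """PTU-equivalents charged when PTUs are added mid-term,
    pro-rated to the end of the current commitment term."""
    return added_ptus * fraction_of_term_remaining

# The article's example: 300 PTUs added to a 900 PTU commitment exactly halfway
# through its term are charged as the equivalent of 150 PTUs.
print(prorated_ptu_equivalent(300, 0.5))  # 150.0
```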
ai-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md | The following sections provide you with a quick guide to the default quotas and | Max training job time (job will fail if exceeded) | 720 hours | | Max training job size (tokens in training file) x (# of epochs) | 2 Billion | | Max size of all files per upload (Azure OpenAI on your data) | 16 MB |+| Maximum number of Provisioned throughput units per deployment | 100,000 | ## Regional quota limits To minimize issues related to rate limits, it's a good idea to use the following ### How to request increases to the default quotas and limits -Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure OpenAI Studio. Please note that due to overwhelming demand, quota increase requests are being accepted and will be filled in the order they are received. Priority will be given to customers who generate traffic that consumes the existing quota allocation, and your request may be denied if this condition is not met. +Quota increase requests can be submitted from the [Quotas](./how-to/quota.md) page of Azure OpenAI Studio. Please note that due to overwhelming demand, quota increase requests are being accepted and will be filled in the order they are received. Priority will be given to customers who generate traffic that consumes the existing quota allocation, and your request may be denied if this condition isn't met. For other rate limits, please [submit a service request](../cognitive-services-support-options.md?context=/azure/ai-services/openai/context/context). |
aks | Developer Best Practices Pod Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/developer-best-practices-pod-security.md | Title: Developer best practices - Pod security in Azure Kubernetes Services (AKS) description: Learn the developer best practices for how to secure pods in Azure Kubernetes Service (AKS) Previously updated : 10/27/2022 Last updated : 01/12/2024 |
aks | Gpu Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-cluster.md | To view supported GPU-enabled VMs, see [GPU-optimized VM sizes in Azure][gpu-sku ## Limitations * AKS does not support Windows GPU-enabled node pools. * If you're using an Azure Linux GPU-enabled node pool, automatic security patches aren't applied, and the default behavior for the cluster is *Unmanaged*. For more information, see [auto-upgrade](./auto-upgrade-node-image.md).-* [NVadsA10](https://learn.microsoft.com/azure/virtual-machines/nva10v5-series) v5-series are not a recommended SKU for GPU VHD. +* [NVadsA10](../virtual-machines/nva10v5-series.md) v5-series are not a recommended SKU for GPU VHD. ## Before you begin |
aks | Kubelogin Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/kubelogin-authentication.md | Title: Using Kubelogin with Azure Kubernetes Service (AKS) -description: Learn about using Kubelogin to enable all of the supported Azure Active Directory authentication methods with Azure Kubernetes Service (AKS). + Title: Use kubelogin to authenticate in Azure Kubernetes Service +description: Learn how to use the kubelogin plugin for all Microsoft Entra authentication methods in Azure Kubernetes Service (AKS). Last updated 11/28/2023 -# Use Kubelogin with Azure Kubernetes Service (AKS) +# Use kubelogin to authenticate users in Azure Kubernetes Service -Kubelogin is a client-go credential [plugin][client-go-cred-plugin] that implements Microsoft Entra ID authentication. This plugin provides features that are not available in kubectl. +The kubelogin plugin in Azure is a client-go credential [plugin][client-go-cred-plugin] that implements Microsoft Entra authentication. The kubelogin plugin offers features that aren't available in the kubectl command-line tool. -Azure Kubernetes Service (AKS) clusters integrated with Microsoft Entra ID, running Kubernetes versions 1.24 and higher, automatically use the `kubelogin` format. +Azure Kubernetes Service (AKS) clusters that are integrated with Microsoft Entra ID and running Kubernetes version 1.24 or later automatically use the kubelogin format. -This article provides an overview of the following authentication methods and examples on how to use them: --* Device code -* The Azure CLI -* Interactive web browser -* Service principal -* Managed identity -* Workload identity +This article provides an overview and examples of how to use kubelogin for all [supported Microsoft Entra authentication methods][authentication-methods] in AKS. ## Limitations -* A maximum of 200 groups are included in the Microsoft Entra ID JSON Web Token (JWT). For more than 200 groups, consider using [Application Roles][entra-id-application-roles]. -* Groups created in Microsoft Entra ID are only included by their ObjectID and not by their display name. `sAMAccountName` is only available for groups synchronized from on-premises Active Directory. -* On AKS, service principal authentication method only works with managed Microsoft Entra ID, not legacy Azure Active Directory. -* Device code authentication method doesn't work when Conditional Access policy is configured on a Microsoft Entra tenant. Use web browser interactive authentication instead. --## Authentication modes +* You can include a maximum of 200 groups in a Microsoft Entra JSON Web Token (JWT) claim. If you have more than 200 groups, consider using [application roles][entra-id-application-roles]. +* Groups that are created in Microsoft Entra ID are included only by their **ObjectID** value, and not by their display name. The `sAMAccountName` attribute is available only for groups that are synchronized from on-premises Windows Server Active Directory. +* In AKS, the service principal authentication method works only with managed Microsoft Entra ID, and not with the earlier version of Azure Active Directory. +* The device code authentication method doesn't work when a Microsoft Entra Conditional Access policy is set on a Microsoft Entra tenant. In that scenario, use web browser interactive authentication. 
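Before you use any of the authentication methods, kubelogin must be installed. One way to get it, assuming you use the Azure CLI, is the `az aks install-cli` command, which downloads both kubectl and kubelogin:

```bash
# Installs kubectl and the kubelogin plugin.
az aks install-cli

# Confirm the plugin is available on your PATH.
kubelogin --version
```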
-Most of the interaction with `kubelogin` is specific to the `convert-kubeconfig` subcommand, which uses the input kubeconfig specified in `--kubeconfig` or `KUBECONFIG` environment variable to convert to the final kubeconfig in exec format based on the specified authentication mode. +## How authentication works -### How authentication works +For most interactions with kubelogin, you use the `convert-kubeconfig` subcommand. The subcommand uses the kubeconfig file that's specified in `--kubeconfig` or in the `KUBECONFIG` environment variable to convert the final kubeconfig file to exec format based on the specified authentication method. -The authentication modes that `kubelogin` implements are Microsoft Entra ID OAuth 2.0 token grant flows. Throughout `kubelogin` subcommands, you see below common flags. In general, these flags are already set up when you get the kubeconfig from AKS. +The authentication methods that kubelogin implements are Microsoft Entra OAuth 2.0 token grant flows. The following parameter flags are common to use in kubelogin subcommands. In general, these flags are ready to use when you get the kubeconfig file from AKS. -* **--tenant-id**: Microsoft Entra ID tenant ID -* **--client-id**: The application ID of the public client application. This client app is only used in device code, web browser interactive, and ropc log in modes. -* **--server-id**: The application ID of the web app, or resource server. The token should be issued to this resource. +* `--tenant-id`: The Microsoft Entra tenant ID. +* `--client-id`: The application ID of the public client application. This client app is used only in the device code, web browser interactive, and OAuth 2.0 Resource Owner Password Credentials (ROPC) (workflow identity) sign-in methods. +* `--server-id`: The application ID of the web app or resource server. The token is issued to this resource. > [!NOTE]-> With each authentication method, the token isn't cached on the file system. +> In each authentication method, the token is not cached on the file system. ++## Authentication methods ++The next sections describe supported authentication methods and how to use them: ++* Device code +* Azure CLI +* Web browser interactive +* Service principal +* Managed identity +* Workload identity -## Using device code +### Device code -Device code is the default authentication mode in `convert-kubeconfig` subcommand. The `-l devicecode` is optional. This authentication method prompts the device code for user to sign in from a browser session. +Device code is the default authentication method for the `convert-kubeconfig` subcommand. The `-l devicecode` parameter is optional. This authentication method prompts the device code for the user to sign in from a browser session. -Before `kubelogin` and Exec plugin were introduced, the Azure authentication mode in `kubectl` only supported device code flow. It used an old library that produces the token with `audience` claim that has the `spn:` prefix, which isn't compatible with [AKS-managed Microsoft Entra ID][aks-managed-microsoft-entra-id] using [on-behalf-of][oauth-on-behalf-of] (OBO) flow. When you run the `convert-kubeconfig` subcommand, `kubelogin` removes the `spn:` (prefix in audience claim). If you require using the original functionality, add the `--legacy` argument. +Before the kubelogin and exec plugins were introduced, the Azure authentication method in kubectl supported only the device code flow. 
It used an earlier version of a library that produces a token that has the `audience` claim with an `spn:` prefix. It isn't compatible with [AKS managed Microsoft Entra ID][aks-managed-microsoft-entra-id], which uses an [on-behalf-of (OBO)][oauth-on-behalf-of] flow. When you run the `convert-kubeconfig` subcommand, kubelogin removes the `spn:` prefix from the audience claim. -If you're using `kubeconfig` from legacy Azure AD cluster, `kubelogin` automatically adds the `--legacy` flag. +If your requirements include using functionality from earlier versions, add the `--legacy` argument. If you're using the kubeconfig file in an earlier version Azure Active Directory cluster, kubelogin automatically adds the `--legacy` flag. -In this sign in mode, the access token and refresh token are cached in the `${HOME}/.kube/cache/kubelogin` directory. This path can be overridden specifying the `--token-cache-dir` parameter. +In this sign-in method, the access token and the refresh token are cached in the *${HOME}/.kube/cache/kubelogin* directory. To override this path, include the `--token-cache-dir` parameter. -If your Azure AD integrated cluster uses Kubernetes version 1.24 or earlier, you need to manually convert the kubeconfig format by running the following commands. +If your AKS Microsoft Entra integrated cluster uses Kubernetes 1.24 or earlier, you must manually convert the kubeconfig file format by running the following commands: ```bash export KUBECONFIG=/path/to/kubeconfig kubelogin convert-kubeconfig ``` -Run `kubectl` command to get node information. +Run this kubectl command to get node information: ```bash kubectl get nodes ``` -To clean up cached tokens, run the following command. +To clean up cached tokens, run the following command: ```bash kubelogin remove-tokens ``` > [!NOTE]-> Device code sign in method doesn't work when Conditional Access policy is configured on Microsoft Entra tenant. Use the [web browser interactive mode][web-browser-interactive-mode] instead. +> The device code sign-in method doesn't work when a Conditional Access policy is configured on a Microsoft Entra tenant. In this scenario, use the [web browser interactive method][web-browser-interactive-method]. -## Using the Azure CLI +### Azure CLI -Authenticating using the Azure CLI method uses the already signed in context performed by the Azure CLI to get the access token. The token is issued in the same Microsoft Entra tenant as with `az login`. --`kubelogin` doesn't write the tokens to the token cache file. It's already managed by the Azure CLI. +The Azure CLI authentication method uses the signed-in context that the Azure CLI establishes to get the access token. The token is issued in the same Microsoft Entra tenant as `az login`. kubelogin doesn't write tokens to the token cache file because they are already managed by the Azure CLI. > [!NOTE]-> This authentication method only works with AKS-managed Microsoft Entra ID. +> This authentication method works only with AKS managed Microsoft Entra ID. ++The following example shows how to use the Azure CLI method to authenticate: ```bash az login export KUBECONFIG=/path/to/kubeconfig kubelogin convert-kubeconfig -l azurecli ``` -Run `kubectl` command to get node information. +Run this kubectl command to get node information: ```bash kubectl get nodes ``` -When the Azure CLI's config directory is outside the $`{HOME}` directory, specify the parameter `--azure-config-dir` in `convert-kubeconfig` subcommand. 
+
+### Web browser interactive

-## Using an interactive web browser
+The web browser interactive method of authentication automatically opens a web browser to sign in the user. After the user is authenticated, the browser redirects back to a local web server with the verified credentials. This authentication method complies with Conditional Access policy.

-Interactive web browser authentication automatically opens a web browser to log in the user. Once authenticated, the browser redirects back to a local web server with the credentials. This authentication method complies with Conditional Access policy.
+When you authenticate by using this method, the access token is cached in the *${HOME}/.kube/cache/kubelogin* directory. You can override this path by using the `--token-cache-dir` parameter.

-When you authenticate using this method, the access token is cached in the `${HOME}/.kube/cache/kubelogin` directory. This path can be overridden by specifying the `--token-cache-dir` parameter.
+#### Bearer token

-The following example shows how to use a bearer token with interactive flow.
+The following example shows how to use a bearer token with the web browser interactive flow:

```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l interactive
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```

-The following example shows how to use Proof-of-Possession (PoP) tokens with interactive flow.
+#### Proof-of-Possession token
+
+The following example shows how to use a Proof-of-Possession (PoP) token with the web browser interactive flow:

```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l interactive --pop-enabled --pop-claims "u=/ARM/ID/OF/CLUSTER"
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```

-## Using a service principal
+### Service principal
+
+This authentication method uses a service principal to sign in the user. You can provide the credential by setting an environment variable or by passing it in a command-line argument. The supported credentials are a password or a Personal Information Exchange (PFX) client certificate.

-This authentication method uses a service principal to sign in. The credential may be provided using an environment variable or command-line argument. The supported credentials are password and pfx client certificate.
+Before you use this method, consider the following limitations:

-The following are limitations to consider before using this method:
+* This method works only with managed Microsoft Entra ID.
+* The service principal can be a member of a maximum of 200 [Microsoft Entra groups][microsoft-entra-group-membership].
-* This only works with managed Microsoft Entra ID
-* The service principal can be member of a maximum of [200 Microsoft Entra ID groups][microsoft-entra-group-membership].

+#### Environment variables

-The following examples show how to set up a client secret using an environment variable.
+The following example shows how to set up a client secret by using environment variables:

```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l spn

-export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<spn client id>
-export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<spn secret>
+export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<Service Principal Name (SPN) client ID>
+export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<SPN secret>
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```

+Alternatively, you can set the credential by using these environment variables:
+
```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l spn

-export AZURE_CLIENT_ID=<spn client id>
-export AZURE_CLIENT_SECRET=<spn secret>
+export AZURE_CLIENT_ID=<SPN client ID>
+export AZURE_CLIENT_SECRET=<SPN secret>
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```

-The following example shows how to set up a client secret in a command-line argument.
+#### Command-line argument
+
+The following example shows how to set up a client secret in a command-line argument:

```bash
export KUBECONFIG=/path/to/kubeconfig

-kubelogin convert-kubeconfig -l spn --client-id <spn client id> --client-secret <spn client secret>
+kubelogin convert-kubeconfig -l spn --client-id <SPN client ID> --client-secret <SPN client secret>
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```

> [!WARNING]-> This method leaves the secret in the kubeconfig file.
+> The command-line argument method stores the secret in the kubeconfig file.
+
+#### Client certificate

-The following examples show how to set up a client secret using a client certificate.
+The following example shows how to set up a client certificate by using environment variables:

```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l spn

-export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<spn client id>
+export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<SPN client ID>
export AAD_SERVICE_PRINCIPAL_CLIENT_CERTIFICATE=/path/to/cert.pfx-export AAD_SERVICE_PRINCIPAL_CLIENT_CERTIFICATE_PASSWORD=<pfx password>
+export AAD_SERVICE_PRINCIPAL_CLIENT_CERTIFICATE_PASSWORD=<PFX password>
```

-Run `kubectl get nodes` command to get node information in ps output format.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```

+Alternatively, you can set the certificate credential by using these environment variables:
+
```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l spn

-export AZURE_CLIENT_ID=<spn client id>
+export AZURE_CLIENT_ID=<SPN client ID>
export AZURE_CLIENT_CERTIFICATE_PATH=/path/to/cert.pfx-export AZURE_CLIENT_CERTIFICATE_PASSWORD=<pfx password>
+export AZURE_CLIENT_CERTIFICATE_PASSWORD=<PFX password>
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```
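If you need to produce the PFX file from an existing certificate and key, one possible approach uses OpenSSL. This is a sketch with illustrative file names; it isn't part of the kubelogin tooling:

```bash
# Bundle an existing PEM certificate and private key into a password-protected PFX file.
openssl pkcs12 -export -in cert.pem -inkey key.pem -out cert.pfx
```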
-The following example shows how to set up a Proof-of-Possession (PoP) token using a client secret from environment variables. +#### PoP token and environment variables
+
+The following example shows how to set up a PoP token by using a client secret supplied through environment variables:

```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l spn --pop-enabled --pop-claims "u=/ARM/ID/OF/CLUSTER"

-export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<spn client id>
-export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<spn secret>
+export AAD_SERVICE_PRINCIPAL_CLIENT_ID=<SPN client ID>
+export AAD_SERVICE_PRINCIPAL_CLIENT_SECRET=<SPN secret>
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```

-## Using a managed identity
+### Managed identity
+
+Use the [managed identity][managed-identity-overview] authentication method for applications that run on Azure resources that support managed identities, such as an Azure virtual machine, a virtual machine scale set, or Azure Cloud Shell.

-The [managed identity][managed-identity-overview] authentication method should be used for applications to use when connecting to resources that support Microsoft Entra authentication. For example, accessing Azure services such as Azure Virtual Machine, Azure Virtual Machine Scale Sets, Azure Cloud Shell, etc.
+#### Default managed identity

-The following example shows how to use the default managed identity.
+The following example shows how to use the default managed identity:

```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l msi
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```

-The following example shows how to use a managed identity with a specific identity.
+#### Specific identity
+
+The following example shows how to use a managed identity with a specific identity:

```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l msi --client-id <msi-client-id>
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```

-## Using a workload identity
+### Workload identity

-This authentication method uses Microsoft Entra ID federated identity credentials to authenticate to Kubernetes clusters with Microsoft Entra ID integration. It works by setting the environment variables:
+The workload identity authentication method uses identity credentials that are federated with Microsoft Entra to authenticate access to AKS clusters. The method uses Microsoft Entra integrated authentication. It works by setting the following environment variables:

-* **AZURE_CLIENT_ID**: the Microsoft Entra ID application ID that is federated with workload identity
-* **AZURE_TENANT_ID**: the Microsoft Entra ID tenant ID
-* **AZURE_FEDERATED_TOKEN_FILE**: the file containing signed assertion of workload identity. For example, Kubernetes projected service account (jwt) token
-* **AZURE_AUTHORITY_HOST**: the base URL of a Microsoft Entra ID authority. For example, `https://login.microsoftonline.com/`.
+* `AZURE_CLIENT_ID`: The Microsoft Entra application ID that is federated with the workload identity.
+* `AZURE_TENANT_ID`: The Microsoft Entra tenant ID.
+* `AZURE_FEDERATED_TOKEN_FILE`: The file that contains a signed assertion of the workload identity, like a Kubernetes projected service account (JWT) token.
+* `AZURE_AUTHORITY_HOST`: The base URL of a Microsoft Entra authority. For example, `https://login.microsoftonline.com/`.

-With [workload identity][workload-identity], it's possible to access Kubernetes clusters from CI/CD system such as GitHub, ArgoCD, etc. without storing Service Principal credentials in those external systems. To configure OIDC federation from GitHub, see the following [example][oidc-federation-github].
+You can use a [workload identity][workload-identity] to access Kubernetes clusters from CI/CD systems like GitHub or ArgoCD without storing service principal credentials in the external systems. To configure OpenID Connect (OIDC) federation from GitHub, see the [OIDC federation example][oidc-federation-github].

-The following example shows how to use a workload identity.
+The following example shows how to use a workload identity:

```bash
export KUBECONFIG=/path/to/kubeconfig

kubelogin convert-kubeconfig -l workloadidentity
```

-Run `kubectl` command to get node information.
+Run this kubectl command to get node information:

```bash
kubectl get nodes
```
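As a sketch, a CI/CD job that uses federated credentials might prepare its environment like this before converting the kubeconfig. All values are illustrative placeholders that a real pipeline or identity webhook would supply:

```bash
# Illustrative placeholder values; a real pipeline injects these.
export AZURE_CLIENT_ID=00000000-0000-0000-0000-000000000000
export AZURE_TENANT_ID=11111111-1111-1111-1111-111111111111
export AZURE_FEDERATED_TOKEN_FILE=/var/run/secrets/azure/tokens/azure-identity-token
export AZURE_AUTHORITY_HOST=https://login.microsoftonline.com/

export KUBECONFIG=/path/to/kubeconfig
kubelogin convert-kubeconfig -l workloadidentity
```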
-## Using Kubelogin with AKS
+## How to use kubelogin with AKS

-AKS uses a pair of first party Azure AD applications. These application IDs are the same in all environments.
+AKS uses a pair of first-party Microsoft Entra applications. These application IDs are the same in all environments.

-The AKS Microsoft Entra ID Server application ID used by the server side is: `6dae42f8-4368-4678-94ff-3960e28e3630`. The access token accessing AKS clusters need to be issued for this application. In most of kubelogin authentication modes, `--server-id` is a required parameter with `kubelogin get-token`.
+The AKS Microsoft Entra server application ID that the server side uses is `6dae42f8-4368-4678-94ff-3960e28e3630`. The access token used to access AKS clusters must be issued for this application. In most kubelogin authentication methods, you must use `--server-id` with `kubelogin get-token`.

-The AKS Microsoft Entra ID client application ID used by kubelogin to perform public client authentication on behalf of the user is: `80faf920-1908-4b52-b5ef-a8e7bedfc67a`. The client application ID is used as part of device code and web browser interactive authentication methods.
+The AKS Microsoft Entra client application ID that kubelogin uses to perform public client authentication on behalf of the user is `80faf920-1908-4b52-b5ef-a8e7bedfc67a`. The client application ID is used in device code and web browser interactive authentication methods.
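For illustration, a manual token request issued against the AKS server application might look like the following sketch, here using the Azure CLI login mode:

```bash
# 6dae42f8-4368-4678-94ff-3960e28e3630 is the fixed AKS server application ID.
kubelogin get-token -l azurecli --server-id 6dae42f8-4368-4678-94ff-3960e28e3630
```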
-## Next steps
+## Related content

-* Learn how to integrate AKS with Microsoft Entra ID with our [AKS-managed Microsoft Entra integration][aks-managed-microsoft-entra-integration-guide] how-to guide.
-* To get started with managed identities in AKS, see [Use a managed identity in AKS][use-managed-identity-aks].
-* To get started with workload identities in AKS, see [Use a workload identity in AKS][use-workload-identity-aks].
+* Learn how to integrate AKS with Microsoft Entra ID in the [AKS managed Microsoft Entra ID integration][aks-managed-microsoft-entra-integration-guide] how-to article.
+* To get started with managed identities in AKS, see [Use a managed identity in AKS][use-a-managed-identity-in-aks].
+* To get started with workload identities in AKS, see [Use a workload identity in AKS][use-a-workload-identity-in-aks].

<!-- LINKS - internal -->+[authentication-methods]: #authentication-methods
[aks-managed-microsoft-entra-id]: managed-azure-ad.md
[oauth-on-behalf-of]: ../active-directory/develop/v2-oauth2-on-behalf-of-flow.md-[web-browser-interactive-mode]: #using-an-interactive-web-browser
+[web-browser-interactive-method]: #web-browser-interactive
[microsoft-entra-group-membership]: /entra/identity/hybrid/connect/how-to-connect-fed-group-claims
[managed-identity-overview]: /entra/identity/managed-identities-azure-resources/overview
[workload-identity]: /entra/workload-id/workload-identities-overview
[entra-id-application-roles]: /entra/external-id/customers/how-to-use-app-roles-customers
[aks-managed-microsoft-entra-integration-guide]: managed-azure-ad.md
-[use-managed-identity-aks]: use-managed-identity.md
-[use-workload-identity-aks]: workload-identity-overview.md
+[use-a-managed-identity-in-aks]: use-managed-identity.md
+[use-a-workload-identity-in-aks]: workload-identity-overview.md

<!-- LINKS - external -->
[client-go-cred-plugin]: https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins |
aks | Quick Kubernetes Deploy Bicep Extensibility Kubernetes Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider.md | Last updated 01/11/2024 Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this quickstart, you: -* Deploy an AKS cluster using the Bicep extensibility Kubernetes provider (preview). -* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. -+- Deploy an AKS cluster using the Bicep extensibility Kubernetes provider (preview). +- Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. > [!IMPORTANT] > The Bicep Kubernetes provider is currently in preview. You can enable the feature from the [Bicep configuration file](../../azure-resource-manager/bicep/bicep-config.md#enable-experimental-features) by adding: Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui ## Before you begin -* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md). +This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. ++- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] +- Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). [!INCLUDE [About Bicep](../../../includes/resource-manager-quickstart-bicep-introduction.md)] -* To set up your environment for Bicep development, see [Install Bicep tools](../../azure-resource-manager/bicep/install.md). After completing the steps, you have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) version or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az). -* To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see the following section. Otherwise, skip to [Review the Bicep file](#review-the-bicep-file). -* Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). -* To deploy a Bicep file, you need write access on the resources you deploy and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. 
For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). +- To set up your environment for Bicep development, see [Install Bicep tools](../../azure-resource-manager/bicep/install.md). After completing the steps, you have [Visual Studio Code](https://code.visualstudio.com/) and the [Bicep extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep). You also have either the latest [Azure CLI](/cli/azure/) version or the latest [Azure PowerShell module](/powershell/azure/new-azureps-module-az). +- To create an AKS cluster using a Bicep file, you provide an SSH public key. If you need this resource, see the following section. Otherwise, skip to [Review the Bicep file](#review-the-bicep-file). +- To deploy a Bicep file, you need write access on the resources you deploy and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). ### Create an SSH key pair 1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.-2. Create an SSH key pair using the [`az sshkey create`][az-sshkey-create] Azure CLI command or the `ssh-keygen` command. +1. Create an SSH key pair using the [az sshkey create][az-sshkey-create] Azure CLI command or the `ssh-keygen` command. - ```azurecli-interactive + ```azurecli # Create an SSH key pair using Azure CLI az sshkey create --name "mySSHKey" --resource-group "myResourceGroup" To deploy the application, you use a manifest file to create all the objects req :::image type="content" source="media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/aks-store-architecture.png" alt-text="Screenshot of Azure Store sample architecture." lightbox="media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/aks-store-architecture.png"::: -* **Store front**: Web application for customers to view products and place orders. -* **Product service**: Shows product information. -* **Order service**: Places orders. -* **Rabbit MQ**: Message queue for an order queue. +- **Store front**: Web application for customers to view products and place orders. +- **Product service**: Shows product information. +- **Order service**: Places orders. +- **Rabbit MQ**: Message queue for an order queue. > [!NOTE] > We don't recommend running stateful containers, such as Rabbit MQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure CosmosDB or Azure Service Bus. To deploy the application, you use a manifest file to create all the objects req For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). -2. Open `main.bicep` in Visual Studio Code. -3. Press <kbd>Ctrl+Shift+P</kbd> to open **Command Palette**. -4. Search for **bicep**, and then select **Bicep: Import Kubernetes Manifest**. + If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system. ++1. Open `main.bicep` in Visual Studio Code. +1. Press <kbd>Ctrl+Shift+P</kbd> to open **Command Palette**. 
+1. Search for **bicep**, and then select **Bicep: Import Kubernetes Manifest**.

    :::image type="content" source="./media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/bicep-extensibility-kubernetes-provider-import-kubernetes-manifest.png" alt-text="Screenshot of Visual Studio Code import Kubernetes Manifest." lightbox="./media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/bicep-extensibility-kubernetes-provider-import-kubernetes-manifest.png":::

-5. Select `aks-store-quickstart.yaml` from the prompt. This process creates an `aks-store-quickstart.bicep` file in the same folder.
-6. Open `main.bicep` and add the following Bicep at the end of the file to reference the newly created `aks-store-quickstart.bicep` module:
+1. Select `aks-store-quickstart.yaml` from the prompt. This process creates an `aks-store-quickstart.bicep` file in the same folder.
+1. Open `main.bicep` and add the following Bicep at the end of the file to reference the newly created `aks-store-quickstart.bicep` module:

    ```bicep
    module kubernetes './aks-store-quickstart.bicep' = {
To deploy the application, you use a manifest file to create all the objects req
    }
    ```

-7. Save both `main.bicep` and `aks-store-quickstart.bicep`.
+1. Save both `main.bicep` and `aks-store-quickstart.bicep`.

## Deploy the Bicep file

### [Azure CLI](#tab/azure-cli)

-1. Create an Azure resource group using the [`az group create`][az-group-create] command.
+1. Create an Azure resource group using the [az group create][az-group-create] command.

-    ```azurecli-interactive
+    ```azurecli
    az group create --name myResourceGroup --location eastus
    ```

-2. Deploy the Bicep file using the [`az deployment group create`][az-deployment-group-create] command.
+1. Deploy the Bicep file using the [az deployment group create][az-deployment-group-create] command.

-    ```azurecli-interactive
+    ```azurecli
    az deployment group create --resource-group myResourceGroup --template-file main.bicep --parameters clusterName=<cluster-name> dnsPrefix=<dns-prefix> linuxAdminUsername=<linux-admin-username> sshRSAPublicKey='<ssh-key>'
    ```

### [Azure PowerShell](#tab/azure-powershell)

-1. Create an Azure resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.
+1. Create an Azure resource group using the [New-AzResourceGroup][new-azresourcegroup] cmdlet.

-    ```azurepowershell-interactive
+    ```azurepowershell
    New-AzResourceGroup -Name myResourceGroup -Location eastus
    ```

-2. Deploy the Bicep file using the [`New-AzResourceGroupDeployment`][new-azresourcegroupdeployment] cmdlet.
+1. Deploy the Bicep file using the [New-AzResourceGroupDeployment][new-azresourcegroupdeployment] cmdlet.

-    ```azurepowershell-interactive
+    ```azurepowershell
    New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./main.bicep -clusterName <cluster-name> -dnsPrefix <dns-prefix> -linuxAdminUsername <linux-admin-username> -sshRSAPublicKey "<ssh-key>"
    ```

To deploy the application, you use a manifest file to create all the objects req

Provide the following values in the commands:

-* **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
-* **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*.
-* **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
-* **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).
+- **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*.
+- **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*.
+- **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*.
+- **SSH RSA Public Key**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*).

It takes a few minutes to create the AKS cluster. Wait for the cluster to deploy successfully before you move on to the next step.

## Validate the Bicep deployment

1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. On the Azure portal menu or from the **Home** page, navigate to your AKS cluster.
-3. Under **Kubernetes resources**, select **Services and ingresses**.
-4. Find the **store-front** service and copy the value for **External IP**.
-5. Open a web browser to the external IP address of your service to see the Azure Store app in action.
+1. On the Azure portal menu or from the **Home** page, navigate to your AKS cluster.
+1. Under **Kubernetes resources**, select **Services and ingresses**.
+1. Find the **store-front** service and copy the value for **External IP**.
+1. Open a web browser to the external IP address of your service to see the Azure Store app in action.

    :::image type="content" source="media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-bicep-extensibility-kubernetes-provider/aks-store-application.png":::
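If you prefer to validate from the command line, the following sketch retrieves the same service information; it assumes the resource group and cluster name used in the deployment step:

```azurecli
az aks get-credentials --resource-group myResourceGroup --name <cluster-name>
kubectl get service store-front
```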
If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up un

### [Azure CLI](#tab/azure-cli)

-* Remove the resource group, container service, and all related resources using the [`az group delete`][az-group-delete] command.
+Remove the resource group, container service, and all related resources using the [az group delete][az-group-delete] command.

-    ```azurecli-interactive
-    az group delete --name myResourceGroup --yes --no-wait
-    ```
+```azurecli
+az group delete --name myResourceGroup --yes --no-wait
+```

### [Azure PowerShell](#tab/azure-powershell)

-* Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
+Remove the resource group, container service, and all related resources using the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet.

-    ```azurepowershell-interactive
-    Remove-AzResourceGroup -Name myResourceGroup
-    ```
+```azurepowershell
+Remove-AzResourceGroup -Name myResourceGroup
+``` |
aks | Quick Kubernetes Deploy Bicep | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md | Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui * Deploy an AKS cluster using Bicep. * Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. - > [!NOTE] > To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements. Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -* This article requires Azure CLI version 2.0.64 or later. If you're using Azure Cloud Shell, the latest version is already installed. -* This article requires an existing Azure resource group. If you need to create one, you can use the [`az group create`][az-group-create] command. +* This article requires Azure CLI version 2.0.64 or later. If you're using Azure Cloud Shell, the latest version is already installed there. +* This article requires an existing Azure resource group. If you need to create one, you can use the [az group create][az-group-create] command. ### [Azure PowerShell](#tab/azure-powershell) -* If you're running PowerShell locally, install the `Az PowerShell` module. If using Azure Cloud Shell, the latest version is already installed. +* If you're running PowerShell locally, install the `Az PowerShell` module. If you're using Azure Cloud Shell, the latest version is already installed there. * You need the Bicep CLI. For more information, see [Azure PowerShell](../../azure-resource-manager/bicep/install.md#azure-powershell).-* This article requires an existing Azure resource group. If you need to create one, you can use the [`New-AzAksCluster`][new-az-aks-cluster] cmdlet. +* This article requires an existing Azure resource group. If you need to create one, you can use the [New-AzResourceGroup][new-azresourcegroup] cmdlet. Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui ### Create an SSH key pair 1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.-2. Create an SSH key pair using the [`az sshkey create`][az-sshkey-create] Azure CLI command or the `ssh-keygen` command. +2. Create an SSH key pair using the [az sshkey create][az-sshkey-create] Azure CLI command or the `ssh-keygen` command. - ```azurecli-interactive + ```azurecli # Create an SSH key pair using Azure CLI az sshkey create --name "mySSHKey" --resource-group "myResourceGroup" For more AKS samples, see the [AKS quickstart templates][aks-quickstart-template > [!IMPORTANT] > The Bicep file sets the `clusterName` param to the string *aks101cluster*. If you want to use a different cluster name, make sure to update the string to your preferred cluster name before saving the file to your computer. -2. Deploy the Bicep file using either Azure CLI or Azure PowerShell. +1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
### [Azure CLI](#tab/azure-cli)

-    ```azurecli-interactive
+    ```azurecli
    az deployment group create --resource-group myResourceGroup --template-file main.bicep --parameters dnsPrefix=<dns-prefix> linuxAdminUsername=<linux-admin-username> sshRSAPublicKey='<ssh-key>'
    ```

### [Azure PowerShell](#tab/azure-powershell)

-    ```azurepowershell-interactive
+    ```azurepowershell
    New-AzResourceGroup -Name myResourceGroup -Location eastus
    New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./main.bicep -dnsPrefix <dns-prefix> -linuxAdminUsername <linux-admin-username> -sshRSAPublicKey "<ssh-key>"
    ```

To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl

### [Azure CLI](#tab/azure-cli)

-1. Install `kubectl` locally using the [`az aks install-cli`][az-aks-install-cli] command.
+1. Install `kubectl` locally using the [az aks install-cli][az-aks-install-cli] command.

-    ```azurecli-interactive
+    ```azurecli
    az aks install-cli
    ```

-2. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.

-    ```azurecli-interactive
+    ```azurecli
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
    ```

-3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
+1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.

-    ```azurecli-interactive
+    ```azurecli
    kubectl get nodes
    ```

To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl

### [Azure PowerShell](#tab/azure-powershell)

-1. Install `kubectl` locally using the [`Install-AzAksKubectl`][install-azakskubectl] cmdlet.
+1. Install `kubectl` locally using the [Install-AzAksKubectl][install-azakskubectl] cmdlet.

-    ```azurepowershell-interactive
+    ```azurepowershell
    Install-AzAksKubectl
    ```

-2. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.
+1. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them.

-    ```azurepowershell-interactive
+    ```azurepowershell
    Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster
    ```

-3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
+1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.

-    ```azurepowershell-interactive
+    ```azurepowershell
    kubectl get nodes
    ```

To deploy the application, you use a manifest file to create all the objects req

   For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).

-2. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+    If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system.
+
+1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.

    ```console
    kubectl apply -f aks-store-quickstart.yaml
To deploy the application, you use a manifest file to create all the objects req
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.

-1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make all pods are `Running` before proceeding.
+1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+
+    ```console
+    kubectl get pods
+    ```

-2. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
+1. Check for a public IP address for the store-front application. Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.

    ```console
    kubectl get service store-front --watch
When the application runs, a Kubernetes service exposes the application front en
    store-front   LoadBalancer   10.0.100.10   <pending>     80:30025/TCP   4h4m

-3. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.
+1. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.

    The following example output shows a valid public IP address assigned to the service:

When the application runs, a Kubernetes service exposes the application front en
    store-front   LoadBalancer   10.0.100.10   20.62.159.19   80:30025/TCP   4h5m

-4. Open a web browser to the external IP address of your service to see the Azure Store app in action.
+1. Open a web browser to the external IP address of your service to see the Azure Store app in action.

    :::image type="content" source="media/quick-kubernetes-deploy-bicep/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-bicep/aks-store-application.png":::

If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up un

### [Azure CLI](#tab/azure-cli)

-* Remove the resource group, container service, and all related resources using the [`az group delete`][az-group-delete] command.
+* Remove the resource group, container service, and all related resources using the [az group delete][az-group-delete] command.

-    ```azurecli-interactive
+    ```azurecli
    az group delete --name myResourceGroup --yes --no-wait
    ```

### [Azure PowerShell](#tab/azure-powershell)

-* Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet
+* Remove the resource group, container service, and all related resources using the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet.

-    ```azurepowershell-interactive
+    ```azurepowershell
    Remove-AzResourceGroup -Name myResourceGroup
    ``` |
aks | Quick Kubernetes Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md | Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui - Deploy an AKS cluster using the Azure CLI. - Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. - > [!NOTE] > To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements. This quickstart assumes a basic understanding of Kubernetes concepts. For more i [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -- This article requires version 2.0.64 or later of the Azure CLI. If you are using Azure Cloud Shell, then the latest version is already installed.+- This article requires version 2.0.64 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed there. - Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).-- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account](/cli/azure/account) command.+- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account set](/cli/azure/account#az-account-set) command.
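For example, the following sketch selects a billing subscription before any resources are created; the subscription ID is a placeholder:

```azurecli
az account set --subscription 00000000-0000-0000-0000-000000000000
```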
## Create a resource group

An [Azure resource group][azure-resource-group] is a logical group in which Azur

The following example creates a resource group named *myResourceGroup* in the *eastus* location.

-Create a resource group using the [`az group create`][az-group-create] command.
+Create a resource group using the [az group create][az-group-create] command.

```azurecli
az group create --name myResourceGroup --location eastus
```

Create a resource group using the [`az group create`][az-group-create] command.

## Create an AKS cluster

-To create an AKS cluster, use the [`az aks create`][az-aks-create] command. The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity.
+To create an AKS cluster, use the [az aks create][az-aks-create] command. The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity.

```azurecli
az aks create \
To create an AKS cluster, use the [`az aks create`][az-aks-create] command. The

## Connect to the cluster

-To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. To install `kubectl` locally, use the [`az aks install-cli`][az-aks-install-cli] command.
+To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. To install `kubectl` locally, call the [az aks install-cli][az-aks-install-cli] command.

-1. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them. +1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.

    ```azurecli
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
    ```

-1. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes.
+1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes.

    ```azurecli
    kubectl get nodes
To deploy the application, you use a manifest file to create all the objects req
    If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system.

-1. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest.
+1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.

    ```azurecli
    kubectl apply -f aks-store-quickstart.yaml
To deploy the application, you use a manifest file to create all the objects req
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.

-1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+
+    ```console
+    kubectl get pods
+    ```

-1. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
+1. Check for a public IP address for the store-front application. Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.

    ```azurecli
    kubectl get service store-front --watch
When the application runs, a Kubernetes service exposes the application front en

## Delete the cluster

-If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges. Call the [az group delete][az-group-delete] command to remove the resource group, container service, and all related resources.

-- Remove the resource group, container service, and all related resources using the [`az group delete`][az-group-delete] command.
-
-    ```azurecli
-    az group delete --name myResourceGroup --yes --no-wait
-    ```
+    ```azurecli
+    az group delete --name myResourceGroup --yes --no-wait
+    ```

-    > [!NOTE]
-    > The AKS cluster was created with a system-assigned managed identity, which is the default identity option used in this quickstart. The platform manages this identity so you don't need to manually remove it.
+    > [!NOTE]
+    > The AKS cluster was created with a system-assigned managed identity, which is the default identity option used in this quickstart.
The platform manages this identity so you don't need to manually remove it. ## Next steps To learn more about AKS and walk through a complete code-to-deployment example, <!-- LINKS - internal --> [kubernetes-concepts]: ../concepts-clusters-workloads.md-[aks-identity-concepts]: ../concepts-identity.md [aks-tutorial]: ../tutorial-kubernetes-prepare-app.md [azure-resource-group]: ../../azure-resource-manager/management/overview.md-[az-account]: /cli/azure/account [az-aks-create]: /cli/azure/aks#az-aks-create [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials [az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli To learn more about AKS and walk through a complete code-to-deployment example, [az-group-delete]: /cli/azure/group#az-group-delete [kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json-[intro-azure-linux]: ../../azure-linux/intro-azure-linux.md -[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json +[baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json |
aks | Quick Kubernetes Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-portal.md | Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui - Deploy an AKS cluster using the Azure portal. - Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. - > [!NOTE] > To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements. It takes a few minutes to create the AKS cluster. When your deployment is comple To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. If you're unfamiliar with the Cloud Shell, review [Overview of Azure Cloud Shell](../../cloud-shell/overview.md). -1. Open Cloud Shell using the `>_` button on the top of the Azure portal. -- > [!NOTE] - > To perform these operations in a local shell installation: - > - > 1. Verify Azure CLI or Azure PowerShell is installed. - > 1. Connect to Azure via the `az login` or `Connect-AzAccount` command. +If you're using Cloud Shell, open it with the `>_` button on the top of the Azure portal. If you're using PowerShell locally, connect to Azure via the `Connect-AzAccount` command. If you're using Azure CLI locally, connect to Azure via the `az login` command. ### [Azure CLI](#tab/azure-cli) -1. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them. +1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them. ```azurecli az aks get-credentials --resource-group myResourceGroup --name myAKSCluster To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl ### [Azure PowerShell](#tab/azure-powershell) -1. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them. +1. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them. - ```azurepowershell-interactive + ```azurepowershell Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ``` 1. Verify the connection to your cluster using `kubectl get` to return a list of the cluster nodes. - ```azurepowershell-interactive + ```azurepowershell kubectl get nodes ``` To deploy the application, you use a manifest file to create all the objects req For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests). -1. 
If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system. +    If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system.
+
1. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest:

    ```console
    kubectl apply -f aks-store-quickstart.yaml
To deploy the application, you use a manifest file to create all the objects req
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.

-1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make all pods are `Running` before proceeding.
+1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+
+    ```console
+    kubectl get pods
+    ```

-1. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
+1. Check for a public IP address for the store-front application. Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.

    ```azurecli
    kubectl get service store-front --watch |
aks | Quick Kubernetes Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-powershell.md | Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui - Deploy an AKS cluster using Azure PowerShell. - Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. - > [!NOTE] > To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements. An [Azure resource group][azure-resource-group] is a logical group in which Azur The following example creates a resource group named *myResourceGroup* in the *eastus* location. -- Create a resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet.+- Create a resource group using the [New-AzResourceGroup][new-azresourcegroup] cmdlet. ```azurepowershell New-AzResourceGroup -Name myResourceGroup -Location eastus The following example creates a resource group named *myResourceGroup* in the *e ## Create AKS cluster -To create an AKS cluster, use the [`New-AzAksCluster`][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity. +To create an AKS cluster, use the [New-AzAksCluster][new-azakscluster] cmdlet. The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity. ```azurepowershell New-AzAksCluster -ResourceGroupName myResourceGroup ` After a few minutes, the command completes and returns information about the clu ## Connect to the cluster -To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. To install `kubectl` locally, use the `Install-AzAksCliTool` cmdlet. +To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. To install `kubectl` locally, call the `Install-AzAksCliTool` cmdlet. -1. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them. +1. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them. ```azurepowershell Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ``` -1. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes. +1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes. ```azurepowershell kubectl get nodes To deploy the application, you use a manifest file to create all the objects req If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system. -1. 
Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest. +1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.

    ```console
    kubectl apply -f aks-store-quickstart.yaml
To deploy the application, you use a manifest file to create all the objects req
When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.

-1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make all pods are `Running` before proceeding.
+1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. Make sure all pods are `Running` before proceeding.
+
+    ```console
+    kubectl get pods
+    ```

-1. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument.
+1. Check for a public IP address for the store-front application. Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.

    ```azurecli-interactive
    kubectl get service store-front --watch
When the application runs, a Kubernetes service exposes the application front en

## Delete the cluster

-If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges. Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet.
+If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up unnecessary resources to avoid Azure charges. Remove the resource group, container service, and all related resources by calling the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet.

```azurepowershell
Remove-AzResourceGroup -Name myResourceGroup |
aks | Quick Kubernetes Deploy Rm Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md | Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using an ARM template' description: Learn how to quickly deploy a Kubernetes cluster using an Azure Resource Manager template and deploy an application in Azure Kubernetes Service (AKS). Previously updated : 01/11/2024 Last updated : 01/12/2024 #Customer intent: As a developer or cluster operator, I want to quickly deploy an AKS cluster and deploy an application so that I can see how to run applications using the managed Kubernetes service in Azure. -* Deploy an AKS cluster using an Azure Resource Manager template. -* Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. +- Deploy an AKS cluster using an Azure Resource Manager template. +- Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario. > [!NOTE] > To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements. ## Before you begin -* This quickstart assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts]. -* You need an Azure account with an active subscription. If you don't have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -* To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md). -* [!INCLUDE [About Azure Resource Manager](../../../includes/resource-manager-quickstart-introduction.md)] -* If your environment meets the prerequisites and you're familiar with ARM templates, select **Deploy to Azure** to open the template in the Azure portal. +This article assumes a basic understanding of Kubernetes concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)](../concepts-clusters-workloads.md). - [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json) +- [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ++- Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). ++- To deploy an ARM template, you need write access on the resources you're deploying and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). 
++After you deploy the cluster from the template, you can use either Azure CLI or Azure PowerShell to connect to the cluster and deploy the sample application. ### [Azure CLI](#tab/azure-cli) [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)] -* This article requires Azure CLI version 2.0.64 or later. If using Azure Cloud Shell, the latest version is already installed. +This article requires Azure CLI version 2.0.64 or later. If you're using Azure Cloud Shell, the latest version is already installed there. ### [Azure PowerShell](#tab/azure-powershell) -* If you're running PowerShell locally, install the `Az PowerShell` module. If using Azure Cloud Shell, the latest version is already installed. +If you're running PowerShell locally, install the `Az PowerShell` module. If you're using Azure Cloud Shell, the latest version is already installed there. -* To create an AKS cluster using an ARM template, you provide an SSH public key. If you need this resource, see the following section. Otherwise, skip [Review the template](#review-the-template). --* Make sure the identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md). --* To deploy an ARM template, you need write access on the resources you're deploying and access to all operations on the `Microsoft.Resources/deployments` resource type. For example, to deploy a virtual machine, you need `Microsoft.Compute/virtualMachines/write` and `Microsoft.Resources/deployments/*` permissions. For a list of roles and permissions, see [Azure built-in roles](../../role-based-access-control/built-in-roles.md). - ### Create an SSH key pair -To access AKS nodes, you connect using an SSH key pair (public and private), which you generate using the `ssh-keygen` command. By default, these files are created in the *~/.ssh* directory. Running the `ssh-keygen` command will overwrite any SSH key pair with the same name already existing in the given location. +To create an AKS cluster using an ARM template, you provide an SSH public key. If you need this resource, follow the steps in this section. Otherwise, skip to the [Review the template](#review-the-template) section. ++To access AKS nodes, you connect using an SSH key pair (public and private). To create an SSH key pair: 1. Go to [https://shell.azure.com](https://shell.azure.com) to open Cloud Shell in your browser.-2. Create an SSH key pair using the [`az sshkey create`][az-sshkey-create] Azure CLI command or the `ssh-keygen` command. +1. Create an SSH key pair using the [az sshkey create](/cli/azure/sshkey#az-sshkey-create) command or the `ssh-keygen` command. - ```azurecli-interactive + ```azurecli # Create an SSH key pair using Azure CLI az sshkey create --name "mySSHKey" --resource-group "myResourceGroup" + # or + # Create an SSH key pair using ssh-keygen ssh-keygen -t rsa -b 4096 ``` +1. To deploy the template, you must provide the public key from the SSH pair. To retrieve the public key, call [az sshkey show](/cli/azure/sshkey#az-sshkey-show): ++ ```azurecli + az sshkey show --name "mySSHKey" --resource-group "myResourceGroup" --query "publicKey" + ``` ++By default, the SSH key files are created in the *~/.ssh* directory. Running the `az sshkey create` or `ssh-keygen` command will overwrite any existing SSH key pair with the same name. 
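If a key pair already exists at the default path and you want to keep it, you can write the new pair to a different file instead. A minimal sketch using standard `ssh-keygen` options:

```azurecli
ssh-keygen -t rsa -b 4096 -f ~/.ssh/aks_quickstart_rsa
```

You would then pass the contents of *~/.ssh/aks_quickstart_rsa.pub* as the public key when deploying the template.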
+ For more information about creating SSH keys, see [Create and manage SSH keys for authentication in Azure][ssh-keys]. ## Review the template The template used in this quickstart is from [Azure Quickstart Templates](https: :::code language="json" source="~/quickstart-templates/quickstarts/microsoft.kubernetes/aks/azuredeploy.json"::: -The resource defined in the ARM template: --* [**Microsoft.ContainerService/managedClusters**](/azure/templates/microsoft.containerservice/managedclusters?pivots=deployment-language-arm-template) +The resource type defined in the ARM template is [**Microsoft.ContainerService/managedClusters**](/azure/templates/microsoft.containerservice/managedclusters?pivots=deployment-language-arm-template). For more AKS samples, see the [AKS quickstart templates][aks-quickstart-templates] site. For more AKS samples, see the [AKS quickstart templates][aks-quickstart-template [![Deploy to Azure](../../media/template-deployments/deploy-to-azure.svg)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.kubernetes%2Faks%2Fazuredeploy.json) -2. On the **Basics** page, leave the default values for the *OS Disk Size GB*, *Agent Count*, *Agent VM Size*, *OS Type*, and *Kubernetes Version*, and configure the following template parameters: +1. On the **Basics** page, leave the default values for the *OS Disk Size GB*, *Agent Count*, *Agent VM Size*, and *OS Type*, and configure the following template parameters: - * **Subscription**: Select an Azure subscription. - * **Resource group**: Select **Create new**. Enter a unique name for the resource group, such as *myResourceGroup*, then select **OK**. - * **Location**: Select a location, such as **East US**. - * **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*. - * **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*. - * **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*. - * **SSH public key source**: Select **Use existing public key**. - * **Key pair name**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*). + - **Subscription**: Select an Azure subscription. + - **Resource group**: Select **Create new**. Enter a unique name for the resource group, such as *myResourceGroup*, then select **OK**. + - **Location**: Select a location, such as **East US**. + - **Cluster name**: Enter a unique name for the AKS cluster, such as *myAKSCluster*. + - **DNS prefix**: Enter a unique DNS prefix for your cluster, such as *myakscluster*. + - **Linux Admin Username**: Enter a username to connect using SSH, such as *azureuser*. + - **SSH public key source**: Select **Use existing public key**. + - **Key pair name**: Copy and paste the *public* part of your SSH key pair (by default, the contents of *~/.ssh/id_rsa.pub*). -3. Select **Review + Create** > **Create**. +1. Select **Review + Create** > **Create**. It takes a few minutes to create the AKS cluster. Wait for the cluster to be successfully deployed before you move on to the next step. -## Validate the deployment +## Connect to the cluster -### Connect to the cluster --To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. `kubectl` is already installed if you use Azure Cloud Shell. +To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl][kubectl]. 
### [Azure CLI](#tab/azure-cli) -1. Install `kubectl` locally using the [`az aks install-cli`][az-aks-install-cli] command. -- ```azurecli-interactive - az aks install-cli - ``` +If you use Azure Cloud Shell, `kubectl` is already installed. To install and run `kubectl` locally, call the [az aks install-cli][az-aks-install-cli] command. -2. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them. +1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them. - ```azurecli-interactive + ```azurecli az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ``` -3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes. +1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes. - ```azurecli-interactive + ```azurecli kubectl get nodes ``` The following example output shows the three nodes created in the previous steps. Make sure the node status is *Ready*. ```output- NAME STATUS ROLES AGE VERSION - aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6 - aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6 - aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6 + NAME STATUS ROLES AGE VERSION + aks-agentpool-27442051-vmss000000 Ready agent 10m v1.27.7 + aks-agentpool-27442051-vmss000001 Ready agent 10m v1.27.7 + aks-agentpool-27442051-vmss000002 Ready agent 11m v1.27.7 ``` ### [Azure PowerShell](#tab/azure-powershell) -1. Install `kubectl` locally using the [`Install-AzAksKubectl`][install-azakskubectl] cmdlet. +If you use Azure Cloud Shell, `kubectl` is already installed. To install `kubectl` locally, call the [Install-AzAksCliTool][install-azakskubectl] cmdlet. - ```azurepowershell-interactive - Install-AzAksKubectl - ``` --2. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them. +1. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. This command downloads credentials and configures the Kubernetes CLI to use them. - ```azurepowershell-interactive + ```azurepowershell Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster ``` -3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster nodes. +1. Verify the connection to your cluster using the [kubectl get][kubectl-get] command. This command returns a list of the cluster nodes. - ```azurepowershell-interactive + ```azurepowershell kubectl get nodes ``` The following example output shows the three nodes created in the previous steps. Make sure the node status is *Ready*. 
```output- NAME STATUS ROLES AGE VERSION - aks-agentpool-41324942-0 Ready agent 6m44s v1.12.6 - aks-agentpool-41324942-1 Ready agent 6m46s v1.12.6 - aks-agentpool-41324942-2 Ready agent 6m45s v1.12.6 + NAME STATUS ROLES AGE VERSION + aks-agentpool-27442051-vmss000000 Ready agent 10m v1.27.7 + aks-agentpool-27442051-vmss000001 Ready agent 10m v1.27.7 + aks-agentpool-27442051-vmss000002 Ready agent 11m v1.27.7 ```

-### Deploy the application +## Deploy the application

To deploy the application, you use a manifest file to create all the objects required to run the [AKS Store application](https://github.com/Azure-Samples/aks-store-demo). A [Kubernetes manifest file][kubernetes-deployment] defines a cluster's desired state, such as which container images to run.

The manifest includes the following Kubernetes deployments and :::image type="content" source="media/quick-kubernetes-deploy-rm-template/aks-store-architecture.png" alt-text="Screenshot of Azure Store sample architecture." lightbox="media/quick-kubernetes-deploy-rm-template/aks-store-architecture.png":::

-* **Store front**: Web application for customers to view products and place orders. -* **Product service**: Shows product information. -* **Order service**: Places orders. -* **Rabbit MQ**: Message queue for an order queue. +- **Store front**: Web application for customers to view products and place orders. +- **Product service**: Shows product information. +- **Order service**: Places orders. +- **Rabbit MQ**: Message queue for an order queue.

> [!NOTE]
> We don't recommend running stateful containers, such as Rabbit MQ, without persistent storage for production. These are used here for simplicity, but we recommend using managed services, such as Azure CosmosDB or Azure Service Bus.

To deploy the application, you use a manifest file to create all the objects req For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).

-2. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest. + If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system. ++1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest.

    ```console
    kubectl apply -f aks-store-quickstart.yaml

To deploy the application, you use a manifest file to create all the objects req service/store-front created ```

-### Test the application +## Test the application

-1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make all pods are `Running` before proceeding. +1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. Make sure all pods are `Running` before proceeding.

-2. Check for a public IP address for the store-front application. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument. + ```console + kubectl get pods + ``` ++1. Check for a public IP address for the store-front application. Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.

    ```console
    kubectl get service store-front --watch

To deploy the application, you use a manifest file to create all the objects req store-front LoadBalancer 10.0.100.10 <pending> 80:30025/TCP 4h4m ```

-3. 
Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. +1. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process.

    The following example output shows a valid public IP address assigned to the service:

To deploy the application, you use a manifest file to create all the objects req store-front LoadBalancer 10.0.100.10 20.62.159.19 80:30025/TCP 4h5m ```

-4. Open a web browser to the external IP address of your service to see the Azure Store app in action. +1. Open a web browser to the external IP address of your service to see the Azure Store app in action.

    :::image type="content" source="media/quick-kubernetes-deploy-rm-template/aks-store-application.png" alt-text="Screenshot of AKS Store sample application." lightbox="media/quick-kubernetes-deploy-rm-template/aks-store-application.png":::

If you don't plan on going through the [AKS tutorial][aks-tutorial], clean up un ### [Azure CLI](#tab/azure-cli)

-* Remove the resource group, container service, and all related resources using the [`az group delete`][az-group-delete] command.

 - ```azurecli-interactive - az group delete --name myResourceGroup --yes --no-wait - ```

+Remove the resource group, container service, and all related resources by calling the [az group delete][az-group-delete] command.

+```azurecli +az group delete --name myResourceGroup --yes --no-wait +```

### [Azure PowerShell](#tab/azure-powershell)

-* Remove the resource group, container service, and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet +Remove the resource group, container service, and all related resources by calling the [Remove-AzResourceGroup][remove-azresourcegroup] cmdlet.

-    ```azurepowershell-interactive - Remove-AzResourceGroup -Name myResourceGroup - ``` +```azurepowershell +Remove-AzResourceGroup -Name myResourceGroup +```

- > [!NOTE] - > The AKS cluster was created with a system-assigned managed identity, which is the default identity option used in this quickstart. The platform manages this identity so you don't need to manually remove it. +> [!NOTE] +> The AKS cluster was created with a system-assigned managed identity, which is the default identity option used in this quickstart. The platform manages this identity so you don't need to manually remove it.

## Next steps

To learn more about AKS and walk through a complete code-to-deployment example, [aks-quickstart-templates]: https://azure.microsoft.com/resources/templates/?term=Azure+Kubernetes+Service <!-- LINKS - internal -->-[kubernetes-concepts]: ../concepts-clusters-workloads.md [aks-tutorial]: ../tutorial-kubernetes-prepare-app.md [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [import-azakscredential]: /powershell/module/az.aks/import-azakscredential |
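The portal flow described above isn't the only option: the same quickstart template can also be deployed from the command line with `az deployment group create`. This is a sketch only; the parameter names are assumptions inferred from the portal form, so verify them against the template's `parameters` section before use:

```azurecli
az deployment group create \
    --resource-group myResourceGroup \
    --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.kubernetes/aks/azuredeploy.json \
    --parameters clusterName=myAKSCluster dnsPrefix=myakscluster linuxAdminUsername=azureuser \
        sshRSAPublicKey="$(az sshkey show --name mySSHKey --resource-group myResourceGroup --query publicKey --output tsv)"
```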
aks | Quick Kubernetes Deploy Terraform | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-terraform.md | Title: 'Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using Terraform' description: Learn how to quickly deploy a Kubernetes cluster using Terraform and deploy an application in Azure Kubernetes Service (AKS). Previously updated : 01/11/2024 Last updated : 01/12/2024 content_well_notification: - AI-contribution

Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui * Deploy an AKS cluster using Terraform. * Run a sample multi-container application with a group of microservices and web front ends simulating a retail scenario.

- > [!NOTE] > To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.

Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui > [!NOTE] > The Azure Linux node pool is now generally available (GA). To learn about the benefits and deployment steps, see the [Introduction to the Azure Linux Container Host for AKS][intro-azure-linux].

-## Login to your Azure Account +## Log in to your Azure account ++First, log in to your Azure account and authenticate using one of the methods described in the following section.

[!INCLUDE [authenticate-to-azure.md](~/azure-dev-docs-pr/articles/terraform/includes/authenticate-to-azure.md)]

Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui resource_group_name=$(terraform output -raw resource_group_name) ```

-2. Display the name of your new Kubernetes cluster using the [`az aks list`](/cli/azure/aks#az-aks-list) command. +2. Display the name of your new Kubernetes cluster using the [az aks list](/cli/azure/aks#az-aks-list) command.

    ```azurecli-interactive
    az aks list \

To deploy the application, you use a manifest file to create all the objects req For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../concepts-clusters-workloads.md#deployments-and-yaml-manifests).

+ If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system. +

2. Deploy the application using the `kubectl apply` command and specify the name of your YAML manifest.

    ```console

When the application runs, a Kubernetes service exposes the application front en 1. Check the status of the deployed pods using the `kubectl get pods` command. Make sure all pods are `Running` before proceeding.

+ ```console + kubectl get pods + ``` +

2. Check for a public IP address for the store-front application. Monitor progress using the `kubectl get service` command with the `--watch` argument.

    ```azurecli-interactive

When the application runs, a Kubernetes service exposes the application front en sp=$(terraform output -raw sp) ```

-1. Delete the service principal using the [`az ad sp delete`](/cli/azure/ad/sp#az-ad-sp-delete) command. +1. Delete the service principal using the [az ad sp delete](/cli/azure/ad/sp#az-ad-sp-delete) command.

    ```azurecli-interactive
    az ad sp delete --id $sp |
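Because the Terraform configuration in this quickstart exposes a `resource_group_name` output, you can chain that output straight into cluster access. A sketch, assuming the output names used in the article's snippets:

```azurecli
resource_group_name=$(terraform output -raw resource_group_name)
aks_cluster_name=$(az aks list --resource-group $resource_group_name --query "[0].name" --output tsv)
az aks get-credentials --resource-group $resource_group_name --name $aks_cluster_name
```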
aks | Quick Windows Container Deploy Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md | Last updated 01/11/2024

Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this article, you use Azure CLI to deploy an AKS cluster that runs Windows Server containers. You also deploy an ASP.NET sample application in a Windows Server container to the cluster.

- > [!NOTE] > To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements.

This quickstart assumes a basic understanding of Kubernetes concepts. For more i [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]

-- This quickstart requires version 2.0.64 or later of the Azure CLI. If you are using Azure Cloud Shell, then the latest version is already installed.+- This article requires version 2.0.64 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed there.

- Make sure that the identity you're using to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../concepts-identity.md).-- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account](/cli/azure/account) command.+- If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account set](/cli/azure/account#az-account-set) command.

## Create a resource group

az aks nodepool add \ ## Connect to the cluster

-You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. If you want to install `kubectl` locally, you can use the [az aks install-cli][az-aks-install-cli] command. +You use [kubectl][kubectl], the Kubernetes command-line client, to manage your Kubernetes clusters. If you use Azure Cloud Shell, `kubectl` is already installed. If you want to install and run `kubectl` locally, call the [az aks install-cli][az-aks-install-cli] command.

1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. This command downloads credentials and configures the Kubernetes CLI to use them.

The ASP.NET sample application is provided as part of the [.NET Framework Sample When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Occasionally, the service can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning.

-1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make all pods are `Running` before proceeding. +1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. Make sure all pods are `Running` before proceeding. ++

+ ```console
+ kubectl get pods
+ ```

1. 
Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument. When the application runs, a Kubernetes service exposes the application front en 1. See the sample app in action by opening a web browser to the external IP address of your service. - :::image type="content" source="media/quick-windows-container-deploy-cli/asp-net-sample-app.png" alt-text="Screenshot of browsing to ASP.NET sample application."::: + :::image type="content" source="media/quick-windows-container-deploy-cli/asp-net-sample-app.png" alt-text="Screenshot of browsing to ASP.NET sample application." lightbox="media/quick-windows-container-deploy-cli/asp-net-sample-app.png"::: ## Delete resources To learn more about AKS, and to walk through a complete code-to-deployment examp [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials [az-aks-install-cli]: /cli/azure/aks#az_aks_install_cli [az-group-create]: /cli/azure/group#az_group_create-[az-group-delete]: /cli/azure/group#az_group_delete -[az-provider-register]: /cli/azure/provider#az_provider_register [aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json [kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [kubernetes-service]: ../concepts-network.md#services [windows-server-password]: /windows/security/threat-protection/security-policy-settings/password-must-meet-complexity-requirements#reference [win-faq-change-admin-creds]: ../windows-faq.md#how-do-i-change-the-administrator-password-for-windows-server-nodes-on-my-cluster-[az-provider-show]: /cli/azure/provider#az_provider_show [baseline-reference-architecture]: /azure/architecture/reference-architectures/containers/aks/baseline-aks?toc=/azure/aks/toc.json&bc=/azure/aks/breadcrumb/toc.json |
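Adding the Windows Server node pool that this quickstart relies on looks roughly like the following. This is a sketch with placeholder names; see the `az aks nodepool add` reference for the full option list:

```azurecli
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name npwin \
    --os-type Windows \
    --node-count 1
```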
aks | Quick Windows Container Deploy Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-portal.md | Last updated 01/11/2024 Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you quickly deploy and manage clusters. In this article, you deploy an AKS cluster that runs Windows Server containers using the Azure portal. You also deploy an ASP.NET sample application in a Windows Server container to the cluster. - > [!NOTE] > To get started with quickly provisioning an AKS cluster, this article includes steps to deploy a cluster with default settings for evaluation purposes only. Before deploying a production-ready cluster, we recommend that you familiarize yourself with our [baseline reference architecture][baseline-reference-architecture] to consider how it aligns with your business requirements. You use [kubectl][kubectl], the Kubernetes command-line client, to manage your K ### [Azure CLI](#tab/azure-cli) 1. Open Cloud Shell by selecting the `>_` button at the top of the Azure portal page.-1. Configure `kubectl` to connect to your Kubernetes cluster using the [`az aks get-credentials`][az-aks-get-credentials] command. The following command downloads credentials and configures the Kubernetes CLI to use them. +1. Configure `kubectl` to connect to your Kubernetes cluster using the [az aks get-credentials][az-aks-get-credentials] command. The following command downloads credentials and configures the Kubernetes CLI to use them. ```azurecli az aks get-credentials --resource-group myResourceGroup --name myAKSCluster You use [kubectl][kubectl], the Kubernetes command-line client, to manage your K ### [Azure PowerShell](#tab/azure-powershell) 1. Open Cloud Shell by selecting the `>_` button at the top of the Azure portal page.-1. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. The following command downloads credentials and configures the Kubernetes CLI to use them. +1. Configure `kubectl` to connect to your Kubernetes cluster using the [Import-AzAksCredential][import-azakscredential] cmdlet. The following command downloads credentials and configures the Kubernetes CLI to use them. ```azurepowershell Import-AzAksCredential -ResourceGroupName myResourceGroup -Name myAKSCluster The ASP.NET sample application is provided as part of the [.NET Framework Sample If you create and save the YAML file locally, then you can upload the manifest file to your default directory in CloudShell by selecting the **Upload/Download files** button and selecting the file from your local file system. -1. Deploy the application using the [`kubectl apply`][kubectl-apply] command and specify the name of your YAML manifest. +1. Deploy the application using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest. ```console kubectl apply -f sample.yaml The ASP.NET sample application is provided as part of the [.NET Framework Sample When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Occasionally, the service can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning. -1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make all pods are `Running` before proceeding. +1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. 
Make sure all pods are `Running` before proceeding. ++ ```console + kubectl get pods + ```

-1. Monitor progress using the [`kubectl get service`][kubectl-get] command with the `--watch` argument. +1. Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.

    ```console
    kubectl get service sample --watch |
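Instead of watching interactively, you can also read the external IP directly once it's assigned. A minimal sketch using a JSONPath query:

```console
kubectl get service sample -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```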
aks | Quick Windows Container Deploy Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md | The ASP.NET sample application is provided as part of the [.NET Framework Sample When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete. Occasionally, the service can take longer than a few minutes to provision. Allow up to 10 minutes for provisioning.

-1. Check the status of the deployed pods using the [`kubectl get pods`][kubectl-get] command. Make all pods are `Running` before proceeding. +1. Check the status of the deployed pods using the [kubectl get pods][kubectl-get] command. Make sure all pods are `Running` before proceeding. ++

+ ```console
+ kubectl get pods
+ ```

1. Monitor progress using the [kubectl get service][kubectl-get] command with the `--watch` argument.

When the application runs, a Kubernetes service exposes the application front en 1. See the sample app in action by opening a web browser to the external IP address of your service.

- :::image type="content" source="media/quick-windows-container-deploy-powershell/asp-net-sample-app.png" alt-text="Screenshot of browsing to ASP.NET sample application."::: + :::image type="content" source="media/quick-windows-container-deploy-powershell/asp-net-sample-app.png" alt-text="Screenshot of browsing to ASP.NET sample application." lightbox="media/quick-windows-container-deploy-powershell/asp-net-sample-app.png":::

## Delete resources |
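If you want to remove only the sample workload while keeping the cluster, you can delete the objects the manifest created. A sketch, assuming the manifest is named *sample.yaml* as in the related Windows Server quickstarts:

```console
kubectl delete -f sample.yaml
```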
aks | Tutorial Kubernetes Workload Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/tutorial-kubernetes-workload-identity.md | Azure Kubernetes Service (AKS) is a managed Kubernetes service that lets you qui * [!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] * This article requires version 2.47.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. * The identity you use to create your cluster must have the appropriate minimum permissions. For more information on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)][aks-identity-concepts].-* If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [`az account`][az-account] command. +* If you have multiple Azure subscriptions, select the appropriate subscription ID in which the resources should be billed using the [az account set][az-account-set] command.

## Create a resource group

An [Azure resource group][azure-resource-group] is a logical group in which Azur The following example creates a resource group named *myResourceGroup* in the *eastus* location.

-* Create a resource group using the [`az group create`][az-group-create] command. +* Create a resource group using the [az group create][az-group-create] command.

    ```azurecli-interactive
    az group create --name myResourceGroup --location eastus

To help simplify steps to configure the identities required, the steps below def ## Create an AKS cluster

-1. Create an AKS cluster using the [`az aks create`][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer. +1. Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-oidc-issuer` parameter to use the OIDC Issuer.

    ```azurecli-interactive
    az aks create -g "${RESOURCE_GROUP}" -n myAKSCluster --node-count 1 --enable-oidc-issuer --enable-workload-identity --generate-ssh-keys

To help simplify steps to configure the identities required, the steps below def ## Create an Azure Key Vault and secret

-1. Create an Azure Key Vault in resource group you created in this tutorial using the [`az keyvault create`][az-keyvault-create] command. +1. Create an Azure Key Vault in the resource group you created in this tutorial using the [az keyvault create][az-keyvault-create] command.

    ```azurecli-interactive
    az keyvault create --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --name "${KEYVAULT_NAME}"

To help simplify steps to configure the identities required, the steps below def At this point, your Azure account is the only one authorized to perform any operations on this new vault.

-2. Add a secret to the vault using the [`az keyvault secret set`][az-keyvault-secret-set] command. The password is the value you specified for the environment variable `KEYVAULT_SECRET_NAME` and stores the value of **Hello!** in it. +2. Add a secret to the vault using the [az keyvault secret set][az-keyvault-secret-set] command. The secret's name is the value you specified for the environment variable `KEYVAULT_SECRET_NAME`, and the secret stores the value **Hello!**.

    ```azurecli-interactive
    az keyvault secret set --vault-name "${KEYVAULT_NAME}" --name "${KEYVAULT_SECRET_NAME}" --value 'Hello!'
    ```

-3. 
Add the Key Vault URL to the environment variable `KEYVAULT_URL` using the [az keyvault show][az-keyvault-show] command. ```bash export KEYVAULT_URL="$(az keyvault show -g "${RESOURCE_GROUP}" -n ${KEYVAULT_NAME} --query properties.vaultUri -o tsv)" To help simplify steps to configure the identities required, the steps below def ## Create a managed identity and grant permissions to access the secret -1. Set a specific subscription as the current active subscription using the [`az account set`][az-account-set] command. +1. Set a specific subscription as the current active subscription using the [az account set][az-account-set] command. ```azurecli-interactive az account set --subscription "${SUBSCRIPTION}" ``` -2. Create a managed identity using the [`az identity create`][az-identity-create] command. +2. Create a managed identity using the [az identity create][az-identity-create] command. ```azurecli-interactive az identity create --name "${USER_ASSIGNED_IDENTITY_NAME}" --resource-group "${RESOURCE_GROUP}" --location "${LOCATION}" --subscription "${SUBSCRIPTION}" To help simplify steps to configure the identities required, the steps below def ### Create Kubernetes service account -1. Create a Kubernetes service account and annotate it with the client ID of the managed identity created in the previous step using the [`az aks get-credentials`][az-aks-get-credentials] command. Replace the default value for the cluster name and the resource group name. +1. Create a Kubernetes service account and annotate it with the client ID of the managed identity created in the previous step using the [az aks get-credentials][az-aks-get-credentials] command. Replace the default value for the cluster name and the resource group name. ```azurecli-interactive az aks get-credentials -n myAKSCluster -g "${RESOURCE_GROUP}" To help simplify steps to configure the identities required, the steps below def ## Establish federated identity credential -* Create the federated identity credential between the managed identity, service account issuer, and subject using the [`az identity federated-credential create`][az-identity-federated-credential-create] command. +* Create the federated identity credential between the managed identity, service account issuer, and subject using the [az identity federated-credential create][az-identity-federated-credential-create] command. ```azurecli-interactive az identity federated-credential create --name ${FEDERATED_IDENTITY_CREDENTIAL_NAME} --identity-name ${USER_ASSIGNED_IDENTITY_NAME} --resource-group ${RESOURCE_GROUP} --issuer ${AKS_OIDC_ISSUER} --subject system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME} To help simplify steps to configure the identities required, the steps below def pod/quick-start created ``` -2. Check whether all properties are injected properly with the webhook using the [`kubectl describe`][kubelet-describe] command. +2. Check whether all properties are injected properly with the webhook using the [kubectl describe][kubelet-describe] command. ```bash kubectl describe pod quick-start ``` -3. Verify the pod can get a token and access the secret from the Key Vault using the [`kubectl logs`][kubelet-logs] command. +3. Verify the pod can get a token and access the secret from the Key Vault using the [kubectl logs][kubelet-logs] command. ```bash kubectl logs quick-start You may wish to leave these resources in place. If you no longer need these reso kubectl delete sa "${SERVICE_ACCOUNT_NAME}" --namespace "${SERVICE_ACCOUNT_NAMESPACE}" ``` -3. 
Delete the Azure resource group and all its resources using the [`az group delete`][az-group-delete] command. +3. Delete the Azure resource group and all its resources using the [az group delete][az-group-delete] command. ```azurecli-interactive az group delete --name "${RESOURCE_GROUP}" This tutorial is for introductory purposes. For guidance on a creating full solu <!-- INTERNAL LINKS --> [kubernetes-concepts]: ../concepts-clusters-workloads.md [aks-identity-concepts]: ../concepts-identity.md-[az-account]: /cli/azure/account [azure-resource-group]: ../../azure-resource-manager/management/overview.md [az-group-create]: /cli/azure/group#az-group-create [az-group-delete]: /cli/azure/group#az-group-delete |
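The federated credential command in this tutorial relies on an `AKS_OIDC_ISSUER` environment variable. One way to populate it, assuming the cluster was created with `--enable-oidc-issuer` as shown earlier, is a sketch like this:

```azurecli
export AKS_OIDC_ISSUER="$(az aks show --name myAKSCluster --resource-group "${RESOURCE_GROUP}" --query "oidcIssuerProfile.issuerUrl" --output tsv)"
```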
aks | Trusted Access Feature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/trusted-access-feature.md | Title: Enable Azure resources to access Azure Kubernetes Service (AKS) clusters using Trusted Access -description: Learn how to use the Trusted Access feature to enable Azure resources to access Azure Kubernetes Service (AKS) clusters. + Title: Get secure resource access to AKS by using Trusted Access +description: Learn how to use the Trusted Access feature to give Azure resources access to Azure Kubernetes Service (AKS) clusters. Last updated 12/04/2023 -# Enable Azure resources to access Azure Kubernetes Service (AKS) clusters using Trusted Access (Preview) +# Get secure access for Azure resources in Azure Kubernetes Service by using Trusted Access (preview) -Many Azure services that integrate with Azure Kubernetes Service (AKS) need access to the Kubernetes API server. In order to avoid granting these services admin access or having to keep your AKS clusters public for network access, you can use the AKS Trusted Access feature. +Many Azure services that integrate with Azure Kubernetes Service (AKS) need access to the Kubernetes API server. To avoid granting these services admin access or making your AKS clusters public for network access, you can use the AKS Trusted Access feature. -This feature allows services to securely connect to AKS and Kubernetes via the Azure backend without requiring private endpoint. Instead of relying on identities with [Microsoft Entra ID](../active-directory/fundamentals/active-directory-whatis.md) permissions, this feature can use your system-assigned managed identity to authenticate with the managed services and applications you want to use on top of AKS. +This feature gives services secure access to AKS and Kubernetes by using the Azure back end without requiring a private endpoint. Instead of relying on identities that have [Microsoft Entra](../active-directory/fundamentals/active-directory-whatis.md) permissions, this feature can use your system-assigned managed identity to authenticate with the managed services and applications that you want to use with your AKS clusters. -Trusted Access addresses the following scenarios: --* Azure services may be unable to access the Kubernetes API server when the authorized IP range is enabled, or in private clusters unless you implement a private endpoint access model. --* Providing admin access to the Kubernetes API to an Azure service doesn't follow the least privileged access best practices and could lead to privilege escalations or risks of credential leakage. -- * For example, you may have to implement high-privileged service-to-service permissions, which aren't ideal during audit reviews. --This article shows you how to enable secure access from your Azure services to your Kubernetes API server in AKS using Trusted Access. +This article shows you how to get secure access for your Azure services to your Kubernetes API server in AKS by using Trusted Access. [!INCLUDE [preview features callout](./includes/preview/preview-callout.md)] > [!NOTE]-> The Trusted Access API is GA. We provide GA support for CLI, however it's still in preview and requires the `aks-preview` extension. +> The Trusted Access API is generally available. We provide general availability (GA) support for the Azure CLI, but it's still in preview and requires using the aks-preview extension. 
## Trusted Access feature overview

-Trusted Access enables you to give explicit consent to your system-assigned MSI of allowed resources to access your AKS clusters using an Azure resource *RoleBinding*. Your Azure resources access AKS clusters through the AKS regional gateway via system-assigned managed identity authentication with the appropriate Kubernetes permissions via an Azure resource *Role*. The Trusted Access feature allows you to access AKS clusters with different configurations, including but not limited to [private clusters](private-clusters.md), [clusters with local accounts disabled](manage-local-accounts-managed-azure-ad.md#disable-local-accounts), [Microsoft Entra ID clusters](azure-ad-integration-cli.md), and [authorized IP range clusters](api-server-authorized-ip-ranges.md). +Trusted Access addresses the following scenarios: ++* If an authorized IP range is set, or if the cluster is private, Azure services might not be able to access the Kubernetes API server unless you implement a private endpoint access model. ++* Giving an Azure service admin access to the Kubernetes API doesn't follow the least privilege access best practice and can lead to privilege escalations or risk of credential leakage. For example, you might have to implement high-privileged service-to-service permissions, and they aren't ideal in an audit review. ++You can use Trusted Access to give the system-assigned managed identities of allowed resources explicit consent to access your AKS clusters by using an Azure resource called a *role binding*. Your Azure resources access AKS clusters through the AKS regional gateway via system-assigned managed identity authentication. The appropriate Kubernetes permissions are assigned via an Azure resource called a *role*. Through Trusted Access, you can access AKS clusters with different configurations, including but not limited to [private clusters](private-clusters.md), [clusters that have local accounts turned off](manage-local-accounts-managed-azure-ad.md#disable-local-accounts), [Microsoft Entra clusters](azure-ad-integration-cli.md), and [authorized IP range clusters](api-server-authorized-ip-ranges.md).

## Prerequisites

* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * Resource types that support [system-assigned managed identity](../active-directory/managed-identities-azure-resources/overview.md).-* * If you're using Azure CLI, the **aks-preview** extension version **0.5.74 or later** is required. -* To learn about what Roles to use in various scenarios, see: - * [AzureML access to AKS clusters with special configurations](https://github.com/Azure/AML-Kubernetes/blob/master/docs/azureml-aks-ta-support.md). - * [Using Azure Backup][aks-azure-backup] - * [Enable Agentless Container Posture](../defender-for-cloud/concept-agentless-containers.md) + * If you're using the Azure CLI, the aks-preview extension version 0.5.74 or later is required. 
+* To learn what roles to use in different scenarios, see these articles: + * [Azure Machine Learning access to AKS clusters with special configurations](https://github.com/Azure/AML-Kubernetes/blob/master/docs/azureml-aks-ta-support.md) + * [What is Azure Kubernetes Service backup?][aks-azure-backup] + * [Turn on an agentless container posture](../defender-for-cloud/concept-agentless-containers.md)

+## Get started

-First, install the aks-preview extension by running the following command: +First, install the aks-preview extension:

```azurecli
az extension add --name aks-preview
```

-Run the following command to update to the latest version of the extension released: +Run the following command to update to the latest version of the extension:

```azurecli
az extension update --name aks-preview
```

-Then register the `TrustedAccessPreview` feature flag by using the [`az feature register`][az-feature-register] command, as shown in the following example: +Then, register the TrustedAccessPreview feature flag by using the [az feature register][az-feature-register] command.

++Here's an example:

```azurecli-interactive
az feature register --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
```

-It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [`az feature show`][az-feature-show] command: +It takes a few minutes for the status to appear as **Registered**. Verify the registration status by using the [az feature show][az-feature-show] command:

```azurecli-interactive
az feature show --namespace "Microsoft.ContainerService" --name "TrustedAccessPreview"
```

-When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [`az provider register`][az-provider-register] command: +When the status is **Registered**, refresh the registration of the Microsoft.ContainerService resource provider by using the [az provider register][az-provider-register] command:

```azurecli-interactive
az provider register --namespace Microsoft.ContainerService

az provider register --namespace Microsoft.ContainerService ## Create an AKS cluster

-[Create an AKS cluster](tutorial-kubernetes-deploy-cluster.md) in the same subscription as the Azure resource you want to access the cluster. +In the same subscription as the Azure resource that you want to give access to the cluster, [create an AKS cluster](tutorial-kubernetes-deploy-cluster.md).

-## Select the required Trusted Access Roles +## Select the required Trusted Access roles

-The Roles you select depend on the different Azure services. These services help create Roles and RoleBindings, which build the connection from the Azure service to AKS. +The roles that you select depend on which Azure services you want to give access to the AKS cluster. Azure services help create roles and role bindings that build the connection from the Azure service to AKS.

-## Create a Trusted Access RoleBinding +## Create a Trusted Access role binding

-After confirming which Role to use, use the Azure CLI to create a Trusted Access RoleBinding in an AKS cluster. The RoleBinding associates your selected Role with the Azure service. +After you confirm which role to use, use the Azure CLI to create a Trusted Access role binding in the AKS cluster. The role binding associates your selected role with the Azure service. 
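Before you run the create command that follows, you can check which Trusted Access roles are available in your region. A sketch, assuming the aks-preview extension is installed:

```azurecli
az aks trustedaccess role list --location eastus --output table
```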
```azurecli-# Create a Trusted Access RoleBinding in an AKS cluster +# Create a Trusted Access role binding in an AKS cluster ++az aks trustedaccess rolebinding create --resource-group <AKS resource group> --cluster-name <AKS cluster name> -n <role binding name> -s <connected service resource ID> --roles <roleName1, roleName2> +```

-az aks trustedaccess rolebinding create --resource-group <AKS resource group> --cluster-name <AKS cluster name> -n <rolebinding name> -s <connected service resource ID> --roles <roleName1, roleName2>

+Here's an example:

+```azurecli
# Sample command

az aks trustedaccess rolebinding create \

az aks trustedaccess rolebinding create \ --roles Microsoft.Compute/virtualMachineScaleSets/test-node-reader,Microsoft.Compute/virtualMachineScaleSets/test-admin ```

---## Update an existing Trusted Access RoleBinding with new roles +## Update an existing Trusted Access role binding

-For an existing RoleBinding with associated source service, you can update the RoleBinding with new Roles. +For an existing role binding that has an associated source service, you can update the role binding with new roles.

> [!NOTE]-> The new RoleBinding may take up to 5 minutes to take effect as addon manager updates clusters every 5 minutes. Before the new RoleBinding takes effect, the old RoleBinding still works. +> The add-on manager updates clusters every five minutes, so the new role binding might take up to five minutes to take effect. Before the new role binding takes effect, the existing role binding still works.
>
-> You can use `az aks trusted access rolebinding list --name <rolebinding name> --resource-group <resource group>` to check the current RoleBinding. +> You can use `az aks trustedaccess rolebinding list --resource-group <AKS resource group> --cluster-name <AKS cluster name>` to check the current role bindings.

```azurecli-# Update RoleBinding command

-az aks trustedaccess rolebinding update --resource-group <AKS resource group> --cluster-name <AKS cluster name> -n <existing rolebinding name> --roles <newRoleName1, newRoleName2> +# Update a Trusted Access role binding in an AKS cluster ++az aks trustedaccess rolebinding update --resource-group <AKS resource group> --cluster-name <AKS cluster name> -n <existing role binding name> --roles <newRoleName1, newRoleName2> +```

++Here's an example:

-# Update RoleBinding command with sample resource group, cluster, and Roles +```azurecli +# Update a Trusted Access role binding with a sample resource group, cluster, and roles

az aks trustedaccess rolebinding update \ --resource-group myResourceGroup \ az aks trustedaccess rolebinding update \ --roles Microsoft.Compute/virtualMachineScaleSets/test-node-reader,Microsoft.Compute/virtualMachineScaleSets/test-admin ```

---## Show the Trusted Access RoleBinding +## Show a Trusted Access role binding

-Use the Azure CLI to show a specific Trusted Access RoleBinding. +Show a specific Trusted Access role binding by using the `az aks trustedaccess rolebinding show` command:

```azurecli-az aks trustedaccess rolebinding show --name <rolebinding name> --resource-group <AKS resource group> --cluster-name <AKS cluster name> +az aks trustedaccess rolebinding show --name <role binding name> --resource-group <AKS resource group> --cluster-name <AKS cluster name> ```

-+## List all the Trusted Access role bindings for a cluster -## List all the Trusted Access RoleBindings for a cluster --Use the Azure CLI to list all the Trusted Access RoleBindings for a cluster. 
+List all the Trusted Access role bindings for a cluster by using the `az aks trustedaccess rolebinding list` command: ```azurecli az aks trustedaccess rolebinding list --resource-group <AKS resource group> --cluster-name <AKS cluster name> ``` -## Delete the Trusted Access RoleBinding for a cluster +## Delete a Trusted Access role binding for a cluster > [!WARNING]-> Deleting the existing Trusted Access RoleBinding will cause disconnection from AKS cluster to the Azure service. +> Deleting an existing Trusted Access role binding disconnects the Azure service from the AKS cluster. -Use the Azure CLI to delete an existing Trusted Access RoleBinding. +Delete an existing Trusted Access role binding by using the `az aks trustedaccess rolebinding delete` command: ```azurecli-az aks trustedaccess rolebinding delete --name <rolebinding name> --resource-group <AKS resource group> --cluster-name <AKS cluster name> +az aks trustedaccess rolebinding delete --name <role binding name> --resource-group <AKS resource group> --cluster-name <AKS cluster name> ``` -## Next steps --For more information on AKS, see: +## Related content * [Deploy and manage cluster extensions for AKS](cluster-extensions.md)-* [Deploy AzureML extension on AKS or Arc Kubernetes cluster](../machine-learning/how-to-deploy-kubernetes-extension.md) -* [Deploy Azure Backup on AKS cluster](../backup/azure-kubernetes-service-backup-overview.md) -* [Enable Agentless Container Posture on AKS cluster](../defender-for-cloud/concept-agentless-containers.md) +* [Deploy the Azure Machine Learning extension on an AKS or Azure Arc–enabled Kubernetes cluster](../machine-learning/how-to-deploy-kubernetes-extension.md) +* [Deploy Azure Backup on an AKS cluster](../backup/azure-kubernetes-service-backup-overview.md) +* [Set agentless container posture in Microsoft Defender for Cloud for an AKS cluster](../defender-for-cloud/concept-agentless-containers.md) <!-- LINKS --> |
aks | Use Kms Etcd Encryption | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-kms-etcd-encryption.md | Title: Use Key Management Service (KMS) etcd encryption in Azure Kubernetes Service (AKS) -description: Learn how to use the Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS) + Title: Use Key Management Service etcd encryption in Azure Kubernetes Service +description: Learn how to use Key Management Service (KMS) etcd encryption with Azure Kubernetes Service (AKS). Last updated 01/04/2024 -# Add Key Management Service (KMS) etcd encryption to an Azure Kubernetes Service (AKS) cluster +# Add Key Management Service etcd encryption to an Azure Kubernetes Service cluster -This article shows you how to enable encryption at rest for your Kubernetes secrets in etcd using Azure Key Vault with the Key Management Service (KMS) plugin. The KMS plugin allows you to: +This article shows you how to turn on encryption at rest for your Azure Kubernetes Service (AKS) secrets in an etcd key-value store by using Azure Key Vault and the Key Management Service (KMS) plugin. You can use the KMS plugin to: -* Use a key in Key Vault for etcd encryption. +* Use a key in a key vault for etcd encryption. * Bring your own keys.-* Provide encryption at rest for secrets stored in etcd. -* Rotate the keys in Key Vault. +* Provide encryption at rest for secrets that are stored in etcd. +* Rotate the keys in a key vault. -For more information on using the KMS plugin, see [Encrypting Secret Data at Rest](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/). +For more information on using KMS, see [Encrypting Secret Data at Rest](https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/). ## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-* Azure CLI version 2.39.0 or later. Run `az --version` to find your version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install]. +* Azure CLI version 2.39.0 or later. Run `az --version` to find your version. If you need to install or upgrade, see [Install the Azure CLI][azure-cli-install]. > [!WARNING]-> KMS supports Konnectivity or [API Server Vnet Integration][api-server-vnet-integration]. -> You can use `kubectl get po -n kube-system` to verify the results show that a konnectivity-agent-xxx pod is running. If there is, it means the AKS cluster is using Konnectivity. When using VNet integration, you can run the command `az aks show -g -n` to verify the setting `enableVnetIntegration` is set to **true**. +> KMS supports Konnectivity or [API Server VNet Integration (preview)][api-server-vnet-integration]. +> +> You can use `kubectl get po -n kube-system` to verify the results and show that a konnectivity-agent pod is running. If a pod is running, the AKS cluster is using Konnectivity. When you use API Server VNet Integration, you can run the `az aks show -g -n` command to verify that the `enableVnetIntegration` setting is set to `true`. ## Limitations The following limitations apply when you integrate KMS etcd encryption with AKS: -* Deletion of the key, Key Vault, or the associated identity isn't supported. -* KMS etcd encryption doesn't work with system-assigned managed identity. The key vault access policy is required to be set before the feature is enabled. In addition, system-assigned managed identity isn't available until cluster creation. Consequently, there's a cycle dependency. 
-* Azure Key Vault with Firewall enabled to allow public access isn't supported. It blocks traffic from KMS plugin to the Key Vault. -* The maximum number of secrets supported by a cluster enabled with KMS is 2,000. However, it's important to note that [KMS V2][kms-v2-support] isn't limited by this restriction and can handle a higher number of secrets. -* Bring your own (BYO) Azure Key Vault from another tenant isn't supported. -* With KMS enabled, you can't change associated Azure Key Vault model (public, private). To [change associated key vault mode][changing-associated-key-vault-mode], you need to disable and enable KMS again. -* If a cluster is enabled with KMS and private key vault and isn't using the `API Server VNet integration` tunnel, then stop/start cluster isn't allowed. -* Using the Virtual Machine Scale Sets API to scale the nodes in the cluster down to zero deallocates the nodes, causing the cluster to go down and become unrecoverable. -* After you disable KMS, you can't destroy the keys. Otherwise, it causes the API server to stop working. +* Deleting the key, the key vault, or the associated identity isn't supported. +* KMS etcd encryption doesn't work with system-assigned managed identity. The key vault access policy must be set before the feature is turned on. System-assigned managed identity isn't available until after the cluster is created. This creates a circular dependency. +* Azure Key Vault with a firewall to allow public access isn't supported because it blocks traffic from the KMS plugin to the key vault. +* The maximum number of secrets that are supported by a cluster that has KMS turned on is 2,000. However, it's important to note that [KMS v2][kms-v2-support] isn't limited by this restriction and can handle a higher number of secrets. +* Bring your own (BYO) Azure key vault from another tenant isn't supported. +* With KMS turned on, you can't change the associated key vault mode (public versus private). To [update a key vault mode][update-a-key-vault-mode], you must first turn off KMS, and then turn it on again. +* If a cluster has KMS turned on, has a private key vault, and isn't using the API Server VNet integration tunnel, you can't stop and then start the cluster. +* Using the Virtual Machine Scale Sets API to scale the nodes in the cluster down to zero deallocates the nodes. The cluster then goes down and becomes unrecoverable. +* After you turn off KMS, you can't destroy the keys. Destroying the keys causes the API server to stop working. ++KMS supports a [public key vault][turn-on-kms-for-a-public-key-vault] or a [private key vault][turn-on-kms-for-a-private-key-vault]. -KMS supports [public key vault][Enable-KMS-with-public-key-vault] and [private key vault][Enable-KMS-with-private-key-vault]. ## Turn on KMS for a public key vault -## Enable KMS with public key vault +The following sections describe how to turn on KMS for a public key vault. -### Create a key vault and key +### Create a public key vault and key > [!WARNING]-> Deleting the key or the Azure Key Vault is not supported and will cause the secrets to be unrecoverable in the cluster. +> Deleting the key or the key vault is not supported and causes the secrets in the cluster to be unrecoverable. >-> If you need to recover your Key Vault or key, see [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md?tabs=azure-cli).
+> If you need to recover your key vault or your key, see [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md?tabs=azure-cli). -#### For non-RBAC key vault +#### Create a key vault and key for a non-RBAC public key vault -Use `az keyvault create` to create a key vault. +Use `az keyvault create` to create a key vault without using Azure role-based access control (Azure RBAC): ```azurecli az keyvault create --name MyKeyVault --resource-group MyResourceGroup ``` -Use `az keyvault key create` to create a key. +Use `az keyvault key create` to create a key: ```azurecli az keyvault key create --name MyKeyName --vault-name MyKeyVault ``` -Use `az keyvault key show` to export the key ID. +Use `az keyvault key show` to export the key ID: ```azurecli export KEY_ID=$(az keyvault key show --name MyKeyName --vault-name MyKeyVault --query 'key.kid' -o tsv) echo $KEY_ID ``` -The above example stores the key ID in *KEY_ID*. +This example stores the key ID in `KEY_ID`. -#### For RBAC key vault +#### Create a key vault and key for an RBAC public key vault -Use `az keyvault create` to create a key vault using Azure Role Based Access Control. +Use `az keyvault create` to create a key vault by using Azure RBAC: ```azurecli export KEYVAULT_RESOURCE_ID=$(az keyvault create --name MyKeyVault --resource-group MyResourceGroup --enable-rbac-authorization true --query id -o tsv) ``` -Assign yourself permission to create a key. +Assign yourself permissions to create a key: ```azurecli-interactive az role assignment create --role "Key Vault Crypto Officer" --assignee-object-id $(az ad signed-in-user show --query id --out tsv) --assignee-principal-type "User" --scope $KEYVAULT_RESOURCE_ID ``` -Use `az keyvault key create` to create a key. +Use `az keyvault key create` to create a key: ```azurecli az keyvault key create --name MyKeyName --vault-name MyKeyVault ``` -Use `az keyvault key show` to export the key ID. +Use `az keyvault key show` to export the key ID: ```azurecli export KEY_ID=$(az keyvault key show --name MyKeyName --vault-name MyKeyVault --query 'key.kid' -o tsv) echo $KEY_ID ``` -The above example stores the key ID in *KEY_ID*. +This example stores the key ID in `KEY_ID`. -### Create a user-assigned managed identity +### Create a user-assigned managed identity for a public key vault -Use `az identity create` to create a user-assigned managed identity. +Use `az identity create` to create a user-assigned managed identity: ```azurecli az identity create --name MyIdentity --resource-group MyResourceGroup ``` -Use `az identity show` to get the identity object ID. +Use `az identity show` to get the identity object ID: ```azurecli IDENTITY_OBJECT_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query 'principalId' -o tsv) echo $IDENTITY_OBJECT_ID ``` -The above example stores the value of the identity object ID in *IDENTITY_OBJECT_ID*. +The preceding example stores the value of the identity object ID in `IDENTITY_OBJECT_ID`. -Use `az identity show` to get the identity resource ID. +Use `az identity show` to get the identity resource ID: ```azurecli IDENTITY_RESOURCE_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query 'id' -o tsv) echo $IDENTITY_RESOURCE_ID ``` -The above example stores the value of the identity resource ID in *IDENTITY_RESOURCE_ID*. +This example stores the value of the identity resource ID in `IDENTITY_RESOURCE_ID`. 
-### Assign permissions (decrypt and encrypt) to access key vault +### Assign permissions to decrypt and encrypt a public key vault -#### For non-RBAC key vault +The following sections describe how to assign decrypt and encrypt permissions for a public key vault. -If your key vault is not enabled with `--enable-rbac-authorization`, you can use `az keyvault set-policy` to create an Azure key vault policy. +#### Assign permissions for a non-RBAC public key vault ++If your key vault is not set with `--enable-rbac-authorization`, you can use `az keyvault set-policy` to create an Azure key vault policy. ```azurecli-interactive az keyvault set-policy -n MyKeyVault --key-permissions decrypt encrypt --object-id $IDENTITY_OBJECT_ID ``` -#### For RBAC key vault +#### Assign permissions for an RBAC public key vault -If your key vault is enabled with `--enable-rbac-authorization`, you need to assign the "Key Vault Crypto User" RBAC role which has decrypt, encrypt permission. +If your key vault is set with `--enable-rbac-authorization`, assign the Key Vault Crypto User role to give decrypt and encrypt permissions. ```azurecli-interactive az role assignment create --role "Key Vault Crypto User" --assignee-object-id $IDENTITY_OBJECT_ID --assignee-principal-type "ServicePrincipal" --scope $KEYVAULT_RESOURCE_ID ``` -### Create an AKS cluster with KMS etcd encryption enabled +### Create an AKS cluster that has a public key vault and turn on KMS etcd encryption -Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-vault-network-access` and `--azure-keyvault-kms-key-id` parameters to enable KMS etcd encryption. +To turn on KMS etcd encryption, create an AKS cluster by using the [az aks create][az-aks-create] command. You can use the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-vault-network-access`, and `--azure-keyvault-kms-key-id` parameters with `az aks create`. ```azurecli-interactive az aks create --name myAKSCluster --resource-group MyResourceGroup --assign-identity $IDENTITY_RESOURCE_ID --enable-azure-keyvault-kms --azure-keyvault-kms-key-vault-network-access "Public" --azure-keyvault-kms-key-id $KEY_ID ``` -### Update an existing AKS cluster to enable KMS etcd encryption +### Update an existing AKS cluster to turn on KMS etcd encryption for a public key vault -Use [az aks update][az-aks-update] with the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-vault-network-access` and `--azure-keyvault-kms-key-id` parameters to enable KMS etcd encryption on an existing cluster. +To turn on KMS etcd encryption for an existing cluster, use the [az aks update][az-aks-update] command. You can use the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-vault-network-access`, and `--azure-keyvault-kms-key-id` parameters with `az aks update`. ```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-vault-network-access "Public" --azure-keyvault-kms-key-id $KEY_ID ``` -Use the following command to update all secrets. Otherwise, old secrets won't be encrypted. For larger clusters, you may want to subdivide the secrets by namespace or script an update. +Use the following command to update all secrets. If you don't run this command, secrets that were created earlier aren't encrypted. For larger clusters, you might want to subdivide the secrets by namespace or create an update script.
```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` -### Rotate the existing keys +### Rotate existing keys in a public key vault -After changing the key ID (including key name and key version), you can use [az aks update][az-aks-update] with the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-vault-network-access` and `--azure-keyvault-kms-key-id` parameters to rotate the existing keys of KMS. +After you change the key ID (including changing either the key name or the key version), you can use the [az aks update][az-aks-update] command. You can use the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-vault-network-access`, and `--azure-keyvault-kms-key-id` parameters with `az aks update` to rotate existing keys in KMS. > [!WARNING]-> Remember to update all secrets after key rotation. Otherwise, the secrets will be inaccessible if the old keys don't exist or aren't working. -> -> Once you rotate the key, the old key (key1) is still cached and shouldn't be deleted. If you want to delete the old key (key1) immediately, you need to rotate the key twice. Then key2 and key3 are cached, and key1 can be deleted without impacting existing cluster. +> Remember to update all secrets after key rotation. If you don't update all secrets, the secrets are inaccessible if the keys that were created earlier don't exist or no longer work. +> +> After you rotate the key, the previous key (key1) is still cached and shouldn't be deleted. If you want to delete the previous key (key1) immediately, you need to rotate the key twice. Then key2 and key3 are cached, and key1 can be deleted without affecting the existing cluster. ```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-vault-network-access "Public" --azure-keyvault-kms-key-id $NEW_KEY_ID ``` -Use the following command to update all secrets. Otherwise, old secrets will still be encrypted with the previous key. For larger clusters, you may want to subdivide the secrets by namespace or script an update. +Use the following command to update all secrets. If you don't run this command, secrets that were created earlier are still encrypted with the previous key. For larger clusters, you might want to subdivide the secrets by namespace or create an update script. ```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` -## Enable KMS with private key vault +## Turn on KMS for a private key vault -If you enable KMS with private key vault, AKS will create a private endpoint and private link in the node resource group automatically. The key vault will be added a private endpoint connection with the AKS cluster. +If you turn on KMS for a private key vault, AKS automatically creates a private endpoint and a private link in the node resource group. A private endpoint connection is added between the key vault and the AKS cluster. ### Create a private key vault and key > [!WARNING]-> Deleting the key or the Azure Key Vault isn't supported and will cause the secrets to be unrecoverable in the cluster. +> Deleting the key or the key vault is not supported and causes the secrets in the cluster to be unrecoverable. >-> If you need to recover your key vault or key, see [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md?tabs=azure-cli).
+> If you need to recover your key vault or your key, see [Azure Key Vault recovery management with soft delete and purge protection](../key-vault/general/key-vault-recovery.md?tabs=azure-cli). -Use `az keyvault create` to create a private key vault. +Use `az keyvault create` to create a private key vault: ```azurecli az keyvault create --name MyKeyVault --resource-group MyResourceGroup --public-network-access Disabled ``` -It's not supported to create or update keys in private key vault without private endpoint. To manage private key vaults, you can refer to [Integrate Key Vault with Azure Private Link](../key-vault/general/private-link-service.md). +Creating or updating keys in a private key vault that doesn't have a private endpoint isn't supported. To learn how to manage private key vaults, see [Integrate a key vault by using Azure Private Link](../key-vault/general/private-link-service.md). -### Create a user-assigned managed identity +### Create a user-assigned managed identity for a private key vault -Use `az identity create` to create a user-assigned managed identity. +Use `az identity create` to create a user-assigned managed identity: ```azurecli az identity create --name MyIdentity --resource-group MyResourceGroup ``` -Use `az identity show` to get the identity object ID. +Use `az identity show` to get the identity object ID: ```azurecli IDENTITY_OBJECT_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query 'principalId' -o tsv) echo $IDENTITY_OBJECT_ID ``` -The above example stores the value of the identity object ID in *IDENTITY_OBJECT_ID*. +The preceding example stores the value of the identity object ID in `IDENTITY_OBJECT_ID`. -Use `az identity show` to get identity resource ID. +Use `az identity show` to get the identity resource ID: ```azurecli IDENTITY_RESOURCE_ID=$(az identity show --name MyIdentity --resource-group MyResourceGroup --query 'id' -o tsv) echo $IDENTITY_RESOURCE_ID ``` -The above example stores the value of the identity resource ID in *IDENTITY_RESOURCE_ID*. +This example stores the value of the identity resource ID in `IDENTITY_RESOURCE_ID`. -### Assign permissions (decrypt and encrypt) to access key vault +### Assign permissions to decrypt and encrypt a private key vault ++The following sections describe how to assign decrypt and encrypt permissions for a private key vault. ++#### Assign permissions for a non-RBAC private key vault > [!NOTE] > When using a private key vault, AKS can't validate the permissions of the identity. Verify the identity has been granted permission to access the key vault before enabling KMS. -#### For non-RBAC key vault --If your key vault is not enabled with `--enable-rbac-authorization`, you can use `az keyvault set-policy` to create an Azure key vault policy. +If your key vault is not set with `--enable-rbac-authorization`, you can use `az keyvault set-policy` to create a key vault policy in Azure: ```azurecli-interactive az keyvault set-policy -n MyKeyVault --key-permissions decrypt encrypt --object-id $IDENTITY_OBJECT_ID ``` -#### For RBAC key vault +#### Assign permissions for an RBAC private key vault -If your key vault is enabled with `--enable-rbac-authorization`, you need to assign a RBAC role that contains decrypt, encrypt permission. 
+If your key vault is set with `--enable-rbac-authorization`, assign an Azure RBAC role that includes decrypt and encrypt permissions: ```azurecli-interactive az role assignment create --role "Key Vault Crypto User" --assignee-object-id $IDENTITY_OBJECT_ID --assignee-principal-type "ServicePrincipal" --scope $KEYVAULT_RESOURCE_ID ``` -### Assign permission for creating private link +### Assign permissions to create a private link -For private key vaults, you need the *Key Vault Contributor* role to create a private link between the private key vault and the cluster. +For private key vaults, the Key Vault Contributor role is required to create a private link between the private key vault and the cluster. ```azurecli-interactive az role assignment create --role "Key Vault Contributor" --assignee-object-id $IDENTITY_OBJECT_ID --assignee-principal-type "ServicePrincipal" --scope $KEYVAULT_RESOURCE_ID ``` -### Create an AKS cluster with private key vault and enable KMS etcd encryption +### Create an AKS cluster that has a private key vault and turn on KMS etcd encryption -Create an AKS cluster using the [az aks create][az-aks-create] command with the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-id`, `--azure-keyvault-kms-key-vault-network-access` and `--azure-keyvault-kms-key-vault-resource-id` parameters to enable KMS etcd encryption with private key vault. +To turn on KMS etcd encryption for a private key vault, create an AKS cluster by using the [az aks create][az-aks-create] command. You can use the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-id`, `--azure-keyvault-kms-key-vault-network-access`, and `--azure-keyvault-kms-key-vault-resource-id` parameters with `az aks create`. ```azurecli-interactive az aks create --name myAKSCluster --resource-group MyResourceGroup --assign-identity $IDENTITY_RESOURCE_ID --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $KEY_ID --azure-keyvault-kms-key-vault-network-access "Private" --azure-keyvault-kms-key-vault-resource-id $KEYVAULT_RESOURCE_ID ``` -### Update an existing AKS cluster to enable KMS etcd encryption with private key vault +### Update an existing AKS cluster to turn on KMS etcd encryption for a private key vault -Use [az aks update][az-aks-update] with the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-id`, `--azure-keyvault-kms-key-vault-network-access` and `--azure-keyvault-kms-key-vault-resource-id` parameters to enable KMS etcd encryption on an existing cluster with private key vault. +To turn on KMS etcd encryption on an existing cluster that has a private key vault, use the [az aks update][az-aks-update] command. You can use the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-id`, `--azure-keyvault-kms-key-vault-network-access`, and `--azure-keyvault-kms-key-vault-resource-id` parameters with `az aks update`. ```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $KEY_ID --azure-keyvault-kms-key-vault-network-access "Private" --azure-keyvault-kms-key-vault-resource-id $KEYVAULT_RESOURCE_ID ``` -Use the following command to update all secrets. Otherwise, old secrets won't be encrypted. For larger clusters, you may want to subdivide the secrets by namespace or script an update. +Use the following command to update all secrets. If you don't run this command, secrets that were created earlier aren't encrypted.
For larger clusters, you might want to subdivide the secrets by namespace or create an update script. ```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` -### Rotate the existing keys +### Rotate existing keys in a private key vault -After changing the key ID (including key name and key version), you can use [az aks update][az-aks-update] with the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-id`, `--azure-keyvault-kms-key-vault-network-access` and `--azure-keyvault-kms-key-vault-resource-id` parameters to rotate the existing keys of KMS. +After you change the key ID (including the key name and the key version), you can use the [az aks update][az-aks-update] command. You can use the `--enable-azure-keyvault-kms`, `--azure-keyvault-kms-key-id`, `--azure-keyvault-kms-key-vault-network-access`, and `--azure-keyvault-kms-key-vault-resource-id` parameters with `az aks update` to rotate the existing keys of KMS. > [!WARNING]-> Remember to update all secrets after key rotation. Otherwise, the secrets will be inaccessible if the old keys are not existing or working. +> Remember to update all secrets after key rotation. If you don't update all secrets, the secrets are inaccessible if the keys that were created earlier don't exist or no longer work. >-> Once you rotate the key, the old key (key1) is still cached and shouldn't be deleted. If you want to delete the old key (key1) immediately, you need to rotate the key twice. Then key2 and key3 are cached, and key1 can be deleted without impacting existing cluster. +> After you rotate the key, the previous key (key1) is still cached and shouldn't be deleted. If you want to delete the previous key (key1) immediately, you need to rotate the key twice. Then key2 and key3 are cached, and key1 can be deleted without affecting the existing cluster. ```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $NewKEY_ID --azure-keyvault-kms-key-vault-network-access "Private" --azure-keyvault-kms-key-vault-resource-id $KEYVAULT_RESOURCE_ID ``` -Use the following command to update all secrets. Otherwise, old secrets will still be encrypted with the previous key. For larger clusters, you may want to subdivide the secrets by namespace or script an update. +Use the following command to update all secrets. If you don't update all secrets, secrets that were created earlier are still encrypted with the previous key. For larger clusters, you might want to subdivide the secrets by namespace or create an update script. ```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` -## Update key vault mode +## Update a key vault mode > [!NOTE]-> To change a different key vault with a different mode (public, private), you can run `az aks update` directly. To change the mode of attached key vault, you need to disable KMS and re-enable it with the new key vault IDs. +> To switch to a different key vault that uses a different mode (whether public or private), you can run `az aks update` directly. To change the mode of an attached key vault, you must first turn off KMS, and then turn it on again by using the new key vault IDs. -Below are the steps about how to migrate the attached public key vault to private mode. +The following sections describe how to migrate an attached public key vault to private mode.
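As an end-to-end overview, the whole migration boils down to three commands, each detailed in the sections that follow. A sketch, assuming the sample names used throughout this article and that `$NewKEY_ID` and `$KEYVAULT_RESOURCE_ID` are already exported:

```azurecli
# 1. Turn off KMS and release the key vault
az aks update --name myAKSCluster --resource-group MyResourceGroup --disable-azure-keyvault-kms

# 2. Switch the key vault from public to private access
az keyvault update --name MyKeyVault --resource-group MyResourceGroup --public-network-access Disabled

# 3. Turn KMS back on against the now-private key vault
az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $NewKEY_ID --azure-keyvault-kms-key-vault-network-access "Private" --azure-keyvault-kms-key-vault-resource-id $KEYVAULT_RESOURCE_ID
```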
-### Disable KMS on the cluster +### Turn off KMS on the cluster -Disable the KMS on existing cluster and release the key vault. +Turn off KMS on an existing cluster and release the key vault: ```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --disable-azure-keyvault-kms ``` -### Change key vault mode +### Change the key vault mode -Update the key vault from public to private. +Update the key vault from public to private: ```azurecli-interactive az keyvault update --name MyKeyVault --resource-group MyResourceGroup --public-network-access Disabled ``` -### Enable KMS on the cluster with updated key vault +### Turn on KMS for the cluster by using the updated key vault -Re-enable the KMS with updated private key vault. +Turn on KMS by using the updated private key vault: ```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --enable-azure-keyvault-kms --azure-keyvault-kms-key-id $NewKEY_ID --azure-keyvault-kms-key-vault-network-access "Private" --azure-keyvault-kms-key-vault-resource-id $KEYVAULT_RESOURCE_ID ``` -After configuring KMS, you can enable [diagnostic-settings for key vault to check the encryption logs](../key-vault/general/howto-logging.md). +After you set up KMS, you can turn on [diagnostic settings for the key vault to check the encryption logs](../key-vault/general/howto-logging.md). -## Disable KMS +## Turn off KMS -Before disabling KMS, you can use the following Azure CLI command to verify if KMS is enabled. +Before you turn off KMS, you can use the following Azure CLI command to check whether KMS is turned on: ```azurecli-interactive az aks list --query "[].{Name:name, KmsEnabled:securityProfile.azureKeyVaultKms.enabled, KeyId:securityProfile.azureKeyVaultKms.keyId}" -o table ``` -If the results confirm KMS is enabled, run the following command to disable KMS on the cluster. +If the results confirm that KMS is on, run the following command to turn off KMS on the cluster: ```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --disable-azure-keyvault-kms ``` -Use the following command to update all secrets. Otherwise, the old secrets will still be encrypted with the previous key and the encrypt/decrypt permission on key vault is still required. For larger clusters, you may want to subdivide the secrets by namespace or script an update. +Use the following command to update all secrets. If you don't run this command, secrets that were created earlier are still encrypted with the previous key, and the encrypt and decrypt permissions on the key vault are still required. For larger clusters, you might want to subdivide the secrets by namespace or create an update script. ```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f - kubectl get secrets --all-namespaces -o json | kubectl replace -f - ## KMS v2 support -Starting with AKS version 1.27, enabling the KMS feature configures KMS v2. With KMS v2, you aren't limited to the 2,000 secrets it supports. For more information, review [KMS V2 Improvements](https://kubernetes.io/blog/2023/05/16/kms-v2-moves-to-beta/). +Beginning in AKS version 1.27, turning on the KMS feature configures KMS v2. With KMS v2, you aren't limited to the 2,000 secrets that earlier versions support. For more information, see [KMS V2 Improvements](https://kubernetes.io/blog/2023/05/16/kms-v2-moves-to-beta/).
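Because the KMS v2 behavior depends on the cluster version, it can help to check the version before you plan a migration. A small sketch using the standard `kubernetesVersion` property of `az aks show`:

```azurecli
# Print the cluster's current Kubernetes version
az aks show --name myAKSCluster --resource-group MyResourceGroup \
    --query kubernetesVersion -o tsv
```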
-### Migration to KMS v2 +### Migrate to KMS v2 -If your cluster version is less than 1.27 and you already enabled KMS, the upgrade to 1.27 or higher will be blocked. You use the following steps to migrate to KMS v2: +If your cluster version is earlier than 1.27 and you already turned on KMS, the upgrade to version 1.27 or later is blocked. Use the following steps to migrate to KMS v2: -1. Disable KMS on the cluster. -2. Perform the storage migration. -3. Upgrade the cluster to version 1.27 or higher. -4. Re-enable KMS on the cluster. -5. Perform the storage migration. +1. Turn off KMS on the cluster. +1. Perform the storage migration. +1. Upgrade the cluster to version 1.27 or later. +1. Turn on KMS on the cluster. +1. Perform the storage migration. -#### Disable KMS +#### Turn off KMS to migrate storage -To disable KMS on an existing cluster, use the `az aks update` command with the `--disable-azure-keyvault-kms` argument. +To turn off KMS on an existing cluster, use the `az aks update` command with the `--disable-azure-keyvault-kms` argument: ```azurecli-interactive az aks update --name myAKSCluster --resource-group MyResourceGroup --disable-azure-keyvault-kms ``` -#### Storage migration +#### Migrate storage -To update all secrets, use the `kubectl get secrets` command with the `--all-namespaces` argument. +To update all secrets, use the `kubectl get secrets` command with the `--all-namespaces` argument: ```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` -#### Upgrade AKS cluster +#### Upgrade the AKS cluster -To upgrade an AKS cluster, use the `az aks upgrade` command and specify the desired version as `1.27.x` or higher with the `--kubernetes-version` argument. +To upgrade an AKS cluster, use the `az aks upgrade` command. Set the version to `1.27.x` or later by using the `--kubernetes-version` argument. ```azurecli-interactive az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version <AKS version> ``` -For example: +Here's an example: ```azurecli-interactive az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.27.1 ``` -#### Re-enable KMS +#### Turn on KMS after storage migration -You can reenable the KMS feature on the cluster to encrypt the secrets. Afterwards, the AKS cluster uses KMS v2. -If you don't want to do the KMS v2 migration, you can create a new version 1.27 and higher cluster with KMS enabled. +You can turn on the KMS feature on the cluster again to encrypt the secrets. Afterward, the AKS cluster uses KMS v2. If you don't want to migrate to KMS v2, you can create a new cluster that is version 1.27 or later with KMS turned on. -#### Storage migration +#### Migrate storage for KMS v2 -To re-encrypt all secrets under KMS v2, use the `kubectl get secrets` command with the `--all-namespaces` argument.
+To re-encrypt all secrets in KMS v2, use the `kubectl get secrets` command with the `--all-namespaces` argument: ```azurecli-interactive kubectl get secrets --all-namespaces -o json | kubectl replace -f - ``` <!-- LINKS - Internal -->-[aks-support-policies]: support-policies.md -[aks-faq]: faq.md -[az-feature-register]: /cli/azure/feature#az-feature-register -[az-feature-list]: /cli/azure/feature#az-feature-list -[az extension add]: /cli/azure/extension#az-extension-add -[az-extension-update]: /cli/azure/extension#az-extension-update [azure-cli-install]: /cli/azure/install-azure-cli [az-aks-create]: /cli/azure/aks#az-aks-create-[az-extension-add]: /cli/azure/extension#az_extension_add -[az-extension-update]: /cli/azure/extension#az_extension_update -[az-feature-register]: /cli/azure/feature#az_feature_register -[az-feature-list]: /cli/azure/feature#az_feature_list -[az-provider-register]: /cli/azure/provider#az_provider_register [az-aks-update]: /cli/azure/aks#az_aks_update-[Enable-KMS-with-public-key-vault]: use-kms-etcd-encryption.md#enable-kms-with-public-key-vault -[Enable-KMS-with-private-key-vault]: use-kms-etcd-encryption.md#enable-kms-with-private-key-vault -[changing-associated-key-vault-mode]: use-kms-etcd-encryption.md#update-key-vault-mode -[install-azure-cli]: /cli/azure/install-azure-cli +[turn-on-kms-for-a-public-key-vault]: #turn-on-kms-for-a-public-key-vault +[turn-on-kms-for-a-private-key-vault]: #turn-on-kms-for-a-private-key-vault +[update-a-key-vault-mode]: #update-a-key-vault-mode [api-server-vnet-integration]: api-server-vnet-integration.md [kms-v2-support]: use-kms-etcd-encryption.md#kms-v2-support |
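The `az aks list` query shown earlier surfaces the KMS state across all clusters; the same `securityProfile.azureKeyVaultKms` fields can also be inspected on a single cluster. A sketch:

```azurecli
# Inspect the KMS configuration of one cluster
az aks show --name myAKSCluster --resource-group MyResourceGroup \
    --query securityProfile.azureKeyVaultKms -o json
```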
automation | Runbook Input Parameters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/runbook-input-parameters.md | Azure Automation supports various input parameter values across the different ru | PowerShell | - String <br>- Security.SecureString <br>- INT32 <br>- Boolean <br>- DateTime <br>- Array <br>- Collections.Hashtable <br>- Management.Automation.SwitchParameter | | PowerShell Workflow | - String <br>- Security.SecureString <br>- INT32 <br>- Boolean <br>- DateTime <br>- Array <br>- Collections.Hashtable <br>- Management.Automation.SwitchParameter | | Graphical PowerShell| - String <br>- INT32 <br>- INT64 <br>- Boolean <br>- Decimal <br>- DateTime <br>- Object |-| Python | - String | | +| Python | - String | ## Configure input parameters in PowerShell runbooks |
automation | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/whats-new.md | You can now restore Runbooks deleted in the past 29 days. [Learn more](manage-ru **Type: Retirement** -On **31 August 2024**, Azure Automation will retire [Automation Update management](https://learn.microsoft.com/answers/questions/1459053/retirement-announcement-azure-automation-update-ma) and [Change Tracking using Log Analytics](https://learn.microsoft.com/answers/questions/1459059/retirement-announcement-azure-automation-change-tr). You must migrate to [Azure Update Manager](../update-manager/overview.md) and [Change tracking and inventory using Azure Monitoring Agent](change-tracking/overview-monitoring-agent.md) respectively before the deprecation date. +On **31 August 2024**, Azure Automation will retire [Automation Update management](/answers/questions/1459053/retirement-announcement-azure-automation-update-ma) and [Change Tracking using Log Analytics](/answers/questions/1459059/retirement-announcement-azure-automation-change-tr). You must migrate to [Azure Update Manager](../update-manager/overview.md) and [Change tracking and inventory using Azure Monitoring Agent](change-tracking/overview-monitoring-agent.md) respectively before the deprecation date. ## November 2023 |
azure-arc | Run Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/run-command.md | Run Command on Azure Arc-enabled servers supports the following operations: |Operation |Description | |||-|[Create](https://review.learn.microsoft.com/en-us/rest/api/hybridcompute/machine-run-commands/create-or-update?view=rest-hybridcompute-2023-10-03-preview&branch=main&tabs=HTTP) |The operation to create a run command. This runs the run command. | -|[Delete](/rest/api/hybridcompute/machine-run-commands/delete?view=rest-hybridcompute-2023-10-03-preview&tabs=HTTP) |The operation to delete a run command. If it's running, delete will also stop the run command. | -|[Get](/rest/api/hybridcompute/machine-run-commands/get?view=rest-hybridcompute-2023-10-03-preview&tabs=HTTP) |The operation to get a run command. | -|[List](/rest/api/hybridcompute/machine-run-commands/list?view=rest-hybridcompute-2023-10-03-preview&tabs=HTTP) |The operation to get all the run commands of an Azure Arc-enabled server. | -|[Update](/rest/api/hybridcompute/machine-run-commands/update?view=rest-hybridcompute-2023-10-03-preview&tabs=HTTP) |The operation to update the run command. This stops the previous run command. | +|[Create](/rest/api/hybridcompute/machine-run-commands/create-or-update?tabs=HTTP) |The operation to create a run command. This runs the run command. | +|[Delete](/rest/api/hybridcompute/machine-run-commands/delete?tabs=HTTP) |The operation to delete a run command. If it's running, delete will also stop the run command. | +|[Get](/rest/api/hybridcompute/machine-run-commands/get?tabs=HTTP) |The operation to get a run command. | +|[List](/rest/api/hybridcompute/machine-run-commands/list?tabs=HTTP) |The operation to get all the run commands of an Azure Arc-enabled server. | +|[Update](/rest/api/hybridcompute/machine-run-commands/update?tabs=HTTP) |The operation to update the run command. This stops the previous run command. | > [!NOTE] > Output and error blobs are overwritten each time the run command script executes. |
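For illustration, the Create operation in the table above could be driven from the CLI with `az rest`. This is only a sketch: the URL follows the `Microsoft.HybridCompute/machines/{machine}/runCommands/{name}` route implied by the linked REST reference, the `properties.source.script` body shape is an assumption modeled on the analogous Azure VM run command resource, and all names are hypothetical placeholders:

```azurecli
# Sketch: create (and run) a run command on an Arc-enabled server via REST.
# URL route and body shape are assumptions based on the linked operation reference.
az rest --method put \
    --url "https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.HybridCompute/machines/<machineName>/runCommands/<runCommandName>?api-version=2023-10-03-preview" \
    --body '{"location": "eastus", "properties": {"source": {"script": "Get-Process | Select-Object -First 5"}}}'
```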
azure-arc | Agent Overview Scvmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/agent-overview-scvmm.md | -When you [enable guest management](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale) on SCVMM VMs, Azure arc agent is installed on the VMs. The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides an architectural overview of Azure connected machine agent. +When you [enable guest management](enable-guest-management-at-scale.md) on SCVMM VMs, the Azure Arc agent is installed on the VMs. The Azure Connected Machine agent enables you to manage your Windows and Linux machines hosted outside of Azure on your corporate network or other cloud providers. This article provides an architectural overview of the Azure Connected Machine agent. ## Agent components The agent requests the following metadata information from Azure: - [Connect your SCVMM server to Azure Arc](/azure/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc). - [Install Arc agent at scale for your SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale).-- [Install Arc agent using a script for SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script).+- [Install Arc agent using a script for SCVMM VMs](/azure/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script). |
azure-arc | Deliver Esus For System Center Virtual Machine Manager Vms | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/deliver-esus-for-system-center-virtual-machine-manager-vms.md | keywords: "VMM, Arc, Azure" # Deliver ESUs for SCVMM VMs through Arc -Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) allows you to enroll all the Windows Server 2012/2012 R2 VMs managed by your SCVMM server in [Extended Security Updates](https://learn.microsoft.com/windows-server/get-started/extended-security-updates-overview) (ESUs) at scale. +Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) allows you to enroll all the Windows Server 2012/2012 R2 VMs managed by your SCVMM server in [Extended Security Updates](/windows-server/get-started/extended-security-updates-overview) (ESUs) at scale. -ESUs allow you to leverage cost flexibility in the form of pay-as-you-go Azure billing and enhanced delivery experience in the form of built-in inventory and keyless delivery. In addition, ESUs enabled by Azure Arc give you access to Azure management services such as [Azure Update Manager](https://learn.microsoft.com/azure/update-manager/overview?tabs=azure-vms), [Azure Automation Change Tracking and Inventory](https://learn.microsoft.com/azure/automation/change-tracking/overview?tabs=python-2), and [Azure Policy Guest Configuration](https://learn.microsoft.com/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) at no additional cost. +ESUs allow you to leverage cost flexibility in the form of pay-as-you-go Azure billing and enhanced delivery experience in the form of built-in inventory and keyless delivery. In addition, ESUs enabled by Azure Arc give you access to Azure management services such as [Azure Update Manager](/azure/update-manager/overview?tabs=azure-vms), [Azure Automation Change Tracking and Inventory](/azure/automation/change-tracking/overview?tabs=python-2), and [Azure Policy Guest Configuration](/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) at no additional cost. This article provides the steps to procure and deliver ESUs to WS 2012 and 2012 R2 SCVMM VMs onboarded to Azure Arc-enabled SCVMM. You can select one or more Arc-enabled SCVMM VMs to link to an ESU license. Once you've linked a VM to an activated ESU license, the VM is eligible to receive Windows Server 2012 and 2012 R2 ESUs. >[!Note]-> You have the flexibility to configure your patching solution of choice to receive these updates – whether it's [Azure Update Manager](https://learn.microsoft.com/azure/update-center/overview), [Windows Server Update Services](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus), Microsoft Updates, [Microsoft Endpoint Configuration Manager](https://learn.microsoft.com/mem/configmgr/core/understand/introduction), or a third-party patch management solution.
+> You have the flexibility to configure your patching solution of choice to receive these updates – whether it's [Azure Update Manager](/azure/update-center/overview), [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus), Microsoft Updates, [Microsoft Endpoint Configuration Manager](/mem/configmgr/core/understand/introduction), or a third-party patch management solution. 1. Select the **Eligible Resources** tab to view a list of all your Arc-enabled server machines running Windows Server 2012 and 2012 R2, including SCVMM machines that are guest management enabled. The **ESUs status** column indicates whether the machine is ESUs enabled. |
azure-arc | Enable Guest Management At Scale | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/enable-guest-management-at-scale.md | In this article, you learn how to install Arc agents at scale for SCVMM VMs and >- SCVMM 2022 UR1 or later >- SCVMM 2019 UR5 or later >- VMs running Windows Server 2012 R2, 2016, 2019, 2022, Windows 10, and Windows 11 ->For other SCVMM versions, Linux VMs or Windows VMs running WS 2012 or earlier, [install Arc agents through the script](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script). +>For other SCVMM versions, Linux VMs or Windows VMs running WS 2012 or earlier, [install Arc agents through the script](install-arc-agents-using-script.md). ## Prerequisites Ensure the following before you install Arc agents at scale for SCVMM VMs: - The user account must have permissions listed in Azure Arc SCVMM Administrator role. - All the target machines are: - Powered on and the resource bridge has network connectivity to the host running the VM.- - Running a [supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). - - Able to connect through the firewall to communicate over the internet and [these URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked. + - Running a [supported operating system](../servers/prerequisites.md#supported-operating-systems). + - Able to connect through the firewall to communicate over the internet and [these URLs](../servers/network-requirements.md?tabs=azure-cloud#urls) aren't blocked. ## Install Arc agents at scale from portal An admin can install agents for multiple machines from the Azure portal if the machines share the same administrator credentials. -1. Navigate to the **SCVMM management servers** blade on [Azure Arc Center](https://ms.portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview), and select the SCVMM management server resource. +1. Navigate to the **SCVMM management servers** blade on [Azure Arc Center](https://portal.azure.com/#view/Microsoft_Azure_HybridCompute/AzureArcCenterBlade/~/overview), and select the SCVMM management server resource. 2. Select all the machines and choose the **Enable in Azure** option. 3. Select **Enable guest management** checkbox to install Arc agents on the selected machine. 4. If you want to connect the Arc agent via proxy, provide the proxy server details. ## Next steps -[Manage VM extensions to use Azure management services for your SCVMM VMs](../servers/manage-vm-extensions.md). +[Manage VM extensions to use Azure management services for your SCVMM VMs](../servers/manage-vm-extensions.md). |
azure-arc | Install Arc Agents Using Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/install-arc-agents-using-script.md | Ensure the following before you install Arc agents using a script for SCVMM VMs: - Is powered on and the resource bridge has network connectivity to the host running the VM. - Is running a [supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). - Is able to connect through the firewall to communicate over the Internet and [these URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked.- - Has Azure CLI [installed](https://learn.microsoft.com/cli/azure/install-azure-cli). + - Has Azure CLI [installed](/cli/azure/install-azure-cli). - Has the Arc agent installation script downloaded from [here](https://download.microsoft.com/download/7/1/6/7164490e-6d8c-450c-8511-f8191f6ec110/arcscvmm-enable-guest-management.ps1) for a Windows VM or from [here](https://download.microsoft.com/download/0/9/b/09bd9ef4-a7af-49e5-ad5f-9e8f85fae75b/arcscvmm-enable-guest-management.sh) for a Linux VM. >[!NOTE] Ensure the following before you install Arc agents using a script for SCVMM VMs: ## Next steps -[Manage VM extensions to use Azure management services for your SCVMM VMs](../servers/manage-vm-extensions.md). +[Manage VM extensions to use Azure management services for your SCVMM VMs](../servers/manage-vm-extensions.md). |
azure-arc | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md | Azure Arc-enabled System Center Virtual Machine Manager also allows you to manag Arc-enabled System Center VMM allows you to: - Perform various VM lifecycle operations such as start, stop, pause, and delete VMs on SCVMM managed VMs directly from Azure.-- Empower developers and application teams to self-serve VM operations on demand using [Azure role-based access control (RBAC)](https://learn.microsoft.com/azure/role-based-access-control/overview).+- Empower developers and application teams to self-serve VM operations on demand using [Azure role-based access control (RBAC)](/azure/role-based-access-control/overview). - Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you with a single pane view for your infrastructure across both environments. - Discover and onboard existing SCVMM managed VMs to Azure. - Install the Arc-connected machine agents at scale on SCVMM VMs to [govern, protect, configure, and monitor them](../servers/overview.md#supported-cloud-operations). |
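As an illustration of the self-service lifecycle operations listed above, the `az scvmm` CLI extension exposes start and stop verbs. A sketch only, assuming the extension is installed and using hypothetical resource names (verify the command names against your extension version):

```azurecli
# Sketch: stop and start an SCVMM VM from Azure (az scvmm extension assumed)
az scvmm vm stop --resource-group myResourceGroup --name myScvmmVm
az scvmm vm start --resource-group myResourceGroup --name myScvmmVm
```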
azure-arc | Quickstart Connect System Center Virtual Machine Manager To Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/quickstart-connect-system-center-virtual-machine-manager-to-arc.md | This QuickStart shows you how to connect your SCVMM management server to Azure A | **Requirement** | **Details** | | | | | **Azure** | An Azure subscription <br/><br/> A resource group in the above subscription where you have the *Owner/Contributor* role. |-| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud with minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](https://learn.microsoft.com/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP isn't supported. | +| **SCVMM** | You need an SCVMM management server running version 2019 or later.<br/><br/> A private cloud with minimum free capacity of 32 GB of RAM, 4 vCPUs with 100 GB of free disk space. <br/><br/> A VM network with internet access, directly or through proxy. Appliance VM will be deployed using this VM network.<br/><br/> Only Static IP allocation is supported and VMM Static IP Pool is required. Follow [these steps](/system-center/vmm/network-pool?view=sc-vmm-2022&preserve-view=true) to create a VMM Static IP Pool and ensure that the Static IP Pool has at least four IP addresses. Dynamic IP allocation using DHCP isn't supported. | | **SCVMM accounts** | An SCVMM admin account that can perform all administrative actions on all objects that VMM manages. <br/><br/> The user should be part of local administrator account in the SCVMM server. If the SCVMM server is installed in a High Availability configuration, the user should be a part of the local administrator accounts in all the SCVMM cluster nodes. <br/><br/>This will be used for the ongoing operation of Azure Arc-enabled SCVMM and the deployment of the Arc Resource bridge VM. | | **Workstation** | The workstation will be used to run the helper script.<br/><br/> A Windows/Linux machine that can access both your SCVMM management server and internet, directly or through proxy.<br/><br/> The helper script can be run directly from the VMM server machine as well.<br/><br/> To avoid network latency issues, we recommend executing the helper script directly in the VMM server machine.<br/><br/> Note that when you execute the script from a Linux machine, the deployment takes a bit longer and you might experience performance issues. | |
azure-arc | Set Up And Manage Self Service Access Scvmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/set-up-and-manage-self-service-access-scvmm.md | The **Azure Arc ScVmm VM Contributor** role is a built-in role that provides per ## Next steps -[Create an Azure Arc VM](https://learn.microsoft.com/azure/azure-arc/system-center-virtual-machine-manager/create-virtual-machine). +[Create an Azure Arc VM](create-virtual-machine.md). |
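The built-in role mentioned above can also be granted from the CLI. A sketch with a hypothetical user and scope:

```azurecli
# Grant self-service VM operations at a hypothetical resource group scope
az role assignment create \
    --role "Azure Arc ScVmm VM Contributor" \
    --assignee user@contoso.com \
    --scope /subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>
```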
azure-arc | Deliver Extended Security Updates For Vmware Vms Through Arc | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/deliver-extended-security-updates-for-vmware-vms-through-arc.md | keywords: "VMware, Arc, Azure" # Deliver ESUs for VMware VMs through Arc -Azure Arc-enabled VMware vSphere allows you to enroll all the Windows Server 2012/2012 R2 VMs managed by your vCenter in [Extended Security Updates](https://learn.microsoft.com/windows-server/get-started/extended-security-updates-overview) (ESUs) at scale. +Azure Arc-enabled VMware vSphere allows you to enroll all the Windows Server 2012/2012 R2 VMs managed by your vCenter in [Extended Security Updates](/windows-server/get-started/extended-security-updates-overview) (ESUs) at scale. -ESUs allow you to leverage cost flexibility in the form of pay-as-you-go Azure billing and enhanced delivery experience in the form of built-in inventory and keyless delivery. In addition, ESUs enabled by Azure Arc give you access to Azure management services such as [Azure Update Manager](https://learn.microsoft.com/azure/update-manager/overview?tabs=azure-vms), [Azure Automation Change Tracking and Inventory](https://learn.microsoft.com/azure/automation/change-tracking/overview?tabs=python-2), and [Azure Policy Guest Configuration](https://learn.microsoft.com/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) at no additional cost. +ESUs allow you to leverage cost flexibility in the form of pay-as-you-go Azure billing and enhanced delivery experience in the form of built-in inventory and keyless delivery. In addition, ESUs enabled by Azure Arc give you access to Azure management services such as [Azure Update Manager](/azure/update-manager/overview?tabs=azure-vms), [Azure Automation Change Tracking and Inventory](/azure/automation/change-tracking/overview?tabs=python-2), and [Azure Policy Guest Configuration](/azure/cloud-adoption-framework/manage/azure-server-management/guest-configuration-policy) at no additional cost. This article provides the steps to procure and deliver ESUs to WS 2012 and 2012 R2 VMware VMs onboarded to Azure Arc-enabled VMware vSphere. You can select one or more Arc-enabled VMware vSphere VMs to link to an ESU license. Once you've linked a VM to an activated ESU license, the VM is eligible to receive Windows Server 2012 and 2012 R2 ESUs. >[!Note]-> You have the flexibility to configure your patching solution of choice to receive these updates – whether it's [Azure Update Manager](https://learn.microsoft.com/azure/update-center/overview), [Windows Server Update Services](https://learn.microsoft.com/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus), Microsoft Updates, [Microsoft Endpoint Configuration Manager](https://learn.microsoft.com/mem/configmgr/core/understand/introduction), or a third-party patch management solution. +> You have the flexibility to configure your patching solution of choice to receive these updates – whether it's [Azure Update Manager](/azure/update-center/overview), [Windows Server Update Services](/windows-server/administration/windows-server-update-services/get-started/windows-server-update-services-wsus), Microsoft Updates, [Microsoft Endpoint Configuration Manager](/mem/configmgr/core/understand/introduction), or a third-party patch management solution. 1.
Select the **Eligible Resources** tab to view a list of all your Arc-enabled server machines running Windows Server 2012 and 2012 R2, including VMware machines that are guest management enabled. The **ESUs status** column indicates whether the machine is ESUs enabled. |
azure-cache-for-redis | Cache Tutorial Semantic Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-semantic-cache.md | + + Title: 'Tutorial: Use Azure Cache for Redis as a semantic cache' +description: In this tutorial, you learn how to use Azure Cache for Redis as a semantic cache. ++++ Last updated : 01/08/2024++#CustomerIntent: As a developer, I want to develop some code using a sample so that I see an example of a semantic cache with an AI-based large language model. +++# Tutorial: Use Azure Cache for Redis as a semantic cache ++In this tutorial, you use Azure Cache for Redis as a semantic cache with an AI-based large language model (LLM). You use Azure OpenAI Service to generate LLM responses to queries and cache those responses using Azure Cache for Redis, delivering faster responses and lowering costs. ++Because Azure Cache for Redis offers built-in vector search capability, you can also perform _semantic caching_. You can return cached responses for identical queries and also for queries that are similar in meaning, even if the text isn't the same. ++In this tutorial, you learn how to: ++> [!div class="checklist"] +> +> - Create an Azure Cache for Redis instance configured for semantic caching +> - Use LangChain and other popular Python libraries. +> - Use Azure OpenAI service to generate text from AI models and cache results. +> - See the performance gains from using caching with LLMs. ++>[!IMPORTANT] +>This tutorial walks you through building a Jupyter Notebook. You can follow this tutorial with a Python code file (.py) and get _similar_ results, but you need to add all of the code blocks in this tutorial into the `.py` file and execute once to see results. In other words, Jupyter Notebooks provide intermediate results as you execute cells; you shouldn't expect that behavior when working in a Python code file. ++>[!IMPORTANT] +>If you would like to follow along in a completed Jupyter notebook instead, [download the Jupyter notebook file named _semanticcache.ipynb_](https://github.com/Azure-Samples/azure-cache-redis-samples/tree/main/tutorial/semantic-cache) and save it into the new _semanticcache_ folder. ++## Prerequisites ++- An Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services?azure-portal=true) ++- Access granted to Azure OpenAI in the desired Azure subscription + Currently, you must apply for access to Azure OpenAI. You can apply for access to Azure OpenAI by completing the form at [https://aka.ms/oai/access](https://aka.ms/oai/access). ++- [Python 3.11.6 or later version](https://www.python.org/) ++- [Jupyter Notebooks](https://jupyter.org/) (optional) ++- An Azure OpenAI resource with the **text-embedding-ada-002 (Version 2)** and **gpt-35-turbo-instruct** models deployed. These models are currently only available in [certain regions](../ai-services/openai/concepts/models.md#model-summary-table-and-region-availability). See the [resource deployment guide](../ai-services/openai/how-to/create-resource.md) for instructions on how to deploy the models. ++## Create an Azure Cache for Redis instance ++Follow the [Quickstart: Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md) guide. On the **Advanced** page, make sure that you added the **RediSearch** module and chose the **Enterprise** Cluster Policy. All other settings can match the default described in the quickstart. ++ It takes a few minutes for the cache to be created.
You can move on to the next step in the meantime. +++## Set up your development environment ++1. Create a folder on your local computer named _semanticcache_ in the location where you typically save your projects. ++1. Create a new Python file (_tutorial.py_) or Jupyter notebook (_tutorial.ipynb_) in the folder. ++1. Install the required Python packages: ++ ```bash + pip install openai langchain redis tiktoken + ``` ++## Create Azure OpenAI models ++Make sure you have two models deployed to your Azure OpenAI resource: ++- An LLM that provides text responses. We use the **GPT-3.5-turbo-instruct** model for this tutorial. ++- An embeddings model that converts queries into vectors to allow them to be compared to past queries. We use the **text-embedding-ada-002 (Version 2)** model for this tutorial. ++See [Deploy a model](/azure/ai-services/openai/how-to/create-resource?pivots=web-portal#deploy-a-model) for more detailed instructions. Record the name you chose for each model deployment. ++## Import libraries and set up connection information ++To successfully make a call against Azure OpenAI, you need an **endpoint** and a **key**. You also need an **endpoint** and a **key** to connect to Azure Cache for Redis. ++1. Go to your Azure OpenAI resource in the Azure portal. ++1. Locate **Endpoint and Keys** in the **Resource Management** section of your Azure OpenAI resource. Copy your endpoint and access key because you need both for authenticating your API calls. An example endpoint is: `https://docs-test-001.openai.azure.com`. You can use either `KEY1` or `KEY2`. ++1. Go to the **Overview** page of your Azure Cache for Redis resource in the Azure portal. Copy your endpoint. ++1. Locate **Access keys** in the **Settings** section. Copy your access key. You can use either `Primary` or `Secondary`. ++1. Add the following code to a new code cell: ++ ```python + # Code cell 2 + + import openai + import redis + import os + import langchain + from langchain.llms import AzureOpenAI + from langchain.embeddings import AzureOpenAIEmbeddings + from langchain.globals import set_llm_cache + from langchain.cache import RedisSemanticCache + import time + + + # Placeholder values: replace each one with your own endpoint, key, or deployment name. + AZURE_ENDPOINT="<your-openai-endpoint>" + API_KEY="<your-openai-key>" + API_VERSION="2023-05-15" + LLM_DEPLOYMENT_NAME="<your-llm-model-name>" + LLM_MODEL_NAME="gpt-35-turbo-instruct" + EMBEDDINGS_DEPLOYMENT_NAME="<your-embeddings-model-name>" + EMBEDDINGS_MODEL_NAME="text-embedding-ada-002" + + REDIS_ENDPOINT = "<your-redis-endpoint>" + REDIS_PASSWORD = "<your-redis-password>" + + ``` ++1. Update the values of `API_KEY` and `AZURE_ENDPOINT` with the key and endpoint values from your Azure OpenAI deployment. ++1. Set `LLM_DEPLOYMENT_NAME` and `EMBEDDINGS_DEPLOYMENT_NAME` to the names of your two models deployed in Azure OpenAI Service. ++1. Update `REDIS_ENDPOINT` and `REDIS_PASSWORD` with the endpoint and key value from your Azure Cache for Redis instance. ++ > [!IMPORTANT] + > We strongly recommend using environmental variables or a secret manager like [Azure Key Vault](/azure/key-vault/general/overview) to pass in the API key, endpoint, and deployment name information. These variables are set in plaintext here for the sake of simplicity. A minimal sketch of the environment variable approach is shown at the end of this tutorial. + +1. Execute code cell 2. ++## Initialize AI models ++Next, you initialize the LLM and embeddings models. ++1.
Add the following code to a new code cell: ++ ```python + # Code cell 3 + + llm = AzureOpenAI( + deployment_name=LLM_DEPLOYMENT_NAME, + model_name="gpt-35-turbo-instruct", + openai_api_key=API_KEY, + azure_endpoint=AZURE_ENDPOINT, + openai_api_version=API_VERSION, + ) + embeddings = AzureOpenAIEmbeddings( + azure_deployment=EMBEDDINGS_DEPLOYMENT_NAME, + model="text-embedding-ada-002", + openai_api_key=API_KEY, + azure_endpoint=AZURE_ENDPOINT, + openai_api_version=API_VERSION + ) + ``` ++1. Execute code cell 3. ++## Set up Redis as a semantic cache ++Next, specify Redis as a semantic cache for your LLM. ++1. Add the following code to a new code cell: ++ ```python + # Code cell 4 + + redis_url = "rediss://:" + REDIS_PASSWORD + "@"+ REDIS_ENDPOINT + set_llm_cache(RedisSemanticCache(redis_url = redis_url, embedding=embeddings, score_threshold=0.05)) + ``` + + > [!IMPORTANT] + > The value of the `score_threshold` parameter determines how similar two queries need to be in order to return a cached result. The lower the number, the more similar the queries need to be. + > You can play around with this value to fine-tune it to your application. + +1. Execute code cell 4. ++## Query and get responses from the LLM ++Finally, query the LLM to get an AI-generated response. If you're using a Jupyter notebook, you can add `%%time` at the top of the cell to output the amount of time taken to execute the code. ++1. Add the following code to a new code cell and execute it: ++ ```python + # Code cell 5 + %%time + response = llm("Please write a poem about cute kittens.") + print(response) + ``` + + You should see output similar to this: ++ ```output + Fluffy balls of fur, + With eyes so bright and pure, + Kittens are a true delight, + Bringing joy into our sight. + + With tiny paws and playful hearts, + They chase and pounce, a work of art, + Their innocence and curiosity, + Fills our hearts with such serenity. + + Their soft meows and gentle purrs, + Are like music to our ears, + They curl up in our laps, + And take the stress away in a snap. + + Their whiskers twitch, they're always ready, + To explore and be adventurous and steady, + With their tails held high, + They're a sight to make us sigh. + + Their tiny faces, oh so sweet, + With button noses and paw-sized feet, + They're the epitome of cuteness, + ... + Cute kittens, a true blessing, + In our hearts, they'll always be reigning. + CPU times: total: 0 ns + Wall time: 2.67 s + ``` + + The `Wall time` shows a value of 2.67 seconds. That's how much real-world time it took to query the LLM and for the LLM to generate a response. ++1. Execute cell 5 again. You should see the exact same output, but with a smaller wall time: ++ ```output + Fluffy balls of fur, + With eyes so bright and pure, + Kittens are a true delight, + Bringing joy into our sight. + + With tiny paws and playful hearts, + They chase and pounce, a work of art, + Their innocence and curiosity, + Fills our hearts with such serenity. + + Their soft meows and gentle purrs, + Are like music to our ears, + They curl up in our laps, + And take the stress away in a snap. + + Their whiskers twitch, they're always ready, + To explore and be adventurous and steady, + With their tails held high, + They're a sight to make us sigh. + + Their tiny faces, oh so sweet, + With button noses and paw-sized feet, + They're the epitome of cuteness, + ... + Cute kittens, a true blessing, + In our hearts, they'll always be reigning.
+ CPU times: total: 0 ns + Wall time: 575 ms + ``` + + The wall time shortens by nearly a factor of five, all the way down to 575 milliseconds. + +1. Change the query from `Please write a poem about cute kittens` to `Write a poem about cute kittens` and run cell 5 again. You should see the _exact same output_ and a _lower wall time_ than the original query. Even though the query changed, the _semantic meaning_ of the query remained the same so the same cached output was returned. This is the advantage of semantic caching! ++## Change the similarity threshold ++1. Try running a similar query with a different meaning, like `Please write a poem about cute puppies`. Notice that the cached result is returned here as well. The semantic meaning of the word `puppies` is close enough to the word `kittens` that the cached result is returned. ++1. The similarity threshold can be modified to determine when the semantic cache should return a cached result and when it should return a new output from the LLM. In code cell 4, change `score_threshold` from `0.05` to `0.01`: ++ ```python + # Code cell 4 ++ redis_url = "rediss://:" + REDIS_PASSWORD + "@"+ REDIS_ENDPOINT + set_llm_cache(RedisSemanticCache(redis_url = redis_url, embedding=embeddings, score_threshold=0.01)) + ``` ++1. Try the query `Please write a poem about cute puppies` again. You should receive a new output that's specific to puppies: ++ ```output + Oh, little balls of fluff and fur + With wagging tails and tiny paws + Puppies, oh puppies, so pure + The epitome of cuteness, no flaws + + With big round eyes that melt our hearts + And floppy ears that bounce with glee + Their playful antics, like works of art + They bring joy to all they see + + Their soft, warm bodies, so cuddly + As they curl up in our laps + Their gentle kisses, so lovingly + Like tiny, wet, puppy taps + + Their clumsy steps and wobbly walks + As they explore the world anew + Their curiosity, like a ticking clock + Always eager to learn and pursue + + Their little barks and yips so sweet + Fill our days with endless delight + Their unconditional love, so complete + ... + For they bring us love and laughter, year after year + Our cute little pups, in every way. + CPU times: total: 15.6 ms + Wall time: 4.3 s + ``` + + You likely need to fine-tune the similarity threshold based on your application to ensure that the right sensitivity is used when determining which queries to cache. +++## Related content ++- [Learn more about Azure Cache for Redis](cache-overview.md) +- Learn more about Azure Cache for Redis [vector search capabilities](./cache-overview-vector-similarity.md) +- [Tutorial: use vector similarity search on Azure Cache for Redis](cache-tutorial-vector-similarity.md) +- [Read how to build an AI-powered app with OpenAI and Redis](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/vector-similarity-search-with-azure-cache-for-redis-enterprise/ba-p/3822059) +- [Build a Q&A app with semantic answers](https://github.com/ruoccofabrizio/azure-open-ai-embeddings-qna) |
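As a supplement to the semantic cache tutorial above: the setup note recommends environment variables or a secret manager over plaintext secrets. Here's a minimal sketch of the environment-variable approach; the variable names (`AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_KEY`, and so on) are illustrative assumptions, not names defined by the tutorial.

```python
import os

# Illustrative environment variable names (assumptions, not part of the tutorial).
# Export them in your shell before launching the notebook or script.
AZURE_ENDPOINT = os.environ["AZURE_OPENAI_ENDPOINT"]
API_KEY = os.environ["AZURE_OPENAI_KEY"]
REDIS_ENDPOINT = os.environ["REDIS_ENDPOINT"]
REDIS_PASSWORD = os.environ["REDIS_PASSWORD"]

# Same URL construction as code cell 4; os.environ raises KeyError up front
# if a variable is missing, instead of failing later with a bad connection.
redis_url = "rediss://:" + REDIS_PASSWORD + "@" + REDIS_ENDPOINT
```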
azure-functions | Create First Function Vs Code Csharp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-csharp.md | description: "Learn how to create a C# function, then publish the local project Last updated 01/05/2023 ms.devlang: csharp-++ai-usage: ai-assisted # Quickstart: Create a C# function in Azure using Visual Studio Code There's also a [CLI-based version](create-first-function-cli-csharp.md) of this Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. +This video shows you how to create a C# function in Azure using VS Code. +> [!VIDEO be75e388-1b74-4051-8a62-132b069a3ec9] ++The steps in the video are also described in the following sections. + ## Configure your environment Before you get started, make sure you have the following requirements in place: |
azure-functions | Create First Function Vs Code Python | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-vs-code-python.md | description: Learn how to create a Python function, then publish the local proje Last updated 05/29/2023 ms.devlang: python-++ai-usage: ai-assisted zone_pivot_groups: python-mode-functions Completing this quickstart incurs a small cost of a few USD cents or less in you There's also a [CLI-based version](create-first-function-cli-python.md) of this article. +This video shows you how to create a Python function in Azure using VS Code. +> [!VIDEO a1e10f96-2940-489c-bc53-da2b915c8fc2] ++The steps in the video are also described in the following sections. + ## Configure your environment Before you begin, make sure that you have the following requirements in place: |
azure-functions | Functions Create Your First Function Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md | ms.assetid: 82db1177-2295-4e39-bd42-763f6082e796 Last updated 02/28/2023 ms.devlang: csharp-++ai-usage: ai-assisted # Quickstart: Create your first C# function in Azure using Visual Studio In this article, you learn how to: Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. +This video shows you how to create a C# function in Azure. +> [!VIDEO efa236ad-db85-4dfc-9f1e-b353c3b09498] ++The steps in the video are also described in the following sections. + ## Prerequisites + [Visual Studio 2022](https://visualstudio.microsoft.com/vs/). Make sure to select the **Azure development** workload during installation. |
azure-functions | Migrate Service Bus Version 4 Version 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/migrate-service-bus-version-4-version-5.md | + + Title: Migrate Azure Service Bus extension for Azure Functions to version 5.x +description: This article shows you how to upgrade your existing function apps using the Azure Service Bus extension version 4.x to be able to use version 5.x of the extension. ++ Last updated : 01/12/2024+zone_pivot_groups: programming-languages-set-functions +++# Migrate function apps from Azure Service Bus extension version 4.x to version 5.x ++This article highlights considerations for upgrading your existing Azure Functions applications that use the Azure Service Bus extension version 4.x to use the newer [extension version 5.x](./functions-bindings-service-bus.md?tabs=extensionv5). Migrating from version 4.x to version 5.x of the Azure Service Bus extension introduces breaking changes for your application. ++> [!IMPORTANT] +> On March 31, 2025, the Azure Service Bus extension version 4.x will be retired. The extension and all applications using the extension will continue to function, but Azure Service Bus will cease to provide further maintenance and support for this extension. We recommend migrating to the latest version 5.x of the extension. ++This article walks you through the process of migrating your function app to run on version 5.x of the Azure Service Bus extension. Because project upgrade instructions are language-dependent, make sure to choose your development language from the selector at the [top of the article](#top). +++## Update the extension version ++.NET Functions uses extensions that are installed in the project as NuGet packages. Depending on your Functions process model, the NuGet package to update varies. ++|Functions process model |Azure Service Bus extension |Recommended version | +||--|--| +|[In-process model](./functions-dotnet-class-library.md)|[Microsoft.Azure.WebJobs.Extensions.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.ServiceBus) |>= 5.13.4 | +|[Isolated worker model](./dotnet-isolated-process-guide.md) |[Microsoft.Azure.Functions.Worker.Extensions.ServiceBus](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.ServiceBus)|>= 5.14.1 | ++Update your `.csproj` project file to use the latest extension version for your process model. The following example `.csproj` files use version 5 of the Azure Service Bus extension.
++### [Isolated worker model](#tab/isolated-process) ++```xml +<Project Sdk="Microsoft.NET.Sdk"> + <PropertyGroup> + <TargetFramework>net7.0</TargetFramework> + <AzureFunctionsVersion>v4</AzureFunctionsVersion> + <OutputType>Exe</OutputType> + </PropertyGroup> + <ItemGroup> + <PackageReference Include="Microsoft.Azure.Functions.Worker" Version="1.14.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Extensions.ServiceBus" Version="5.14.1" /> + <PackageReference Include="Microsoft.Azure.Functions.Worker.Sdk" Version="1.10.0" /> + </ItemGroup> + <ItemGroup> + <None Update="host.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + </None> + <None Update="local.settings.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + <CopyToPublishDirectory>Never</CopyToPublishDirectory> + </None> + </ItemGroup> +</Project> +``` ++### [In-process model](#tab/in-process) ++```xml +<Project Sdk="Microsoft.NET.Sdk"> + <PropertyGroup> + <TargetFramework>net7.0</TargetFramework> + <AzureFunctionsVersion>v4</AzureFunctionsVersion> + </PropertyGroup> + <ItemGroup> + <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.ServiceBus" Version="5.13.4" /> + <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" /> + </ItemGroup> + <ItemGroup> + <None Update="host.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + </None> + <None Update="local.settings.json"> + <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> + <CopyToPublishDirectory>Never</CopyToPublishDirectory> + </None> + </ItemGroup> +</Project> +``` ++## Azure Service Bus SDK changes ++The underlying SDK used by the extension changed to the [Azure.Messaging.ServiceBus](https://www.nuget.org/packages/Azure.Messaging.ServiceBus) SDK. If you were using SDK-related types, see the [Guide for migrating to Azure.Messaging.ServiceBus from Microsoft.Azure.ServiceBus](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/servicebus/Azure.Messaging.ServiceBus/MigrationGuide.md) for more information. +++++## Update the extension bundle ++By default, [extension bundles](./functions-bindings-register.md#extension-bundles) are used by non-.NET function apps to install binding extensions. The Azure Service Bus version 5 extension is part of extension bundle version 4. ++To update your application to use the latest extension bundle, update your `host.json`. The following `host.json` file uses version 4 of the extension bundle. ++```json +{ + "version": "2.0", + "extensionBundle": { + "id": "Microsoft.Azure.Functions.ExtensionBundle", + "version": "[4.*, 5.0.0)" + } +} +``` ++## Modify your function code ++The Azure Functions Azure Service Bus extension version 5 is built on top of the newer Azure.Messaging.ServiceBus SDK, which removed support for the `Message` class. Instead, use the `ServiceBusReceivedMessage` type to receive message metadata from Service Bus queues and subscriptions. ++## Next steps ++- [Run a function when a Service Bus queue or topic message is created (Trigger)](./functions-bindings-service-bus-trigger.md) +- [Send Azure Service Bus messages from Azure Functions (Output binding)](./functions-bindings-service-bus-output.md) |
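The code changes in the migration article above are specific to .NET; non-.NET function apps pick up extension version 5 through extension bundle v4 without code changes. As a minimal sketch, a Python queue trigger (v2 programming model) that would run against that bundle looks like the following, where the queue name and the `ServiceBusConnection` app setting are assumed placeholders, not values from the article:

```python
import logging

import azure.functions as func

app = func.FunctionApp()

# "myqueue" and "ServiceBusConnection" are placeholder names: the queue to
# listen on and the app setting holding the Service Bus connection string.
@app.service_bus_queue_trigger(
    arg_name="msg",
    queue_name="myqueue",
    connection="ServiceBusConnection",
)
def process_message(msg: func.ServiceBusMessage):
    # get_body() returns the raw message payload as bytes.
    logging.info("Received message: %s", msg.get_body().decode("utf-8"))
```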
azure-maps | About Azure Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md | Stay up to date on Azure Maps: [How to use the Get Map Attribution API]: how-to-show-attribution.md [Quickstart: Create a web app]: quick-demo-map-app.md [What is Azure Maps Creator?]: about-creator.md-[v1]: /rest/api/maps/data -[v2]: /rest/api/maps/data-v2 +[v1]: /rest/api/maps/data?view=rest-maps-1.0 +[v2]: /rest/api/maps/data [How to create data registry]: how-to-create-data-registries.md <! REST API Links > [Data registry]: /rest/api/maps/data-registry [Geolocation]: /rest/api/maps/geolocation-[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile +[Get Map Tile]: /rest/api/maps/render/get-map-tile [Get Weather along route API]: /rest/api/maps/weather/getweatheralongroute-[Render]: /rest/api/maps/render-v2 +[Render]: /rest/api/maps/render [REST APIs]: /rest/api/maps/ [Route]: /rest/api/maps/route-[Search]: /rest/api/maps/search +[Search]: /rest/api/maps/search?view=rest-maps-1.0 [Spatial]: /rest/api/maps/spatial-[TilesetID]: /rest/api/maps/render-v2/get-map-tile#tilesetid +[TilesetID]: /rest/api/maps/render/get-map-tile#tilesetid [Timezone]: /rest/api/maps/timezone [Traffic]: /rest/api/maps/traffic <! JavaScript API Links > |
azure-maps | Azure Maps Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md | To learn more about authenticating the Azure Maps Control with Microsoft Entra I [Data]: /rest/api/maps/data [Creator]: /rest/api/maps-creator/ [Spatial]: /rest/api/maps/spatial-[Search]: /rest/api/maps/search +[Search]: /rest/api/maps/search?view=rest-maps-1.0 [Route]: /rest/api/maps/route [How to configure Azure RBAC for Azure Maps]: how-to-manage-authentication.md |
azure-maps | Azure Maps Qps Rate Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md | When QPS limits are reached, an HTTP 429 error is returned. If you're using the [Azure portal]: https://portal.azure.com/ [Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md-[v1]: /rest/api/maps/data -[v2]: /rest/api/maps/data-v2 +[v1]: /rest/api/maps/data?view=rest-maps-1.0 +[v2]: /rest/api/maps/data [Data Registry]: /rest/api/maps/data-registry [How to create data registry]: how-to-create-data-registries.md |
azure-maps | Create Data Source Android Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-android-sdk.md | See the following articles for more code samples to add to your maps: [Polygon layer]: how-to-add-shapes-to-android-map.md [Tile layer]: how-to-add-tile-layer-android-map.md <! REST API Links >-[Road tiles]: /rest/api/maps/render-v2/get-map-tile +[Road tiles]: /rest/api/maps/render/get-map-tile [Traffic incidents]: /rest/api/maps/traffic/gettrafficincidenttile [Traffic flow]: /rest/api/maps/traffic/gettrafficflowtile-[Render - Get Map Tile]: /rest/api/maps/render-v2/get-map-tile +[Render - Get Map Tile]: /rest/api/maps/render/get-map-tile <! External Links > [Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec |
azure-maps | Create Data Source Ios Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-ios-sdk.md | See the following articles for more code samples to add to your maps: [Polygon layer]: Add-polygon-layer-map-ios.md [Tile layer]: how-to-add-tile-layer-android-map.md <! REST API Links >-[Road tiles]: /rest/api/maps/render-v2/get-map-tile +[Road tiles]: /rest/api/maps/render/get-map-tile [Traffic incidents]: /rest/api/maps/traffic/gettrafficincidenttile [Traffic flow]: /rest/api/maps/traffic/gettrafficflowtile-[Render - Get Map Tile]: /rest/api/maps/render-v2/get-map-tile +[Render - Get Map Tile]: /rest/api/maps/render/get-map-tile <! External Links > [Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec |
azure-maps | Create Data Source Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md | See the following articles for more code samples to add to your maps: [Line layer]: map-add-line-layer.md [Mapbox Vector Tile Specification]: https://github.com/mapbox/vector-tile-spec [Polygon layer]: map-add-shape.md-[Render - Get Map Tile]: /rest/api/maps/render-v2/get-map-tile -[Road tiles]: /rest/api/maps/render-v2/get-map-tile +[Render - Get Map Tile]: /rest/api/maps/render/get-map-tile +[Road tiles]: /rest/api/maps/render/get-map-tile [SourceManager]: /javascript/api/azure-maps-control/atlas.sourcemanager [Symbol layer]: map-add-pin.md [Tile layer]: map-add-tile-layer.md |
azure-maps | Extend Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/extend-geojson.md | Review the glossary of common technical terms associated with Azure Maps and loc > [Azure Maps glossary] [GeoJSON spec]: https://tools.ietf.org/html/rfc7946-[Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry +[Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0 [Geofence GeoJSON format]: geofence-geojson.md [Azure Maps glossary]: glossary.md |
azure-maps | Geocoding Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md | Learn more about Azure Maps geocoding: > [!div class="nextstepaction"] > [Azure Maps Search service] -[Search service]: /rest/api/maps/search -[Azure Maps Search service]: /rest/api/maps/search -[Get Search Address]: /rest/api/maps/search/get-search-address +[Search service]: /rest/api/maps/search?view=rest-maps-1.0 +[Azure Maps Search service]: /rest/api/maps/search?view=rest-maps-1.0 +[Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0 |
azure-maps | Geographic Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-scope.md | For information on limiting what regions a SAS token can use in, see [Authentica [Authentication with Azure Maps]: azure-maps-authentication.md#create-sas-tokens [Azure geographies]: https://azure.microsoft.com/global-infrastructure/geographies [Azure Government cloud support]: how-to-use-map-control.md#azure-government-cloud-support-[Search - Get Search Address]: /rest/api/maps/search/get-search-address +[Search - Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0 |
azure-maps | How To Dev Guide Js Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md | main().catch(console.error); [search package]: https://www.npmjs.com/package/@azure-rest/maps-search [search readme]: https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/maps/maps-search-rest/README.md [search sample]: https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/maps/maps-search-rest/samples/v2-beta-[Search service]: /rest/api/maps/search +[Search service]: /rest/api/maps/search?view=rest-maps-1.0 [searchAddress]: /javascript/api/@azure-rest/maps-search/searchaddress [Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account |
azure-maps | How To Render Custom Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md | Similarly, you can change, add, and remove other style modifiers. [Postman]: https://www.postman.com/ [Subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account -[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image +[Get Map Static Image]: /rest/api/maps/render/get-map-static-image [Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md-[path]: /rest/api/maps/render-v2/get-map-static-image#uri-parameters -[pins]: /rest/api/maps/render-v2/get-map-static-image#uri-parameters -[Render]: /rest/api/maps/render-v2/get-map-static-image -[Render - Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image +[path]: /rest/api/maps/render/get-map-static-image#uri-parameters +[pins]: /rest/api/maps/render/get-map-static-image#uri-parameters +[Render]: /rest/api/maps/render/get-map-static-image +[Render - Get Map Static Image]: /rest/api/maps/render/get-map-static-image |
azure-maps | How To Search For Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md | This example demonstrates how to search for a cross street based on the coordina > [Best practices for Azure Maps Search service] [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account-[Azure Maps Search service]: /rest/api/maps/search +[Azure Maps Search service]: /rest/api/maps/search?view=rest-maps-1.0 [Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md [Best Practices for Search]: how-to-use-best-practices-for-search.md#geobiased-search-results-[Entity Types]: /rest/api/maps/search/getsearchaddressreverse#entitytype -[Fuzzy Search URI Parameters]: /rest/api/maps/search/getsearchfuzzy#uri-parameters -[Fuzzy Search]: /rest/api/maps/search/getsearchfuzzy -[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse -[Get Search Address]: /rest/api/maps/search/getsearchaddress -[point of interest result]: /rest/api/maps/search/getsearchpoi#searchpoiresponse +[Entity Types]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0#entitytype +[Fuzzy Search URI Parameters]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0#uri-parameters +[Fuzzy Search]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0 +[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0 +[Get Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0 +[point of interest result]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0#searchpoiresponse [Post Search Address Batch]: /rest/api/maps/search/postsearchaddressbatch-[Post Search Address Reverse Batch]: /rest/api/maps/search/postsearchaddressreversebatch +[Post Search Address Reverse Batch]: /rest/api/maps/search/postsearchaddressreversebatch?view=rest-maps-1.0 [Postman]: https://www.postman.com/-[Reverse Address Search Results]: /rest/api/maps/search/getsearchaddressreverse#searchaddressreverseresult -[Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse -[Reverse Search Parameters]: /rest/api/maps/search/getsearchaddressreverse#uri-parameters -[Road Use Types]: /rest/api/maps/search/getsearchaddressreverse#uri-parameters +[Reverse Address Search Results]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0#searchaddressreverseresult +[Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0 +[Reverse Search Parameters]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0#uri-parameters +[Road Use Types]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0#uri-parameters [Route]: /rest/api/maps/route-[Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet -[Search Address]: /rest/api/maps/search/getsearchaddress +[Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet?view=rest-maps-1.0 +[Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0 [Search Coverage]: geocoding-coverage.md-[Search Polygon API]: /rest/api/maps/search/getsearchpolygon -[Search]: /rest/api/maps/search +[Search Polygon API]: /rest/api/maps/search/getsearchpolygon?view=rest-maps-1.0 +[Search]: /rest/api/maps/search?view=rest-maps-1.0 [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account-[URI Parameter reference]: /rest/api/maps/search/getsearchfuzzy#uri-parameters +[URI Parameter 
reference]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0#uri-parameters [Weather]: /rest/api/maps/weather |
azure-maps | How To Show Attribution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-attribution.md | https://atlas.microsoft.com/map/attribution?subscription-key={Your-Azure-Maps-Su [Android]: how-to-use-android-map-control-library.md [Authentication with Azure Maps]: azure-maps-authentication.md-[Get Map Attribution API]: /rest/api/maps/render-v2/get-map-attribution -[Get Map Attribution]: /rest/api/maps/render-v2/get-map-attribution#tilesetid +[Get Map Attribution API]: /rest/api/maps/render/get-map-attribution +[Get Map Attribution]: /rest/api/maps/render/get-map-attribution#tilesetid [iOS]: how-to-use-ios-map-control-library.md-[Render service]: /rest/api/maps/render-v2 +[Render service]: /rest/api/maps/render [Tileset Create API]: /rest/api/maps-creator/tileset/create-[TilesetID]: /rest/api/maps/render-v2/get-map-attribution#tilesetid +[TilesetID]: /rest/api/maps/render/get-map-attribution#tilesetid [Web]: how-to-use-map-control.md [Zoom levels and tile grid]: zoom-levels-and-tile-grid.md |
azure-maps | How To Use Best Practices For Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md | To learn more, please see: [Azure Maps npm Package]: https://www.npmjs.com/package/azure-maps-rest [Azure Maps Route service]: /rest/api/maps/route [How to use the Service module]: how-to-use-services-module.md-[Point of Interest]: /rest/api/maps/search/getsearchpoi +[Point of Interest]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0 [Post Route Directions API documentation]: /rest/api/maps/route/postroutedirections#supportingpoints [Post Route Directions]: /rest/api/maps/route/postroutedirections [Postman]: https://www.postman.com/downloads/ |
azure-maps | How To Use Best Practices For Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-search.md | To learn more, please see: > [How to build Azure Maps Search service requests](./how-to-search-for-address.md) > [!div class="nextstepaction"]-> [Search service API documentation](/rest/api/maps/search) +> [Search service API documentation](/rest/api/maps/search?view=rest-maps-1.0) -[Search service]: /rest/api/maps/search -[Search Fuzzy]: /rest/api/maps/search/getsearchfuzzy +[Search service]: /rest/api/maps/search?view=rest-maps-1.0 +[Search Fuzzy]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0 [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Postman]: https://www.postman.com/downloads/ [Geocoding coverage]: geocoding-coverage.md-[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse -[POI category search]: /rest/api/maps/search/getsearchpoicategory -[Search Nearby]: /rest/api/maps/search/getsearchnearby -[Get Search Address]: /rest/api/maps/search/getsearchaddress +[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0 +[POI category search]: /rest/api/maps/search/getsearchpoicategory?view=rest-maps-1.0 +[Search Nearby]: /rest/api/maps/search/getsearchnearby?view=rest-maps-1.0 +[Get Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0 [Azure Maps supported languages]: supported-languages.md-[Search Address]: /rest/api/maps/search/getsearchaddress -[Search Polygon service]: /rest/api/maps/search/getsearchpolygon +[Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0 +[Search Polygon service]: /rest/api/maps/search/getsearchpolygon?view=rest-maps-1.0 [Set up a geofence]: tutorial-geofence.md-[Search POIs inside the geometry]: /rest/api/maps/search/postsearchinsidegeometry +[Search POIs inside the geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0 |
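Most of the Search links in the rows above now pin to the 1.0 API view. For readers who want to try the endpoint those references document, here's a minimal sketch of a fuzzy search call using Python's `requests`; the subscription key is a placeholder, and the geobias coordinates are arbitrary example values:

```python
import requests

params = {
    "api-version": "1.0",
    "query": "coffee",
    # Optional geobias, as the best-practices article above describes.
    "lat": 47.6062,
    "lon": -122.3321,
    "subscription-key": "<your-subscription-key>",  # placeholder
}
response = requests.get("https://atlas.microsoft.com/search/fuzzy/json", params=params)
response.raise_for_status()
for result in response.json().get("results", []):
    # POI results carry a name; other result types fall back to their type.
    print(result.get("poi", {}).get("name", result.get("type")))
```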
azure-maps | Map Get Information From Coordinate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-information-from-coordinate.md | See the following articles for full code examples: > [!div class="nextstepaction"] > [Show traffic](./map-show-traffic.md) -[Reverse Address Search API]: /rest/api/maps/search/getsearchaddressreverse +[Reverse Address Search API]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0 [Fetch API]: https://fetch.spec.whatwg.org/ [Create a map]: map-create.md [popup]: /javascript/api/azure-maps-control/atlas.popup#open [Add a popup on the map]: map-add-popup.md [event listener]: /javascript/api/azure-maps-control/atlas.map#events-[Get Search Address Reverse API]: /rest/api/maps/search/getsearchaddressreverse +[Get Search Address Reverse API]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0 [load event listener]: /javascript/api/azure-maps-control/atlas.map#events [setOptions]: /javascript/api/azure-maps-control/atlas.popup#setoptions-popupoptions- [@azure-rest/maps-search]: https://www.npmjs.com/package/@azure-rest/maps-search |
azure-maps | Map Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-search-location.md | The following image is a screenshot showing the results of the two code samples. Learn more about **Fuzzy Search**: > [!div class="nextstepaction"]-> [Azure Maps Fuzzy Search API](/rest/api/maps/search/getsearchfuzzy) +> [Azure Maps Fuzzy Search API](/rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0) Learn more about the classes and methods used in this article: See the following articles for full code examples: > [!div class="nextstepaction"] > [Show directions from A to B](map-route.md) -[Fuzzy search API]: /rest/api/maps/search/getsearchfuzzy +[Fuzzy search API]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0 [Fetch API]: https://fetch.spec.whatwg.org/ [DataSource]: /javascript/api/azure-maps-control/atlas.source.datasource [symbol layer]: /javascript/api/azure-maps-control/atlas.layer.symbollayer [Create a map]: map-create.md-[Get Search Fuzzy rest API]: /rest/api/maps/search/getsearchfuzzy +[Get Search Fuzzy rest API]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0 [setCamera]: /javascript/api/azure-maps-control/atlas.map#setcamera-cameraoptionscameraboundsoptionsanimationoptions- [event listener]: /javascript/api/azure-maps-control/atlas.map#events [BoundingBox]: /javascript/api/azure-maps-control/atlas.data.boundingbox |
azure-maps | Migrate From Bing Maps Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md | Learn more about migrating from Bing Maps to Azure Maps. [Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content [Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes [Pushpin clustering]: #pushpin-clustering-[Render]: /rest/api/maps/render-v2 +[Render]: /rest/api/maps/render [Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins-[road tiles]: /rest/api/maps/render-v2/get-map-tile -[satellite tiles]: /rest/api/maps/render-v2/get-map-static-image +[road tiles]: /rest/api/maps/render/get-map-tile +[satellite tiles]: /rest/api/maps/render/get-map-static-image [Setting the map view]: #setting-the-map-view [Shared Key authentication]: azure-maps-authentication.md#shared-key-authentication [Show traffic data]: #show-traffic-data |
azure-maps | Migrate From Bing Maps Web Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-services.md | Learn more about the Azure Maps REST services. [Best practices for Azure Maps Route service]: how-to-use-best-practices-for-routing.md [Best practices for Azure Maps Search service]: how-to-use-best-practices-for-search.md [free account]: https://azure.microsoft.com/free/-[fuzzy search]: /rest/api/maps/search/get-search-fuzzy +[fuzzy search]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0 [Geolocation API]: /rest/api/maps/geolocation/get-ip-to-location-[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image -[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile +[Get Map Static Image]: /rest/api/maps/render/get-map-static-image +[Get Map Tile]: /rest/api/maps/render/get-map-tile [Get Route Directions]: /rest/api/maps/route/get-route-directions [Get Route Range]: /rest/api/maps/route/get-route-range-[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street -[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse -[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured -[Get Search Address]: /rest/api/maps/search/get-search-address -[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy -[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category -[Get Search POI]: /rest/api/maps/search/get-search-poi -[Get Search Polygon]: /rest/api/maps/search/get-search-polygon +[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street?view=rest-maps-1.0 +[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse?view=rest-maps-1.0 +[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured?view=rest-maps-1.0 +[Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0 +[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0 +[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category?view=rest-maps-1.0 +[Get Search POI]: /rest/api/maps/search/get-search-poi?view=rest-maps-1.0 +[Get Search Polygon]: /rest/api/maps/search/get-search-polygon?view=rest-maps-1.0 [Get Timezone By Coordinates]: /rest/api/maps/timezone/get-timezone-by-coordinates [Get Timezone By ID]: /rest/api/maps/timezone/get-timezone-by-id [Get Timezone Enum IANA]: /rest/api/maps/timezone/get-timezone-enum-iana Learn more about the Azure Maps REST services. 
[Localization support in Azure Maps]: supported-languages.md [Manage authentication in Azure Maps]: how-to-manage-authentication.md [Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md-[nearby search]: /rest/api/maps/search/getsearchnearby +[nearby search]: /rest/api/maps/search/getsearchnearby?view=rest-maps-1.0 [NetTopologySuite]: https://github.com/NetTopologySuite/NetTopologySuite [Post Route Directions Batch]: /rest/api/maps/route/post-route-directions-batch [Post Route Directions]: /rest/api/maps/route/post-route-directions [Post Route Matrix]: /rest/api/maps/route/post-route-matrix-[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch -[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch -[Post Search Along Route]: /rest/api/maps/search/post-search-along-route -[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch -[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry +[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch?view=rest-maps-1.0 +[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch?view=rest-maps-1.0 +[Post Search Along Route]: /rest/api/maps/search/post-search-along-route?view=rest-maps-1.0 +[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch?view=rest-maps-1.0 +[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry?view=rest-maps-1.0 [quadtree tile pyramid math]: zoom-levels-and-tile-grid.md [Render custom data on a raster map]: how-to-render-custom-data.md [Route]: /rest/api/maps/route [Search for a location using Azure Maps Search services]: how-to-search-for-address.md-[Search within geometry]: /rest/api/maps/search/post-search-inside-geometry -[Search]: /rest/api/maps/search +[Search within geometry]: /rest/api/maps/search/post-search-inside-geometry?view=rest-maps-1.0 +[Search]: /rest/api/maps/search?view=rest-maps-1.0 [Snap points to logical route path]: https://samples.azuremaps.com/?sample=snap-points-to-logical-route-path [Spatial operations]: /rest/api/maps/spatial [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account |
azure-maps | Migrate From Google Maps Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md | Learn more about migrating to Azure Maps: [Popup options]: /javascript/api/azure-maps-control/atlas.popupoptions [Popup with Media Content]: https://samples.azuremaps.com/?sample=popup-with-media-content [Popups on Shapes]: https://samples.azuremaps.com/?sample=popups-on-shapes-[Render]: /rest/api/maps/render-v2 +[Render]: /rest/api/maps/render [Reusing Popup with Multiple Pins]: https://samples.azuremaps.com/?sample=reusing-popup-with-multiple-pins-[road tiles]: /rest/api/maps/render-v2/get-map-tile -[satellite tiles]: /rest/api/maps/render-v2/get-map-static-image +[road tiles]: /rest/api/maps/render/get-map-tile +[satellite tiles]: /rest/api/maps/render/get-map-static-image [Search Autosuggest with JQuery UI]: https://samples.azuremaps.com/?sample=search-autosuggest-and-jquery-ui [Search for points of interest]: map-search-location.md [Setting the map view]: #setting-the-map-view |
azure-maps | Migrate From Google Maps Web Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md | Learn more about Azure Maps REST [best practices for search]: how-to-use-best-practices-for-search.md [Calculate routes and directions]: #calculate-routes-and-directions [free account]: https://azure.microsoft.com/free/-[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image -[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile +[Get Map Static Image]: /rest/api/maps/render/get-map-static-image +[Get Map Tile]: /rest/api/maps/render/get-map-tile [Get Route Directions]: /rest/api/maps/route/get-route-directions [Get Route Range]: /rest/api/maps/route/get-route-range-[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street -[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse -[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured -[Get Search Address]: /rest/api/maps/search/get-search-address -[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy -[Get Search Nearby]: /rest/api/maps/search/get-search-nearby -[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category -[Get Search POI]: /rest/api/maps/search/get-search-poi +[Get Search Address Reverse Cross Street]: /rest/api/maps/search/get-search-address-reverse-cross-street?view=rest-maps-1.0 +[Get Search Address Reverse]: /rest/api/maps/search/get-search-address-reverse?view=rest-maps-1.0 +[Get Search Address Structured]: /rest/api/maps/search/get-search-address-structured?view=rest-maps-1.0 +[Get Search Address]: /rest/api/maps/search/get-search-address?view=rest-maps-1.0 +[Get Search Fuzzy]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0 +[Get Search Nearby]: /rest/api/maps/search/get-search-nearby?view=rest-maps-1.0 +[Get Search POI Category]: /rest/api/maps/search/get-search-poi-category?view=rest-maps-1.0 +[Get Search POI]: /rest/api/maps/search/get-search-poi?view=rest-maps-1.0 [Get Timezone By Coordinates]: /rest/api/maps/timezone/get-timezone-by-coordinates [Get Timezone By ID]: /rest/api/maps/timezone/get-timezone-by-id [Get Timezone Enum IANA]: /rest/api/maps/timezone/get-timezone-enum-iana Learn more about Azure Maps REST [NuGet package]: https://www.nuget.org/packages/AzureMapsRestToolkit [Post Route Directions Batch]: /rest/api/maps/route/post-route-directions-batch [Post Route Matrix]: /rest/api/maps/route/post-route-matrix-[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch -[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch -[Post Search Along Route]: /rest/api/maps/search/post-search-along-route -[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch -[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry +[Post Search Address Batch]: /rest/api/maps/search/post-search-address-batch?view=rest-maps-1.0 +[Post Search Address Reverse Batch]: /rest/api/maps/search/post-search-address-reverse-batch?view=rest-maps-1.0 +[Post Search Along Route]: /rest/api/maps/search/post-search-along-route?view=rest-maps-1.0 +[Post Search Fuzzy Batch]: /rest/api/maps/search/post-search-fuzzy-batch?view=rest-maps-1.0 +[Post Search Inside Geometry]: /rest/api/maps/search/post-search-inside-geometry?view=rest-maps-1.0 [Render custom data on a raster map]: how-to-render-custom-data.md-[Render]: 
/rest/api/maps/render-v2/get-map-static-image +[Render]: /rest/api/maps/render/get-map-static-image [Reverse geocode a coordinate]: #reverse-geocode-a-coordinate [Route]: /rest/api/maps/route [Search for a location using Azure Maps Search services]: how-to-search-for-address.md-[Search]: /rest/api/maps/search +[Search]: /rest/api/maps/search?view=rest-maps-1.0 [Spatial operations]: /rest/api/maps/spatial [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Supported map styles]: supported-map-styles.md |
azure-maps | Open Source Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md | Find more open-source Azure Maps projects. [Azure Maps Spyglass Control module]: https://github.com/Azure-Samples/azure-maps-spyglass-control [Azure Maps Swipe Map module]: https://github.com/Azure-Samples/azure-maps-swipe-map [Azure Maps Sync Map module]: https://github.com/Azure-Samples/azure-maps-sync-maps-[Azure Maps tile services]: /rest/api/maps/render-v2/get-map-tile +[Azure Maps tile services]: /rest/api/maps/render/get-map-tile [Bot Framework - Point of Interest skill]: https://github.com/microsoft/botframework-solutions/tree/488093ac2fddf16096171f6a926315aa45e199e7/skills/csharp/pointofinterestskill [BotBuilder Location]: https://github.com/Microsoft/BotBuilder-Location [Cesium JS]: https://cesium.com/cesiumjs/ [Code samples]: /samples/browse/?products=azure-maps-[geocoding services]: /rest/api/maps/search +[geocoding services]: /rest/api/maps/search?view=rest-maps-1.0 [Implement IoT spatial analytics using Azure Maps]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing [leaflet]: https://leafletjs.com [LiveMaps]: https://github.com/Azure-Samples/LiveMaps [OpenLayers]: https://www.openlayers.org/-[tile layers]: /rest/api/maps/render-v2/get-map-tile +[tile layers]: /rest/api/maps/render/get-map-tile |
azure-maps | Power Bi Visual Add Tile Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-tile-layer.md | Add more context to the map: [Web Mapping Services (WMS)]: https://www.opengeospatial.org/standards/wms [Show real-time traffic]: power-bi-visual-show-real-time-traffic.md [Zoom levels and tile grid]: zoom-levels-and-tile-grid.md-[weather radar tile service]: /rest/api/maps/render-v2/get-map-tile +[weather radar tile service]: /rest/api/maps/render/get-map-tile |
azure-maps | Render Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md | The render coverage tables below list the countries/regions that support Azure M > [Zoom levels and tile grid](zoom-levels-and-tile-grid.md) > [!div class="nextstepaction"]-> [Get map tiles](/rest/api/maps/render-v2/get-map-tile) +> [Get map tiles](/rest/api/maps/render/get-map-tile) > [!div class="nextstepaction"] > [Azure Maps routing coverage](routing-coverage.md) [Zoom levels and tile grid]: zoom-levels-and-tile-grid.md [Render v1]: /rest/api/maps/render-[Render v2]: /rest/api/maps/render-v2 +[Render v2]: /rest/api/maps/render |
azure-maps | Supported Map Styles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md | Learn about how to set a map style in Azure Maps: [Android map control]: how-to-use-android-map-control-library.md [Choose a map style]: choose-map-style.md-[Get Map Static Image]: /rest/api/maps/render-v2/get-map-static-image -[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile +[Get Map Static Image]: /rest/api/maps/render/get-map-static-image +[Get Map Tile]: /rest/api/maps/render/get-map-tile [Power BI visual]: power-bi-visual-get-started.md [Web SDK map control]: how-to-use-map-control.md |
azure-maps | Supported Search Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-search-categories.md | When doing a [category search] for points of interest, there are over a hundred | WINERY | winery | | ZOOS\_ARBORETA\_BOTANICAL\_GARDEN | wildlife park, aquatic zoo marine park, arboreta botanical gardens, zoo, zoos, arboreta botanical garden | -[category search]: /rest/api/maps/search/getsearchpoicategory +[category search]: /rest/api/maps/search/getsearchpoicategory?view=rest-maps-1.0 |
azure-maps | Tutorial Create Store Locator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md | To see more code examples and an interactive coding experience: [Simple Store Locator.html]: https://github.com/Azure-Samples/AzureMapsCodeSamples/blob/master/Samples/Tutorials/Simple%20Store%20Locator/Simple%20Store%20Locator.html [data]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Simple%20Store%20Locator/data-[Search service]: /rest/api/maps/search +[Search service]: /rest/api/maps/search?view=rest-maps-1.0 [Spherical Mercator projection]: glossary.md#spherical-mercator-projection [EPSG:3857]: https://epsg.io/3857 [EPSG:4326]: https://epsg.io/4326 |
azure-maps | Tutorial Ev Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md | To learn more about Azure Notebooks, see [Azure Maps REST APIs]: /rest/api/maps [Azure Notebooks]: https://notebooks.azure.com [Azure storage account]: /azure/storage/common/storage-account-create?tabs=azure-portal-[Get Map Image API]: /rest/api/maps/render-v2/get-map-static-image -[Get Map Image service]: /rest/api/maps/render-v2/get-map-static-image +[Get Map Image API]: /rest/api/maps/render/get-map-static-image +[Get Map Image service]: /rest/api/maps/render/get-map-static-image [Get Route Directions API]: /rest/api/maps/route/getroutedirections [Get Route Directions]: /rest/api/maps/route/getroutedirections [Get Route Range API]: /rest/api/maps/route/getrouterange To learn more about Azure Notebooks, see [manage authentication in Azure Maps]: how-to-manage-authentication.md [Matrix Routing API]: /rest/api/maps/route/postroutematrix [Post Route Matrix]: /rest/api/maps/route/postroutematrix-[Post Search Inside Geometry API]: /rest/api/maps/search/postsearchinsidegeometry -[Post Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry +[Post Search Inside Geometry API]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0 +[Post Search Inside Geometry]: /rest/api/maps/search/postsearchinsidegeometry?view=rest-maps-1.0 [Quickstart: Sign in and set a user ID]: https://notebooks.azure.com-[Render - Get Map Image]: /rest/api/maps/render-v2/get-map-static-image +[Render - Get Map Image]: /rest/api/maps/render/get-map-static-image [*requirements.txt*]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/blob/master/AzureMapsJupyterSamples/Tutorials/EV%20Routing%20and%20Reachable%20Range/requirements.txt [routing APIs]: /rest/api/maps/route [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account |
azure-maps | Tutorial Iot Hub Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md | To learn more about how to send device-to-cloud telemetry, and the other way aro [free account]: https://azure.microsoft.com/free/ [general-purpose v2 storage account]: ../storage/common/storage-account-overview.md [Get Geofence]: /rest/api/maps/spatial/getgeofence-[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse +[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0 [How to create data registry]: how-to-create-data-registries.md [IoT Hub message routing]: ../iot-hub/iot-hub-devguide-routing-query-syntax.md [IoT Plug and Play]: ../iot-develop/index.yml To learn more about how to send device-to-cloud telemetry, and the other way aro [rentalCarSimulation]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing/tree/master/src/rentalCarSimulation [resource group]: ../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups [the root of the sample]: https://github.com/Azure-Samples/iothub-to-azure-maps-geofencing-[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse +[Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0 [Send telemetry from a device]: ../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp [Spatial Geofence Get API]: /rest/api/maps/spatial/getgeofence [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account |
azure-maps | Tutorial Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md | The next tutorial demonstrates how to display a route between two locations. [Azure Maps account]: quick-demo-map-app.md#create-an-azure-maps-account [free account]: https://azure.microsoft.com/free/-[Fuzzy Search service]: /rest/api/maps/search/get-search-fuzzy +[Fuzzy Search service]: /rest/api/maps/search/get-search-fuzzy?view=rest-maps-1.0 [manage authentication in Azure Maps]: how-to-manage-authentication.md [MapControlCredential]: /javascript/api/azure-maps-rest/atlas.service.mapcontrolcredential [pipeline]: /javascript/api/azure-maps-rest/atlas.service.pipeline [Route to a destination]: tutorial-route-location.md-[Search API]: /rest/api/maps/search +[Search API]: /rest/api/maps/search?view=rest-maps-1.0 [Search for points of interest]: https://samples.azuremaps.com/?sample=search-for-points-of-interest [search tutorial]: https://github.com/Azure-Samples/AzureMapsCodeSamples/tree/master/Samples/Tutorials/Search [searchURL]: /javascript/api/azure-maps-rest/atlas.service.searchurl |
azure-maps | Understanding Azure Maps Transactions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md | The following table summarizes the Azure Maps services that generate transaction [Conversion]: /rest/api/maps-creator/conversion [Creator table]: #azure-maps-creator [Data registry]: /rest/api/maps/data-registry-[v1]: /rest/api/maps/data -[v2]: /rest/api/maps/data-v2 +[v1]: /rest/api/maps/data?view=rest-maps-1.0 +[v2]: /rest/api/maps/data [How to create data registry]: how-to-create-data-registries.md [Dataset]: /rest/api/maps-creator/dataset [Feature State]: /rest/api/maps-creator/feature-state [Geolocation]: /rest/api/maps/geolocation [Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md [Pricing calculator]: https://azure.microsoft.com/pricing/calculator/-[Render]: /rest/api/maps/render-v2 +[Render]: /rest/api/maps/render [Route]: /rest/api/maps/route-[Search v1]: /rest/api/maps/search -[Search v2]: /rest/api/maps/search-v2 +[Search v1]: /rest/api/maps/search?view=rest-maps-1.0 +[Search v2]: /rest/api/maps/search [Spatial]: /rest/api/maps/spatial [Tileset]: /rest/api/maps-creator/tileset [Timezone]: /rest/api/maps/timezone |
azure-maps | Weather Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md | Radar tiles, showing areas of rain, snow, ice and mixed conditions, are returned [Get Daily Forecast]: /rest/api/maps/weather/get-daily-forecast [Get Daily Indices]: /rest/api/maps/weather/get-daily-indices [Get Hourly Forecast]: /rest/api/maps/weather/get-hourly-forecast-[Get Map Tile]: /rest/api/maps/render-v2/get-map-tile +[Get Map Tile]: /rest/api/maps/render/get-map-tile [Get Minute forecast]: /rest/api/maps/weather/get-minute-forecast [Get Quarter Day Forecast]: /rest/api/maps/weather/get-quarter-day-forecast [Get Weather Along Route]: /rest/api/maps/weather/get-weather-along-route |
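As a rough sketch of requesting a weather radar tile through Get Map Tile, the PowerShell below fetches one tile from the `microsoft.weather.radar.main` tileset. The api-version and the tile coordinates are assumptions; check the Get Map Tile reference for the values your account supports.

```powershell
# Placeholder key; api-version and tile coordinates are assumptions.
$subscriptionKey = "<your-azure-maps-subscription-key>"
$uri = "https://atlas.microsoft.com/map/tile?api-version=2.1" +
       "&tilesetId=microsoft.weather.radar.main" +
       "&zoom=6&x=30&y=29&subscription-key=$subscriptionKey"

# Save the returned radar tile locally as a PNG.
Invoke-WebRequest -Uri $uri -OutFile "radar-tile.png"
```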
azure-maps | Weather Service Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md | To learn more about Azure Notebooks, see [Daily Forecast]: /rest/api/maps/weather/getdailyforecast [EV routing using Azure Notebooks]: tutorial-ev-routing.md [free account]: https://azure.microsoft.com/free/-[Get Map Image service]: /rest/api/maps/render-v2/get-map-static-image +[Get Map Image service]: /rest/api/maps/render/get-map-static-image [manage authentication in Azure Maps]: how-to-manage-authentication.md-[Render - Get Map Image]: /rest/api/maps/render-v2/get-map-static-image +[Render - Get Map Image]: /rest/api/maps/render/get-map-static-image [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Weather Maps Jupyter Notebook repository]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data [weather_dataset_demo.csv]: https://github.com/Azure-Samples/Azure-Maps-Jupyter-Notebook/tree/master/AzureMapsJupyterSamples/Tutorials/Analyze%20Weather%20Data/data |
azure-maps | Weather Services Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-services-concepts.md | The following table lists the available Index groups (indexGroupId): [Azure Maps Weather services coverage]: weather-coverage.md [Azure Maps Weather services frequently asked questions (FAQ)]: weather-services-faq.yml [Get Daily Indices API]: /rest/api/maps/weather-[Get Map Tile v2 API]: /rest/api/maps/render-v2/get-map-tile +[Get Map Tile v2 API]: /rest/api/maps/render/get-map-tile [Index IDs and index groups IDs]: #index-ids-and-index-groups-ids [Weather services API]: /rest/api/maps/weather [Weather services]: /rest/api/maps/weather |
azure-maps | Zoom Levels And Tile Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/zoom-levels-and-tile-grid.md | Learn more about geospatial concepts: [EPSG:3857]: https://epsg.io/3857 [Web SDK: Map pixel and position calculations]: /javascript/api/azure-maps-control/atlas.map#pixelstopositions-pixel [Add a tile layer]: map-add-tile-layer.md-[Get map tiles]: /rest/api/maps/render-v2/get-map-tile +[Get map tiles]: /rest/api/maps/render/get-map-tile [Get traffic flow tiles]: /rest/api/maps/traffic/gettrafficflowtile [Get traffic incident tiles]: /rest/api/maps/traffic/gettrafficincidenttile [Azure Maps glossary]: glossary.md |
azure-monitor | Azure Monitor Agent Migration Tools | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md | Use the DCR Config Generator tool to parse Log Analytics Agent configuration fro ### Prerequisites/Setup 1. `PowerShell version 7.1.3` or higher is recommended (minimum version 5.1)-2. Uses `Az Powershell module` to pull workspace agent configuration information [Az PowerShell module](https://learn.microsoft.com/powershell/azure/install-azps-windows?view=azps-11.0.0&tabs=powershell&pivots=windows-psgallery) +2. Uses the [Az PowerShell module](/powershell/azure/install-azps-windows?tabs=powershell&pivots=windows-psgallery) to pull workspace agent configuration information 3. You need Read/Write access to the specified workspace resource 4. Connect-AzAccount and Select-AzSubscription are used to set the context for the script to run, so proper Azure credentials are needed |
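Because the generator relies on an Az session context (items 2 and 4 in the prerequisites), a minimal sketch of establishing that context before running the script might look like the following; the subscription ID is a placeholder.

```azurepowershell
# Sign in and pin the session to the subscription that contains the workspace.
Connect-AzAccount
Select-AzSubscription -SubscriptionId "00000000-0000-0000-0000-000000000000"

# Confirm the context that the script will run under.
Get-AzContext
```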
azure-monitor | Azure Monitor Agent Mma Removal Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-mma-removal-tool.md | The utility works in two steps. You do all the setup steps in [Visual Studio Code](https://code.visualstudio.com/) with the [PowerShell Extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.PowerShell). - Windows 10+ or Windows Server 2019+ - PowerShell 5.0 or higher. Check the version by running `$PSVersionTable` and checking the PS Version+ - PowerShell. The language mode must be set to `FullLanguage`. Check the mode by running `$ExecutionContext.SessionState.LanguageMode` in PowerShell. For more details, see [about_Language_Modes](/powershell/module/microsoft.powershell.core/about/about_language_modes?source=recommendations) + - Bicep. The setup scripts use Bicep to automate the installation. Check the installation by running `bicep --version`. See [install in PowerShell](/azure/azure-resource-manager/bicep/install#azure-powershell) - A [User-Assigned Managed Identity (MI)](/azure/active-directory/managed-identities-azure-resources/overview) that has 'Reader', 'Virtual Machine Contributor', and 'Azure Arc ScVmm VM Contributor' access on the configured target scopes. - A new Resource Group to contain all the Azure resources created automatically by the setup automation. - For granting the remediation user-assigned MI the above-mentioned roles on the target scopes, Point the current path to the folder containing the extracted deployment package ``` 2. Installing required Az modules.-Az modules contain cmdlets to deploy Azure resources, which are used to create resources. Install the required Az PowerShell Modules using this command. For more details of Az Modules, refer [link](https://docs.microsoft.com/powershell/azure/install-az-ps). You must point current path to the extracted folder location. +Az modules contain cmdlets to deploy Azure resources, which are used to create resources. Install the required Az PowerShell modules using this command. For more details about Az modules, see [Install the Az PowerShell module](/powershell/azure/install-az-ps). You must set the current path to the extracted folder location. ``` PowerShell Set-Prerequisites |
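As a quick way to confirm the prerequisites above before running the setup scripts, a short check like the following can help; it assumes the Bicep CLI is on your PATH.

```azurepowershell
# Verify each prerequisite called out above.
$PSVersionTable.PSVersion                   # PowerShell 5.0 or higher
$ExecutionContext.SessionState.LanguageMode # must report FullLanguage
bicep --version                             # errors if Bicep isn't installed
```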
azure-monitor | Itsmc Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-definition.md | To create an action group: 1. In the **Work Item** type field, select **Incident**. > [!NOTE]- > As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow. For information on the deprecated behavior, see [Use Azure alerts to create a ServiceNow alert or event work item](https://learn.microsoft.com/previous-versions/azure/azure-monitor/alerts/alerts-create-itsm-work-items). - > As of October 2023, we are not supporting UI creation of connector for using ITSM actions to send alerts and events to ServiceNow. Until full deprecation the action creation should be by [API](https://learn.microsoft.com/rest/api/monitor/action-groups/create-or-update?view=rest-monitor-2021-09-01&tabs=HTTP). + > As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow. For information on the deprecated behavior, see [Use Azure alerts to create a ServiceNow alert or event work item](/previous-versions/azure/azure-monitor/alerts/alerts-create-itsm-work-items). + > As of October 2023, creating a connector for using ITSM actions to send alerts and events to ServiceNow is no longer supported in the UI. Until full deprecation, create the action by using the [API](/rest/api/monitor/action-groups/create-or-update?tabs=HTTP). 1. In the last section of the interface for creating an ITSM action group, if the alert is a log alert, you can define how many work items are created for each alert. For all other alert types, one work item is created per alert. |
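For illustration, the following sketch creates an action group with an ITSM receiver through the create-or-update REST API by using `Invoke-AzRestMethod`. All names, IDs, the region, and the ticket configuration are placeholders; consult the REST reference for the full `ItsmReceiver` schema before using it.

```azurepowershell
# Sketch: create an action group with an ITSM receiver via the REST API.
$path = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" +
        "/providers/Microsoft.Insights/actionGroups/<action-group-name>" +
        "?api-version=2021-09-01"

$body = @{
    location   = "Global"
    properties = @{
        groupShortName = "itsmag"
        enabled        = $true
        itsmReceivers  = @(@{
            name                = "MyItsmReceiver"       # placeholder
            workspaceId         = "<workspace-id>"       # placeholder
            connectionId        = "<itsm-connection-id>" # placeholder
            ticketConfiguration = '{"PayloadRevision":0,"WorkItemType":"Incident"}'
            region              = "eastus"               # placeholder
        })
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```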
azure-monitor | Itsmc Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/itsmc-overview.md | Depending on your integration, start connecting to your ITSM tool with these ste > [!NOTE] > As of September 2022, we are starting the 3-year process of deprecating support for using ITSM actions to send alerts and events to ServiceNow. For information about legal terms and the privacy policy, see the [Microsoft privacy statement](https://go.microsoft.com/fwLink/?LinkID=522330&clcid=0x9).- > As of October 2023, we are not supporting UI creation of connector for using ITSM actions to send alerts and events to ServiceNow. Until full deprecation the action creation should be by [API](https://learn.microsoft.com/rest/api/monitor/action-groups/create-or-update?view=rest-monitor-2021-09-01&tabs=HTTP). + > As of October 2023, creating a connector for using ITSM actions to send alerts and events to ServiceNow is no longer supported in the UI. Until full deprecation, create the action by using the [API](/rest/api/monitor/action-groups/create-or-update?tabs=HTTP). 1. Connect to your ITSM. For more information, see the [ServiceNow connection instructions](./itsmc-connections-servicenow.md). 1. (Optional) Set up the IP ranges. To list the ITSM IP addresses to allow ITSM connections from partner ITSM tools, list the whole public IP range of the Azure region where the Log Analytics workspace belongs. For more information, see the [Microsoft Download Center](https://www.microsoft.com/en-us/download/details.aspx?id=56519). For regions EUS/WEU/WUS2/US South Central, the customer can list the ActionGroup network tag only. |
azure-monitor | Proactive Failure Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/proactive-failure-diagnostics.md | Failure Anomalies detection relies on a proprietary machine learning algorithm, ### Alert rule creation A Failure Anomalies alert rule is created automatically when your Application Insights resource is created. The rule is automatically configured to analyze the telemetry on that resource.-You can create the rule again using Azure [REST API](https://learn.microsoft.com/rest/api/monitor/smart-detector-alert-rules?view=rest-monitor-2019-06-01&preserve-view=true) or using a [Resource Manager template](proactive-arm-config.md#failure-anomalies-alert-rule). Creating the rule can be useful if the automatic creation of the rule failed for some reason, or if you deleted the rule. +You can create the rule again using Azure [REST API](/rest/api/monitor/smart-detector-alert-rules?view=rest-monitor-2019-06-01&preserve-view=true) or using a [Resource Manager template](proactive-arm-config.md#failure-anomalies-alert-rule). Creating the rule can be useful if the automatic creation of the rule failed for some reason, or if you deleted the rule. ### Alert rule configuration To configure a Failure Anomalies alert rule in the portal, open the Alerts page and select Alert Rules. Failure Anomalies alert rules are included along with any alerts that you set manually. |
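As a sketch of re-creating the rule through the REST API, the following `Invoke-AzRestMethod` call targets the smart detector alert rules endpoint. The subscription, resource group, resource names, and action group ID are placeholders; the payload mirrors the shape shown in the Resource Manager template linked above.

```azurepowershell
# Sketch: re-create a Failure Anomalies rule for an Application Insights resource.
$appInsightsId = "/subscriptions/<sub-id>/resourceGroups/<rg>" +
                 "/providers/microsoft.insights/components/<app-insights-name>"
$path = "/subscriptions/<sub-id>/resourceGroups/<rg>" +
        "/providers/microsoft.alertsManagement/smartDetectorAlertRules" +
        "/failure-anomalies-<app-insights-name>?api-version=2019-06-01"

$body = @{
    location   = "global"
    properties = @{
        description  = "Detects a spike in the failure rate of requests or dependencies."
        state        = "Enabled"
        severity     = "Sev3"
        frequency    = "PT1M"
        detector     = @{ id = "FailureAnomaliesDetector" }
        scope        = @($appInsightsId)
        actionGroups = @{ groupIds = @("<action-group-resource-id>") }
    }
} | ConvertTo-Json -Depth 5

Invoke-AzRestMethod -Path $path -Method PUT -Payload $body
```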
azure-monitor | Resource Manager Alerts Service Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/resource-manager-alerts-service-health.md | Points to note: 1. The 'scopes' of a service health alert rule can only contain a single subscription, which must be the same subscription in which the rule is created. Multiple subscriptions, a resource group, or other types of scope aren't supported. 1. You can create service health alert rules only in the "Global" location. 1. The "properties.incidentType", "properties.impactedServices[*].ServiceName" and "properties.impactedServices[*].ImpactedRegions[*].RegionName" clauses within the rule condition are optional. You can remove these clauses to be notified on events sent for all incident types, all services, and/or all regions, respectively.-1. The service names used in the "properties.impactedServices[*].ServiceName" must be a valid Azure service name. A list of valid names can be retrieved at the [Resource Health Metadata List API](https://learn.microsoft.com/rest/api/resourcehealth/metadata/list) +1. The service names used in the "properties.impactedServices[*].ServiceName" must be a valid Azure service name. A list of valid names can be retrieved at the [Resource Health Metadata List API](/rest/api/resourcehealth/metadata/list) ```json |
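To see the valid service names for the "properties.impactedServices[*].ServiceName" clause, you can query the metadata endpoint directly. A hedged sketch follows; the api-version shown is an assumption, so confirm the current version in the Resource Health REST reference.

```azurepowershell
# Sketch: retrieve Resource Health metadata, which includes valid service names.
# The api-version is an assumption; check the REST reference for the current one.
$response = Invoke-AzRestMethod -Method GET `
    -Path "/providers/Microsoft.ResourceHealth/metadata?api-version=2022-10-01"

# Inspect the returned entities for the supported service-name values.
$response.Content | ConvertFrom-Json | ConvertTo-Json -Depth 10
```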
azure-monitor | Availability Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md | To create a new file, right-click under your timer trigger function (for example 1. Define the `REGION_NAME` environment variable as a valid Azure availability location. - Run the following command in the [Azure CLI](https://learn.microsoft.com/cli/azure/account?view=azure-cli-latest#az-account-list-locations&preserve-view=true) to list available regions. + Run the following command in the [Azure CLI](/cli/azure/account?view=azure-cli-latest#az-account-list-locations&preserve-view=true) to list available regions. ```azurecli az account list-locations -o table |
azure-monitor | Opentelemetry Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-overview.md | Codeless / Agent-based | Autoinstrumentation Traces | Logs Requests | Server Spans Dependencies | Other Span Types (Client, Internal, etc.)+Operation ID | Trace ID +ID or Operation Parent ID | Span ID [!INCLUDE [azure-monitor-app-insights-opentelemetry-support](../includes/azure-monitor-app-insights-opentelemetry-support.md)] |
azure-monitor | Container Insights Livedata Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-livedata-overview.md | Title: View live data with Container insights description: This article describes the real-time view of Kubernetes logs, events, and pod metrics without using kubectl in Container insights. Previously updated : 05/24/2022 Last updated : 01/12/2024 |
azure-monitor | Vminsights Enable Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-overview.md | Title: Enable VM insights overview -description: Learn how to deploy and configure VM insights and find out about the system requirements. + Title: Enable VM Insights overview +description: Learn how to deploy and configure VM Insights and find out about the system requirements. -# Enable VM insights overview +# Enable VM Insights overview -This article provides an overview of how to enable VM insights to monitor the health and performance of: +This article provides an overview of how to enable VM Insights to monitor the health and performance of: - Azure virtual machines. - Azure Virtual Machine Scale Sets. This article provides an overview of how to enable VM insights to monitor the he - Virtual machines hosted in another cloud environment. > [!NOTE]-> Configuring a Log Analytics workspace for using VM insights by using the Log Analytics agent is no longer supported. +> Configuring a Log Analytics workspace for using VM Insights by using the Log Analytics agent is no longer supported. ## Installation options and supported machines -The following table shows the installation methods available for enabling VM insights on supported machines. +The following table shows the installation methods available for enabling VM Insights on supported machines. | Method | Scope | |:|:| The following table shows the installation methods available for enabling VM ins ## Supported Azure Arc machines -VM insights is available for Azure Arc-enabled servers in regions where the Arc extension service is available. You must be running version 0.9 or above of the Azure Arc agent. +VM Insights is available for Azure Arc-enabled servers in regions where the Arc extension service is available. You must be running version 0.9 or above of the Azure Arc agent. ## Supported operating systems -VM insights supports all operating systems supported by the Dependency agent and either Azure Monitor Agent or Log Analytics agent. For a complete list of operating systems supported by Azure Monitor Agent and Log Analytics agent, see [Azure Monitor agent overview](../agents/agents-overview.md#supported-operating-systems). +VM Insights supports all operating systems supported by the Dependency agent and either Azure Monitor Agent or Log Analytics agent. For a complete list of operating systems supported by Azure Monitor Agent and Log Analytics agent, see [Azure Monitor agent overview](../agents/agents-overview.md#supported-operating-systems). Dependency Agent supports the same [Windows versions that Azure Monitor Agent supports](../agents/agents-overview.md#supported-operating-systems), except Windows Server 2008 SP2 and Azure Stack HCI. For Dependency Agent Linux support, see [Dependency Agent Linux support](../vm/vminsights-dependency-agent-maintenance.md#dependency-agent-linux-support). > [!IMPORTANT]-> If the Ethernet device for your virtual machine has more than nine characters, it won't be recognized by VM insights and data won't be sent to the InsightsMetrics table. The agent will collect data from [other sources](../agents/agent-data-sources.md). +> If the Ethernet device for your virtual machine has more than nine characters, it won't be recognized by VM Insights and data won't be sent to the InsightsMetrics table. The agent will collect data from [other sources](../agents/agent-data-sources.md). 
### Linux considerations -See the following list of considerations on Linux support of the Dependency agent that supports VM insights: +See the following list of considerations on Linux support of the Dependency agent that supports VM Insights: - Only default and SMP Linux kernel releases are supported. - Nonstandard kernel releases, such as physical address extension (PAE) and Xen, aren't supported for any Linux distribution. For example, a system with the release string of *2.6.16.21-0.8-xen* isn't supported. Output for this command will look similar to the following and specify whether a ## Agents -When you enable VM insights for a machine, the following agents are installed. For the network requirements for these agents, see [Network requirements](../agents/log-analytics-agent.md#network-requirements). +When you enable VM Insights for a machine, the following agents are installed. For the network requirements for these agents, see [Network requirements](../agents/log-analytics-agent.md#network-requirements). > [!IMPORTANT]-> VM insights support for the Azure Monitor agent is currently in public preview. The Azure Monitor agent has several advantages over the Log Analytics agent. It's the preferred agent for virtual machines and virtual machine scale sets. For a comparison of the agent and information on migrating, see [Migrate to Azure Monitor agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md). +> Azure Monitor Agent has several advantages over the legacy Log Analytics agent, which will be deprecated by August 2024. After this date, Microsoft will no longer provide any support for the Log Analytics agent. [Migrate to Azure Monitor agent](../agents/azure-monitor-agent-migration.md) before August 2024 to continue ingesting data. + - **[Azure Monitor agent](../agents/azure-monitor-agent-overview.md) or [Log Analytics agent](../agents/log-analytics-agent.md):** Collects data from the virtual machine or Virtual Machine Scale Set and delivers it to the Log Analytics workspace.-- **Dependency agent**: Collects discovered data about processes running on the virtual machine and external process dependencies, which are used by the [Map feature in VM insights](../vm/vminsights-maps.md). The Dependency agent relies on the Azure Monitor agent or Log Analytics agent to deliver its data to Azure Monitor.+- **Dependency agent**: Collects discovered data about processes running on the virtual machine and external process dependencies, which are used by the [Map feature in VM Insights](../vm/vminsights-maps.md). The Dependency agent relies on the Azure Monitor agent or Log Analytics agent to deliver its data to Azure Monitor. ### Network requirements When you enable VM insights for a machine, the following agents are installed. F - The Dependency agent requires a connection from the virtual machine to the address 169.254.169.254. This address identifies the Azure metadata service endpoint. Ensure that firewall settings allow connections to this endpoint. ## Data collection rule -When you enable VM insights on a machine with the Azure Monitor agent, you must specify a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) to use. The DCR specifies the data to collect and the workspace to use. VM insights creates a default DCR if one doesn't already exist. For more information on how to create and edit the VM insights DCR, see [Enable VM insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent). 
+When you enable VM Insights on a machine with the Azure Monitor agent, you must specify a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md) to use. The DCR specifies the data to collect and the workspace to use. VM Insights creates a default DCR if one doesn't already exist. For more information on how to create and edit the VM Insights DCR, see [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent). The DCR is defined by the options in the following table. | Option | Description | |:|:| | Guest performance | Specifies whether to collect [performance data](/azure/azure-monitor/vm/vminsights-performance) from the guest operating system. This option is required for all machines. The collection interval for performance data is every 60 seconds.|-| Processes and dependencies | Collects information about processes running on the virtual machine and dependencies between machines. This information enables the [Map feature in VM insights](vminsights-maps.md). This is optional and enables the [VM insights Map feature](vminsights-maps.md) for the machine. | -| Log Analytics workspace | Workspace to store the data. Only workspaces with VM insights are listed. | +| Processes and dependencies | Collects information about processes running on the virtual machine and dependencies between machines. This information enables the [Map feature in VM Insights](vminsights-maps.md). This is optional and enables the [VM Insights Map feature](vminsights-maps.md) for the machine. | +| Log Analytics workspace | Workspace to store the data. Only workspaces with VM Insights are listed. | > [!IMPORTANT]-> Don't create your own DCR to support VM insights. The DCR created by VM insights includes a special data stream required for its operation. You can edit this DCR to collect more data, such as Windows and Syslog events, but you should create more DCRs and associate them with the machine. +> VM Insights automatically creates a DCR that includes a special data stream required for its operation. Do not modify the VM Insights DCR or create your own DCR to support VM Insights. To collect additional data, such as Windows and Syslog events, create separate DCRs and associate them with your machines. -If you associate a data collection rule with the Map feature enabled to a machine on which Dependency Agent isn't installed, the Map view won't be available. To enable the Map view, set `enableAMA property = true` in the Dependency Agent extension when you install Dependency Agent. We recommend following the procedure described in [Enable VM insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent). +If you associate a data collection rule that has the Map feature enabled with a machine on which Dependency Agent isn't installed, the Map view won't be available. To enable the Map view, set the `enableAMA` property to `true` in the Dependency Agent extension when you install Dependency Agent. We recommend following the procedure described in [Enable VM Insights for Azure Monitor Agent](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent).
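As a sketch of the `enableAMA` setting described above, the following installs the Dependency agent extension on a Windows VM by using Azure PowerShell. The resource names are placeholders, and the type handler version might differ in your environment.

```azurepowershell
# Sketch: install the Dependency agent extension with enableAMA set to true,
# so Map data flows through Azure Monitor Agent. Names are placeholders.
Set-AzVMExtension -ResourceGroupName "<resource-group>" `
    -VMName "<vm-name>" `
    -Location "<region>" `
    -Name "DependencyAgentWindows" `
    -Publisher "Microsoft.Azure.Monitoring.DependencyAgent" `
    -ExtensionType "DependencyAgentWindows" `
    -TypeHandlerVersion "9.10" `
    -Settings @{ enableAMA = "true" }
```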
## Migrate from Log Analytics agent to Azure Monitor Agent If you associate a data collection rule with the Map feature enabled to a machin > Collecting duplicate data from a single machine with both Azure Monitor Agent and Log Analytics agent can result in: > > - Extra ingestion costs from sending duplicate data to the Log Analytics workspace.- > - Inaccuracy in the Map feature of VM insights because the feature doesn't check for duplicate data. + > - Inaccuracy in the Map feature of VM Insights because the feature doesn't check for duplicate data. - You must remove the Log Analytics agent yourself from any machines that are using it. Before you do this step, ensure that the machine isn't relying on any other solutions that require the Log Analytics agent. For more information, see [Migrate to Azure Monitor Agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md). For more information about data collection and usage, see the [Microsoft Online ## Next steps -To learn how to use the Performance monitoring feature, see [View VM insights Performance](../vm/vminsights-performance.md). To view discovered application dependencies, see [View VM insights Map](../vm/vminsights-maps.md). +To learn how to use the Performance monitoring feature, see [View VM Insights Performance](../vm/vminsights-performance.md). To view discovered application dependencies, see [View VM Insights Map](../vm/vminsights-maps.md). |
azure-monitor | Vminsights Enable Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-enable-powershell.md | Optional Arguments: + `-Confirm [<SwitchParameter>]` Confirm each action in the script. + `-Approve [<SwitchParameter>]` Provide the approval for the installation to start with no confirmation prompt for the listed VM's/Virtual Machine Scale Sets. -The script supports wildcards for `-Name` and `-ResourceGroup`. For example, `-Name vm*` enables VM insights for all VMs and Virtual Machine Scale Sets that start with "vm". For more information, see [Wildcards in Windows PowerShell](https://learn.microsoft.com/powershell/module/microsoft.powershell.core/about/about_wildcards). +The script supports wildcards for `-Name` and `-ResourceGroup`. For example, `-Name vm*` enables VM insights for all VMs and Virtual Machine Scale Sets that start with "vm". For more information, see [Wildcards in Windows PowerShell](/powershell/module/microsoft.powershell.core/about/about_wildcards). Example: ```azurepowershell Optional Arguments: + `-Confirm [<SwitchParameter>]` Confirm each action in the script. + `-Approve [<SwitchParameter>]` Provide the approval for the installation to start with no confirmation prompt for the listed VM's/Virtual Machine Scale Sets. -The script supports wildcards for `-Name` and `-ResourceGroup`. For example, `-Name vm*` enables VM insights for all VMs and Virtual Machine Scale Sets that start with "vm". For more information, see [Wildcards in Windows PowerShell](https://learn.microsoft.com/powershell/module/microsoft.powershell.core/about/about_wildcards). +The script supports wildcards for `-Name` and `-ResourceGroup`. For example, `-Name vm*` enables VM insights for all VMs and Virtual Machine Scale Sets that start with "vm". For more information, see [Wildcards in Windows PowerShell](/powershell/module/microsoft.powershell.core/about/about_wildcards). Example: |
azure-vmware | Configure Azure Elastic San | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-azure-elastic-san.md | In this section, you create a virtual network for your Elastic SAN. Then you cre 1. Use one of the following instruction options to set up an Elastic SAN, your dedicated volume group, and initial volume in that group: > [!IMPORTANT] > Create your Elastic SAN in the same region and availability zone as your private cloud for best performance.- - [Azure portal](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal) - - [PowerShell](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-powershell) - - [Azure CLI](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-create?tabs=azure-cli) + - [Azure portal](/azure/storage/elastic-san/elastic-san-create?tabs=azure-portal) + - [PowerShell](/azure/storage/elastic-san/elastic-san-create?tabs=azure-powershell) + - [Azure CLI](/azure/storage/elastic-san/elastic-san-create?tabs=azure-cli) 1. Use one of the following instructions to configure a Private Endpoint (PE) for your Elastic SAN:- - [PowerShell](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-networking?tabs=azure-powershell#configure-a-private-endpoint) - - [Azure CLI](https://learn.microsoft.com/azure/storage/elastic-san/elastic-san-networking?tabs=azure-cli#tabpanel_2_azure-cli) + - [PowerShell](/azure/storage/elastic-san/elastic-san-networking?tabs=azure-powershell#configure-a-private-endpoint) + - [Azure CLI](/azure/storage/elastic-san/elastic-san-networking?tabs=azure-cli#tabpanel_2_azure-cli) ## Add an Elastic SAN volume as a datastore |
azure-vmware | Configure External Identity Source Nsx T | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-external-identity-source-nsx-t.md | Title: Configure external identity source for NSX-T Data Center -description: Learn how to use the Azure VMware Solution to configure an external identity source for NSX-T Data Center. + Title: Set an external identity source for NSX-T Data Center +description: Learn how to use Azure VMware Solution to set an external identity source for VMware NSX-T Data Center. Last updated 11/06/2023 -- -# Configure an external identity source for NSX-T Data Center --In this article, you will learn how to configure an external identity source for the NSX-T Data Center in an Azure VMware Solution. The NSX-T Data Center can be configured to use an external LDAP directory service to authenticate users, enabling a user to log in using their Active Directory account credentials, or those from a 3rd party LDAP server. The account can then be assigned an NSX-T Data Center Role, like you have with on-premises environments, to provide RBAC for each NSX-T user. --![Screenshot showing NSX-T connectivity to the LDAP (Active Directory) server.](./media/nsxt/azure-vmware-solution-to-ldap-server.jpg) +# Set an external identity source for NSX-T Data Center -## Prerequisites +In this article, learn how to set up an external identity source for VMware NSX-T Data Center in an instance of Azure VMware Solution. -- A working connection from your Active Directory network to your Azure VMware Solution private cloud. </br>-- A network path from your Active Directory server to the management network of Azure VMware solution where NSX-T is deployed. </br>-- Best practice: Two domain controllers located in Azure in the same region as the Azure VMware Solution SDDC. </br>-- Active Directory Domain Controller(s) with a valid certificate. The certificate could be issued by an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [third-party CA](/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority).+You can set up NSX-T Data Center to use an external Lightweight Directory Access Protocol (LDAP) directory service to authenticate users. A user can sign in by using their Windows Server Active Directory account credentials or credentials from a third-party LDAP server. Then, the account can be assigned an NSX-T Data Center role, like in an on-premises environment, to provide role-based access for NSX-T Data Center users. ->[!Note] -> Self-sign certificates are not recommended for production environments. +## Prerequisites - -- An account with Administrator permissions</br>-- The Azure VMware Solution DNS zones and the DNS servers have been correctly deployed. See: [Configure NSX-T Data Center DNS for resolution to your Active Directory Domain and Configure DNS forwarder for Azure VMware Solution](configure-dns-azure-vmware-solution.md)</br>+- A working connection from your Windows Server Active Directory network to your Azure VMware Solution private cloud. +- A network path from your Windows Server Active Directory server to the management network of the instance of Azure VMware Solution in which NSX-T Data Center is deployed. +- A Windows Server Active Directory domain controller that has a valid certificate. 
The certificate can be issued by a [Windows Server Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or by a [third-party CA](/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority). + We recommend that you use two domain controllers that are located in the same Azure region as the Azure VMware Solution software-defined datacenter. ->[!NOTE] -> For more information about LDAPS and certificate issuance, see with your security or identity management team. + > [!NOTE] + > Self-signed certificates are not recommended for production environments. -</br> +- An account that has Administrator permissions. +- Azure VMware Solution DNS zones and DNS servers that are correctly configured. For more information, see [Configure NSX-T Data Center DNS for resolution to your Windows Server Active Directory domain and set up DNS forwarder](configure-dns-azure-vmware-solution.md). -## Configure NSX-T to use Active Directory as LDAPS identity source +> [!NOTE] +> For more information about Secure LDAP (LDAPS) and certificate issuance, contact your security team or your identity management team. -1. Sign-in to NSX-T Manager and navigate to System, User Management, LDAP and click on “ADD IDENTITY SOURCE” - -![Screenshot of the NSX-T console.](./media/nsxt/configure-nsx-t-pic-1.png) +## Use Windows Server Active Directory as an LDAPS identity source +1. Sign in to NSX Manager, and then go to **System** > **User Management** > **LDAP** > **Add Identity Source**. -2. Enter the Name, Domain Name (FQDN), the Type and base DN. Optionally add a description. -The base DN is the container where your user accounts are kept, it is the starting point that an LDAP server uses when searching for users for an authentication request. For example CN=users,dc=azfta,dc=com. ->[!NOTE] -> You can use more than one directory as an LDAP provider, i.e. with multiple AD domains when using AVS as a way to consolidate workloads. -</br> + :::image type="content" source="media/nsxt/configure-nsx-t-pic-1.png" alt-text="Screenshot that shows NSX Manager with the options highlighted."::: -![Screenshot of the NSX-T User Management console identity source add screen.](./media/nsxt/configure-nsx-t-pic-2.png) +1. Enter values for **Name**, **Domain Name (FQDN)**, **Type**, and **Base DN**. You can add a description (optional). + The base DN is the container where your user accounts are kept. The base DN is the starting point that an LDAP server uses when it searches for users in an authentication request. For example, **CN=users,dc=azfta,dc=com**. -3. Next, click Set (!) as shown on the screenshot above, then click on "ADD LDAP SERVER" and fill in the following fields + > [!NOTE] + > You can use more than one directory as an LDAP provider. An example is if you have multiple Windows Server Azure Directory domains, and you use Azure VMware Solution as a way to consolidate workloads. - -| Field | Explanation| -|-|| -| Hostname/IP | This is the LDAP server’s FQDN or IP address. For example either azfta-dc01.azfta.com or 10.5.4.4| -| LDAP Protocol | Select LDAPS| -| Port Choose 636 | This is the default secure LDAP port.| -| Enabled | Leave as ‘Yes’| -| Use StartTLS | Only required if non-secured LDAP is being used.| -| Bind Identity | Use your account with domain administrator permissions. 
For example admin@contoso.com | -| Password | Enter the password for the LDAP server, this is the password for the example admin@contoso.com account.| -| Certificate | Leave empty (see step 6)| + :::image type="content" source="media/nsxt/configure-nsx-t-pic-2.png" alt-text="Screenshot that shows the User Management Add Identity Source page in NSX Manager." lightbox="media/nsxt/configure-nsx-t-pic-2.png"::: +1. Next, under **LDAP Servers**, select **Set** as shown in the preceding screenshot. +1. On **Set LDAP Server**, select **Add LDAP Server**, and then enter or select values for the following items: -![Screenshot of the Set LDAP Server configuration screen.](./media/nsxt/configure-nsx-t-pic-3.png) + | Name | Action | + |-|| + | **Hostname/IP** | Enter the LDAP server’s FQDN or IP address. For example, **azfta-dc01.azfta.com** or **10.5.4.4**. | + | **LDAP Protocol** | Select **LDAPS**. | + | **Port** | Leave the default secure LDAP port. | + | **Enabled** | Leave as **Yes**. | + | **Use Start TLS** | Required only if you use standard (unsecured) LDAP. | + | **Bind Identity** | Use your account that has domain Administrator permissions. For example, `<admin@contoso.com>`. | + | **Password** | Enter the password for the LDAP server. This password is the one that you use with the example `<admin@contoso.com>` account. | + | **Certificate** | Leave empty (see step 6). | + :::image type="content" source="media/nsxt/configure-nsx-t-pic-3.png" alt-text="Screenshot that shows the Set LDAP Server page to add an LDAP server."::: -4. The screen will update, click Click ADD, then APPLY - -![Screenshot of the successful certificate retrieval details.](./media/nsxt/configure-nsx-t-pic-4.png) +1. After the page updates and displays a connection status, select **Add**, and then select **Apply**. -5. Back on the User Management screen, click "SAVE" to complete the changes. - -6. To add a second domain controller, or another external identity provider, go back to step 1. + :::image type="content" source="media/nsxt/configure-nsx-t-pic-4.png" alt-text="Screenshot that shows details of a successful certificate retrieval."::: ->[!NOTE] -> Best practice is to have two domain controllers to act as LDAP servers. You can also put the LDAP servers behind a load balancer. +1. On **User Management**, select **Save** to complete the changes. +1. To add a second domain controller or another external identity provider, return to step 1. -## Assign other NSX-T Data Center roles to Active Directory identities +> [!NOTE] +> A recommended practice is to have two domain controllers to act as LDAP servers. You can also put the LDAP servers behind a load balancer. -After adding an external identity, you can assign NSX-T Data Center Roles to Active Directory security groups based on your organization's security controls. +## Assign roles to Windows Server Active Directory identities -1. Sign in to NSX-T Manager and navigate to System > Users Management > User Role Assignment and click Add +After you add an external identity, you can assign NSX-T Data Center roles to Windows Server Active Directory security groups based on your organization's security controls. -![Screenshot of the NSX-T System, User Management screen.](./media/nsxt/configure-nsx-t-pic-5.png) +1. In NSX Manager, go to **System** > **User Management** > **User Role Assignment** > **Add**. -2. Select **Add** > **Role Assignment for LDAP**.  
+ :::image type="content" source="media/nsxt/configure-nsx-t-pic-5.png" alt-text="Screenshot that shows the User Management page in NSX Manager." lightbox="media/nsxt/configure-nsx-t-pic-5.png"::: - a. Select the external identity provider-this will be the Identity provider selected in Step 3 in the previous section. “NSX-T External Identity Provider” +1. Select **Add** > **Role Assignment for LDAP**.  - b. Enter the first few characters of the user's name, sign in ID, or a group name to search the LDAP directory, then select a user or group from the list that appears. + 1. Select the external identity provider that you selected in step 3 in the preceding section. For example, **NSX-T External Identity Provider**. - c. Select a role, in this case we are assigning FTAdmin the role of CloudAdmin + 1. Enter the first few characters of the user's name, the user's sign-in ID, or a group name to search the LDAP directory. Then select a user or group from the list of results. - d. Select Save. - -![Screenshot of the NSX-T, System, User Management, ADD user screen.](./media/nsxt/configure-nsx-t-pic-6.png) + 1. Select a role. In this example, assign the FTAdmin user the CloudAdmin role. + 1. Select **Save**. + :::image type="content" source="media/nsxt/configure-nsx-t-pic-6.png" alt-text="Screenshot that shows the Add User page in NSX Manager." lightbox="media/nsxt/configure-nsx-t-pic-6.png"::: -3. Verify the permission assignment is displayed under **User Role Assignment**. - -![Screenshot of the NSX-T User Management confirming user has been added.](./media/nsxt/configure-nsx-t-pic-7.png) +1. Under **User Role Assignment**, verify that the permissions assignment appears. + :::image type="content" source="media/nsxt/configure-nsx-t-pic-7.png" alt-text="Screenshot that shows the User Management page confirming that the user was added." lightbox="media/nsxt/configure-nsx-t-pic-7.png"::: -4. Users should now be able to sign in to NSX-T Manager using their Active Directory credentials. +Your users should now be able to sign in to NSX Manager by using their Windows Server Active Directory credentials. -## Next steps -Now that you configured the external source, you can also learn about: +## Related content -- [Configure external identity source for vCenter Server](configure-identity-source-vcenter.md) - [Azure VMware Solution identity concepts](concepts-identity.md)-- [VMware product documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-DB5A44F1-6E1D-4E5C-8B50-D6161FFA5BD2.html)+- [Set an external identity source for vCenter Server](configure-identity-source-vcenter.md) +- [VMware product documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-DB5A44F1-6E1D-4E5C-8B50-D6161FFA5BD2.html) |
azure-vmware | Configure Identity Source Vcenter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md | Title: Configure external identity source for vCenter Server -description: Learn how to configure Microsoft Entra ID over LDAP or LDAPS for vCenter Server as an external identity source. + Title: Set an external identity source for vCenter Server +description: Learn how to set Windows Server Active Directory over LDAP or LDAPS for VMware vCenter Server as an external identity source. Last updated 12/06/2023 -# Configure external identity source for vCenter Server +# Set an external identity source for vCenter Server [!INCLUDE [vcenter-access-identity-description](includes/vcenter-access-identity-description.md)] -> [!NOTE] -> Execute commands one at a time in the order provided. --In this article, learn how to: +In this article, you learn how to: > [!div class="checklist"] >-> * (Optional) Export the certificate for LDAPS authentication -> * (Optional) Upload the LDAPS certificate to blob storage and generate a SAS URL -> * Configure NSX-T DNS for resolution to your Active Directory Domain -> * Add Active Directory over (Secure) LDAPS (LDAP over SSL) or (unsecure) LDAP -> * Add existing AD group to cloudadmin group -> * List all existing external identity sources integrated with vCenter Server SSO -> * Assign additional vCenter Server roles to Active Directory identities -> * Remove AD group from the cloudadmin role -> * Remove existing external identity sources +> - Export a certificate for LDAPS authentication. (Optional) +> - Upload the LDAPS certificate to blob storage and generate a shared access signature (SAS) URL. (Optional) +> - Configure NSX-T DNS for resolution to your Windows Server Active Directory domain. +> - Add Windows Server Active Directory by using LDAPS (secure) or LDAP (unsecured). +> - Add an existing Windows Server Active Directory group to the CloudAdmin group. +> - List all existing external identity sources that are integrated with vCenter Server SSO. +> - Assign additional vCenter Server roles to Windows Server Active Directory identities. +> - Remove a Windows Server Active Directory group from the CloudAdmin role. +> - Remove all existing external identity sources. > [!NOTE]-> [Export the certificate for LDAPS authentication](#optional-export-the-certificate-for-ldaps-authentication) and [Upload the LDAPS certificate to blob storage and generate a SAS URL](#optional-upload-the-ldaps-certificate-to-blob-storage-and-generate-a-sas-url) are optional steps. The certificate(s) will be downloaded from the domain controller(s) automatically through the **PrimaryUrl** and/or **SecondaryUrl** parameters if the **SSLCertificatesSasUrl** parameter is not provided. You can still provide **SSLCertificatesSasUrl** and follow the optional steps to manually export and upload the certificate(s). +> +> - The steps to [export the certificate for LDAPS authentication](#export-the-certificate-for-ldaps-authentication-optional) and [upload the LDAPS certificate to blob storage and generate an SAS URL](#upload-the-ldaps-certificate-to-blob-storage-and-generate-an-sas-url-optional) are optional. If the `SSLCertificatesSasUrl` parameter is not provided, the certificate is downloaded from the domain controller automatically through the `PrimaryUrl` or `SecondaryUrl` parameters. To manually export and upload the certificate, you can provide the `SSLCertificatesSasUrl` parameter and complete the optional steps. 
+> +> - Run commands one at a time in the order that's described in the article. ## Prerequisites -- Ensure your Active Directory network is connected to your Azure VMware Solution private cloud.+- Ensure that your Windows Server Active Directory network is connected to your Azure VMware Solution private cloud. ++- For Windows Server Active Directory authentication with LDAPS: -- For AD authentication with LDAPS:+ 1. Get access to the Windows Server Active Directory domain controller with Administrator permissions. + 1. Enable LDAPS on your Windows Server Active Directory domain controllers by using a valid certificate. You can obtain the certificate from an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [third-party or public CA](/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority). + 1. To obtain a valid certificate, complete the steps in [Create a certificate for secure LDAP](../active-directory-domain-services/tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap). Ensure that the certificate meets the listed requirements. - - Obtain access to the Active Directory Domain Controller(s) with Administrator permissions. - - Enable LDAPS on your Active Directory Domain Controller(s) with a valid certificate. You can obtain the certificate from an [Active Directory Certificate Services Certificate Authority (CA)](https://social.technet.microsoft.com/wiki/contents/articles/2980.ldap-over-ssl-ldaps-certificate.aspx) or a [third-party/public CA](/troubleshoot/windows-server/identity/enable-ldap-over-ssl-3rd-certification-authority). - - Follow the steps in [create a certificate for secure LDAP](../active-directory-domain-services/tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap) to obtain a valid certificate. Ensure the certificate meets the listed requirements. - > [!NOTE] - > Avoid using self-signed certificates in production environments. - - Optional: If you don't provide the **SSLCertificatesSasUrl** parameter, the certificate(s) is automatically downloaded from the domain controller(s) through the **PrimaryUrl** and/or **SecondaryUrl** parameters. Alternatively, you can manually [export the certificate for LDAPS authentication](#optional-export-the-certificate-for-ldaps-authentication) and upload it to an Azure Storage account as blob storage. Then, [grant access to Azure Storage resources using a shared access signature (SAS)](../storage/common/storage-sas-overview.md). + > [!NOTE] + > Avoid using self-signed certificates in production environments. + + 1. Optional: If you don't provide the `SSLCertificatesSasUrl` parameter, the certificate is automatically downloaded from the domain controller via the `PrimaryUrl` or the `SecondaryUrl` parameters. Alternatively, you can manually [export the certificate for LDAPS authentication](#export-the-certificate-for-ldaps-authentication-optional) and upload it to an Azure Storage account as blob storage. Then, [grant access to Azure Storage resources by using an SAS](../storage/common/storage-sas-overview.md). -- Configure DNS resolution for Azure VMware Solution to your on-premises AD. Enable DNS Forwarder from the Azure portal. For more information, see [configure DNS forwarder for Azure VMware Solution](configure-dns-azure-vmware-solution.md).+- Configure DNS resolution for Azure VMware Solution to your on-premises Windows Server Active Directory. 
Set up a DNS forwarder in the Azure portal. For more information, see [Configure a DNS forwarder for Azure VMware Solution](configure-dns-azure-vmware-solution.md). > [!NOTE]-> For more information about LDAPS and certificate issuance, contact your security or identity management team. +> For more information about LDAPS and certificate issuance, contact your security team or your identity management team. -## (Optional) Export the certificate for LDAPS authentication +## Export the certificate for LDAPS authentication (Optional) -First, verify that the certificate used for LDAPS is valid. If you don't have a certificate, follow the steps to [create a certificate for secure LDAP](../active-directory-domain-services/tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap) before continuing. +First, verify that the certificate that's used for LDAPS is valid. If you don't have a certificate, complete the steps to [create a certificate for LDAPS](../active-directory-domain-services/tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap) before you continue. -1. Sign in to a domain controller with administrator permissions where LDAPS is enabled. -1. Open the **Run command**, type **mmc**, and select **OK**. +To verify that the certificate is valid: ++1. Sign in to a domain controller on which LDAPS is active by using Administrator permissions. +1. Open the **Run** tool, enter **mmc**, and then select **OK**. 1. Select **File** > **Add/Remove Snap-in**.-1. Choose **Certificates** from the list of Snap-ins and select **Add>**. -1. In the **Certificates snap-in** window, select **Computer account** and then select **Next**. -1. Keep **Local computer...** selected, select **Finish**, and then **OK**. -1. Expand the **Personal** folder under the **Certificates (Local Computer)** management console and select the **Certificates** folder to view the installed certificates. +1. In the list of snap-ins, select **Certificates**, and then select **Add**. +1. In the **Certificates snap-in** pane, select **Computer account**, and then select **Next**. +1. Keep **Local computer** selected, select **Finish**, and then select **OK**. +1. In the **Certificates (Local Computer)** management console, expand the **Personal** folder and select the **Certificates** folder to view the installed certificates. - :::image type="content" source="media/run-command/ldaps-certificate-personal-certficates.png" alt-text="Screenshot of the list of certificates in the management console." lightbox="media/run-command/ldaps-certificate-personal-certficates.png"::: + :::image type="content" source="media/run-command/ldaps-certificate-personal-certficates.png" alt-text="Screenshot that shows the list of certificates in the management console." lightbox="media/run-command/ldaps-certificate-personal-certficates.png"::: -1. Double-click the certificate for LDAPS purposes. Ensure the certificate date **Valid from** and **to** is current and the certificate has a **private key** that corresponds to the certificate. +1. Double-click the certificate for LDAPS. Ensure that the certificate date **Valid from** and **Valid to** is current and that the certificate has a private key that corresponds to the certificate. - :::image type="content" source="media/run-command/ldaps-certificate-personal-general.png" alt-text="Screenshot of the properties of the LDAPS certificate." 
lightbox="media/run-command/ldaps-certificate-personal-general.png"::: + :::image type="content" source="media/run-command/ldaps-certificate-personal-general.png" alt-text="Screenshot that shows the properties of the LDAPS certificate." lightbox="media/run-command/ldaps-certificate-personal-general.png"::: -1. In the same window, select the **Certification Path** tab and verify that the **Certification path** is valid. It should include the certificate chain of root CA and optional intermediate certificates. Check that the **Certificate Status** is OK. +1. In the same dialog, select the **Certification Path** tab and verify that the value for **Certification path** is valid. It should include the certificate chain of root CA and optional intermediate certificates. Check that the **Certificate status** is **OK**. - :::image type="content" source="media/run-command/ldaps-certificate-cert-path.png" alt-text="Screenshot of the certificate chain in the Certification Path tab." lightbox="media/run-command/ldaps-certificate-cert-path.png"::: + :::image type="content" source="media/run-command/ldaps-certificate-cert-path.png" alt-text="Screenshot that shows the certificate chain on the Certification Path tab." lightbox="media/run-command/ldaps-certificate-cert-path.png"::: -1. Close the window. +1. Select **OK**. -Next, export the certificate: +To export the certificate: -1. In the Certificates console, right-click the LDAPS certificate and select **All Tasks** > **Export**. The Certificate Export Wizard appears. Select **Next**. -1. In the **Export Private Key** section, choose **No, do not export the private key** and select **Next**. -1. In the **Export File Format** section, select **Base-64 encoded X.509(.CER)** and select **Next**. -1. In the **File to Export** section, select **Browse...**, choose a folder location to export the certificate, enter a name, and select **Save**. +1. In the Certificates console, right-click the LDAPS certificate and select **All Tasks** > **Export**. The Certificate Export Wizard opens. Select **Next**. +1. In the **Export Private Key** section, select **No, do not export the private key**, and then select **Next**. +1. In the **Export File Format** section, select **Base-64 encoded X.509(.CER)**, and then select **Next**. +1. In the **File to Export** section, select **Browse**. Select a folder location to export the certificate, and enter a name. Then select **Save**. > [!NOTE]-> If more than one domain controller is LDAPS enabled, repeat the export procedure for each additional domain controller to export their corresponding certificates. Note that you can only reference two LDAPS servers in the `New-LDAPSIdentitySource` Run Command. If the certificate is a wildcard certificate, such as ***.avsdemo.net**, you only need to export the certificate from one of the domain controllers. +> If more than one domain controller is set to use LDAPS, repeat the export procedure for each additional domain controller to export their corresponding certificates. Note that you can reference only two LDAPS servers in the `New-LDAPSIdentitySource` Run tool. If the certificate is a wildcard certificate, such as `.avsdemo.net`, export the certificate from only one of the domain controllers. -## (Optional) Upload the LDAPS certificate to blob storage and generate a SAS URL +## Upload the LDAPS certificate to blob storage and generate an SAS URL (Optional) -- Upload the certificate file (.cer format) you just exported to an Azure Storage account as blob storage. 
Then, [grant access to Azure Storage resources using a shared access signature (SAS)](../storage/common/storage-sas-overview.md).+Next, upload the certificate file (in *.cer* format) you exported to an Azure Storage account as blob storage. Then, [grant access to Azure Storage resources by using an SAS](../storage/common/storage-sas-overview.md). -- If you need multiple certificates, upload each one individually and generate a SAS URL for each.+If you need multiple certificates, upload each one individually and generate an SAS URL for each certificate. > [!IMPORTANT]-> Remember to copy all SAS URL strings, as they won't be accessible once you leave the page. +> Remember to copy all SAS URL strings. The strings aren't accessible after you leave the page. > [!TIP]-> An alternative method for consolidating certificates involves storing all the certificate chains in one file, as detailed in [this VMware KB article](https://kb.vmware.com/s/article/2041378). Then, generate a single SAS URL for the file that contains all the certificates. +> An alternative method to consolidate certificates involves storing all the certificate chains in one file, as detailed in a [VMware knowledge base article](https://kb.vmware.com/s/article/2041378). Then, generate a single SAS URL for the file that contains all the certificates. -## Configure NSX-T DNS for Active Directory domain resolution +## Set up NSX-T DNS for Windows Server Active Directory domain resolution -Create a DNS zone and add it to the DNS service. Follow the instructions in [configure a DNS forwarder in the Azure portal](./configure-dns-azure-vmware-solution.md). +Create a DNS zone and add it to the DNS service. Complete the steps in [Configure a DNS forwarder in the Azure portal](./configure-dns-azure-vmware-solution.md). -After completing these steps, verify that your DNS service includes your DNS zone. +After you complete these steps, verify that your DNS service includes your DNS zone. -Your Azure VMware Solution private cloud should now properly resolve your on-premises Active Directory domain name. +Your Azure VMware Solution private cloud should now properly resolve your on-premises Windows Server Active Directory domain name. -## Add Active Directory over LDAP with SSL +## Add Windows Server Active Directory by using LDAP via SSL -To add AD over LDAP with SSL as an external identity source to use with SSO into vCenter Server, run the `New-LDAPSIdentitySource` cmdlet: +To add Windows Server Active Directory over LDAP with SSL as an external identity source to use with SSO to vCenter Server, run the New-LDAPSIdentitySource cmdlet. -1. Navigate to your Azure VMware Solution private cloud and select **Run command** > **Packages** > **New-LDAPSIdentitySource**. +1. Go to your Azure VMware Solution private cloud and select **Run command** > **Packages** > **New-LDAPSIdentitySource**. 1. Provide the required values or modify the default values, and then select **Run**. - | **Field** | **Value** | + | Name | Description | | | |- | **GroupName** | The group in the external identity source that grants cloudadmin access. For example, **avs-admins**. | - | **SSLCertificatesSasUrl** | Path to SAS strings containing the certificates for authentication to the AD source. Separate multiple certificates with a comma. For example, **pathtocert1,pathtocert2**. | - | **Credential** | The domain username and password for authentication with the AD source (not cloudadmin). Use the **username@avslab.local** format. 
| - | **BaseDNGroups** | Location to search for groups. For example, **CN=group1, DC=avsldap,DC=local**. Base DN is required for LDAP Authentication. | - | **BaseDNUsers** | Location to search for valid users. For example, **CN=users,DC=avsldap,DC=local**. Base DN is required for LDAP Authentication. | - | **PrimaryUrl** | Primary URL of the external identity source. For example, **ldaps://yourserver.avslab.local.:636**. | - | **SecondaryURL** | Secondary fallback URL if there's primary failure. For example, **ldaps://yourbackupldapserver.avslab.local:636**. | - | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the AD domain as an alias of the identity source, typically in the **avsldap\** format. | - | **DomainName** | The domain's FQDN. For example, **avslab.local**. | - | **Name** | User-friendly name of the external identity source. For example,**avslab.local**. | - | **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. | - | **Specify name for execution** | Alphanumeric name. For example, **addexternalIdentity**. | - | **Timeout** | The period after which a cmdlet exits if it takes too long to finish. | --1. Check **Notifications** or the **Run Execution Status** pane to monitor progress and confirm successful completion. --## Add Active Directory over LDAP + | **GroupName** | The group in the external identity source that grants CloudAdmin access. For example, **avs-admins**. | + | **SSLCertificatesSasUrl** | The path to SAS strings that contain the certificates for authentication to the Windows Server Active Directory source. Separate multiple certificates with a comma. For example, **pathtocert1,pathtocert2**. | + | **Credential** | The domain username and password for authentication with the Windows Server Active Directory source (not CloudAdmin). Use the `<username@avslab.local>` format. | + | **BaseDNGroups** | The location to search for groups. For example, **CN=group1, DC=avsldap,DC=local**. Base DN is required for LDAP authentication. | + | **BaseDNUsers** | The location to search for valid users. For example, **CN=users,DC=avsldap,DC=local**. Base DN is required for LDAP authentication. | + | **PrimaryUrl** | The primary URL of the external identity source. For example, `ldaps://yourserver.avslab.local:636`. | + | **SecondaryURL** | The secondary fallback URL if the primary fails. For example, `ldaps://yourbackupldapserver.avslab.local:636`. | + | **DomainAlias** | For Windows Server Active Directory identity sources, the domain's NetBIOS name. Add the NetBIOS name of the Windows Server Active Directory domain as an alias of the identity source, typically in the **avsldap\\** format. | + | **DomainName** | The domain's fully qualified domain name (FQDN). For example, **avslab.local**. | + | **Name** | A name for the external identity source. For example, **avslab.local**. | + | **Retain up to** | The retention period of the cmdlet output. The default value is 60 days. | + | **Specify name for execution** | An alphanumeric name. For example, **addexternalIdentity**. | + | **Timeout** | The period after which a cmdlet exits if it isn't finished running. | ++1. To monitor progress and confirm successful completion, check **Notifications** or the **Run Execution Status** pane. ++## Add Windows Server Active Directory by using LDAP > [!NOTE]-> We recommend that you use the [Add Active Directory over LDAP with SSL](#add-active-directory-over-ldap-with-ssl) method. 
+> We recommend that you use the method to [add Windows Server Active Directory over LDAP by using SSL](#add-windows-server-active-directory-by-using-ldap-via-ssl). -To add AD over LDAP as an external identity source to use with SSO into vCenter Server, run the `New-LDAPIdentitySource` cmdlet: +To add Windows Server Active Directory over LDAP as an external identity source to use with SSO to vCenter Server, run the New-LDAPIdentitySource cmdlet. 1. Select **Run command** > **Packages** > **New-LDAPIdentitySource**. 1. Provide the required values or modify the default values, and then select **Run**. - | **Field** | **Value** | + | Name | Description | | | |- | **Name** | User-friendly name of the external identity source. For example, **avslab.local**. This name is displayed in vCenter. | + | **Name** | A name for the external identity source. For example, **avslab.local**. This name appears in vCenter Server. | | **DomainName** | The domain's FQDN. For example, **avslab.local**. |- | **DomainAlias** | For Active Directory identity sources, the domain's NetBIOS name. Add the AD domain's NetBIOS name as an alias of the identity source, typically in the **avsldap\** format. | - | **PrimaryUrl** | Primary URL of the external identity source. For example, **ldap://yourserver.avslab.local:389**. | - | **SecondaryURL** | Secondary fallback URL if there is a primary failure. | - | **BaseDNUsers** | Location to search for valid users. For example, **CN=users,DC=avslab,DC=local**. Base DN is required for LDAP Authentication. | - | **BaseDNGroups** | Location to search for groups. For example, **CN=group1, DC=avslab,DC=local**. Base DN is required for LDAP Authentication. | - | **Credential** | The domain username and password for authentication with the AD source (not cloudadmin). The user must be in the **username@avslab.local** format. | - | **GroupName** | The group in your external identity source that grants cloudadmin access. For example, **avs-admins**. | - | **Retain up to** | Retention period for the cmdlet output. The default value is 60 days. | - | **Specify name for execution** | Alphanumeric name. For example, **addexternalIdentity**. | - | **Timeout** | The period after which a cmdlet exits if it takes too long to finish. | + | **DomainAlias** | For Windows Server Active Directory identity sources, the domain's NetBIOS name. Add the Windows Server Active Directory domain's NetBIOS name as an alias of the identity source, typically in the **avsldap\** format. | + | **PrimaryUrl** | The primary URL of the external identity source. For example, `ldap://yourserver.avslab.local:389`. | + | **SecondaryURL** | The secondary fallback URL if there's a primary failure. | + | **BaseDNUsers** | The location to search for valid users. For example, **CN=users,DC=avslab,DC=local**. Base DN is required for LDAP authentication. | + | **BaseDNGroups** | The location to search for groups. For example, **CN=group1, DC=avslab,DC=local**. Base DN is required for LDAP authentication. | + | **Credential** | The domain username and password for authentication with the Windows Server Active Directory source (not CloudAdmin). The user must be in the `<username@avslab.local>` format. | + | **GroupName** | The group in your external identity source that grants CloudAdmin access. For example, **avs-admins**. | + | **Retain up to** | The retention period for the cmdlet output. The default value is 60 days. | + | **Specify name for execution** | An alphanumeric name. For example, **addexternalIdentity**. 
| + | **Timeout** | The period after which a cmdlet exits if it isn't finished running. | -1. Check **Notifications** or the **Run Execution Status** pane to monitor the progress. +1. To monitor the progress, check **Notifications** or the **Run Execution Status** pane. -## Add existing AD group to a cloudadmin group +## Add an existing Windows Server Active Directory group to a CloudAdmin group > [!IMPORTANT]-> Nested groups aren't supported, and their use may cause loss of access. +> Nested groups aren't supported. Using a nested group might cause loss of access. -Users in a cloudadmin group have privileges equal to the cloudadmin (cloudadmin@vsphere.local) role defined in vCenter Server SSO. To add an existing AD group to a cloudadmin group, run the `Add-GroupToCloudAdmins` cmdlet: +Users in a CloudAdmin group have user rights that are equal to the CloudAdmin (`<cloudadmin@vsphere.local>`) role that's defined in vCenter Server SSO. To add an existing Windows Server Active Directory group to a CloudAdmin group, run the Add-GroupToCloudAdmins cmdlet. 1. Select **Run command** > **Packages** > **Add-GroupToCloudAdmins**. -1. Provide the required values or change the default values, and then select **Run**. +1. Enter or select the required values, and then select **Run**. - | **Field** | **Value** | + | Name | Description | | | |- | **GroupName** | Name of the group to add. For example, **VcAdminGroup**. | - | **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. | - | **Specify name for execution** | Alphanumeric name. For example, **addADgroup**. | - | **Timeout** | The period after which a cmdlet exits if taking too long to finish. | + | **GroupName** | The name of the group to add. For example, **VcAdminGroup**. | + | **Retain up to** | The retention period of the cmdlet output. The default value is 60 days. | + | **Specify name for execution** | An alphanumeric name. For example, **addADgroup**. | + | **Timeout** | The period after which a cmdlet exits if it isn't finished running. | 1. Check **Notifications** or the **Run Execution Status** pane to see the progress. ## List external identity sources -To list all external identity sources already integrated with vCenter Server SSO, run the `Get-ExternalIdentitySources` cmdlet: +To list all external identity sources that are already integrated with vCenter Server SSO, run the Get-ExternalIdentitySources cmdlet. 1. Sign in to the [Azure portal](https://portal.azure.com).- - >[!NOTE] - >If you need access to the Azure US Gov portal, go to https://portal.azure.us/ ++ > [!NOTE] + > If you need access to the Azure for US Government portal, go to `<https://portal.azure.us/>`. 1. Select **Run command** > **Packages** > **Get-ExternalIdentitySources**. - :::image type="content" source="media/run-command/run-command-overview.png" alt-text="Screenshot of the Run command menu with available packages in the Azure portal." lightbox="media/run-command/run-command-overview.png"::: + :::image type="content" source="media/run-command/run-command-overview.png" alt-text="Screenshot that shows the Run command menu with available packages in the Azure portal." lightbox="media/run-command/run-command-overview.png"::: -1. Provide the required values or change the default values, and then select **Run**. +1. Enter or select the required values, and then select **Run**. 
- :::image type="content" source="medilet in the Run command menu."::: + :::image type="content" source="medilet in the Run command menu."::: - | **Field** | **Value** | + | Name | Description | | | |- | **Retain up to** |Retention period of the cmdlet output. The default value is 60 days. | - | **Specify name for execution** | Alphanumeric name. For example, **getExternalIdentity**. | - | **Timeout** | The period after which a cmdlet exits if taking too long to finish. | + | **Retain up to** | The retention period of the cmdlet output. The default value is 60 days. | + | **Specify name for execution** | An alphanumeric name. For example, **getExternalIdentity**. | + | **Timeout** | The period after which a cmdlet exits if it isn't finished running. | -1. Check **Notifications** or the **Run Execution Status** pane to see the progress. +1. To see the progress, check **Notifications** or the **Run Execution Status** pane. ++ :::image type="content" source="media/run-command/run-packages-execution-command-status.png" alt-text="Screenshot that shows the Run Execution Status pane in the Azure portal." lightbox="media/run-command/run-packages-execution-command-status.png"::: - :::image type="content" source="media/run-command/run-packages-execution-command-status.png" alt-text="Screenshot of the Run Execution Status pane in the Azure portal." lightbox="media/run-command/run-packages-execution-command-status.png"::: +## Assign more vCenter Server roles to Windows Server Active Directory identities -## Assign more vCenter Server roles to Active Directory identities +After you add an external identity over LDAP or LDAPS, you can assign vCenter Server roles to Windows Server Active Directory security groups based on your organization's security controls. -After you've added an external identity over LDAP or LDAPS, you can assign vCenter Server roles to Active Directory security groups based on your organization's security controls. +1. Sign in to vCenter Server as CloudAdmin, select an item from the inventory, select the **Actions** menu, and then select **Add Permission**. -1. Sign in to vCenter Server with cloudadmin privileges, select an item from the inventory, select the **ACTIONS** menu, and choose **Add Permission**. + :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-1.png" alt-text="Screenshot that shows the Actions menu in vCenter Server with the Add Permission option." lightbox="media/run-command/ldaps-vcenter-permission-assignment-1.png"::: - :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-1.png" alt-text="Screenshot of the ACTIONS menu in vCenter Server with Add Permission option." lightbox="media/run-command/ldaps-vcenter-permission-assignment-1.png"::: +1. In the **Add Permission** dialog: -1. In the **Add Permission** prompt: - 1. **Domain**: Select the previously added Active Directory. - 1. **User/Group**: Enter the desired user or group name, find it, then select it. - 1. **Role**: Choose the role to assign. - 1. **Propagate to children**: Optionally, select the checkbox to propagate permissions to child resources. - :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-2.png" alt-text="Screenshot of the Add Permission prompt in vCenter Server." lightbox="media/run-command/ldaps-vcenter-permission-assignment-3.png"::: + 1. **Domain**: Select the previously added instance of Windows Server Active Directory. + 1. 
**User/Group**: Enter the user or group name, search for it, and then select it. + 1. **Role**: Select the role to assign. + 1. **Propagate to children**: Optionally, select the checkbox to propagate permissions to child resources. -1. Switch to the **Permissions** tab and verify the permission assignment was added. + :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-2.png" alt-text="Screenshot that shows the Add Permission dialog in vCenter Server." lightbox="media/run-command/ldaps-vcenter-permission-assignment-2.png"::: - :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-3.png" alt-text="Screenshot of the Permissions tab in vCenter Server after adding a permission assignment." lightbox="media/run-command/ldaps-vcenter-permission-assignment-3.png"::: +1. Select the **Permissions** tab and verify that the permission assignment was added. -1. Users can now sign in to vCenter Server using their Active Directory credentials. + :::image type="content" source="media/run-command/ldaps-vcenter-permission-assignment-3.png" alt-text="Screenshot that shows the Permissions tab in vCenter Server after adding a permission assignment." lightbox="media/run-command/ldaps-vcenter-permission-assignment-3.png"::: -## Remove AD group from the cloudadmin role +Users can now sign in to vCenter Server by using their Windows Server Active Directory credentials. -To remove a specified AD group from the cloudadmin role, run the `Remove-GroupFromCloudAdmins` cmdlet: +## Remove a Windows Server Active Directory group from the CloudAdmin role ++To remove a specific Windows Server Active Directory group from the CloudAdmin role, run the Remove-GroupFromCloudAdmins cmdlet. 1. Select **Run command** > **Packages** > **Remove-GroupFromCloudAdmins**. -1. Provide the required values or change the default values, then select **Run**. +1. Enter or select the required values, and then select **Run**. - | **Field** | **Value** | + | Name | Description | | | |- | **GroupName** | Name of the group to remove. For example, **VcAdminGroup**. | - | **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. | - | **Specify name for execution** | Alphanumeric name. For example, **removeADgroup**. | - | **Timeout** | The period after which a cmdlet exits if taking too long to finish. | + | **GroupName** | The name of the group to remove. For example, **VcAdminGroup**. | + | **Retain up to** | The retention period of the cmdlet output. The default value is 60 days. | + | **Specify name for execution** | An alphanumeric name. For example, **removeADgroup**. | + | **Timeout** | The period after which a cmdlet exits if it isn't finished running. | -1. Check **Notifications** or the **Run Execution Status** pane to see the progress. +1. To see the progress, check **Notifications** or the **Run Execution Status** pane. -## Remove existing external identity sources +## Remove all existing external identity sources -To remove all existing external identity sources in bulk, run the `Remove-ExternalIdentitySources` cmdlet: +To remove all existing external identity sources at once, run the Remove-ExternalIdentitySources cmdlet. 1. Select **Run command** > **Packages** > **Remove-ExternalIdentitySources**. -1. Provide the required values or change the default values, then select **Run**. +1. 
Enter or select the required values, and then select **Run**: - | **Field** | **Value** | + | Name | Description | | | |- | **Retain up to** | Retention period of the cmdlet output. The default value is 60 days. | - | **Specify name for execution** | Alphanumeric name. For example, **remove_externalIdentity**. | - | **Timeout** | The period after which a cmdlet exits if taking too long to finish. | + | **Retain up to** | The retention period of the cmdlet output. The default value is 60 days. | + | **Specify name for execution** | An alphanumeric name. For example, **remove_externalIdentity**. | + | **Timeout** | The period after which a cmdlet exits if it isn't finished running. | -1. Check **Notifications** or the **Run Execution Status** pane to see the progress. +1. To see the progress, check **Notifications** or the **Run Execution Status** pane. -## Rotate an existing external identity source account's username and/or password +## Rotate an existing external identity source account's username or password -1. Rotate the password of the account used for authentication with the AD source in the domain controller. +1. Rotate the password of the account that's used for authentication with the Windows Server Active Directory source in the domain controller. 1. Select **Run command** > **Packages** > **Update-IdentitySourceCredential**. -1. Provide the required values and the updated password, and then select **Run**. +1. Enter or select the required values, and then select **Run**. - | **Field** | **Value** | + | Name | Description | | | |- | **Credential** | The domain username and password used for authentication with the AD source (not cloudadmin). The user must be in the **username@avslab.local** format. | + | **Credential** | The domain username and password that are used for authentication with the Windows Server Active Directory source (not CloudAdmin). The user must be in the `<username@avslab.local>` format. | | **DomainName** | The FQDN of the domain. For example, **avslab.local**. | -1. Check **Notifications** or the **Run Execution Status** pane to see the progress. --> [!IMPORTANT] -> If you don't provide a DomainName, all external identity sources will be removed. The command **Update-IdentitySourceCredential** should be run only after the password is rotated in the domain controller. +1. To see the progress, check **Notifications** or the **Run Execution Status** pane. -## Next steps +> [!WARNING] +> If you don't provide a value for **DomainName**, all external identity sources are removed. Run the cmdlet Update-IdentitySourceCredential only after the password is rotated in the domain controller. -Now that you learned about how to configure LDAP and LDAPS, explore the following articles: +## Related content -- [How to configure storage policy](configure-storage-policy.md) - Each VM deployed to a vSAN datastore is assigned at least one VM storage policy. Learn how to assign a VM storage policy during an initial deployment of a VM or other VM operations, such as cloning or migrating.-- [Azure VMware Solution identity concepts](concepts-identity.md) - Use vCenter Server to manage virtual machine (VM) workloads and NSX-T Manager to manage and extend the private cloud. 
Access and identity management use the cloudadmin role for vCenter Server and restricted administrator rights for NSX-T Manager.-- [Configure external identity source for NSX-T](configure-external-identity-source-nsx-t.md) +- [Create a storage policy](configure-storage-policy.md) - [Azure VMware Solution identity concepts](concepts-identity.md)+- [Set an external identity source for NSX-T Data Center](configure-external-identity-source-nsx-t.md) - [VMware product documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-DB5A44F1-6E1D-4E5C-8B50-D6161FFA5BD2.html) |
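The Run command packages that this article describes can also be invoked programmatically. The following is a minimal sketch of the LDAPS flow end to end, assuming the Azure CLI `vmware` extension's `script-execution` commands and a script package that exposes `New-LDAPSIdentitySource`; the resource names, package version, SAS expiry, and the shapes of the `Credential` and `SSLCertificatesSasUrl` parameters are illustrative placeholders to verify with `az vmware script-execution create --help` before use.

```azurecli
# Upload the exported LDAPS certificate (.cer) to blob storage.
az storage blob upload \
  --account-name mystorageaccount \
  --container-name certificates \
  --name ldaps.cer \
  --file ./ldaps.cer \
  --auth-mode login

# Generate a read-only SAS URL for the certificate. The output becomes
# the SSLCertificatesSasUrl value.
az storage blob generate-sas \
  --account-name mystorageaccount \
  --container-name certificates \
  --name ldaps.cer \
  --permissions r \
  --expiry 2024-12-31T00:00Z \
  --https-only \
  --full-uri \
  --output tsv

# Run New-LDAPSIdentitySource as a Run command script execution.
# The script-cmdlet ID path, package version, and the credential
# parameter shape are assumptions; confirm with --help.
az vmware script-execution create \
  --resource-group myResourceGroup \
  --private-cloud myPrivateCloud \
  --name addexternalIdentity \
  --timeout P0Y0M0DT0H60M60S \
  --retention P0Y0M60DT0H60M60S \
  --script-cmdlet-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.AVS/privateClouds/myPrivateCloud/scriptPackages/Microsoft.AVS.Management@<version>/scriptCmdlets/New-LDAPSIdentitySource" \
  --parameter name=Name type=Value value=avslab.local \
  --parameter name=DomainName type=Value value=avslab.local \
  --parameter name=DomainAlias type=Value value=avsldap \
  --parameter name=PrimaryUrl type=Value value=ldaps://yourserver.avslab.local:636 \
  --parameter name=GroupName type=Value value=avs-admins \
  --parameter name=BaseDNUsers type=Value "value=CN=users,DC=avslab,DC=local" \
  --parameter name=BaseDNGroups type=Value "value=CN=group1,DC=avslab,DC=local" \
  --hidden-parameter name=SSLCertificatesSasUrl type=SecureValue secureValue="<sas-url>" \
  --hidden-parameter name=Credential type=Credential username=<username@avslab.local> password=<password>
```

The same pattern applies to the other packages in this article (`New-LDAPIdentitySource`, `Add-GroupToCloudAdmins`, `Get-ExternalIdentitySources`, and so on) by swapping the cmdlet name and parameters.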
azure-vmware | Configure Vmware Cloud Director Service Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-cloud-director-service-azure-vmware-solution.md | In this article, learn how to configure [VMware Cloud Director](https://docs.vmw - The VMware reverse proxy VM is deployed within the Azure VMware Solution SDDC and requires outbound connectivity to your VMware Cloud Director service instance. [Plan how you would provide this internet connectivity.](concepts-design-public-internet-access.md) -- Public IP on NSX-T edge can be used to provide outbound access for the VMware Reverse proxy VM as shown in this article. Learn more on, [How to configure a public IP in the Azure portal](enable-public-ip-nsx-edge.md#configure-a-public-ip-in-the-azure-portal) and [Outbound Internet access for VMs](enable-public-ip-nsx-edge.md#outbound-internet-access-for-vms)+- A public IP address on the NSX-T edge can be used to provide outbound access for the VMware reverse proxy VM as shown in this article. Learn more in [Set up a public IP address or range](enable-public-ip-nsx-edge.md#set-up-a-public-ip-address-or-range) and [Outbound internet access for VMs](enable-public-ip-nsx-edge.md#outbound-internet-access-for-vms). - The VMware reverse proxy can acquire an IP address through either DHCP or manual IP configuration. - Optionally, create a dedicated Tier-1 router for the reverse proxy VM segment. |
azure-vmware | Deploy Arc For Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md | In this article, learn how to deploy Arc for Azure VMware Solution. Once you set - Identify your VMware vSphere resources (VMs, templates, networks, datastores, clusters/hosts/resource pools) and register them with Arc at scale. - Perform different virtual machine (VM) operations directly from Azure, like create, resize, delete, and power cycle operations (start/stop/restart), on VMware VMs consistently with Azure.-- Permit developers and application teams to use VM operations on-demand with [Role-based access control](https://learn.microsoft.com/azure/role-based-access-control/overview).-- Install the Arc-connected machine agent to [govern, protect, configure, and monitor](https://learn.microsoft.com/azure/azure-arc/servers/overview#supported-cloud-operations) them.+- Permit developers and application teams to use VM operations on-demand with [Role-based access control](/azure/role-based-access-control/overview). +- Install the Arc-connected machine agent to [govern, protect, configure, and monitor](/azure/azure-arc/servers/overview#supported-cloud-operations) them. - Browse your VMware vSphere resources (VMs, templates, networks, and storage) in Azure.

The following requirements must be met in order to use Azure Arc-enabled Azure VMware Solution. You need the following items to ensure you're set up to begin the onboarding process to deploy Arc for Azure VMware Solution. -- Validate the regional support before you start the onboarding process. Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For details, see [Azure Arc-enabled VMware vSphere](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/overview#supported-regions).-- A [management VM](https://learn.microsoft.com/azure/azure-arc/resource-bridge/system-requirements#management-machine-requirements) with internet access that has a direct line of site to the vCenter.-- From the Management VM, verify you have access to [vCenter Server and NSX-T manager portals](https://learn.microsoft.com/azure/azure-vmware/tutorial-access-private-cloud#connect-to-the-vcenter-server-of-your-private-cloud).+- Validate the regional support before you start the onboarding process. Arc for Azure VMware Solution is supported in all regions where Arc for VMware vSphere on-premises is supported. For details, see [Azure Arc-enabled VMware vSphere](/azure/azure-arc/vmware-vsphere/overview#supported-regions). +- A [management VM](/azure/azure-arc/resource-bridge/system-requirements#management-machine-requirements) with internet access that has a direct line of sight to the vCenter. +- From the management VM, verify you have access to the [vCenter Server and NSX-T Manager portals](/azure/azure-vmware/tutorial-access-private-cloud#connect-to-the-vcenter-server-of-your-private-cloud). - A resource group in the subscription where you have an owner or contributor role.-- An unused, isolated [NSX Data Center network segment](https://learn.microsoft.com/azure/azure-vmware/tutorial-nsx-t-network-segment) that is a static network segment used for deploying the Arc for Azure VMware Solution OVA. 
If an isolated NSX-T Data Center network segment doesn't exist, one gets created.+- An unused, isolated [NSX Data Center network segment](/azure/azure-vmware/tutorial-nsx-t-network-segment) that is a static network segment used for deploying the Arc for Azure VMware Solution OVA. If an isolated NSX-T Data Center network segment doesn't exist, one gets created. - Verify your Azure subscription is enabled and has connectivity to Azure endpoints.-- The firewall and proxy URLs must be allowlisted in order to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. See the [Azure eArc resource bridge (Preview) network requirements](https://learn.microsoft.com/azure/azure-arc/resource-bridge/network-requirements).+- The firewall and proxy URLs must be allowlisted in order to enable communication from the management machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs. See the [Azure Arc resource bridge (Preview) network requirements](/azure/azure-arc/resource-bridge/network-requirements). - Verify your vCenter Server version is 6.7 or higher. - A resource pool or a cluster with a minimum capacity of 16 GB of RAM and four vCPUs. - A datastore with a minimum of 100 GB of free disk space is available through the resource pool or cluster. When the script is run successfully, check the status to see if Azure Arc is now configured.

Recover from failed deployments

-If the Azure Arc resource bridge deployment fails, consult the [Azure Arc resource bridge troubleshooting](https://learn.microsoft.com/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge) guide. While there can be many reasons why the Azure Arc resource bridge deployment fails, one of them is KVA timeout error. Learn more about the [KVA timeout error](https://learn.microsoft.com/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge#kva-timeout-error) and how to troubleshoot. +If the Azure Arc resource bridge deployment fails, consult the [Azure Arc resource bridge troubleshooting](/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge) guide. While there can be many reasons why the Azure Arc resource bridge deployment fails, one of them is KVA timeout error. Learn more about the [KVA timeout error](/azure/azure-arc/resource-bridge/troubleshoot-resource-bridge#kva-timeout-error) and how to troubleshoot. ## Discover and project your VMware vSphere infrastructure resources to Azure Before you install an extension, you need to enable guest management on the VMware VM (a command-line sketch for this step follows the list below).

Before you can install an extension, ensure your target machine meets the following conditions: -- Is running a [supported operating system](https://learn.microsoft.com/azure/azure-arc/servers/prerequisites#supported-operating-systems).-- Is able to connect through the firewall to communicate over the internet and these [URLs](https://learn.microsoft.com/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked.+- Is running a [supported operating system](/azure/azure-arc/servers/prerequisites#supported-operating-systems). +- Is able to connect through the firewall to communicate over the internet and these [URLs](/azure/azure-arc/servers/network-requirements?tabs=azure-cloud#urls) aren't blocked. - Has VMware tools installed and running. - Is powered on and the resource bridge has network connectivity to the host running the VM. 
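Once a target machine meets these conditions, guest management can be turned on from the CLI as well as the portal. This is a hedged sketch that assumes the Azure CLI `connectedvmware` extension's `vm guest-agent enable` command; the resource names and guest credentials are placeholders.

```azurecli
# Enable guest management (install the Arc connected-machine agent) on a
# VM that meets the conditions above. Command shape assumed from the
# "connectedvmware" extension; confirm with --help.
az connectedvmware vm guest-agent enable \
  --resource-group myResourceGroup \
  --vm-name myVMwareVM \
  --username "administrator@avslab.local" \
  --password "<guest-os-password>"
```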
When the extension installation steps are completed, they trigger deployment and installation of the extension on the VM.

## Supported extensions and management services

-Perform VM operations on VMware VMs through Azure using [supported extensions and management services](https://learn.microsoft.com/azure/azure-arc/vmware-vsphere/perform-vm-ops-through-azure#supported-extensions-and-management-services) +Perform VM operations on VMware VMs through Azure using [supported extensions and management services](/azure/azure-arc/vmware-vsphere/perform-vm-ops-through-azure#supported-extensions-and-management-services). |
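To make the extension step concrete, here's a short sketch that installs one supported extension on an Arc-enabled machine, assuming the Azure CLI `connectedmachine` extension; the machine name, region, and choice of extension are illustrative, not prescribed by the article.

```azurecli
# Install the Azure Monitor agent extension on an Arc-enabled VMware VM.
az connectedmachine extension create \
  --resource-group myResourceGroup \
  --machine-name myVMwareVM \
  --name AzureMonitorLinuxAgent \
  --publisher Microsoft.Azure.Monitor \
  --type AzureMonitorLinuxAgent \
  --location westeurope
```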
azure-vmware | Disable Internet Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disable-internet-access.md | Title: Disable internet access or enable a default route -description: This article explains how to disable internet access for Azure VMware Solution and enable default route for Azure VMware Solution. + Title: Set a default internet route or turn off internet access +description: Learn how to set a default internet route or turn off internet access in your Azure VMware Solution private cloud. Last updated 12/11/2023 -# Disable internet access or enable a default route -In this article, learn how to disable Internet access or enable a default route for your Azure VMware Solution private cloud. There are multiple ways to set up a default route. You can use a Virtual WAN hub, Network Virtual Appliance in a Virtual Network, or use a default route from on-premises. If you don't set up a default route, there's no Internet access to your Azure VMware Solution private cloud. +# Set a default internet route or turn off internet access -With a default route setup, you can achieve the following tasks: -- Disable Internet access to your Azure VMware Solution private cloud. +In this article, learn how to set a default internet route or turn off internet access in your Azure VMware Solution private cloud. - > [!Note] - > Ensure that a default route is not advertised from on-premises or Azure as that will override this setup. - -- Enable Internet access by generating a default route from Azure Firewall or third-party Network Virtual Appliance. -## Prerequisites -- If Internet access is required, a default route must be advertised from an Azure Firewall, Network Virtual Appliance or Virtual WAN Hub. -- Azure VMware Solution private cloud.-## Disable Internet access or enable a default route in the Azure portal -1. Sign in to the Azure portal. -1. Search for **Azure VMware Solution** and select it. -1. Locate and select your Azure VMware Solution private cloud. -1. On the left navigation, under **Workload networking**, select **Internet connectivity**. -1. Select the **Don't connect or connect using default route from Azure** button and select **Save**. -If you don't have a default route from on-premises or from Azure, you successfully disabled Internet connectivity to your Azure VMware Solution private cloud. +You have multiple options to set up a default internet access route. You can use a virtual WAN hub or a network virtual appliance (NVA) in a virtual network, or you can use a default route from an on-premises environment. If you don't set a default route, your Azure VMware Solution private cloud has no internet access. ++With a default route set, you can achieve the following tasks: ++- Turn off internet access to your Azure VMware Solution private cloud. ++ > [!NOTE] + > Ensure that a default route is not advertised from on-premises or from Azure. An advertised default route overrides this setup. ++- Turn on internet access by generating a default route from Azure Firewall or from a third-party NVA. -## Next steps +## Prerequisites ++- An Azure VMware Solution private cloud. +- If internet access is required, a default route must be advertised from an instance of Azure Firewall, an NVA, or a virtual WAN hub. ++## Set a default internet access route ++To set a default internet access route or to turn off internet access, use the Azure portal: ++1. Sign in to the Azure portal. +1. 
Search for **Azure VMware Solution**, and then select it in the search results. +1. Find and select your Azure VMware Solution private cloud. +1. On the resource menu under **Workload networking**, select **Internet connectivity**. +1. Select the **Connect using default route from Azure** option or the **Don't connect using default route from Azure** option, and then select **Save**. -[Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md) +If you don't have a default route from on-premises or from Azure, completing the preceding steps turns off internet connectivity to your Azure VMware Solution private cloud. -[Enable Managed SNAT for Azure VMware Solution Workloads](enable-managed-snat-for-workloads.md) +## Related content -[Enable Public IP to the NSX Edge for Azure VMware Solution](enable-public-ip-nsx-edge.md) +- [Internet connectivity design considerations](concepts-design-public-internet-access.md) +- [Turn on Managed SNAT for Azure VMware Solution workloads](enable-managed-snat-for-workloads.md) +- [Turn on public IP addresses to an NSX-T Edge node for NSX-T Data Center](enable-public-ip-nsx-edge.md) |
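The same toggle can be scripted. This is a minimal sketch, assuming the Azure CLI `vmware` extension's `--internet` flag on `az vmware private-cloud update` corresponds to these portal options; verify the accepted values with `--help`.

```azurecli
# Turn off internet access for the private cloud (assumed equivalent of
# the "Don't connect using default route from Azure" portal option).
az vmware private-cloud update \
  --resource-group myResourceGroup \
  --name myPrivateCloud \
  --internet Disabled
```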
azure-vmware | Enable Managed Snat For Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-managed-snat-for-workloads.md | Title: Enable Managed SNAT for Azure VMware Solution Workloads -description: This article explains how to enable Managed SNAT for Azure VMware Solution Workloads. + Title: Turn on Managed SNAT for Azure VMware Solution workloads +description: Learn how to turn on Managed SNAT for Azure VMware Solution workloads. Last updated 12/21/2023 -# Enable Managed SNAT for Azure VMware Solution workloads +# Turn on Managed SNAT for Azure VMware Solution workloads -In this article, learn how to enable Azure VMware Solution's Managed Source NAT (SNAT) to connect to the Internet outbound. A SNAT service translates from RFC1918 space to the public Internet for simple outbound Internet access. ICMP gets disabled by design, you can't ping an Internet host. The SNAT service doesn't work when you have a default route from Azure. +In this article, learn how to turn on Source Network Address Translation (SNAT) via the Azure VMware Solution Managed SNAT service for outbound internet connectivity. +A SNAT service translates from an RFC 1918 space to the public internet for simple outbound internet access. Internet Control Message Protocol (ICMP) is turned off by design so that users can't ping an internet host. The SNAT service doesn't work when you have a default route from Azure. -With this capability, you: +The Managed SNAT service in Azure VMware Solution gives you: -- Have a basic SNAT service with outbound Internet connectivity from your Azure VMware Solution private cloud.-- Have no control of outbound SNAT rules.-- Have no control of the public IP address used.-- Cannot terminate inbound initiated Internet traffic.-- Are unable to view connection logs. -- Have a limit of 128,000 concurrent connections. ++- A basic SNAT service with outbound internet connectivity from your Azure VMware Solution private cloud. +- A limit of 128,000 concurrent connections. ++By using the Managed SNAT service, you *don't* have: ++- Control of outbound SNAT rules. +- Control of the public IP address that's used. +- The ability to terminate inbound-initiated internet traffic. +- The ability to view connection logs. ## Reference architecture-The architecture shows Internet access outbound from your Azure VMware Solution private cloud using an Azure VMware Solution Managed SNAT Service. +The following figure shows internet access that's outbound from your Azure VMware Solution private cloud via the Managed SNAT service in Azure VMware Solution. +++## Set up outbound internet access by using the Managed SNAT service -## Configure Outbound Internet access using Managed SNAT in the Azure port -1. Sign in to the Azure portal and then search for and select **Azure VMware Solution**. -2. Select the Azure VMware Solution private cloud. -1. In the left navigation, under **Workload Networking**, select **Internet Connectivity**. -4. Select **Connect using SNAT** button and select **Save**. - You successfully enabled outbound Internet access for your Azure VMware Solution private cloud using our Managed SNAT service. +To set up outbound internet access via Managed SNAT, use the Azure portal: -## Next steps -[Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md) +1. Sign in to the Azure portal. +1. Search for **Azure VMware Solution**, and then select it in the search results. +1. Select your Azure VMware Solution private cloud. 
+1. On the resource menu under **Workload networking**, select **Internet connectivity**. +1. Select **Connect using SNAT**, and then select **Save**. -[Enable Public IP to the NSX Edge for Azure VMware Solution (Preview)](enable-public-ip-nsx-edge.md) +## Related content -[Disable Internet access or enable a default route](disable-internet-access.md) +- [Internet connectivity design considerations](concepts-design-public-internet-access.md) +- [Turn on public IP addresses to an NSX-T Edge node for NSX-T Data Center](enable-public-ip-nsx-edge.md) +- [Set a default internet route or disable internet access](disable-internet-access.md) |
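After you save the setting, you can confirm the private cloud's internet-connectivity state from the CLI. A small sketch, assuming the setting is surfaced as the `internet` property on the private cloud resource; the Managed SNAT choice itself is made in the portal as described above.

```azurecli
# Show the current internet-connectivity setting for the private cloud.
az vmware private-cloud show \
  --resource-group myResourceGroup \
  --name myPrivateCloud \
  --query internet \
  --output tsv
```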
azure-vmware | Enable Public Ip Nsx Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md | Title: Enable Public IP on the NSX-T Data Center Edge for Azure VMware Solution -description: This article shows how to enable internet access for your Azure VMware Solution. + Title: Turn on public IP addresses to an NSX-T Edge node for NSX-T Data Center +description: Learn how to turn on internet access for NSX-T Data Center in Azure VMware Solution. Learn how to turn on public IP addresses to an NSX-T Edge node and set internet access rules. Last updated 12/12/2023 -# Enable Public IP on the NSX-T Data Center Edge for Azure VMware Solution +# Turn on public IP addresses to an NSX-T Edge node for NSX-T Data Center -In this article, learn how to enable Public IP on the NSX-T Data Center Edge for your Azure VMware Solution. +In this article, learn how to turn on public IP addresses on a VMware NSX-T Edge node to run VMware NSX-T Data Center for your instance of Azure VMware Solution. ->[!TIP] ->Before you enable Internet access to your Azure VMware Solution, review the [Internet connectivity design considerations](concepts-design-public-internet-access.md). +> [!TIP] +> Before you turn on internet access to your instance of Azure VMware Solution, review [Internet connectivity design considerations](concepts-design-public-internet-access.md). -Public IP on the NSX-T Data Center Edge is a feature in Azure VMware Solution that enables inbound and outbound internet access for your Azure VMware Solution environment. +Public IP addresses to an NSX-T Edge node for NSX-T Data Center is a feature in Azure VMware Solution that turns on inbound and outbound internet access for your Azure VMware Solution environment. ->[!IMPORTANT] ->The use of Public IPv4 addresses can be consumed directly in Azure VMware Solution and charged based on the Public IPv4 prefix shown on [Pricing - Virtual Machine IP Address Options.](https://azure.microsoft.com/pricing/details/ip-addresses/). There are no data ingress or egress charges related to this service. +> [!IMPORTANT] +> IPv4 public IP address usage can be consumed directly in Azure VMware Solution and charged based on the IPv4 public IP address prefix that's shown in [Pricing - Virtual machine IP addresses](https://azure.microsoft.com/pricing/details/ip-addresses/). No charges for data ingress or egress are related to this service. -The Public IP is configured in Azure VMware Solution through the Azure portal and the NSX-T Data Center interface within your Azure VMware Solution private cloud. +The public IP address range is configured in Azure VMware Solution through the Azure portal and the NSX-T Data Center interface within your Azure VMware Solution private cloud. With this capability, you have the following features: -- A cohesive and simplified experience for reserving and using a Public IP down to the NSX Edge.-- The ability to receive up to 1000 or more Public IPs, enabling Internet access at scale.+- A cohesive and simplified experience for reserving and using a public IP address to the NSX-T Edge node. +- The ability to receive 1,000 or more public IP addresses. Turn on internet access at scale. - Inbound and outbound internet access for your workload VMs.-- DDoS Security protection against network traffic in and out of the internet.-- HCX Migration support over the Public Internet.+- Distributed denial-of-service (DDoS) security protection against network traffic to and from the internet. 
+- VMware HCX migration support over the public internet. ->[!IMPORTANT] ->You can configure up to 64 total Public IP addresses across these network blocks. If you want to configure more than 64 Public IP addresses, please submit a support ticket stating how many. +> [!IMPORTANT] +> You can set up a maximum of 64 total public IP addresses across these network blocks. If you want to configure more than 64 public IP addresses, please submit a support ticket that indicates the number of addresses you need. ## Prerequisites -- Azure VMware Solution private cloud-- DNS Server configured on the NSX-T Data Center+- An Azure VMware Solution private cloud. +- A DNS server set up for your instance of NSX-T Data Center. ## Reference architecture -The architecture shows internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX-T Data Center Edge. +The following figure shows internet access to and from your Azure VMware Solution private cloud via a public IP address directly to the NSX-T Edge node for NSX-T Data Center. ->[!IMPORTANT] ->The use of Public IP down to the NSX-T Data Center Edge is not compatible with reverse DNS Lookup. This includes not being able to support hosting a mail server in Azure VMware Solution. -## Configure a Public IP in the Azure portal +> [!IMPORTANT] +> Using a public IP address at the NSX-T Edge node for NSX-T Data Center is not compatible with reverse DNS lookup. If you use this scenario, you can't host a mail server in Azure VMware Solution. -1. Sign in to the Azure portal. -1. Search for and select Azure VMware Solution. -1. Select the Azure VMware Solution private cloud. -1. In the left navigation, under **Workload Networking**, select **Internet connectivity**. -1. Select the **Connect using Public IP down to the NSX-T Edge** button. +## Set up a public IP address or range ->[!IMPORTANT] ->Before selecting a Public IP, ensure you understand the implications to your existing environment. For more information, see [Internet connectivity design considerations](concepts-design-public-internet-access.md). This should include a risk mitigation review with your relevant networking and security governance and compliance teams. +To set up a public IP address or range, use the Azure portal: -6. Select **Public IP**. - :::image type="content" source="media/public-ip-nsx-edge/public-ip-internet-connectivity.png" alt-text="Diagram that shows how to select public IP to the NSX Edge"::: -6. Enter the **Public IP name** and select a subnet size from the **Address space** dropdown and select **Configure**. -7. This Public IP should be configured within 20 minutes and show the subnet. - :::image type="content" source="media/public-ip-nsx-edge/public-ip-subnet-internet-connectivity.png" alt-text="Diagram that shows Internet connectivity in Azure VMware Solution."::: -1. If you don't see the subnet, refresh the list. If the refresh fails, try the configuration again. - -9. After configuring the Public IP, select the **Connect using the Public IP down to the NSX-T Edge** checkbox to disable all other Internet options. -10. Select **Save**. +1. Sign in to the Azure portal, and then go to your Azure VMware Solution private cloud. +1. On the resource menu under **Workload networking**, select **Internet connectivity**. +1. Select the **Connect using Public IP down to the NSX-T Edge** checkbox. -You successfully enabled Internet connectivity for your Azure VMware Solution private cloud and reserved a Microsoft allocated Public IP. 
You can now configure this Public IP down to the NSX-T Data Center Edge for your workloads. The NSX-T Data Center is used for all VM communication. There are several options for configuring your reserved Public IP down to the NSX-T Data Center Edge. + > [!IMPORTANT] + > Before you select a public IP address, ensure that you understand the implications to your existing environment. For more information, see [Internet connectivity design considerations](concepts-design-public-internet-access.md). Considerations should include a risk mitigation review with your relevant networking and security governance and compliance teams. -There are three options for configuring your reserved Public IP down to the NSX-T Data Center Edge: Outbound Internet Access for VMs, Inbound Internet Access for VMs, and Gateway Firewall used to Filter Traffic to VMs at T1 Gateways. +1. Select **Public IP**. -### Outbound Internet access for VMs + :::image type="content" source="media/public-ip-nsx-edge/public-ip-internet-connectivity.png" alt-text="Diagram that shows how to select a public IP address to the NSX-T Edge node."::: -A Sourced Network Translation Service (SNAT) with Port Address Translation (PAT) is used to allow many VMs to one SNAT service. This connection means you can provide Internet connectivity for many VMs. +1. Enter a value for **Public IP name**. In the **Address space** dropdown list, select a subnet size. Then, select **Configure**. ->[!IMPORTANT] -> To enable SNAT for your specified address ranges, you must [configure a gateway firewall rule](#gateway-firewall-used-to-filter-traffic-to-vms-at-t1-gateways) and SNAT for the specific address ranges you desire. If you don't want SNAT enabled for specific address ranges, you must create a [No-NAT rule](#no-network-address-translation-rule-for-specific-address-ranges) for the address ranges to exclude. For your SNAT service to work as expected, the No-NAT rule should be a lower priority than the SNAT rule. + This public IP address is available within approximately 20 minutes. -**Add rule** + Check that the subnet is listed. If you don't see the subnet, refresh the list. If the refresh fails to display the subnet, try the configuration again. -1. From your Azure VMware Solution private cloud, select **vCenter Server Credentials** -2. Locate your NSX-T Manager URL and credentials. -3. Sign in to **VMware NSX-T Manager**. -4. Navigate to **NAT Rules**. -5. Select the T1 Router. -1. Select **ADD NAT RULE**. + :::image type="content" source="media/public-ip-nsx-edge/public-ip-subnet-internet-connectivity.png" alt-text="Diagram that shows internet connectivity in Azure VMware Solution."::: -**Configure rule** - -1. Enter a name. +1. After you set the public IP address, select the **Connect using the public IP down to the NSX-T Edge** checkbox to turn off all other internet options. ++1. Select **Save**. ++You successfully turned on internet connectivity for your Azure VMware Solution private cloud and reserved a Microsoft-allocated public IP address. You can now set this public IP address to the NSX-T Edge node for NSX-T Data Center to use for your workloads. NSX-T Data Center is used for all virtual machine (VM) communication. 
+
+1. Select **Save**. ++You successfully turned on internet connectivity for your Azure VMware Solution private cloud and reserved a Microsoft-allocated public IP address. You can now set this public IP address to the NSX-T Edge node for NSX-T Data Center to use for your workloads. NSX-T Data Center is used for all virtual machine (VM) communication. ++You have three options for configuring your reserved public IP address to the NSX-T Edge node for NSX-T Data Center: ++- Outbound internet access for VMs +- Inbound internet access for VMs +- A gateway firewall to filter traffic to VMs at T1 gateways ++### Outbound internet access for VMs ++A Source Network Address Translation (SNAT) service with Port Address Translation (PAT) is used to allow many VMs to use one SNAT service. Using this type of connection means that you can provide internet connectivity for many VMs. ++> [!IMPORTANT] +> To enable SNAT for your specified address ranges, you must [configure a gateway firewall rule](#set-up-a-gateway-firewall-to-filter-traffic-to-vms-at-t1-gateways) and SNAT for the specific address ranges that you want to use. If you don't want SNAT turned on for specific address ranges, you must create a [No-NAT rule](#create-a-no-nat-rule) for address ranges to exclude from Network Address Translation (NAT). For your SNAT service to work as expected, the No-NAT rule should be a lower priority than the SNAT rule. ++#### Create a SNAT rule ++1. In your Azure VMware Solution private cloud, select **vCenter Server Credentials**. +1. Locate your NSX Manager URL and credentials. +1. Sign in to VMware NSX Manager. +1. Go to **NAT Rules**. +1. Select the T1 router. +1. Select **Add NAT Rule**. +1. Enter a name for the rule. 1. Select **SNAT**.-1. Optionally, enter a source such as a subnet to SNAT or destination. -1. Enter the translated IP. This IP is from the range of Public IPs you reserved from the Azure VMware Solution Portal. -1. Optionally, give the rule a higher priority number. This prioritization moves the rule further down the rule list to ensure more specific rules are matched first. -1. Select **SAVE**. -Logging is enabled through the logging slider. For more information on NSX-T Data Center NAT configuration and options, see the -[NSX-T Data Center NAT Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-7AD2C384-4303-4D6C-A44A-DEF45AA18A92.html) + Optionally, enter a source, such as a subnet to SNAT or a destination. -### No Network Address Translation rule for specific address ranges +1. Enter the translated IP address. This IP address is from the range of public IP addresses that you reserved in the Azure VMware Solution portal. -A No SNAT rule in NSX-T Manager can be used to exclude certain matches from performing Network Address Translation. This policy can be used to allow private IP traffic to bypass existing network translation rules. + Optionally, give the rule a higher-priority number. This prioritization moves the rule further down the rule list to ensure that more specific rules are matched first. -1. From your Azure VMware Solution private cloud, select **vCenter Server Credentials**. -1. Locate your NSX-T Manager URL and credentials. -1. Sign in to **VMware NSX-T Manager** and then select **NAT Rules**. -1. Select the T1 Router and then select **ADD NAT RULE**. -1. Select **NO SNAT** rule as the type of NAT rule. -1. Select the **Source IP** as the range of addresses you don't want to be translated. The **Destination IP** should be any internal addresses you're reaching from the range of Source IP ranges. -1. Select **SAVE**. +1. Select **Save**. -### Inbound Internet Access for VMs +Logging is turned on via the logging slider. -A Destination Network Translation Service (DNAT) is used to expose a VM on a specific Public IP address and/or a specific port. 
This service provides inbound internet access to your workload VMs. +For more information on NSX-T Data Center NAT configuration and options, see the +[NSX-T Data Center NAT Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-7AD2C384-4303-4D6C-A44A-DEF45AA18A92.html). -**Log in to VMware NSX-T Manager** +#### Create a No-NAT rule -1. From your Azure VMware Solution private cloud, select **VMware credentials**. -2. Locate your NSX-T Manager URL and credentials. -3. Sign in to **VMware NSX-T Manager**. +You can create a No-NAT or No-SNAT rule in NSX Manager to exclude certain matches from performing NAT. This policy can be used to allow private IP address traffic to bypass existing network translation rules. -**Configure the DNAT rule** +1. In your Azure VMware Solution private cloud, select **vCenter Server Credentials**. +1. Sign in to NSX Manager, and then select **NAT Rules**. +1. Select the T1 router, and then select **Add NAT Rule**. +1. Select **No SNAT** rule as the type of NAT rule. +1. Select the **Source IP** value as the range of addresses that you don't want to be translated. The **Destination IP** value should be any internal addresses that you're reaching from the range of source IP address ranges. +1. Select **Save**. -1. Name the rule. -1. Select **DNAT** as the action. -1. Enter the reserved Public IP in the destination match. This IP is from the range of Public IPs reserved from the Azure VMware Solution Portal. -1. Enter the VM Private IP in the translated IP. -1. Select **SAVE**. -1. Optionally, configure the Translated Port or source IP for more specific matches. +### Inbound internet access for VMs -The VM is now exposed to the internet on the specific Public IP and/or specific ports. +A Destination Network Translation (DNAT) service is used to expose a VM on a specific public IP address or on a specific port. This service provides inbound internet access to your workload VMs. -### Gateway Firewall used to filter traffic to VMs at T1 Gateways +#### Create a DNAT rule -You can provide security protection for your network traffic in and out of the public internet through your Gateway Firewall. +1. In your Azure VMware Solution private cloud, select **vCenter Server Credentials**. +1. Sign in to NSX Manager, and then select **NAT Rules**. +1. Select the T1 router, and then select **Add DNAT Rule**. +1. Enter a name for the rule. +1. Select **DNAT** as the action. +1. For the destination match, enter the reserved public IP address. This IP address is from the range of public IP addresses that are reserved in the Azure VMware Solution portal. +1. For the translated IP, enter the VM private IP address. +1. Select **Save**. -1. From your Azure VMware Solution Private Cloud, select **VMware credentials**. -2. Locate your NSX-T Manager URL and credentials. -3. Sign in to **VMware NSX-T Manager**. -4. From the NSX-T home screen, select **Gateway Policies**. -5. Select **Gateway Specific Rules**, choose the T1 Gateway and select **ADD POLICY**. -6. Select **New Policy** and enter a policy name. -7. Select the Policy and select **ADD RULE**. -8. Configure the rule. + Optionally, configure the translated port or the source IP address for more specific matches. - 1. Select **New Rule**. - 1. Enter a descriptive name. - 1. Configure the source, destination, services, and action. +The VM is now exposed to the internet on the specific public IP address or on specific ports. -1. 
Select **Match External Address** to apply firewall rules to the external address of a NAT rule. +### Set up a gateway firewall to filter traffic to VMs at T1 gateways -For example, the following rule is set to Match External Address, and this setting allows SSH traffic inbound to the Public IP. - :::image type="content" source="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity.png" alt-text="Screenshot Internet connectivity inbound Public IP." lightbox="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity-expanded.png"::: +You can provide security protection for your network traffic in and out of the public internet through your gateway firewall. -If **Match Internal Address** was specified, the destination would be the internal or private IP address of the VM. +1. In your Azure VMware Solution private cloud, select **VMware credentials**. +1. Sign in to NSX Manager. +1. On the NSX-T overview page, select **Gateway Policies**. +1. Select **Gateway Specific Rules**, choose the T1 gateway, and then select **Add Policy**. +1. Select **New Policy** and enter a policy name. +1. Select the policy and select **Add Rule**. +1. Configure the rule: ++ 1. Select **New Rule**. + 1. Enter a descriptive name. + 1. Configure the source, destination, services, and action. ++1. Select **Match External Address** to apply firewall rules to the external address of a NAT rule. -For more information on the NSX-T Data Center Gateway Firewall, see the [NSX-T Data Center Gateway Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html). -The Distributed Firewall could be used to filter traffic to VMs. This feature is outside the scope of this document. For more information, see [NSX-T Data Center Distributed Firewall Administration Guide]( https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html). + For example, the following rule is set to **Match External Address**. The setting allows Secure Shell (SSH) traffic inbound to the public IP address. -## Next steps + :::image type="content" source="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity.png" alt-text="Screenshot that shows internet connectivity inbound to the public IP address." lightbox="media/public-ip-nsx-edge/gateway-specific-rules-match-external-connectivity-expanded.png"::: -[Internet connectivity design considerations (Preview)](concepts-design-public-internet-access.md) +If you specify **Match Internal Address**, the destination is the internal or private IP address of the VM. -[Enable Managed SNAT for Azure VMware Solution Workloads (Preview)](enable-managed-snat-for-workloads.md) +For more information on the NSX-T Data Center gateway firewall, see the [NSX-T Data Center Gateway Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-A52E1A6F-F27D-41D9-9493-E3A75EC35481.html). +The distributed firewall can be used to filter traffic to VMs. For more information, see [NSX-T Data Center Distributed Firewall Administration Guide](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/3.1/administration/GUID-6AB240DB-949C-4E95-A9A7-4AC6EF5E3036.html).
-[Disable Internet access or enable a default route](disable-internet-access.md) +## Related content -[Enable HCX access over the internet](enable-hcx-access-over-internet.md) +- [Internet connectivity design considerations](concepts-design-public-internet-access.md) +- [Turn on Managed SNAT for Azure VMware Solution workloads](enable-managed-snat-for-workloads.md) +- [Set a default internet route or turn off internet access](disable-internet-access.md) +- [Turn on VMware HCX access over the internet](enable-hcx-access-over-internet.md) |
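The NAT rule steps in this row use the NSX Manager UI. If you want to script the same SNAT rule, NSX-T Data Center also exposes a declarative Policy REST API; the following is a minimal sketch, not the documented procedure, assuming an NSX-T 3.x environment with the manager address, credentials, Tier-1 gateway ID, and IP values all as placeholders:

```bash
# Minimal sketch (assumptions noted above): create-or-update a SNAT rule on a
# Tier-1 gateway through the NSX-T 3.x Policy REST API.
# <nsx-manager>, <password>, <t1-gateway-id>, and the IP values are placeholders.
curl -k -u 'admin:<password>' \
  -X PATCH "https://<nsx-manager>/policy/api/v1/infra/tier-1s/<t1-gateway-id>/nat/USER/nat-rules/MySnatRule" \
  -H "Content-Type: application/json" \
  -d '{
        "action": "SNAT",
        "source_network": "192.168.1.0/24",
        "translated_network": "<reserved-public-ip>",
        "enabled": true
      }'
```

A No-SNAT exclusion for specific ranges can be sketched the same way by changing the `action` value to `NO_SNAT`.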
azure-vmware | Manage Arc Enabled Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/manage-arc-enabled-azure-vmware-solution.md | The following command invokes the set credential for the specified appliance res ## Upgrade the Arc resource bridge -Azure Arc-enabled Azure VMware Private Cloud requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge are released to include security and feature updates. The Arc resource bridge can be manually upgraded from the vCenter server. You must meet all upgrade [prerequisites](https://review.learn.microsoft.com/azure/azure-arc/resource-bridge/upgrade?branch=main#prerequisites) before attempting to upgrade. The vCenter server must have the kubeconfig and appliance configuration files stored locally. If the cloudadmin credentials change after the initial deployment of the resource bridge, [update the Arc appliance credential](https://review.learn.microsoft.com/azure/azure-vmware/manage-arc-enabled-azure-vmware-solution#update-arc-appliance-credential) before you attempt a manual upgrade. +Azure Arc-enabled Azure VMware Private Cloud requires the Arc resource bridge to connect your VMware vSphere environment with Azure. Periodically, new images of Arc resource bridge are released to include security and feature updates. The Arc resource bridge can be manually upgraded from the vCenter server. You must meet all upgrade [prerequisites](/azure/azure-arc/resource-bridge/upgrade#prerequisites) before attempting to upgrade. The vCenter server must have the kubeconfig and appliance configuration files stored locally. If the cloudadmin credentials change after the initial deployment of the resource bridge, [update the Arc appliance credential](/azure/azure-vmware/manage-arc-enabled-azure-vmware-solution#update-arc-appliance-credential) before you attempt a manual upgrade. -Arc resource bridge can be manually upgraded from the management machine. The [manual upgrade](https://review.learn.microsoft.com/azure/azure-arc/resource-bridge/upgrade?branch=main#manual-upgrade) generally takes between 30-90 minutes, depending on the network speed. The upgrade command takes your Arc resource bridge to the immediate next version, which might not be the latest available version. Multiple upgrades could be needed to reach a [supported version](https://review.learn.microsoft.com/azure/azure-arc/resource-bridge/upgrade?branch=main#supported-versions). Verify your resource bridge version by checking the Azure resource of your Arc resource bridge. +Arc resource bridge can be manually upgraded from the management machine. The [manual upgrade](/azure/azure-arc/resource-bridge/upgrade#manual-upgrade) generally takes between 30-90 minutes, depending on the network speed. The upgrade command takes your Arc resource bridge to the immediate next version, which might not be the latest available version. Multiple upgrades could be needed to reach a [supported version](/azure/azure-arc/resource-bridge/upgrade#supported-versions). Verify your resource bridge version by checking the Azure resource of your Arc resource bridge. 
-Arc resource bridges, on a supported [private cloud provider](https://review.learn.microsoft.com/azure/azure-arc/resource-bridge/upgrade?branch=main#private-cloud-providers) with an appliance version 1.0.15 or higher, are automatically opted in to [cloud-managed upgrade](https://review.learn.microsoft.com/azure/azure-arc/resource-bridge/upgrade?branch=main#cloud-managed-upgrade).  +Arc resource bridges, on a supported [private cloud provider](/azure/azure-arc/resource-bridge/upgrade#private-cloud-providers) with an appliance version 1.0.15 or higher, are automatically opted in to [cloud-managed upgrade](/azure/azure-arc/resource-bridge/upgrade#cloud-managed-upgrade).  ## Collect logs from the Arc resource bridge |
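For reference, the manual upgrade described in this row is driven by the `az arcappliance` extension from the management machine. A hedged sketch, assuming the appliance YAML generated at deployment time is available locally:

```azurecli
# Sketch: manually upgrade the Arc resource bridge for VMware from the
# management machine. The configuration file path is a placeholder for the
# <resource-bridge-name>-appliance.yaml file produced at deployment.
az arcappliance upgrade vmware --config-file ./<resource-bridge-name>-appliance.yaml
```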
azure-vmware | Request Host Quota Azure Vmware Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/request-host-quota-azure-vmware-solution.md | Last updated 12/19/2023 # Request host quota for Azure VMware Solution -In this article, learn how to request host quota/capacity for [Azure VMware Solution](https://review.learn.microsoft.com/azure/azure-vmware/introduction?branch=main). You learn how to submit a support ticket to have your hosts allocated whether it's for a new deployment or an existing one. +In this article, learn how to request host quota/capacity for [Azure VMware Solution](introduction.md). You learn how to submit a support ticket to have your hosts allocated whether it's for a new deployment or an existing one. If you have an existing Azure VMware Solution private cloud and want more hosts allocated, follow the same process. You need an Azure account in an Azure subscription that adheres to one of the fo ## Request host quota for EA and MCA customers -1. In your Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information: +1. In your Azure portal, under **Help + Support**, create a **[New support request](https://portal.azure.com/#create/Microsoft.Support)** and provide the following information: - **Issue type:** Technical - **Subscription:** Select your subscription - **Service:** All services > Azure VMware Solution You need an Azure account in an Azure subscription that adheres to one of the fo ## Request host quota for CSP customers -CSPs must use [Microsoft Partner Center](https://partner.microsoft.com) to enable Azure VMware Solution for their customers. This article uses [CSP Azure plan](https://learn.microsoft.com/partner-center/azure-plan-lp) as an example to illustrate the purchase procedure for partners. +CSPs must use [Microsoft Partner Center](https://partner.microsoft.com) to enable Azure VMware Solution for their customers. This article uses [CSP Azure plan](/partner-center/azure-plan-lp) as an example to illustrate the purchase procedure for partners. Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from Partner Center. Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P 1. Select **Azure plan** and then select **Add to cart**. - 1. Review and finish the general setup of the Azure plan subscription for your customer. For more information, see [Microsoft Partner Center documentation](https://learn.microsoft.com/partner-center/azure-plan-manage). + 1. Review and finish the general setup of the Azure plan subscription for your customer. For more information, see [Microsoft Partner Center documentation](/partner-center/azure-plan-manage). -1. After you configure the Azure plan and you have the needed [Azure RBAC permissions](https://learn.microsoft.com/partner-center/azure-plan-manage) in place for the subscription, you'll request the quota for your Azure plan subscription. +1. After you configure the Azure plan and you have the needed [Azure RBAC permissions](/partner-center/azure-plan-manage) in place for the subscription, you'll request the quota for your Azure plan subscription. 1. Access Azure portal from [Microsoft Partner Center](https://partner.microsoft.com) using the **Admin On Behalf Of** (AOBO) procedure. Access the Azure portal using the **Admin On Behalf Of** (AOBO) procedure from P 1. Expand customer details and select **Microsoft Azure Management Portal**. - 1. 
In the Azure portal, under **Help + Support**, create a **[New support request](https://rc.portal.azure.com/#create/Microsoft.Support)** and provide the following information: + 1. In the Azure portal, under **Help + Support**, create a **[New support request](https://portal.azure.com/#create/Microsoft.Support)** and provide the following information: - **Issue type:** Technical - **Subscription:** Select your subscription - **Service:** All services > Azure VMware Solution |
azure-vmware | Reserved Instance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/reserved-instance.md | -When you commit to a reserved instance of [Azure VMware Solution](https://review.learn.microsoft.com/azure/azure-vmware/introduction?branch=main), you save money. The reservation discount automatically applies to the running Azure VMware Solution hosts that match the reservation scope and attributes. In addition, a reserved instance purchase covers only the compute part of your usage and includes software licensing costs. +When you commit to a reserved instance of [Azure VMware Solution](introduction.md), you save money. The reservation discount automatically applies to the running Azure VMware Solution hosts that match the reservation scope and attributes. In addition, a reserved instance purchase covers only the compute part of your usage and includes software licensing costs. ## Purchase restriction considerations |
communication-services | Add Chat Push Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/add-chat-push-notifications.md | Here we recommend creating a .p12 APNS cert and set it in Notification Hub. <img src="./media/add-chat-push-notification/xcode-config.png" width="730" height="500" alt="Screenshot of Enable Push Notifications and Background modes in Xcode."> +* Set "Require Only App-Extension-Safe API" as "No" for Pod Target - AzureCore + ## Implementation In protocol extension, chat SDK provides the implementation of `decryptPayload(n 5. Plug the IOS device into your mac, run the program and click “allow” when asked to authorize push notification on device. -6. As User B, send a chat message. You (User A) should be able to receive a push notification in your IOS device. +6. As User B, send a chat message. You (User A) should be able to receive a push notification in your IOS device. |
container-apps | Tutorial Code To Cloud | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-code-to-cloud.md | az containerapp create \ --resource-group $RESOURCE_GROUP \ --environment $ENVIRONMENT \ --image $ACR_NAME.azurecr.io/$API_NAME \- --target-port 3500 \ + --target-port 8080 \ --ingress 'external' \ --registry-server $ACR_NAME.azurecr.io \ --query properties.configuration.ingress.fqdn az containerapp create \ * By setting `--ingress` to `external`, your container app is accessible from the public internet. -* The `target-port` is set to `3500` to match the port that the container is listening to for requests. +* The `target-port` is set to `8080` to match the port that the container is listening to for requests. * Without a `query` property, the call to `az containerapp create` returns a JSON response that includes a rich set of details about the application. Adding a query parameter filters the output to just the app's fully qualified domain name (FQDN). $AppArgs = @{ TemplateContainer = $TemplateObj ConfigurationRegistry = $RegistryObj ConfigurationSecret = $SecretObj- IngressTargetPort = 3500 + IngressTargetPort = 8080 IngressExternal = $true } $MyApp = New-AzContainerApp @AppArgs $MyApp.IngressFqdn ``` * By setting `IngressExternal` to `external`, your container app is accessible from the public internet.-* The `IngressTargetPort` parameter is set to `3500` to match the port that the container is listening to for requests. +* The `IngressTargetPort` parameter is set to `8080` to match the port that the container is listening to for requests. |
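As a quick check that the corrected `--target-port 8080` is serving traffic, you can re-query the app's FQDN and send a test request. This sketch reuses the tutorial's variable names and assumes the app was created with `--name $API_NAME`:

```azurecli
# Sketch: fetch the app's fully qualified domain name and probe the endpoint.
FQDN=$(az containerapp show \
  --name $API_NAME \
  --resource-group $RESOURCE_GROUP \
  --query properties.configuration.ingress.fqdn \
  --output tsv)

curl "https://$FQDN"
```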
container-apps | Tutorial Scaling | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/tutorial-scaling.md | az containerapp up \ --location centralus \ --environment 'my-container-apps' \ --image mcr.microsoft.com/k8se/quickstart:latest \- --target-port 80 \ + --target-port 8080 \ --ingress external \ --query properties.configuration.ingress.fqdn \ ``` az containerapp up ` --location centralus ` --environment my-container-apps ` --image mcr.microsoft.com/k8se/quickstart:latest `- --target-port 80 ` - --ingress external ` + --target-port 8080 ` + --ingress external ` --query properties.configuration.ingress.fqdn ` ``` The `show` command returns entries from the system logs for your container app i } { "TimeStamp":"2023-08-01T16:47:31.9481264+00:00",- "Log":"Now listening on: http://[::]:3500" + "Log":"Now listening on: http://[::]:8080" } { "TimeStamp":"2023-08-01T16:47:31.9490917+00:00", |
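To watch the startup entry quoted above (`Now listening on: http://[::]:8080`) and scale events as they happen, you can stream the system logs; a sketch with placeholder names:

```azurecli
# Sketch: stream recent system logs for the container app and follow new entries.
az containerapp logs show \
  --name <CONTAINER_APP_NAME> \
  --resource-group <RESOURCE_GROUP> \
  --type system \
  --follow
```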
cost-management-billing | Billing Tags | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/billing-tags.md | Billing tags are applied in the Azure portal. The required permissions are: **Billing profile tags** -1. Go to [https://portal.azure.com.](https://portal.azure.com.) +1. Go to [https://portal.azure.com](https://portal.azure.com). 1. Search for and select **Cost Management + Billing**. 1. Select the billing profile where you want to set the tags. 1. On the left menu, select **Properties** under **Settings** and then select **Add or Update tags**. |
data-lake-store | Data Lake Store Connectivity From Vnets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-connectivity-from-vnets.md | Title: Connect to Azure Data Lake Storage Gen1 from VNETs | Microsoft Docs description: Learn how to enable access to Azure Data Lake Storage Gen1 from Azure virtual machines that have restricted access to resources.- -- ms.assetid: 683fcfdc-cf93-46c3-b2d2-5cb79f5e9ea5 When an ExpressRoute circuit is configured, the on-premises servers can access D ## See also * [Overview of Azure Data Lake Storage Gen1](data-lake-store-overview.md)-* [Securing data stored in Azure Data Lake Storage Gen1](data-lake-store-security-overview.md) +* [Securing data stored in Azure Data Lake Storage Gen1](data-lake-store-security-overview.md) |
databox-online | Azure Stack Edge Gpu Clustering Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-clustering-overview.md | For an Azure Stack Edge cluster with two nodes, if a node fails, then a cluster - For more information about the cluster witness, see [Cluster witness on Azure Stack Edge](azure-stack-edge-gpu-cluster-witness-overview.md). - For more information about witness in the cloud, see [Configure cloud witness](azure-stack-edge-gpu-manage-cluster.md#configure-cloud-witness).+ - For detailed steps to deploy a cloud witness, see [Deploy cloud witness for a failover cluster](/windows-server/failover-clustering/deploy-cloud-witness?tabs=windows#to-create-an-azure-storage-account). ## Infrastructure cluster |
defender-for-cloud | Defender For Storage Malware Scan | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-malware-scan.md | Malware Scanning doesn't block access or change permissions to the uploaded blob - Unsupported storage accounts: Legacy v1 storage accounts aren't supported by malware scanning. - Unsupported service: Azure Files isn't supported by malware scanning. - Unsupported regions: Jio India West, Korea South.-- Regions that are supported by Defender for Storage but not by malware scanning. Learn more about [availability for Defender for Storage.](/azure/defender-for-cloud/defender-for-storage-introduction)+ - Regions that are supported by Defender for Storage but not by malware scanning. Learn more about [availability for Defender for Storage.](/azure/defender-for-cloud/defender-for-storage-introduction) + - Unsupported blob types: [Append and Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs) aren't supported for Malware Scanning. - Unsupported encryption: Client-side encrypted blobs aren't supported as they can't be decrypted before scanning by the service. However, data encrypted at rest by Customer Managed Key (CMK) is supported. - Unsupported index tag results: Index tag scan result isn't supported in storage accounts with Hierarchical namespace enabled (Azure Data Lake Storage Gen2). |
defender-for-cloud | Upcoming Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md | If you're looking for the latest release notes, you can find them in the [What's | Planned change | Announcement date | Estimated date for change | |--|--|--|+| [Update to agentless VM scanning built-in Azure role](#update-to-agentless-vm-scanning-built-in-azure-role) |January 14, 2024 | February 2024 | +| [Deprecation of two recommendations related to PCI](#deprecation-of-two-recommendations-related-to-pci) |January 14, 2024 | February 2024 | | [Four new recommendations for Azure Stack HCI resource type](#four-new-recommendations-for-azure-stack-hci-resource-type) | January 11, 2024 | February 2024 | | [Defender for Servers built-in vulnerability assessment (Qualys) retirement path](#defender-for-servers-built-in-vulnerability-assessment-qualys-retirement-path) | January 9, 2024 | May 2024 | | [Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys](#retirement-of-the-defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys) | January 9, 2023 | March 2024 | If you're looking for the latest release notes, you can find them in the [What's | [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 | +## Update to agentless VM scanning built-in Azure role ++**Announcement date: January 14, 2024** ++**Estimated date of change: February 2024** ++In Azure, agentless scanning for VMs uses a built-in role (called [VM scanner operator](/azure/defender-for-cloud/faq-permissions)) with the minimum necessary permissions required to scan and assess your VMs for security issues. To continuously provide relevant scan health and configuration recommendations for VMs with encrypted volumes, an update to this role's permissions is planned. The update includes the addition of the ```Microsoft.Compute/DiskEncryptionSets/read``` permission. This permission solely enables improved identification of encrypted disk usage in VMs. It does not provide Defender for Cloud any additional capabilities to decrypt or access the content of these encrypted volumes beyond the encryption methods [already supported](/azure/defender-for-cloud/concept-agentless-data-collection#availability) prior to this change. This change is expected to take place during February 2024 and no action is required on your end. ++## Deprecation of two recommendations related to PCI ++**Announcement date: January 14, 2024** ++**Estimated date for change: February 2024** ++The following two recommendations related to PCI (Permission Creep Index) are set for deprecation: ++- `Over-provisioned identities in accounts should be investigated to reduce the Permission Creep Index (PCI)` +- `Over-Provisioned identities in subscriptions should be investigated to reduce the Permission Creep Index (PCI)` + ## Four new recommendations for Azure Stack HCI resource type **Announcement date: January 11, 2024** Azure Stack HCI is set to be a new resource type that can be managed through Mic **Estimated date for change: May 2024** The Defender for Servers built-in vulnerability assessment solution powered by Qualys is on a retirement path which is estimated to complete on **May 1st, 2024**. 
If you're currently using the vulnerability assessment solution powered by Qualys, you should plan your [transition to the integrated Microsoft Defender vulnerability management solution](how-to-transition-to-built-in.md).- + For more information about our decision to unify our vulnerability assessment offering with Microsoft Defender Vulnerability Management, you can read [this blog post](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-cloud-unified-vulnerability-assessment-powered-by/ba-p/3990112). You can also check out the [common questions about the transition to Microsoft Defender Vulnerability Management solution](faq-scanner-detection.yml). |
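Once the role update described in this row ships, one way to confirm the new permission is to inspect the built-in role's definition from the CLI; a hedged sketch:

```azurecli
# Sketch: list the actions granted by the built-in "VM scanner operator" role,
# then look for the Microsoft.Compute/DiskEncryptionSets/read entry that the
# February 2024 update is expected to add.
az role definition list --name "VM scanner operator" --query "[].permissions[].actions" --output json
```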
education-hub | Deploy Resources Azure For Students | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/education-hub/deploy-resources-azure-for-students.md | You must have an Azure for Students account. ## Follow the tutorials to deploy resources -[Deploy an App on Azure](https://learn.microsoft.com/azure/app-service/) -[Deploy a Virtual Machine](https://learn.microsoft.com/azure/virtual-machines/) -[Deploy a SQL Database](https://learn.microsoft.com/azure/azure-sql/?view=azuresql) -[Deploy Azure AI Speech-to-Test](https://learn.microsoft.com/azure/ai-services/speech-service/index-speech-to-text) -[Deploy Azure AI Custom Vision Service](https://learn.microsoft.com/azure/ai-services/custom-vision-service/) +[Deploy an App on Azure](/azure/app-service/) +[Deploy a Virtual Machine](/azure/virtual-machines/) +[Deploy a SQL Database](/azure/azure-sql/) +[Deploy Azure AI Speech-to-Text](/azure/ai-services/speech-service/index-speech-to-text) +[Deploy Azure AI Custom Vision Service](/azure/ai-services/custom-vision-service/) ## Next steps |
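If you'd rather start from the command line than the linked tutorials, a minimal hedged example of the first scenario (deploying an app) on an Azure for Students subscription might look like the following; `<app-name>` is a placeholder and must be globally unique:

```azurecli
# Hypothetical quick deployment of the code in the current folder to a
# free-tier App Service plan.
az webapp up --name <app-name> --sku F1 --location westus
```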
event-grid | Event Schema Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-aks.md | Title: Azure Kubernetes Service as Event Grid source description: This article describes how to use Azure Kubernetes Service as an Event Grid event source. It provides the schema and links to tutorial and how-to articles. -++ Last updated 12/02/2022- # Azure Kubernetes Service (AKS) as an Event Grid source |
event-hubs | Schema Registry Json Schema Kafka | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/schema-registry-json-schema-kafka.md | description: This article provides information on how to use JSON Schema in Sche Last updated 04/26/2023 ms.devlang: scala--++ # Use JSON Schema with Apache Kafka applications (Preview) |
genomics | Business Continuity Genomics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/business-continuity-genomics.md | Title: Overview of business continuity description: This overview describes the capabilities that Microsoft Genomics provides for business continuity and disaster recovery. -keywords: business continuity, disaster recovery -- - |
genomics | File Support Ticket Genomics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/file-support-ticket-genomics.md | Title: How to file a support request description: This article describes how to file a support request to contact Microsoft Genomics if you're not able to resolve your issue with the troubleshooting guide or FAQ. -keywords: troubleshooting, error, debugging, support - - |
genomics | Overview What Is Genomics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/overview-what-is-genomics.md | Title: What is Microsoft Genomics? description: Learn how Microsoft Genomics can power genome sequencing, using a cloud implementation of Burrows-Wheeler Aligner (BWA) and Genome Analysis Toolkit (GATK).- - |
genomics | Quickstart Input Bam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-bam.md | Title: Submit a workflow using BAM file input description: This article demonstrates how to submit a workflow to the Microsoft Genomics service if your input file is a single BAM file. - - |
genomics | Quickstart Input Multiple | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-multiple.md | Title: Submit a workflow using multiple inputs description: This article demonstrates how to submit a workflow to the Microsoft Genomics service if your input file is multiple FASTQ or BAM files from the same sample.- - Last updated 02/05/2018 |
genomics | Quickstart Input Pair Fastq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-pair-fastq.md | Title: Submit a workflow using FASTQ file inputs description: This article demonstrates how to submit a workflow to the Microsoft Genomics service if your input files are a single pair of FASTQ files. - - |
genomics | Quickstart Input Sas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-input-sas.md | Title: Workflow using shared access signatures description: This article demonstrates how to submit a workflow to the Microsoft Genomics service using shared access signatures (SAS) instead of storage account keys.- - |
genomics | Quickstart Run Genomics Workflow Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/quickstart-run-genomics-workflow-portal.md | Title: 'Quickstart: Run a workflow - Microsoft Genomics' description: The quickstart shows how to load input data into Azure Blob Storage and run a workflow through the Microsoft Genomics service.- - |
genomics | Version Release History Genomics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/genomics/version-release-history-genomics.md | Title: Version release history description: The release history of updates to the Microsoft Genomics Python client for fixes and new functionality. - - |
iot-hub | Iot Hub Devguide Messages Read Builtin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-read-builtin.md | Title: Understand the Azure IoT Hub built-in endpoint -description: This article describes how to use the built-in, Event Hub-compatible endpoint to read device-to-cloud messages. + Title: Understand the built-in endpoint ++description: This article describes how to use the built-in, Event Hubs-compatible endpoint to read device-to-cloud messages. - Previously updated : 12/19/2022 Last updated : 12/11/2023 # Read device-to-cloud messages from the built-in endpoint -By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](../event-hubs/index.yml). This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**. +By default, messages are routed to the built-in service-facing endpoint (**messages/events**) that is compatible with [Event Hubs](../event-hubs/index.yml). IoT Hub exposes the **messages/events** built-in endpoint for your back-end services to read the device-to-cloud messages received by your hub. This endpoint is Event Hubs-compatible, which enables you to use any of the mechanisms the Event Hubs service supports for reading messages. ++If you're using [message routing](iot-hub-devguide-messages-d2c.md) and the [fallback route](iot-hub-devguide-messages-d2c.md#fallback-route) is enabled, a message that doesn't match a query on any route goes to the built-in endpoint. If you disable this fallback route, a message that doesn't match any query is dropped. ++This endpoint is currently only exposed using the [AMQP](https://www.amqp.org/) protocol on port 5671 and [AMQP over WebSockets](http://docs.oasis-open.org/amqp-bindmap/amqp-wsb/v1.0/cs01/amqp-wsb-v1.0-cs01.html) on port 443. An IoT hub exposes the following properties to enable you to control the built-in Event Hub-compatible messaging endpoint **messages/events**. | Property | Description | | - | -- | | **Partition count** | Set this property at creation to define the number of [partitions](../event-hubs/event-hubs-features.md#partitions) for device-to-cloud event ingestion. | | **Retention time** | This property specifies how long in days messages are retained by IoT Hub. The default is one day, but it can be increased to seven days. | -IoT Hub allows data retention in the built-in Event Hubs for a maximum of seven days. You can set the retention time during creation of your IoT hub. Data retention time in IoT Hub depends on your IoT hub tier and unit type. In terms of size, the built-in Event Hubs can retain messages of the maximum message size up to at least 24 hours of quota. For example, one S1 unit IoT hub provides enough storage to retain at least 400,000 messages, at 4 KB per message. If your devices are sending smaller messages, they may be retained for longer (up to seven days) depending on how much storage is consumed. We guarantee to retain the data for the specified retention time as a minimum. After the retention time has passed, messages expire and become inaccessible. --IoT Hub also enables you to manage consumer groups on the built-in device-to-cloud receive endpoint. 
You can have up to 20 consumer groups for each IoT hub. --If you're using [message routing](iot-hub-devguide-messages-d2c.md) and the [fallback route](iot-hub-devguide-messages-d2c.md#fallback-route) is enabled, a message that doesn't match a query on any route goes to the built-in endpoint. If you disable this fallback route, a message that doesn't match any query is dropped. --You can modify the retention time, either programmatically using the [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource), or with the [Azure portal](https://portal.azure.com). +IoT Hub allows data retention in the built-in endpoint for a maximum of seven days. You can set the retention time during creation of your IoT hub. Data retention time in IoT Hub depends on your IoT hub tier and unit type. In terms of size, the built-in endpoint can retain messages of the maximum message size up to at least 24 hours of quota. For example, one S1 unit IoT hub provides enough storage to retain at least 400,000 messages, at 4 KB per message. If your devices are sending smaller messages, they may be retained for longer (up to seven days) depending on how much storage is consumed. We guarantee to retain the data for the specified retention time as a minimum. After the retention time has passed, messages expire and become inaccessible. You can modify the retention time, either programmatically using the [IoT Hub resource provider REST APIs](/rest/api/iothub/iothubresource), or with the [Azure portal](https://portal.azure.com). -IoT Hub exposes the **messages/events** built-in endpoint for your back-end services to read the device-to-cloud messages received by your hub. This endpoint is Event Hub-compatible, which enables you to use any of the mechanisms the Event Hubs service supports for reading messages. +IoT Hub also enables you to manage consumer groups on the built-in endpoint. You can have up to 20 consumer groups for each IoT hub. -## Read from the built-in endpoint +## Connect to the built-in endpoint Some product integrations and Event Hubs SDKs are aware of IoT Hub and let you use your IoT hub service connection string to connect to the built-in endpoint. When you use Event Hubs SDKs or product integrations that are unaware of IoT Hub 1. Select **Built-in endpoints** from the resource menu, under **Hub settings**. 1. The **Built-in endpoints** working pane contains three sections:- - - The **Event Hub Details** section contains the following values: **Partitions**, **Event Hub-compatible name**, **Retain for**, and **Consumer Groups**. - - The **Event Hub compatible endpoint** section contains the following values: **Shared access policy** and **Event Hub-compatible endpoint**. - - The **Cloud to device messaging** section contains the following values: **Default TTL**, **Feedback retention time**, and **Maximum delivery count**. ++ * The **Event Hub Details** section contains the following values: **Partitions**, **Event Hub-compatible name**, **Retain for**, and **Consumer Groups**. + * The **Event Hub compatible endpoint** section contains the following values: **Shared access policy** and **Event Hub-compatible endpoint**. + * The **Cloud to device messaging** section contains the following values: **Default TTL**, **Feedback retention time**, and **Maximum delivery count**. :::image type="content" source="./media/iot-hub-devguide-messages-read-builtin/eventhubcompatible.png" alt-text="Screen capture showing device-to-cloud settings." 
lightbox="./media/iot-hub-devguide-messages-read-builtin/eventhubcompatible.png"::: If the SDK you're using requires other values, then they would be: You can then choose any shared access policy from the **Shared access policy** drop-down, as shown in the previous screenshot. It only shows policies that have the **ServiceConnect** permissions to connect to the specified event hub. +## SDK samples + The SDKs you can use to connect to the built-in Event Hub-compatible endpoint that IoT Hub exposes include: | Language | SDK | Example |-| -- | | | -| .NET | https://www.nuget.org/packages/Azure.Messaging.EventHubs | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) | -| Java | https://mvnrepository.com/artifact/com.azure/azure-messaging-eventhubs | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) | -| Node.js | https://www.npmjs.com/package/@azure/event-hubs | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) | -| Python | https://pypi.org/project/azure-eventhub/ | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) | +| -- | | - | +| .NET | https://www.nuget.org/packages/Azure.Messaging.EventHubs | [ReadD2cMessages .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/getting%20started/ReadD2cMessages) | +| Java | https://mvnrepository.com/artifact/com.azure/azure-messaging-eventhubs | | +| Node.js | https://www.npmjs.com/package/@azure/event-hubs | [read-d2c-messages Node.js](https://github.com/Azure-Samples/azure-iot-samples-node/tree/master/iot-hub/Quickstarts/read-d2c-messages) | +| Python | https://pypi.org/project/azure-eventhub/ | [read-dec-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) | The product integrations you can use with the built-in Event Hub-compatible endpoint that IoT Hub exposes include: The product integrations you can use with the built-in Event Hub-compatible endp For more information, see the [Apache Kafka developer guide for Azure Event Hubs](../event-hubs/apache-kafka-developer-guide.md). * [Azure Databricks](/azure/databricks/) -## Use AMQP-WS or a proxy with Event Hubs SDKs --You can use the Event Hubs SDKs to read from the built-in endpoint in environments where AMQP over WebSockets or reading through a proxy is required. For more information, see the following samples. --| Language | Sample | -| -- | | -| .NET | [ReadD2cMessages .NET](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/service/samples/getting%20started/ReadD2cMessages) | -| Node.js | [read-d2c-messages Node.js](https://github.com/Azure-Samples/azure-iot-samples-node/tree/master/iot-hub/Quickstarts/read-d2c-messages) | -| Python | [read-dec-messages Python](https://github.com/Azure-Samples/azure-iot-samples-python/tree/master/iot-hub/Quickstarts/read-d2c-messages) | - ## Next steps * For more information about IoT Hub endpoints, see [IoT Hub endpoints](iot-hub-devguide-endpoints.md). -* The [Quickstarts](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) show you how to send device-to-cloud messages from simulated devices and read the messages from the built-in endpoint. --For more information, see the [Process IoT Hub device-to-cloud messages using routes](tutorial-routing.md) tutorial. 
- * If you want to route your device-to-cloud messages to custom endpoints, see [Use message routes and custom endpoints for device-to-cloud messages](iot-hub-devguide-messages-d2c.md). |
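If you're scripting against the connection steps described in this row, the Event Hubs-compatible values for the built-in endpoint can also be fetched from the CLI; a sketch with a placeholder hub name:

```azurecli
# Sketch: read the Event Hubs-compatible endpoint, name (path), and partition
# count of the built-in messages/events endpoint. <iot-hub-name> is a placeholder.
az iot hub show \
  --name <iot-hub-name> \
  --query "properties.eventHubEndpoints.events.{endpoint:endpoint, path:path, partitionCount:partitionCount}"
```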
iot-hub | Iot Hub Ha Dr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-ha-dr.md | Once the failover operation for the IoT hub completes, all operations from the d > [!CAUTION] >->* The Event Hubs-compatible name and endpoint of the IoT Hub built-in events endpoint change after failover. When receiving telemetry messages from the built-in endpoint using either the Event Hubs client or event processor host, you should [use the IoT hub connection string](iot-hub-devguide-messages-read-builtin.md#read-from-the-built-in-endpoint) to establish the connection. This ensures that your back-end applications continue to work without requiring manual intervention post failover. If you use the Event Hub-compatible name and endpoint in your application directly, you will need to [fetch the new Event Hub-compatible endpoint](iot-hub-devguide-messages-read-builtin.md#read-from-the-built-in-endpoint) after failover to continue operations. For more information, see [Manual failover and Event Hub](#manual-failover-and-event-hubs). +>* The Event Hubs-compatible name and endpoint of the IoT Hub built-in events endpoint change after failover. When receiving telemetry messages from the built-in endpoint using either the Event Hubs client or event processor host, you should [use the IoT hub connection string](iot-hub-devguide-messages-read-builtin.md#connect-to-the-built-in-endpoint) to establish the connection. This ensures that your back-end applications continue to work without requiring manual intervention post failover. If you use the Event Hub-compatible name and endpoint in your application directly, you will need to [fetch the new Event Hub-compatible endpoint](iot-hub-devguide-messages-read-builtin.md#connect-to-the-built-in-endpoint) after failover to continue operations. For more information, see [Manual failover and Event Hub](#manual-failover-and-event-hubs). >* If you use Azure Functions or Azure Stream Analytics to connect the built-in Events endpoint, you might need to perform a **Restart**. This is because during failover previous offsets are no longer valid. >* When routing to storage, we recommend listing the blobs or files and then iterating over them, to ensure all blobs or files are read without making any assumptions of partition. The partition range could potentially change during a Microsoft-initiated failover or manual failover. You can use the [List Blobs API](/rest/api/storageservices/list-blobs) to enumerate the list of blobs or [List ADLS Gen2 API](/rest/api/storageservices/datalakestoragegen2/filesystem/list) for the list of files. To learn more, see [Azure Storage as a routing endpoint](iot-hub-devguide-messages-d2c.md#azure-storage-as-a-routing-endpoint). The Event Hubs-compatible name and endpoint of the IoT Hub built-in events endpo ### Use the portal -For more information about using the portal to retrieve the Event Hub-compatible endpoint and the Event Hub-compatible name, see [Read from the built-in endpoint](iot-hub-devguide-messages-read-builtin.md#read-from-the-built-in-endpoint). +For more information about using the portal to retrieve the Event Hub-compatible endpoint and the Event Hub-compatible name, see [Connect to the built-in endpoint](iot-hub-devguide-messages-read-builtin.md#connect-to-the-built-in-endpoint). ### Use the .NET SDK |
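Because the Event Hubs-compatible name and endpoint change after failover, a script that triggers a manual failover should re-fetch them afterward; a hedged sketch with placeholder names:

```azurecli
# Sketch: start a manual failover, then re-read the (renamed) built-in endpoint.
az iot hub manual-failover --name <iot-hub-name> --resource-group <resource-group>
az iot hub show --name <iot-hub-name> --query properties.eventHubEndpoints.events.endpoint
```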
iot-operations | Howto Manage Secrets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-manage-secrets.md | - - ignite-2023 + - ignite-2023 #CustomerIntent: As an IT professional, I want prepare an Azure-Arc enabled Kubernetes cluster with Key Vault secrets so that I can deploy Azure IoT Operations to it. For more information, see [Deploy Azure IoT Operations extensions](./howto-deplo ## Configure service principal and Azure Key Vault upfront -If the Azure account executing the `az iot ops init` command does not have permissions to query the Azure Resource Graph and create service principals, you can prepare these upfront and use extra arguments when running the CLI command as described in [Deploy Azure IoT Operations extensions](./howto-deploy-iot-operations.md?tabs=cli). +If the Azure account executing the `az iot ops init` command does not have permissions to query the Microsoft Graph and create service principals, you can prepare these upfront and use extra arguments when running the CLI command as described in [Deploy Azure IoT Operations extensions](./howto-deploy-iot-operations.md?tabs=cli). ### Configure service principal for interacting with Azure Key Vault via Microsoft Entra ID First, register an application with Microsoft Entra ID. When your application is created, you are directed to its resource page. -1. Copy the **Application (client) ID** from the app registration overview page. You'll use this value as an argument when running Azure IoT Operations deployment. +1. Copy the **Application (client) ID** from the app registration overview page. You'll use this value as an argument when running Azure IoT Operations deployment with the `az iot ops init` command. Next, give your application permissions for key vault. Create a client secret that will be added to your Kubernetes cluster to authenti 1. Provide an optional description for the secret, then select **Add**. -1. Copy the **Value** and **Secret ID** from your new secret. You'll use these values later below. +1. Copy the **Value** from your new secret. You'll use this value later when you run `az iot ops init`. Retrieve the service principal Object Id |
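The app registration, client ID, and client secret steps in this row can also be collapsed into a single CLI call when you prepare the service principal upfront. A sketch follows; the display name is a placeholder, and the `az iot ops init` flag names shown in the comment are assumptions about the Azure IoT Operations extension rather than confirmed syntax:

```azurecli
# Sketch: create the service principal and capture the values that the
# deployment command needs.
az ad sp create-for-rbac \
  --name "aio-keyvault-sp" \
  --query "{clientId:appId, clientSecret:password, tenantId:tenant}"

# Assumed flag names for supplying the pre-created values during deployment:
# az iot ops init ... --sp-app-id <clientId> --sp-secret <clientSecret>
```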
key-vault | Quick Create Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-cli.md | openssl req -newkey rsa:2048 -nodes -keyout cert_1.key -x509 -days 365 -out cert openssl req -newkey rsa:2048 -nodes -keyout cert_2.key -x509 -days 365 -out cert_2.cer ``` +> [!NOTE] +> Even if the certificate has "expired," it can still be used to restore the security domain. + > [!IMPORTANT] > Create and store the RSA key pairs and security domain file generated in this step securely. |
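The certificates generated above serve as security domain wrapping keys. As a follow-on sketch, they're typically passed to the security domain download command; a third certificate is assumed here, since the full quickstart generates three and sets a quorum of two, and all names are placeholders:

```azurecli
# Sketch: download the Managed HSM security domain using the wrapping-key
# certificates generated with openssl.
az keyvault security-domain download \
  --hsm-name <managed-hsm-name> \
  --sd-wrapping-keys ./cert_1.cer ./cert_2.cer ./cert_3.cer \
  --sd-quorum 2 \
  --security-domain-file <managed-hsm-name>-SD.json
```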
load-balancer | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md | The product group is actively working on resolutions for the following known iss |Issue |Description |Mitigation | | - ||| | IP based LB outbound IP | IP based LB uses Azure's Default Outbound Access IP for outbound | In order to prevent outbound access from this IP, use NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |-| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, isn't respected. Load Balancer health probes will probe up/down immediately after one probe regardless of the property's configured value | To control the number of successful or failed consecutive probes necessary to mark backend instances as healthy or unhealthy, please leverage the property ["probeThreshold"](https://learn.microsoft.com/azure/templates/microsoft.network/loadbalancers?pivots=deployment-language-arm-template#probepropertiesformat-1) instead | +| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, isn't respected. Load Balancer health probes will probe up/down immediately after one probe regardless of the property's configured value | To control the number of successful or failed consecutive probes necessary to mark backend instances as healthy or unhealthy, please leverage the property ["probeThreshold"](/azure/templates/microsoft.network/loadbalancers?pivots=deployment-language-arm-template#probepropertiesformat-1) instead | |
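For the `probeThreshold` mitigation above, the property can be set in an ARM template, and recent Azure CLI versions expose it as a preview parameter on health probe commands. This is a hedged sketch: the `--probe-threshold` flag is an assumption that depends on your CLI version, and all names are placeholders:

```azurecli
# Hypothetical sketch: set probeThreshold (preview) on an existing health probe.
# Verify that the parameter exists in your installed CLI version before use.
az network lb probe update \
  --lb-name <lb-name> \
  --resource-group <resource-group> \
  --name <probe-name> \
  --probe-threshold 2
```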
machine-learning | How To Manage Workspace | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md | As your needs change or requirements for automation increase you can also manage [!INCLUDE [register-namespace](includes/machine-learning-register-namespace.md)] -* When you use network isolation that is based on a workspace's managed virtual network with a deployment, you can use resources (Azure Container Registry (ACR), Storage account, Key Vault, and Application Insights) from a different resource group or subscription than that of your workspace. However, these resources must belong to the same tenant as your workspace. For limitations that apply to securing managed online endpoints using a workspace's managed virtual network, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md#limitations). +* When you use network isolation with online endpoints, you can use workspace-associated resources (Azure Container Registry (ACR), Storage account, Key Vault, and Application Insights) from a different resource group than that of your workspace. However, these resources must belong to the same subscription and tenant as your workspace. For limitations that apply to securing managed online endpoints using a workspace's managed virtual network, see [Network isolation with managed online endpoints](concept-secure-online-endpoint.md#limitations). * By default, creating a workspace also creates an Azure Container Registry (ACR). Since ACR doesn't currently support unicode characters in resource group names, use a resource group that doesn't contain these characters. |
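As automation needs grow (as this row's article notes), the same workspace management can be done from the CLI; a minimal sketch using the `ml` extension, with placeholder names:

```azurecli
# Sketch: create a workspace (and its associated ACR, storage account, key vault,
# and Application Insights resources) from the CLI.
az extension add --name ml
az ml workspace create --name <workspace-name> --resource-group <resource-group>
```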
machine-learning | Open Model Llm Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/open-model-llm-tool.md | Once your flow is associated to an Azure Machine Learning or Azure AI Studio wor - **Using Azure Machine Learning or Azure AI Studio workspaces**: If you're using prompt flow in one of the web-based workspaces, the online endpoints available on that workspace show up automatically. -- **Using VS Code or code first**: If you're using prompt flow in VS Code or one of the Code First offerings, you need to connect to the workspace. The Open Model LLM tool uses the azure.identity DefaultAzureCredential client for authorization. One way is through [setting environment credential values](https://learn.microsoft.com/python/api/azure-identity/azure.identity.environmentcredential).+- **Using VS Code or code first**: If you're using prompt flow in VS Code or one of the Code First offerings, you need to connect to the workspace. The Open Model LLM tool uses the azure.identity DefaultAzureCredential client for authorization. One way is through [setting environment credential values](/python/api/azure-identity/azure.identity.environmentcredential). ### Custom connections |
machine-learning | Tutorial Enable Recurrent Materialization Run Batch Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-enable-recurrent-materialization-run-batch-inference.md | Before you proceed with this tutorial, be sure to complete the first and second 1. Install the Azure Machine Learning extension. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=install-ml-ext-cli)] 2. Authenticate. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=auth-cli)] 3. Set the default subscription. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/3. Enable recurrent materialization and run batch inference.ipynb?name=set-default-subs-cli)] |
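For readers following along outside the notebook, the three CLI cells referenced in this row correspond to roughly these commands (the subscription ID is a placeholder):

```azurecli
# Sketch of the notebook's three CLI steps: install the Azure Machine Learning
# extension, authenticate, and set the default subscription.
az extension add --name ml
az login
az account set --subscription <subscription-id>
```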
machine-learning | Tutorial Get Started With Feature Store | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-get-started-with-feature-store.md | Not applicable. 1. Install the Azure Machine Learning CLI extension. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=install-ml-ext-cli)] 1. Authenticate. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=auth-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=auth-cli)] 1. Set the default subscription. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=set-default-subs-cli)] This tutorial doesn't need explicit installation of these resources, because the ### [SDK and CLI track](#tab/SDK-and-CLI-track) - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=create-fs-cli)] > [!NOTE] > - The default blob store for the feature store is an ADLS Gen2 container. This tutorial doesn't need explicit installation of these resources, because the For more information about access control, see [Manage access control for managed feature store](./how-to-setup-access-control-feature-store.md). - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=assign-aad-ds-role-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=assign-aad-ds-role-cli)] ## Prototype and develop a feature set As a best practice, entities help enforce use of the same join key definition across feature sets. Create an `account` entity that has the join key `accountID` of type `string`. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-acct-entity-cli)] Use this code to register a feature set asset with the feature store. 
You can th ### [SDK and CLI track](#tab/SDK-and-CLI-track) -[!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)] +[!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=register-txn-fset-cli)] The Storage Blob Data Reader role must be assigned to your user account on the offline store. Execute this code cell for role assignment. The permissions might need some time to propagate. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=grant-rbac-to-user-identity-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=grant-rbac-to-user-identity-cli)] The Storage Blob Data Reader role must be assigned to your user account on the offline store. > The sample data used in this notebook is small. Therefore, this parameter is set to 1 in the > featureset_asset_offline_enabled.yaml file. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=enable-offline-mat-txns-fset-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=enable-offline-mat-txns-fset-cli)] The Storage Blob Data Reader role must be assigned to your user account on the offline store. This code cell materializes data whose current status is *None* or *Incomplete* for the defined feature window. - [!notebook-python[] (~/azureml-examples-main/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=backfill-txns-fset-cli)] + [!notebook-python[] (~/azureml-examples-temp-fix/sdk/python/featurestore_sample/notebooks/sdk_and_cli/1. Develop a feature set and register with managed feature store.ipynb?name=backfill-txns-fset-cli)] |
migrate | Concepts Business Case Calculation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-business-case-calculation.md | Cost components for running on-premises servers. For TCO calculations, an annual | | Azure App Service security cost | Defender for App Service | For web apps recommended for App Service or App Service containers, the Defender for App Service cost for that region is added. | | Facilities | Facilities & Infrastructure | DC Facilities - Lease and Power | Facilities cost isn't applicable for Azure cost. | | Labor | Labor | IT admin | DC admin cost = ((Number of virtual machines) / (Avg. # of virtual machines that can be managed by a full-time administrator)) * 730 * 12 |-| Management | Azure Management Services | Azure Monitor, Azure Backup and Azure Update Manager | Azure Monitor costs for each server as per listed price in the region assuming collection of logs ingestion for the guest operating system and one custom application is enabled for the server, totaling logs data of 3GB/month. <br/><br/> Azure Backup cost for each server/month is dynamically estimated based on the [Azure Backup Pricing](https://learn.microsoft.com/azure/backup/azure-backup-pricing), which includes a protected instance fee, snapshot storage and recovery services vault storage. <br/><br/> Azure Update Manager is free for Azure servers. | +| Management | Azure Management Services | Azure Monitor, Azure Backup and Azure Update Manager | Azure Monitor costs for each server as per listed price in the region assuming collection of logs ingestion for the guest operating system and one custom application is enabled for the server, totaling logs data of 3GB/month. <br/><br/> Azure Backup cost for each server/month is dynamically estimated based on the [Azure Backup Pricing](/azure/backup/azure-backup-pricing), which includes a protected instance fee, snapshot storage and recovery services vault storage. <br/><br/> Azure Update Manager is free for Azure servers. | ### Year on Year costs You can override the above values in the assumptions section of the Business case. ## Next steps-- [Learn more](./migrate-services-overview.md) about Azure Migrate.+- [Learn more](./migrate-services-overview.md) about Azure Migrate. |
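To make the IT admin formula above concrete, under the hedged assumption that the 730 * 12 factor converts the administrator count to hours per year (730 being hours per month) before an hourly labor rate is applied: with 200 virtual machines and an average of 50 virtual machines per full-time administrator, the formula yields (200 / 50) * 730 * 12 = 35,040 administrator-hours per year, which the business case then prices at the assumed regional rate.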
migrate | How To Upgrade Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-upgrade-windows.md | The Windows OS upgrade capability helps you move from an older operating system You can upgrade up to two versions from the current version. > [!Note]-> After you migrate and upgrade to Windows Server 2012 in Azure, you will get 3 years of free Extended Security Updates in Azure. [Learn more](https://learn.microsoft.com/windows-server/get-started/extended-security-updates-overview). +> After you migrate and upgrade to Windows Server 2012 in Azure, you will get 3 years of free Extended Security Updates in Azure. [Learn more](/windows-server/get-started/extended-security-updates-overview). **Source** | **Supported target versions** |
migrate | Migrate Support Matrix Vmware Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md | The table summarizes agentless migration requirements for VMware vSphere VMs. **Support** | **Details** | -**Supported operating systems** | Windows Server 2003 and later versions. [Learn more](https://learn.microsoft.com/troubleshoot/azure/virtual-machines/server-software-support). <br/><br/> You can migrate all the Linux operating systems supported by Azure listed [here](https://learn.microsoft.com/troubleshoot/azure/cloud-services/support-linux-open-source-technology). +**Supported operating systems** | Windows Server 2003 and later versions. [Learn more](/troubleshoot/azure/virtual-machines/server-software-support). <br/><br/> You can migrate all the Linux operating systems supported by Azure listed [here](/troubleshoot/azure/cloud-services/support-linux-open-source-technology). **Windows VMs in Azure** | You might need to [make some changes](prepare-for-migration.md#verify-required-changes-before-migrating) on VMs before migration. **Linux VMs in Azure** | Some VMs might require changes so that they can run in Azure.<br/><br/> For Linux, Azure Migrate makes the changes automatically for these operating systems:<br/> - Red Hat Enterprise Linux 9.x, 8.x, 7.9, 7.8, 7.7, 7.6, 7.5, 7.4, 7.0, 6.x<br> - CentOS 9.x (Release and Stream), 8.x (Release and Stream), 7.9, 7.7, 7.6, 7.5, 7.4, 6.x</br> - SUSE Linux Enterprise Server 15 SP4, 15 SP3, 15 SP2, 15 SP1, 15 SP0, 12, 11 SP4, 11 SP3 <br>- Ubuntu 22.04, 21.04, 20.04, 19.04, 19.10, 18.04LTS, 16.04LTS, 14.04LTS<br> - Debian 11, 10, 9, 8, 7<br> - Oracle Linux 9, 8, 7.7-CI, 7.7, 6<br> - Kali Linux (2016, 2017, 2018, 2019, 2020, 2021, 2022) <br> For other operating systems, you make the [required changes](prepare-for-migration.md#verify-required-changes-before-migrating) manually.<br/> The `SELinux Enforced` setting is currently not fully supported. It causes Dynamic IP setup and Microsoft Azure Linux Guest agent (waagent/WALinuxAgent) installation to fail. You can still migrate and use the VM. **Boot requirements** | **Windows VMs:**<br/>OS Drive (C:\\) and System Reserved Partition (EFI System Partition for UEFI VMs) should reside on the same disk.<br/>If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks. <br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks. <br/><br/> **Linux VMs:**<br/> If `/boot` is on a dedicated partition, it should reside on the OS disk and not be spread across multiple disks.<br/> If `/boot` is part of the root (/) partition, then the '/' partition should be on the OS disk and not span other disks. |
migrate | Tutorial Discover Spring Boot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-spring-boot.md | After you have performed server discovery and software inventory using the Azure - | - **Supported Linux OS** | Ubuntu 20.04, RHEL 9 **Hardware configuration required** | 6 GB RAM, with 30 GB storage on root volume, 4 Core CPU- **Network Requirements** | Access to the following endpoints: <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure CLI endpoints for proxy bypass](https://learn.microsoft.com/cli/azure/azure-cli-endpoints?tabs=azure-cloud) + **Network Requirements** | Access to the following endpoints: <br/><br/> https://dc.services.visualstudio.com/v2/track <br/><br/> [Azure CLI endpoints for proxy bypass](/cli/azure/azure-cli-endpoints?tabs=azure-cloud) 5. After copying the script, go to your Linux server, save the script as *Deploy.sh* on the server. Select any web app to view its details. The **Web apps** screen provides the fol ## Next steps - [Assess Spring Boot](tutorial-assess-spring-boot.md) apps for migration.-- [Review the data](discovered-metadata.md#collected-data-for-physical-servers) that the appliance collects during discovery.+- [Review the data](discovered-metadata.md#collected-data-for-physical-servers) that the appliance collects during discovery. |
operator-nexus | Howto Use Azure Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-azure-policy.md | In this article, you'll learn how to use Azure Policy to secure and validate the If you're new to Azure Policy, here are some helpful resources that you can use to become more familiar with Azure Policy. - [Azure Policy documentation](../governance/policy/overview.md)-- Interactive Learning Modules: [Azure Policy on Microsoft Learn](https://docs.microsoft.com/learn/browse/?terms=Azure%20Policy)+- Interactive Learning Modules: [Azure Policy training on Microsoft Learn](/learn/browse/?terms=Azure%20Policy) ##### Understanding Policy Definitions and Assignments |
oracle | Database Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/database-overview.md | Database and application developers work in the Azure portal or use Azure tools ## Purchase Oracle Database@Azure -To purchase Oracle Database@Azure, contact [Oracle's sales team](https://go.oracle.com/LP=138489) or your Oracle sales representative for a sale offer. Oracle Sales team creates an Azure Private Offer in the Azure Marketplace for your service. After an offer has been created for your organization, you can accept the offer and complete the purchase in the Azure portal's Marketplace service. For more information on Azure private offers, see [Overview of the commercial marketplace and enterprise procurement](https://learn.microsoft.com/marketplace/procurement-overview). +To purchase Oracle Database@Azure, contact [Oracle's sales team](https://go.oracle.com/LP=138489) or your Oracle sales representative for a sales offer. The Oracle sales team creates an Azure Private Offer in the Azure Marketplace for your service. After an offer has been created for your organization, you can accept the offer and complete the purchase in the Azure portal's Marketplace service. For more information on Azure private offers, see [Overview of the commercial marketplace and enterprise procurement](/marketplace/procurement-overview). Billing and payment for the service is done through Azure. Payment for Oracle Database@Azure counts toward your Microsoft Azure Consumption Commitment (MACC). Existing Oracle Database software customers can use the Bring Your Own License (BYOL) option or Unlimited License Agreements (ULAs). On your regular Microsoft Azure invoices, you can see charges for Oracle Database@Azure alongside charges for your other Azure Marketplace services. |
oracle | Onboard Oracle Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/onboard-oracle-database.md | For more information on creating identity federation using Azure's identity serv ## Step 1: Purchase Oracle Database@Azure in the Azure portal 1. Sign in to your Azure account.-2. Navigate to the Marketplace service in the Azure portal. See [What is Azure Marketplace?](https://learn.microsoft.com/marketplace/azure-marketplace-overview) in the Azure documentation for more information on Azure Marketplace. +2. Navigate to the Marketplace service in the Azure portal. See [What is Azure Marketplace?](/marketplace/azure-marketplace-overview) in the Azure documentation for more information on Azure Marketplace. Alternatively, if you received an email from Azure with a link to your private offer, you can select the link to go to your offer in the Azure portal. Skip to step 4 if you selected a link to your offer and are viewing it in the Azure portal. 3. In Azure Marketplace, under **Management**, select **Private Offer Management**. 4. In the list of private offers, select the **View + accept** button in the row for the Oracle Database@Azure offer.-5. Review the offer details, then accept and subscribe to the private offer. For more information on private offers in the Azure Marketplace, see [Private offers in Azure Marketplace](https://learn.microsoft.com/marketplace/private-offers-in-azure-marketplace) +5. Review the offer details, then accept and subscribe to the private offer. For more information on private offers in the Azure Marketplace, see [Private offers in Azure Marketplace](/marketplace/private-offers-in-azure-marketplace). To accept and subscribe to the private offer: |
orbital | Initiate Licensing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/initiate-licensing.md | Both satellites and ground stations require authorizations from federal regulato Azure Orbital Ground Station consists of five first-party, Microsoft-owned ground stations and networks of third-party Partner ground stations. Except in South Africa, adding a new satellite point of communication to licensed Microsoft ground stations requires an authorization from the respective federal regulator. While the specifics of obtaining authorization vary by geography, coordination with incumbent users is always required. -- If you're interested in contacting [select **public** satellites supported by Azure Orbital Ground Station](https://learn.microsoft.com/azure/orbital/modem-chain#named-modem-configuration), Microsoft has already completed all regulatory requirements to add these satellite points of communication to all Microsoft ground stations.+- If you're interested in contacting [select **public** satellites supported by Azure Orbital Ground Station](/azure/orbital/modem-chain#named-modem-configuration), Microsoft has already completed all regulatory requirements to add these satellite points of communication to all Microsoft ground stations. - If you're interested in having your **existing** satellite space station or constellation communicate with one or more Microsoft ground stations, you must modify your authorization for the US market to add each ground station. |
orbital | Prepare For Launch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-for-launch.md | Our team enables the Launch Window Scheduling feature manually on a per-spacecraft basis. The following outlines a typical contact scheduling flow when using Launch Window Scheduling: 1. You don't need an accurate TLE to schedule a contact. [Update the spacecraft resource](update-tle.md) with the best estimate TLE.-1. Specify the time window of interest for your contact by adjusting the **Start Time** and **End Time** fields in the [List Available Contacts API](https://learn.microsoft.com/rest/api/orbital/azureorbitalgroundstation/spacecrafts/list-available-contacts?view=rest-orbital-azureorbitalgroundstation-2022-11-01&tabs=HTTP) or [Portal contact scheduling flow](schedule-contact.md). To account for the unpredictability of launch and vehicle separation, we recommend your window include additional time before and after the anticipated satellite pass. +1. Specify the time window of interest for your contact by adjusting the **Start Time** and **End Time** fields in the [List Available Contacts API](/rest/api/orbital/azureorbitalgroundstation/spacecrafts/list-available-contacts?tabs=HTTP) or [Portal contact scheduling flow](schedule-contact.md). To account for the unpredictability of launch and vehicle separation, we recommend your window include additional time before and after the anticipated satellite pass. 1. The service returns contact options if a whole window or partial window is available in your specified block. 1. [Schedule the contact](schedule-contact.md) as normal. |
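As a hedged illustration of the window adjustment described in the entry above, the linked List Available Contacts operation can be invoked with `az rest`; every identifier and timestamp below is a placeholder, and the body fields follow the linked REST reference:

```bash
# Sketch: list available contacts across a deliberately widened launch window (placeholders throughout).
az rest --method post \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Orbital/spacecrafts/<spacecraft-name>/listAvailableContacts?api-version=2022-11-01" \
  --body '{
    "contactProfile": { "id": "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Orbital/contactProfiles/<profile-name>" },
    "groundStationName": "<ground-station-name>",
    "startTime": "2024-01-10T00:00:00Z",
    "endTime": "2024-01-10T12:00:00Z"
  }'
```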
postgresql | Concepts Read Replicas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas.md | Read replicas are primarily designed for scenarios where offloading queries is b A read replica can be created in the same region as the primary server and in a different one. Cross-region replication can be helpful for scenarios like disaster recovery planning or bringing data closer to your users. -You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can also have replicas in any global region of Azure that supports Azure Database for PostgreSQL. Additionally, we support special regions [Azure Government](../../azure-government/documentation-government-welcome.md) and [Microsoft Azure operated by 21Vianet](https://learn.microsoft.com/azure/china/overview-operations). The special regions now supported are: +You can have a primary server in any [Azure Database for PostgreSQL region](https://azure.microsoft.com/global-infrastructure/services/?products=postgresql). A primary server can also have replicas in any global region of Azure that supports Azure Database for PostgreSQL. Additionally, we support special regions [Azure Government](../../azure-government/documentation-government-welcome.md) and [Microsoft Azure operated by 21Vianet](/azure/china/overview-operations). The special regions now supported are: - **Azure Government regions**: - US Gov Arizona |
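As a minimal sketch of creating the cross-region replica described in the entry above (server, group, and region names are placeholders; the `--location` flag for a cross-region target is an assumption to verify against the current `az postgres flexible-server replica create` reference):

```bash
# Create a read replica of an existing flexible server in a different region (placeholders).
az postgres flexible-server replica create \
  --replica-name mydemoserver-replica \
  --resource-group myresourcegroup \
  --source-server mydemoserver \
  --location westus3
```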
postgresql | Concepts Connectivity Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-connectivity-architecture.md | The following table lists the gateway IP address subnets of the Azure Database f | West US 2 | 13.66.136.192/29, 40.78.240.192/29, 40.78.248.192/29| | West US 3 | 20.150.168.32/29, 20.150.176.32/29, 20.150.184.32/29 | + ## Frequently asked questions ### What you need to know about this planned maintenance? |
postgresql | Concepts Ssl Connection Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-ssl-connection-security.md | Azure Database for PostgreSQL prefers connecting your client applications to the By default, the PostgreSQL database service is configured to require TLS connection. You can choose to disable requiring TLS if your client application does not support TLS connectivity. ->[!NOTE] -> Based on the feedback from customers we have extended the root certificate deprecation for our existing Baltimore Root CA till November 30,2022(11/30/2022). - > [!IMPORTANT] -> SSL root certificate is set to expire starting December,2022 (12/2022). Please update your application to use the [new certificate](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt.pem). To learn more , see [planned certificate updates](concepts-certificate-rotation.md) +> **SSL intermediate certificates are set to be updated starting January 31, 2024 (01/31/2024).** +> An intermediate certificate is a subordinate certificate issued by a trusted root specifically to issue end-entity certificates. The result is a certificate chain that begins at the trusted root CA, through the intermediate CA (or CAs), and ends with the SSL certificate issued to you. +> Certificate pinning is a security technique where only authorized, or pinned, certificates are accepted when establishing a secure session. Any attempt to establish a secure session using a different certificate is rejected. +> Unlike the trusted root CA, [which we already updated fully during the current year](./concepts-certificate-rotation.md), where the certificate can be pinned using the *verify-ca* or *verify-full* connection string client directive, there is **no standard, well-established way to pin an intermediate CA**. However, **there is a theoretical ability to create a custom connectivity stack that pins intermediate certificates to the client** in a variety of programming languages. +> As explained above, in the **unlikely scenario** that you are pinning the intermediate certificates with custom code, you may be impacted by this change. To determine if you are pinning CAs, please refer to **[Certificate pinning and Azure services](../../security/fundamentals/certificate-pinning.md#how-to-address-certificate-pinning-in-your-application)** + ## Enforcing TLS connections |
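For contrast with intermediate pinning, root pinning via the *verify-ca*/*verify-full* directives mentioned in the entry above looks like this in a standard libpq-style client; the server name, user, and certificate path are placeholders:

```bash
# Connect with full certificate verification pinned to the trusted root CA file (placeholders).
psql "host=mydemoserver.postgres.database.azure.com port=5432 dbname=postgres user=myadmin@mydemoserver sslmode=verify-full sslrootcert=DigiCertGlobalRootG2.crt.pem"
```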
sap | Hana Vm Premium Ssd V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md | A less costly alternative for such configurations could look like: | E64ds_v4 | 504 GiB | 1200 MB/s | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> | | M64ls | 512 GiB | 1,000 MB/s | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | | M32(d)ms_v2 | 875 GiB | 500 MB/s | 6 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000<sup>2</sup> |-| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> || +| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | | M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MB/s | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | | M64ms, M64(d)ms_v2| 1,792 GiB | 1,000 MB/s | 6 x P20 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | | M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | |
sentinel | Data Connectors Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md | Title: Find your Microsoft Sentinel data connector | Microsoft Docs description: Learn about specific configuration steps for Microsoft Sentinel data connectors. Previously updated : 10/23/2023 Last updated : 07/26/2023 Data connectors are available as part of the following offerings: - [[Recommended] Cisco Secure Email Gateway via AMA](data-connectors/recommended-cisco-secure-email-gateway-via-ama.md) - [Cisco Application Centric Infrastructure](data-connectors/cisco-application-centric-infrastructure.md) - [Cisco ASA](data-connectors/cisco-asa.md)+- [Cisco ASA/FTD via AMA (Preview)](data-connectors/cisco-asa-ftd-via-ama.md) - [Cisco Duo Security (using Azure Functions)](data-connectors/cisco-duo-security-using-azure-functions.md) - [Cisco Identity Services Engine](data-connectors/cisco-identity-services-engine.md) - [Cisco Meraki](data-connectors/cisco-meraki.md) Data connectors are available as part of the following offerings: ## Crowdstrike - [Crowdstrike Falcon Data Replicator (using Azure Functions)](data-connectors/crowdstrike-falcon-data-replicator-using-azure-functions.md)+- [Crowdstrike Falcon Data Replicator V2 (using Azure Functions) (Preview)](data-connectors/crowdstrike-falcon-data-replicator-v2-using-azure-functions.md) - [CrowdStrike Falcon Endpoint Protection](data-connectors/crowdstrike-falcon-endpoint-protection.md) ## Cyber Defense Group B.V. Data connectors are available as part of the following offerings: - [CyberArk Enterprise Password Vault (EPV) Events](data-connectors/cyberark-enterprise-password-vault-epv-events.md) - [CyberArkEPM (using Azure Functions)](data-connectors/cyberarkepm-using-azure-functions.md) +## CyberPion ++- [IONIX Security Logs](data-connectors/ionix-security-logs.md) + ## Cybersixgill - [Cybersixgill Actionable Alerts (using Azure Functions)](data-connectors/cybersixgill-actionable-alerts-using-azure-functions.md) Data connectors are available as part of the following offerings: ## Microsoft - [Automated Logic WebCTRL](data-connectors/automated-logic-webctrl.md)-- [Microsoft Entra ID](data-connectors/azure-active-directory.md)-- [Microsoft Entra ID Protection](data-connectors/azure-active-directory-identity-protection.md) - [Azure Activity](data-connectors/azure-activity.md) - [Azure Batch Account](data-connectors/azure-batch-account.md)-- [Azure AI Search](data-connectors/azure-cognitive-search.md)+- [Azure Cognitive Search](data-connectors/azure-cognitive-search.md) - [Azure Data Lake Storage Gen1](data-connectors/azure-data-lake-storage-gen1.md) - [Azure DDoS Protection](data-connectors/azure-ddos-protection.md) - [Azure Event Hub](data-connectors/azure-event-hub.md) Data connectors are available as part of the following offerings: - [Microsoft Defender for IoT](data-connectors/microsoft-defender-for-iot.md) - [Microsoft Defender for Office 365 (preview)](data-connectors/microsoft-defender-for-office-365.md) - [Microsoft Defender Threat Intelligence](data-connectors/microsoft-defender-threat-intelligence.md)+- [Microsoft Entra ID](data-connectors/azure-active-directory.md) +- [Microsoft Entra ID Protection](data-connectors/microsoft-entra-id-protection.md) - [Microsoft PowerBI (preview)](data-connectors/microsoft-powerbi.md) - [Microsoft Project (preview)](data-connectors/microsoft-project.md) - [Microsoft Purview (preview)](data-connectors/microsoft-purview.md) Data connectors are available as part of the following offerings: - [SentinelOne (using Azure 
Functions)](data-connectors/sentinelone-using-azure-functions.md) +## SERAPHIC ALGORITHMS LTD +- [Seraphic Web Security](data-connectors/seraphic-web-security.md) + ## Slack - [Slack Audit (using Azure Functions)](data-connectors/slack-audit-using-azure-functions.md) Data connectors are available as part of the following offerings: - [Ubiquiti UniFi (Preview)](data-connectors/ubiquiti-unifi.md) +## Valence Security Inc. ++- [SaaS Security](data-connectors/saas-security.md) + ## vArmour Networks - [vArmour Application Controller](data-connectors/varmour-application-controller.md) |
sentinel | Armorblox Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/armorblox-using-azure-functions.md | Title: "Armorblox (using Azure Functions) connector for Microsoft Sentinel" description: "Learn how to install the connector Armorblox (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 07/26/2023 Last updated : 01/06/2024 The [Armorblox](https://www.armorblox.com/) data connector provides the capabili | **Azure function app code** | https://aka.ms/sentinel-armorblox-functionapp | | **Log Analytics table(s)** | Armorblox_CL<br/> | | **Data collection rules support** | Not currently supported |-| **Supported by** | [armorblox](https://www.armorblox.com/contact/) | +| **Supported by** | [Armorblox](https://www.armorblox.com/contact/) | ## Query samples |
sentinel | Cisco Asa Ftd Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/cisco-asa-ftd-via-ama.md | + + Title: "Cisco ASA/FTD via AMA (Preview) connector for Microsoft Sentinel" +description: "Learn how to install the connector Cisco ASA/FTD via AMA (Preview) to connect your data source to Microsoft Sentinel." ++ Last updated : 01/06/2024+++++# Cisco ASA/FTD via AMA (Preview) connector for Microsoft Sentinel ++The Cisco ASA firewall connector allows you to easily connect your Cisco ASA logs with Microsoft Sentinel, to view dashboards, create custom alerts, and improve investigation. This gives you more insight into your organization's network and improves your security operation capabilities. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | CommonSecurityLog<br/> | +| **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | ++## Query samples ++**All logs** + ```kusto +CommonSecurityLog ++ | where DeviceVendor == "Cisco" ++ | where DeviceProduct == "ASA" + + | sort by TimeGenerated + ``` ++++## Prerequisites ++To integrate with Cisco ASA/FTD via AMA (Preview) make sure you have: ++- To collect data from non-Azure VMs, they must have Azure Arc installed and enabled. [Learn more](/azure/azure-monitor/agents/azure-monitor-agent-install?tabs=ARMAgentPowerShell,PowerShellWindows,PowerShellWindowsArc,CLIWindows,CLIWindowsArc) +++## Vendor installation instructions ++Enable data collection rule ++Cisco ASA/FTD event logs are collected only from **Linux** agents. +++++Run the following command to install and apply the Cisco ASA/FTD collector: +++ `sudo wget -O Forwarder_AMA_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/Syslog/Forwarder_AMA_installer.py&&sudo python Forwarder_AMA_installer.py` ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-ciscoasa?tab=Overview) in the Azure Marketplace. |
sentinel | Crowdstrike Falcon Data Replicator V2 Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/crowdstrike-falcon-data-replicator-v2-using-azure-functions.md | + + Title: "Crowdstrike Falcon Data Replicator V2 (using Azure Functions) (Preview) connector for Microsoft Sentinel" +description: "Learn how to install the connector Crowdstrike Falcon Data Replicator V2 (using Azure Functions) (Preview) to connect your data source to Microsoft Sentinel." ++ Last updated : 01/06/2024+++++# Crowdstrike Falcon Data Replicator V2 (using Azure Functions) (Preview) connector for Microsoft Sentinel ++The [Crowdstrike](https://www.crowdstrike.com/) Falcon Data Replicator connector provides the capability to ingest raw event data from the [Falcon Platform](https://www.crowdstrike.com/blog/tech-center/intro-to-falcon-data-replicator/) events into Microsoft Sentinel. The connector provides the ability to get events from Falcon Agents, which helps to examine potential security risks, analyze your team's use of collaboration, diagnose configuration problems and more. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Azure function app code** | https://aka.ms/sentinel-CrowdstrikeReplicatorV2-functionapp | +| **Kusto function alias** | CrowdstrikeReplicator | +| **Kusto function url** | https://aka.ms/sentinel-crowdstrikereplicator-parser | +| **Log Analytics table(s)** | CrowdStrike_Additional_Events_CL<br/> ASimNetworkSessionLogs<br/> ASimDnsActivityLogs<br/> ASimAuditEventLogs<br/> ASimFileEventLogs<br/> ASimAuthenticationEventLogs<br/> ASimProcessEventLogs<br/> ASimRegistryEventLogs<br/> ASimUserManagementActivityLogs<br/> CrowdStrike_Secondary_Data_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com) | ++## Query samples ++**Data Replicator - All Activities** + ```kusto +CrowdStrikeReplicatorV2 + + | sort by TimeGenerated desc + ``` ++++## Prerequisites ++To integrate with Crowdstrike Falcon Data Replicator V2 (using Azure Functions) (Preview) make sure you have: ++- **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). +- **SQS and AWS S3 account credentials/permissions**: **AWS_SECRET**, **AWS_REGION_NAME**, **AWS_KEY**, **QUEUE_URL** are required. [See the documentation to learn more about data pulling](https://www.crowdstrike.com/blog/tech-center/intro-to-falcon-data-replicator/). To start, contact CrowdStrike support. At your request they will create a CrowdStrike managed Amazon Web Services (AWS) S3 bucket for short-term storage purposes as well as an SQS (simple queue service) account for monitoring changes to the S3 bucket. +++## Vendor installation instructions +++This connector uses Azure Functions to connect to the AWS SQS / S3 to pull logs into Microsoft Sentinel. This might result in additional data ingestion costs. Check the [Azure Functions pricing page](https://azure.microsoft.com/pricing/details/functions/) for details. ++**(Optional Step)** Securely store API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. ++Prerequisites ++1. 
Configure FDR in CrowdStrike - You must contact the [CrowdStrike support team](https://supportportal.crowdstrike.com/) to enable CrowdStrike FDR. + - Once CrowdStrike FDR is enabled, from the CrowdStrike console, navigate to Support --> API Clients and Keys. + - You need to create new credentials to copy the AWS Access Key ID, AWS Secret Access Key, SQS Queue URL and AWS Region. +2. Register AAD application - For the DCR to authenticate and ingest data into Log Analytics, you must use an AAD application. + - [Follow the instructions here](/azure/azure-monitor/logs/tutorial-logs-ingestion-portal#create-azure-ad-application) (steps 1-5) to get **AAD Tenant Id**, **AAD Client Id** and **AAD Client Secret**. + - For **AAD Principal** Id of this application, access the AAD App through [AAD Portal](https://aad.portal.azure.com/#view/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/~/AppAppsPreview/menuId/) and capture Object Id from the application overview page. ++Deployment Options ++Choose ONE from the following two deployment options to deploy the connector and the associated Azure Function ++Option 1 - Azure Resource Manager (ARM) Template ++Use this method for automated deployment of the Crowdstrike Falcon Data Replicator connector V2 using an ARM Template. ++1. Click the **Deploy to Azure** button below. ++ [![Deploy To Azure](https://aka.ms/deploytoazurebutton)](https://aka.ms/sentinel-CrowdstrikeReplicatorV2-azuredeploy) +2. Provide the required details such as Microsoft Sentinel Workspace, CrowdStrike AWS credentials, Azure AD Application details and ingestion configurations. +**NOTE:** Within the same resource group, you can't mix Windows and Linux apps in the same region. Select an existing resource group without Windows apps in it, or create a new resource group. It is recommended to create a new Resource Group for deployment of the function app and associated resources. +3. Mark the checkbox labeled **I agree to the terms and conditions stated above**. +4. Click **Purchase** to deploy. ++Option 2 - Manual Deployment of Azure Functions ++Use the following step-by-step instructions to deploy the Crowdstrike Falcon Data Replicator connector manually with Azure Functions (Deployment via Visual Studio Code). +++**1. Deploy DCE, DCR and Custom Tables for data ingestion** ++1. Deploy the required DCE, DCR(s) and the Custom Tables by using the [Data Collection Resource ARM template](https://aka.ms/sentinel-CrowdstrikeReplicatorV2-azuredeploy-data-resource) +2. After successful deployment of DCE and DCR(s), get the below information and keep it handy (required during Azure Functions app deployment). + - DCE log ingestion - Follow the instructions available at [Create data collection endpoint](/azure/azure-monitor/logs/tutorial-logs-ingestion-portal#create-data-collection-endpoint) (Step 3). + - Immutable Ids of one or more DCRs (as applicable) - Follow the instructions available at [Collect information from the DCR](/azure/azure-monitor/logs/tutorial-logs-ingestion-portal#collect-information-from-the-dcr) (Step 2). +++**2. Deploy a Function App** ++1. Download the [Azure Function App](https://aka.ms/sentinel-CrowdstrikeReplicatorV2-functionapp) file. Extract the archive to your local development computer. +2. Follow the [function app manual deployment instructions](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AzureFunctionsManualDeployment.md#function-app-manual-deployment-instructions) to deploy the Azure Functions app using VSCode. +3. 
After successful deployment of the function app, follow the next steps to configure it. +++**3. Configure the Function App** ++1. Go to the Azure portal for the Function App configuration. +2. In the Function App, select the Function App Name and select **Configuration**. +3. In the **Application settings** tab, select **New application setting**. +4. Add each of the following application settings individually, with their respective string values (case-sensitive): ++ - AWS_KEY + - AWS_SECRET + - AWS_REGION_NAME + - QUEUE_URL + - USER_SELECTION_REQUIRE_RAW //True if raw data is required + - USER_SELECTION_REQUIRE_SECONDARY //True if secondary data is required + - MAX_QUEUE_MESSAGES_MAIN_QUEUE // 100 for consumption and 150 for Premium + - MAX_SCRIPT_EXEC_TIME_MINUTES // add the value of 10 here + - AZURE_TENANT_ID + - AZURE_CLIENT_ID + - AZURE_CLIENT_SECRET + - DCE_INGESTION_ENDPOINT + - NORMALIZED_DCR_ID + - RAW_DATA_DCR_ID + - EVENT_TO_TABLE_MAPPING_LINK // File is present on github. Add if the file can be accessed using internet + - REQUIRED_FIELDS_SCHEMA_LINK //File is present on github. Add if the file can be accessed using internet + - Schedule //Add value as '0 */1 * * * *' to ensure the function runs every minute. +1. Once all application settings have been entered, click **Save**. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-crowdstrikefalconep?tab=Overview) in the Azure Marketplace. |
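Rather than entering each application setting by hand in the portal, the same settings can be applied in one CLI call; a hedged sketch, with the app name, resource group, and values as placeholders:

```bash
# Apply the connector's application settings to the deployed Function App in a single call (placeholders).
az functionapp config appsettings set \
  --name <function-app-name> \
  --resource-group <resource-group> \
  --settings AWS_KEY=<key> AWS_SECRET=<secret> AWS_REGION_NAME=<region> \
    QUEUE_URL=<sqs-queue-url> MAX_SCRIPT_EXEC_TIME_MINUTES=10 \
    "Schedule=0 */1 * * * *"
```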
sentinel | Crowdstrike Falcon Endpoint Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/crowdstrike-falcon-endpoint-protection.md | Title: "CrowdStrike Falcon Endpoint Protection connector for Microsoft Sentinel" description: "Learn how to install the connector CrowdStrike Falcon Endpoint Protection to connect your data source to Microsoft Sentinel." Previously updated : 05/22/2023 Last updated : 01/06/2024 CrowdStrikeFalconEventStream ## Vendor installation instructions -**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias Crowd Strike Falcon Endpoint Protection and load the function code, on the second line of the query, enter the hostname(s) of your CrowdStrikeFalcon device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. +**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias Crowd Strike Falcon Endpoint Protection, and load the function code (or click [here](https://aka.ms/sentinel-crowdstrikefalconendpointprotection-parser)). On the second line of the query, enter the hostname(s) of your CrowdStrikeFalcon device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. 1. Linux Syslog agent configuration |
sentinel | Dynamics 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/dynamics-365.md | Title: "Dynamics 365 connector for Microsoft Sentinel" -description: "Learn how to install the connector Dynamics 365 to connect your data source to Microsoft Sentinel." +description: "Learn how to install the connector Dynamics365 to connect your data source to Microsoft Sentinel." Previously updated : 03/06/2023 Last updated : 01/06/2024 |
sentinel | Google Workspace G Suite Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/google-workspace-g-suite-using-azure-functions.md | Title: "Google Workspace (G Suite) (using Azure Functions) connector for Microso description: "Learn how to install the connector Google Workspace (G Suite) (using Azure Functions) to connect your data source to Microsoft Sentinel." Previously updated : 11/29/2023 Last updated : 01/06/2024 To integrate with Google Workspace (G Suite) (using Azure Functions) make sure y >**(Optional Step)** Securely store workspace and API authorization key(s) or token(s) in Azure Key Vault. Azure Key Vault provides a secure mechanism to store and retrieve key values. [Follow these instructions](/azure/app-service/app-service-key-vault-references) to use Azure Key Vault with an Azure Function App. -**NOTE:** This data connector depends on a parser based on a Kusto Function to work as expected which is deployed as part of the solution. To view the function code in Log Analytics, open Log Analytics/Microsoft Sentinel Logs blade, click Functions and search for the alias GWorkspaceReports and load the function code, on the second line of the query, enter the hostname(s) of your GWorkspaceReports device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. +**NOTE:** This data connector depends on a parser based on a Kusto Function, which is deployed as part of the solution, to work as expected. To view the function code in Log Analytics, open the Log Analytics/Microsoft Sentinel Logs blade, click **Functions**, search for the alias GWorkspaceReports, and load the function code (or click [here](https://github.com/Azure/Azure-Sentinel/blob/master/Solutions/GoogleWorkspaceReports/Parsers/GWorkspaceActivityReports)). On the second line of the query, enter the hostname(s) of your GWorkspaceReports device(s) and any other unique identifiers for the logstream. The function usually takes 10-15 minutes to activate after solution installation/update. **STEP 1 - Ensure the prerequisites to obtain the Google Pickle String** |
sentinel | Ionix Security Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/ionix-security-logs.md | + + Title: "IONIX Security Logs connector for Microsoft Sentinel" +description: "Learn how to install the connector IONIX Security Logs to connect your data source to Microsoft Sentinel." ++ Last updated : 01/06/2024+++++# IONIX Security Logs connector for Microsoft Sentinel ++The IONIX Security Logs data connector ingests logs from the IONIX system directly into Sentinel. The connector allows users to visualize their data, create alerts and incidents and improve security investigations. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | CyberpionActionItems_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [IONIX](https://www.ionix.io/contact-us/) | ++## Query samples ++**Fetch latest Action Items that are currently open** + ```kusto +let lookbackTime = 14d; +let maxTimeGeneratedBucket = toscalar( + CyberpionActionItems_CL + + | where TimeGenerated > ago(lookbackTime) + + | summarize max(bin(TimeGenerated, 1h)) + ); +CyberpionActionItems_CL + + | where TimeGenerated > ago(lookbackTime) and is_open_b == true + + | where bin(TimeGenerated, 1h) == maxTimeGeneratedBucket + + ``` ++++## Prerequisites ++To integrate with IONIX Security Logs make sure you have: ++- **IONIX Subscription**: a subscription and account are required for IONIX logs. [One can be acquired here.](https://azuremarketplace.microsoft.com/en/marketplace/apps/cyberpion1597832716616.cyberpion) +++## Vendor installation instructions +++Follow the [instructions](https://www.ionix.io/integrations/azure-sentinel/) to integrate IONIX Security Alerts into Sentinel. ++++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cyberpion1597832716616.cyberpion_mss?tab=Overview) in the Azure Marketplace. |
sentinel | Microsoft Defender For Office 365 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/microsoft-defender-for-office-365.md | Title: "Microsoft Defender for Office 365 connector for Microsoft Sentinel (preview)" + Title: "Microsoft Defender for Office 365 connector for Microsoft Sentinel" description: "Learn how to install the connector Microsoft Defender for Office 365 to connect your data source to Microsoft Sentinel." Previously updated : 02/23/2023 Last updated : 01/06/2024 For more information, see the [Microsoft Sentinel documentation](https://go.micr | **Log Analytics table(s)** | SecurityAlert (OATP)<br/> | | **Data collection rules support** | Not currently supported | | **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) |+++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-microsoftdefenderforo365?tab=Overview) in the Azure Marketplace. |
sentinel | Microsoft Entra Id Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/microsoft-entra-id-protection.md | + + Title: "Microsoft Entra ID Protection connector for Microsoft Sentinel" +description: "Learn how to install the connector Microsoft Entra ID Protection to connect your data source to Microsoft Sentinel." ++ Last updated : 01/06/2024+++++# Microsoft Entra ID Protection connector for Microsoft Sentinel ++Microsoft Entra ID Protection provides a consolidated view of at-risk users, risk events, and vulnerabilities, with the ability to remediate risk immediately and set policies to auto-remediate future events. The service is built on Microsoft's experience protecting consumer identities and gains tremendous accuracy from the signal from over 13 billion logins a day. Integrate Microsoft Entra ID Protection alerts with Microsoft Sentinel to view dashboards, create custom alerts, and improve investigation. For more information, see the [Microsoft Sentinel documentation](https://go.microsoft.com/fwlink/p/?linkid=2220065&wt.mc_id=sentinel_dataconnectordocs_content_cnl_csasci). ++[Get Microsoft Entra ID Premium P1/P2](https://aka.ms/asi-ipcconnectorgetlink) ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | SecurityAlert (IPC)<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Microsoft Corporation](https://support.microsoft.com/) | +++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/azuresentinel.azure-sentinel-solution-azureactivedirectoryip?tab=Overview) in the Azure Marketplace. |
sentinel | Saas Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/saas-security.md | + + Title: "SaaS Security connector for Microsoft Sentinel" +description: "Learn how to install the connector SaaS Security to connect your data source to Microsoft Sentinel." ++ Last updated : 01/06/2024+++++# SaaS Security connector for Microsoft Sentinel ++Connects the Valence SaaS security platform to Azure Log Analytics via the REST API interface. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | ValenceAlert_CL<br/> | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Valence Security](https://www.valencesecurity.com/) | ++## Query samples ++**All Valence Security alerts** + ```kusto +ValenceAlert_CL + ``` ++**All critical Valence Security alerts** + ```kusto +ValenceAlert_CL + | where alertType_severity_s == "Critical" + ``` ++++## Vendor installation instructions ++Step 1: Read the detailed documentation ++The installation process is documented in great detail in [Valence Security's knowledge base](https://support.valencesecurity.com). The user should consult this documentation further to understand installation and debugging of the integration. ++Step 2: Retrieve the workspace access credentials ++The first installation step is to retrieve both your **Workspace ID** and **Primary Key** from the Microsoft Sentinel platform. +Copy the values shown below and save them for configuration of the API log forwarder integration. ++++Step 3: Configure Sentinel integration on the Valence Security Platform ++As a Valence Security Platform admin, go to the [configuration screen](https://app.valencesecurity.com/settings/configuration), click Connect in the SIEM Integration card, and choose Microsoft Sentinel. Paste the values from the previous step and click Connect. Valence tests the connection; when success is reported, the connection works. ++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/valencesecurityinc1673598943514.valence_sentinel_solution?tab=Overview) in the Azure Marketplace. |
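The workspace credentials referenced in Step 2 above can also be pulled from the CLI rather than the portal; a minimal sketch, assuming a Log Analytics workspace name and resource group (placeholders):

```bash
# Retrieve the Workspace ID (customerId) and the primary shared key (placeholders).
az monitor log-analytics workspace show \
  --resource-group <resource-group> --workspace-name <workspace-name> \
  --query customerId --output tsv

az monitor log-analytics workspace get-shared-keys \
  --resource-group <resource-group> --workspace-name <workspace-name> \
  --query primarySharedKey --output tsv
```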
sentinel | Seraphic Web Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/seraphic-web-security.md | + + Title: "Seraphic Web Security connector for Microsoft Sentinel" +description: "Learn how to install the connector Seraphic Web Security to connect your data source to Microsoft Sentinel." ++ Last updated : 01/06/2024+++++# Seraphic Web Security connector for Microsoft Sentinel ++The Seraphic Web Security data connector provides the capability to ingest [Seraphic Web Security](https://seraphicsecurity.com/) events and alerts into Microsoft Sentinel. ++## Connector attributes ++| Connector attribute | Description | +| | | +| **Log Analytics table(s)** | SeraphicWebSecurity_CL | +| **Data collection rules support** | Not currently supported | +| **Supported by** | [Seraphic Security](https://seraphicsecurity.com) | ++## Query samples +**All Seraphic Web Security events** + ```kusto + SeraphicWebSecurity_CL + | where bd_type_s == 'Event' + | sort by TimeGenerated desc + ``` +**All Seraphic Web Security alerts** + ```kusto + SeraphicWebSecurity_CL + | where bd_type_s == 'Alert' + | sort by TimeGenerated desc + ``` ++## Prerequisites ++To integrate with Seraphic Web Security make sure you have: ++- **Seraphic API key**: API key for Microsoft Sentinel connected to your Seraphic Web Security tenant. To get this API key for your tenant - [read this documentation](https://constellation.seraphicsecurity.com/integrations/microsoft_sentinel/Guidance/MicrosoftSentinel-IntegrationGuide-230822.pdf). +++## Vendor installation instructions ++Connect Seraphic Web Security ++Please insert the integration name, the Seraphic integration URL and your workspace name for Microsoft Sentinel: +++++## Next steps ++For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/seraphicalgorithmsltd1616061090462.seraphic-security-sentinel?tab=Overview) in the Azure Marketplace. |
service-bus-messaging | Enable Partitions Premium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-premium.md | Service Bus partitions enable queues and topics, or messaging entities, to be pa > - JMS is currently not supported on partitioned namespaces. > - The feature is currently available in the regions noted below. New regions will be added regularly, we will keep this article updated with the latest regions as they become available. > -> | | | | | | +> | Regions | Regions | Regions | Regions |Regions | > |--|-||-|--| > | Australia Central | Central US | Italy North | Poland Central | UK South | > | Australia East | East Asia | Japan West | South Central US | UK West | |
site-recovery | Site Recovery Monitor And Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-monitor-and-troubleshoot.md | With Azure Monitor action groups, you can route alerts to other notification channels. You can use the following interfaces supported by Azure Monitor to manage action groups and alert processing rules: -- [Azure Monitor REST API reference](https://learn.microsoft.com/rest/api/monitor/)-- [Azure Monitor PowerShell reference](https://learn.microsoft.com/powershell/module/az.monitor)-- [Azure Monitor CLI reference](https://learn.microsoft.com/cli/azure/monitor)+- [Azure Monitor REST API reference](/rest/api/monitor/) +- [Azure Monitor PowerShell reference](/powershell/module/az.monitor) +- [Azure Monitor CLI reference](/cli/azure/monitor) ### Suppress notifications during a planned maintenance window |
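As a minimal sketch of the CLI surface listed in the entry above, an action group that emails an operations alias might be created like this (all names and the address are placeholders):

```bash
# Create an action group that routes Site Recovery alerts to an email receiver (placeholders).
az monitor action-group create \
  --name asr-alerts \
  --resource-group <resource-group> \
  --short-name asrAG \
  --action email oncall ops-team@contoso.com
```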
update-manager | Assessment Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/assessment-options.md | Title: Assessment options in Update Manager. description: The article describes the assessment options available in Update Manager. Previously updated : 11/16/2023 Last updated : 11/29/2023 -Update Manager provides you the flexibility to assess the status of available updates and manage the process of installing required updates for your machines. +Update Manager provides you with the flexibility to assess the status of available updates and manage the process of installing required updates for your machines. ## Periodic assessment Update Manager provides you the flexibility to assess the status of available updates. :::image type="content" source="media/updates-maintenance/periodic-assessment-inline.png" alt-text="Screenshot showing periodic assessment option." lightbox="media/updates-maintenance/periodic-assessment-expanded.png"::: +> [!NOTE] +> For Arc-enabled servers, ensure that the subscription in which the Arc server is onboarded is registered to the Microsoft.Compute resource provider. For more information on how to register to the resource provider, see [Azure resource providers and types](../azure-resource-manager/management/resource-providers-and-types.md). + ## Check for updates now/On-demand assessment Update Manager allows you to check for the latest updates on your machines at any time, on demand. You can view the latest update status and act accordingly. Go to the **Updates** blade on any VM and select **Check for updates** or select multiple machines from Update Manager and check for updates for all machines at once. For more information, see [check and install on-demand updates](view-updates.md). Update Manager allows you to check for the latest updates on your machines at any time. ## Update assessment scan You can initiate a software updates compliance scan on a machine to get a current list of operating system updates available. + - **On Windows** - The software update scan is performed by the Windows Update Agent. - **On Linux** - The software update scan is performed using the package manager that returns the missing updates as per the configured repositories. - In the **Updates** page, after you initiate an assessment, a notification is generated to inform you the activity has started and another is displayed when it is finished. + In the **Updates** page, after you initiate an assessment, a notification is generated to inform you the activity has started and another is displayed when it's finished. :::image type="content" source="media/assessment-options/updates-preview-page.png" alt-text="Screenshot of the Updates page."::: |
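Two CLI touchpoints follow from the notes in the entry above; both are hedged sketches with placeholder names (`az vm assess-patches` availability should be confirmed against the current CLI reference):

```bash
# Register the Microsoft.Compute resource provider on the subscription holding the Arc-enabled servers.
az account set --subscription "<arc-subscription-id>"
az provider register --namespace Microsoft.Compute

# Trigger an on-demand update assessment for an Azure VM.
az vm assess-patches --resource-group <resource-group> --name <vm-name>
```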
update-manager | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/troubleshoot.md | Title: Troubleshoot known issues with Azure Update Manager description: This article provides details on known issues and how to troubleshoot any problems with Azure Update Manager. Previously updated : 09/18/2023 Last updated : 01/13/2024 To review the logs related to all actions performed by the extension, check for * `WindowsUpdateExtension.log`: Contains information related to the patch actions. This information includes patches assessed and installed on the machine and any problems encountered in the process. * `CommandExecution.log`: There's a wrapper above the patch action, which is used to manage the extension and invoke specific patch operation. This log contains information about the wrapper. For autopatching, the log has information on whether the specific patch operation was invoked. -## Unable to change the patch orchestration option to manual updates from automatic updates +### Azure Arc-enabled servers ++For Azure Arc-enabled servers, see [Troubleshoot VM extensions](../azure-arc/servers/troubleshoot-vm-extensions.md) for general troubleshooting steps. ++To review the logs related to all actions performed by the extension, on Windows, check for more information in `C:\ProgramData\GuestConfig\extension_Logs\Microsoft.SoftwareUpdateManagement\WindowsOsUpdateExtension`. It includes the following two log files of interest: ++* `WindowsUpdateExtension.log`: Contains information related to the patch actions. This information includes the patches assessed and installed on the machine and any problems encountered in the process. +* `cmd_execution_<numeric>_stdout.txt`: There's a wrapper above the patch action. It's used to manage the extension and invoke specific patch operation. This log contains information about the wrapper. For autopatching, the log has information on whether the specific patch operation was invoked. +* `cmd_excution_<numeric>_stderr.txt` +++### Unable to generate periodic assessment for Arc-enabled servers ++#### Issue ++The subscriptions in which the Arc-enabled servers are onboarded aren't producing assessment data. ++#### Resolution +Ensure that the Arc servers subscriptions are registered to Microsoft.Compute resource provider so that the periodic assessment data is generated periodically as expected. [Learn more](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider) ++### Maintenance configuration isn't applied when VM is moved to a different subscription ++#### Issue ++When a VM is moved to another subscription, the scheduled maintenance configuration associated to the VM isn't running. ++#### Resolution ++If you move a VM to a different resource group or subscription, the scheduled patching for the VM stops working as this scenario is currently unsupported by the system. You can delete the older association of the moved VM and create the new association to include the moved VMs in a maintenance configuration. -Here's the scenario. +### Unable to change the patch orchestration option to manual updates from automatic updates -### Issue +#### Issue The Azure machine has the patch orchestration option as `AutomaticByOS/Windows` automatic updates and you're unable to change the patch orchestration to Manual Updates by using **Change update settings**. 
-### Resolution +#### Resolution If you don't want any patch installation to be orchestrated by Azure or aren't using custom patching solutions, you can change the patch orchestration option to **Customer Managed Schedules (Preview)** (that is, `AutomaticByPlatform` with `ByPassPlatformSafetyChecksOnUserSchedule`) and not associate a schedule or maintenance configuration with the machine. This setting ensures that no patching is performed on the machine until you change it explicitly. For more information, see **Scenario 2** in [User scenarios](prerequsite-for-schedule-patching.md#user-scenarios). :::image type="content" source="./media/troubleshoot/known-issue-update-settings-failed.png" alt-text="Screenshot that shows a notification of failed update settings."::: -## Machine shows as "Not assessed" and shows an HRESULT exception --Here's the scenario. +### Machine shows as "Not assessed" and shows an HRESULT exception -### Issue +#### Issue * You have machines that show as `Not assessed` under **Compliance**, and you see an exception message below them. * You see an `HRESULT` error code in the portal. -### Cause +#### Cause The Update Agent (Windows Update Agent on Windows and the package manager for a Linux distribution) isn't configured correctly. Update Manager relies on the machine's Update Agent to provide the updates that are needed, the status of the patch, and the results of deployed patches. Without this information, Update Manager can't properly report on the patches that are needed or installed. -### Resolution +#### Resolution Try to perform updates locally on the machine. If this operation fails, it typically means that there's an Update Agent configuration error. You can also download and run the [Windows Update troubleshooter](https://support.microsoft.com/help/4027322/windows-update-troubleshooter). > [!NOTE] > The [Windows Update troubleshooter](https://support.microsoft.com/help/4027322/windows-update-troubleshooter) documentation indicates that it's for use on Windows clients, but it also works on Windows Server. -### Azure Arc-enabled servers --For Azure Arc-enabled servers, see [Troubleshoot VM extensions](../azure-arc/servers/troubleshoot-vm-extensions.md) for general troubleshooting steps. --To review the logs related to all actions performed by the extension, on Windows, check for more information in `C:\ProgramData\GuestConfig\extension_Logs\Microsoft.SoftwareUpdateManagement\WindowsOsUpdateExtension`. It includes the following two log files of interest: --* `WindowsUpdateExtension.log`: Contains information related to the patch actions. This information includes the patches assessed and installed on the machine and any problems encountered in the process. -* `cmd_execution_<numeric>_stdout.txt`: There's a wrapper above the patch action. It's used to manage the extension and invoke specific patch operation. This log contains information about the wrapper. For autopatching, the log has information on whether the specific patch operation was invoked. -* `cmd_excution_<numeric>_stderr.txt` ## Known issues in schedule patching To review the logs related to all actions performed by the extension, on Windows - If a machine is newly created, the schedule trigger might be delayed by about 15 minutes in the case of Azure VMs. - Policy definition **Schedule recurring updates using Azure Update Manager** with version 1.0.0-preview successfully remediates resources. However, it always shows them as noncompliant. The current value of the existence condition is a placeholder that always evaluates to false. 
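The patch orchestration change described in the resolution earlier in this article can also be sketched with the Azure CLI. This is a sketch only, assuming a Windows Azure VM; the resource group and VM names are placeholders, and the property path follows the VM `patchSettings` model:

```azurecli
# Switch the VM to customer managed schedules: patch mode AutomaticByPlatform
# plus the BypassPlatformSafetyChecksOnUserSchedule flag. With no maintenance
# configuration associated, Azure performs no patching until you change this.
az vm update --resource-group <myResourceGroup> --name <myVM> \
  --set osProfile.windowsConfiguration.patchSettings.patchMode=AutomaticByPlatform \
  --set osProfile.windowsConfiguration.patchSettings.automaticByPlatformSettings.bypassPlatformSafetyChecksOnUserSchedule=true
```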
-### Unable to apply patches for the shutdown machines -Here's the scenario. +### Schedule patching fails with error 'ShutdownOrUnresponsive' ++#### Issue ++Scheduled patching doesn't install the patches on the VMs and reports the error 'ShutdownOrUnresponsive'. ++#### Resolution +Schedules triggered on machines that are deleted and recreated with the same resource ID within 8 hours might fail with the 'ShutdownOrUnresponsive' error due to a known limitation. ++### Unable to apply patches for the shutdown machines #### Issue Patches aren't getting applied to machines that are in a shutdown state. #### Cause The machines are in a shutdown state. -### Resolution +#### Resolution Keep your machines turned on for at least 15 minutes before the scheduled update. For more information, see [Shut down machines](../virtual-machines/maintenance-configurations.md#shut-down-machines). ### Patch run failed with Maintenance window exceeded property showing true even if time remained -Here's the scenario. - #### Issue When you view an update deployment in **Update History**, the property **Failed with Maintenance window exceeded** shows **true** even though enough time was left for execution. In this case, one of the following problems is possible: |
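For the moved-VM maintenance configuration scenario above, deleting the stale association and recreating it can also be scripted. The following is an unverified sketch using the `az maintenance assignment` commands from the `maintenance` CLI extension; every name is a placeholder, and the exact required parameters may differ, so check `az maintenance assignment create --help` before use:

```azurecli
# Remove the stale association left behind after the VM was moved.
az maintenance assignment delete \
  --resource-group <myResourceGroup> \
  --resource-name <myVM> \
  --resource-type virtualMachines \
  --provider-name Microsoft.Compute \
  --configuration-assignment-name <myAssignment>

# Recreate the association against the maintenance configuration.
az maintenance assignment create \
  --resource-group <myResourceGroup> \
  --resource-name <myVM> \
  --resource-type virtualMachines \
  --provider-name Microsoft.Compute \
  --configuration-assignment-name <myAssignment> \
  --maintenance-configuration-id <maintenanceConfigurationResourceId>
```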
virtual-machines | Weblogic Server Azure Virtual Machine | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/weblogic-server-azure-virtual-machine.md | + + Title: "Quickstart: Deploy WebLogic Server on Azure Virtual Machine using the Azure portal" +description: Shows how to quickly stand up WebLogic Server on an Azure Virtual Machine. +++ Last updated : 01/03/2024+++++# Quickstart: Deploy WebLogic Server on Azure Virtual Machine using the Azure portal ++This article shows you how to quickly deploy Oracle WebLogic Server (WLS) on Azure Virtual Machines (VMs) with the simplest possible set of configuration choices using the Azure portal. For a more fully featured tutorial, including the use of Azure Application Gateway to make a WLS cluster on VMs securely visible on the public internet, see [Tutorial: Migrate a WebLogic Server cluster to Azure with Azure Application Gateway as a load balancer](/azure/developer/java/migration/migrate-weblogic-with-app-gateway?toc=/azure/virtual-machines/workloads/oracle/toc.json&bc=/azure/virtual-machines/workloads/oracle/breadcrumb/toc.json). ++In this quickstart, you will: ++- Deploy WLS with an Administration Server on a VM using the Azure portal. +- Deploy a Java EE sample application with the WLS Administration Console portal. ++This quickstart assumes a basic understanding of WLS concepts. For more information, see [Oracle WebLogic Server](https://www.oracle.com/java/weblogic/). ++## Prerequisites ++- [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)] ++## Deploy WLS with Administration Server on a VM ++The steps in this section direct you to deploy WLS on a VM in the simplest possible way: using the [single node with an admin server](https://aka.ms/wls-vm-admin) offer. Other offers are available to meet different scenarios, including: [single node without an admin server](https://aka.ms/wls-vm-singlenode), [cluster](https://aka.ms/wls-vm-cluster), and [dynamic cluster](https://aka.ms/wls-vm-dynamic-cluster). For more information, see [What are solutions for running Oracle WebLogic Server on Azure Virtual Machines?](/azure/virtual-machines/workloads/oracle/oracle-weblogic). +++The following steps show you how to find the WLS with Admin Server offer and fill out the **Basics** pane. ++1. In the search bar at the top of the portal, enter *weblogic*. In the auto-suggested search results, in the **Marketplace** section, select **Oracle WebLogic Server With Admin Server**. ++ :::image type="content" source="media/weblogic-server-azure-virtual-machine/search-weblogic-admin-offer-from-portal.png" alt-text="Screenshot of Azure portal showing WLS in search results." lightbox="media/weblogic-server-azure-virtual-machine/search-weblogic-admin-offer-from-portal.png"::: ++ You can also go directly to the offer with this [portal link](https://aka.ms/wls-vm-admin). ++1. On the offer page, select **Create**. ++1. On the **Basics** pane, ensure the value shown in the **Subscription** field is the same one that has the roles listed in the prerequisites section. ++1. The offer must be deployed in an empty resource group. In the **Resource group** field, select **Create new** and fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, *ejb0802wls*. ++1. 
Under **Instance details**, select the region for the deployment. For a list of Azure regions where VMs are available, see [Regions for virtual machines in Azure](/azure/virtual-machines/regions). ++1. Accept the default value in **Oracle WebLogic Image**. ++1. Accept the default value in **Virtual machine size**. ++ If the default size isn't available in your region, choose an available size by selecting **Change size**, then select one of the listed sizes. ++1. Under **Credentials for Virtual Machines and WebLogic**, leave the default value for **Username for admin account of VMs**. ++## Choose how to authenticate the virtual machine ++There are several options to provide authentication for the VM, but you can choose only one. The steps in this section explain each option so you can choose the best one for your deployment. ++### Option 1: Use password ++This option configures a simple username/password pair for VM authentication. Follow these steps to provide values: ++1. Under **Authentication type**, leave the default value **Password**. +1. Fill in *wlsVmAdmin2022* for **Password**. Use the same value for the confirmation field. ++### Option 2: Generate new key pair ++This option generates an SSH key pair and installs the public key on the server. After the offer passes validation, you'll get a pop-up window to download the SSH key pair. ++Follow these steps to provide values for the WLS deployment: ++1. Under **Authentication type**, select **SSH Public Key**. +1. Under **SSH public key source**, select **Generate new key pair**. +1. Fill in *wlsKeyAdmin2022* for **Key pair name**. ++When you've completed the offer validation, select **Create**. You'll then get a pop-up window. Select **Download private key and create resource**, which downloads the SSH key as a *.pem* file. +++Once the *.pem* file is downloaded, you might want to move it somewhere on your computer where it's easy to reference from your SSH client. ++### Option 3: Use an SSH public key stored in Azure ++This option requires you to store the SSH public key in Azure before continuing. ++The steps in this section show you how to create an SSH key from the Azure portal and continue your WLS deployment. ++1. In the search bar at the top of the portal, enter *ssh key*. In the auto-suggested search results, in the **Services** section, select **SSH keys**. +1. On the service page, select **Create**. +1. On the **Basics** pane, ensure the value shown in the **Subscription** field is the same one that has the roles listed in the prerequisites section. +1. You can deploy the SSH key in an existing resource group or in a new one. To create a new resource group, in the **Resource group** field, select **Create new** and fill in a value for the resource group name. For example, *ejb0802sshkey*. +1. Fill in *ejb0802sshkey-for-wls-machine* for **Key pair name**. +1. Under **SSH public key source**, select **Generate new key pair**. ++When you've completed the validation, select **Create**. You'll then get a pop-up window. Select **Download private key and create resource**, which downloads the SSH key as a *.pem* file. ++After the SSH key deployment completes, go back to the WLS deployment and follow these steps to provide values: ++1. Under **Authentication type**, select **SSH Public Key**. +1. Under **SSH public key source**, select **Use existing key stored in Azure**. +1. Under **Stored Keys**, select the SSH key name `ejb0802sshkey-for-wls-machine` created earlier. 
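If you prefer the CLI over the portal for Option 3, Azure can generate and store the SSH key resource for you. The following is a minimal sketch, reusing the example names from the steps above; the *eastus* location is an assumption, so substitute your own region:

```azurecli
# Create a resource group for the key, then have Azure generate the key pair.
# The command prints where the private key file is written; store it safely.
az group create --name ejb0802sshkey --location eastus
az sshkey create --resource-group ejb0802sshkey --name ejb0802sshkey-for-wls-machine
```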
++### Option 4: Provide an existing SSH public key ++This option allows you to provide an SSH public key for VM authentication. ++If you don't have an SSH key, you can follow [Create an SSH key pair](/azure/virtual-machines/linux/mac-create-ssh-keys#create-an-ssh-key-pair) to create a key pair using RSA encryption and a bit length of 4096. Azure currently supports SSH protocol 2 (SSH-2) RSA public-private key pairs with a minimum length of 2048 bits. ++You can display your public key with the following `cat` command, replacing `~/.ssh/id_rsa.pub` with the path and filename of your own public key file if needed: ++```bash +cat ~/.ssh/id_rsa.pub +``` ++A typical public key value looks like this example: ++```text +ssh-rsa AAAAB...Q== username@domainname +``` ++Then, follow these steps to provide values for the WLS deployment: ++1. Under **Authentication type**, select **SSH Public Key**. +1. Under **SSH public key source**, select **Use existing public key**. +1. Fill in **SSH public key** with your public key value. ++You've now finished configuring VM authentication. Use the following steps to continue with the other aspects of the WLS deployment. ++1. Leave the default value for **Username for WebLogic Administrator**. +1. Fill in *wlsVmCluster2022* for the **Password for WebLogic Administrator**. Use the same value for the confirmation. +1. Select **Review + create**. Ensure the green **Validation Passed** message appears at the top. If not, fix any validation problems and select **Review + create** again. +1. Select **Create**. +1. Track the progress of the deployment in the **Deployment is in progress** page. ++Depending on network conditions and other activity in your selected region, the deployment may take up to 30 minutes to complete. ++## Examine the deployment output ++The steps in this section show you how to verify that the deployment completed successfully. ++If you navigated away from the **Deployment is in progress** page, the following steps show you how to get back to that page. If you're still on the page that shows **Your deployment is complete**, you can skip to the steps after the image below. ++1. In the upper left of any portal page, select the hamburger menu and select **Resource groups**. +1. In the box with the text **Filter for any field**, enter the first few characters of the resource group you created previously. If you followed the recommended convention, enter your initials, then select the appropriate resource group. +1. In the left navigation pane, in the **Settings** section, select **Deployments**. You'll see an ordered list of the deployments to this resource group, with the most recent one first. +1. Scroll to the oldest entry in this list. This entry corresponds to the deployment you started in the preceding section. Select the oldest deployment, as shown here. ++ :::image type="content" source="media/weblogic-server-azure-virtual-machine/resource-group-deployments.png" alt-text="Azure portal screenshot showing the resource group deployments list." lightbox="media/weblogic-server-azure-virtual-machine/resource-group-deployments.png"::: ++1. In the left panel, select **Outputs**. This list shows the output values from the deployment. Useful information is included in the outputs. +1. The **sshCommand** value is the fully qualified SSH command to connect to the VM that runs WLS. Select the copy icon next to the field value to copy the value to your clipboard. Save this value aside for later. +1. 
The **adminConsoleURL** value is the fully qualified, internet-visible link to the WLS admin console. Select the copy icon next to the field value to copy the link to your clipboard. Save this value aside for later. ++## Deploy a Java EE application from the Administration Console portal ++Use the following steps to run a sample application in WLS. ++1. Download a sample application as a *.war* or *.ear* file. The sample app should be self-contained and not have any database, messaging, or other external connection requirements. The sample app from the WLS Kubernetes Operator documentation is a good choice. You can download it from [Oracle](https://aka.ms/wls-aks-testwebapp). Save the file to your local filesystem. ++1. Paste the value of **adminConsoleURL** in an internet-connected web browser. You should see the familiar WLS admin console login screen as shown in the following screenshot. ++ :::image type="content" source="media/weblogic-server-azure-virtual-machine/wls-admin-login.png" alt-text="Screenshot of WLS admin login screen."::: ++1. Log in with user name *weblogic* and your password (this article uses *wlsVmCluster2022*). You'll see the WLS Administration Console overview page. ++1. Under **Change Center** in the top left corner, select **Lock & Edit**, as shown in the following screenshot. ++ :::image type="content" source="media/weblogic-server-azure-virtual-machine/admin-console-portal.png" alt-text="Screenshot of Oracle WebLogic Server Administration Console with Lock & Edit button highlighted." lightbox="media/weblogic-server-azure-virtual-machine/admin-console-portal.png"::: ++1. Under **Domain Structure** on the left side, select **Deployments**. ++1. Under **Configuration**, select **Install**. The **Install Application Assistant** guides you through the installation. ++ 1. Under **Locate deployment to install and prepare for deployment**, select **Upload your file(s)**. + 1. Under **Upload a deployment to the Administration Server**, select **Choose File** and upload your sample application. Select **Next**. + 1. Select **Finish**. ++1. Under **Change Center** in the top left corner, select **Activate Changes**. You'll see the message **All changes have been activated. No restarts are necessary**. ++1. Under **Summary of Deployments**, select **Control**. Select the checkbox near the application name to select the application. Select **Start** and then select **Servicing all requests**. ++1. Under **Start Application Assistant**, select **Yes**. If no errors occur, you'll see the message **Start requests have been sent to the selected deployments.** ++1. Construct a fully qualified URL for the sample app, such as `http://<vm-host-name>:<port>/<your-app-path>`. You can get the host name and port from **adminConsoleURL** by removing `/console/`. If you're using the recommended sample app, the URL should be `http://<vm-host-name>:<port>/testwebapp/`, which should be similar to `http://wls-5b942e9f2a-admindomain.westus.cloudapp.azure.com:7001/testwebapp/`. ++1. Paste the fully qualified URL in an internet-connected web browser. If you deployed the recommended sample app, you should see the following page. ++ :::image type="content" source="media/weblogic-server-azure-virtual-machine/test-webapp.png" alt-text="Screenshot of the test web app."::: ++## Connect to the virtual machine ++If you want to manage the VM, you can connect to it with the SSH command. Before accessing the machine, make sure port 22 is enabled for SSH access. 
++Follow these steps to enable port 22: ++1. Navigate back to your working resource group. In the overview page, you'll find a network security group named **wls-nsg**. Select **wls-nsg**. +1. In the left panel, select **Settings**, then **Inbound security rules**. If there's a rule to allow port `22`, then you can jump to step 4. +1. At the top of the page, select **Add**. ++ 1. Under **Destination port ranges**, fill in the value *22*. + 1. Fill in the rule name *Port_SSH* for **Name**. + 1. Leave the default value for the other fields. + 1. Select **Add**. ++ After the rule is created, you'll be able to SSH to the VM. ++1. Connect to the VM with the value of **sshCommand**. You can specify a key file in the command. ++ 1. Use the following command to ensure you have read-only access to the private key: ++ ```bash + chmod 400 <keyname>.pem + ``` ++ 1. Use `ssh` to connect to your VM, as shown in the following example: ++ ```bash + ssh -i <private key path> weblogic@wls-5b942e9f2a-admindomain.westus.cloudapp.azure.com + ``` ++## Clean up resources ++If you're not going to continue to use WLS, delete the resources with the following steps: ++1. Navigate back to your working resource group. At the top of the page, under the text **Resource group**, select the resource group. Then, select **Delete resource group**. ++1. If you created an SSH key and stored it in Azure in [Option 3: Use an SSH public key stored in Azure](#option-3-use-an-ssh-public-key-stored-in-azure), then search for the resource group *ejb0802sshkey* in the search bar at the top of the portal. Then, select your resource group and delete it. ++## Next steps ++Continue to explore options to run WLS on Azure. ++> [!div class="nextstepaction"] +> [Learn more about Oracle WebLogic on Azure](/azure/virtual-machines/workloads/oracle/oracle-weblogic) +> [!div class="nextstepaction"] +> [Explore the official documentation from Oracle](https://aka.ms/wls-vm-docs) +> [!div class="nextstepaction"] +> [Explore the options for day 2 and beyond](https://aka.ms/wls-vms-day2) |
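As an alternative to the portal steps in the "Connect to the virtual machine" section above, the port 22 rule can be added with the Azure CLI. A sketch, assuming the deployment created the **wls-nsg** network security group as described and that priority 330 is unused:

```azurecli
# Allow inbound SSH (port 22) on the WLS deployment's network security group.
az network nsg rule create \
  --resource-group <myResourceGroupName> \
  --nsg-name wls-nsg \
  --name Port_SSH \
  --priority 330 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 22
```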
virtual-machines | Jboss Eap Single Server Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-single-server-azure-vm.md | + + Title: "Quickstart: Deploy JBoss EAP Server on Azure VM using the Azure portal" +description: Shows you how to quickly stand up JBoss EAP Server on an Azure virtual machine. +++ Last updated : 01/03/2024+++++# Quickstart: Deploy JBoss EAP Server on an Azure virtual machine using the Azure portal ++This article shows you how to quickly deploy JBoss EAP Server on an Azure virtual machine (VM) using the Azure portal. ++## Prerequisites ++- [!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)] +- Ensure the Azure identity you use to sign in has either the [Contributor](/azure/role-based-access-control/built-in-roles#contributor) role or the [Owner](/azure/role-based-access-control/built-in-roles#owner) role in the current subscription. For an overview of Azure roles, see [What is Azure role-based access control (Azure RBAC)?](/azure/role-based-access-control/overview) +- Ensure you have the necessary Red Hat licenses. You need to have a Red Hat Account with Red Hat Subscription Management (RHSM) entitlement for JBoss EAP. This entitlement lets the Azure portal install the Red Hat tested and certified JBoss EAP version. + > [!NOTE] + > If you don't have an EAP entitlement, you can sign up for a free developer subscription through the [Red Hat Developer Subscription for Individuals](https://developers.redhat.com/register). Write down the account details, which you use as the *RHSM username* and *RHSM password* in the next section. +- After you're registered, you can find the necessary credentials (*Pool IDs*) by using the following steps. You also use the *Pool IDs* as the *RHSM Pool ID with EAP entitlement* later in this article. + 1. Sign in to your [Red Hat account](https://sso.redhat.com). + 1. The first time you sign in, you're asked to complete your profile. Make sure you select **Personal** for **Account Type**, as shown in the following screenshot. ++ :::image type="content" source="media/jboss-eap-single-server-azure-vm/update-account-type-as-personal.png" alt-text="Screenshot of selecting 'Personal' for the 'Account Type'." lightbox="media/jboss-eap-single-server-azure-vm/update-account-type-as-personal.png"::: ++ 1. In the tab where you're signed in, open [Red Hat Developer Subscription for Individuals](https://aka.ms/red-hat-individual-dev-sub). This link takes you to all of the subscriptions in your account for the appropriate SKU. + 1. Select the first subscription from the **All purchased Subscriptions** table. + 1. Copy and write down the value following **Master Pools** from **Pool IDs**. ++> [!NOTE] +> The Azure Marketplace offer you're going to use in this article includes support for Red Hat Satellite for license management. Using Red Hat Satellite is beyond the scope of this quick start. For an overview on Red Hat Satellite, see [Red Hat Satellite](https://aka.ms/red-hat-satellite). To learn more about moving your Red Hat JBoss EAP and Red Hat Enterprise Linux subscriptions to Azure, see [Red Hat Cloud Access program](https://aka.ms/red-hat-cloud-access-overview). ++## Deploy JBoss EAP Server on Azure VM ++The steps in this section direct you to deploy JBoss EAP Server on Azure VMs. +++The following steps show you how to find the JBoss EAP Server on Azure VM offer and fill out the **Basics** pane. ++1. 
In the search bar at the top of the Azure portal, enter *JBoss EAP*. In the search results, in the **Marketplace** section, select **JBoss EAP standalone on RHEL VM**. ++ :::image type="content" source="media/jboss-eap-single-server-azure-vm/marketplace-search-results.png" alt-text="Screenshot of Azure portal showing JBoss EAP Server on Azure VM in search results." lightbox="media/jboss-eap-single-server-azure-vm/marketplace-search-results.png"::: ++ In the drop-down menu, ensure **PAYG** is selected. ++ Alternatively, you can also go directly to the [JBoss EAP standalone on RHEL VM](https://aka.ms/eap-vm-single-portal) offer. In this case, the correct plan is already selected for you. ++ In either case, this offer deploys JBoss EAP by providing your Red Hat subscription at deployment time, and runs it on Red Hat Enterprise Linux using a pay-as-you-go payment configuration for the base VM. ++1. On the offer page, select **Create**. +1. On the **Basics** pane, ensure the value shown in the **Subscription** field is the same one that has the roles listed in the prerequisites section. +1. You must deploy the offer in an empty resource group. In the **Resource group** field, select **Create new** and fill in a value for the resource group. Because resource groups must be unique within a subscription, pick a unique name. An easy way to have unique names is to use a combination of your initials, today's date, and some identifier. For example, *ejb0823jbosseapvm*. +1. Under **Instance details**, select the region for the deployment. +1. Leave the default VM size for **Virtual machine size**. +1. Leave the default option **OpenJDK 17** for **JDK version**. +1. Leave the default value **jbossuser** for **Username**. +1. Leave the default option **Password** for **Authentication type**. +1. Fill in a password for **Password**. Use the same value for **Confirm password**. +1. Under **Optional Basic Configuration**, leave the default option **Yes** for **Accept defaults for optional configuration**. +1. Scroll to the bottom of the **Basics** pane and notice the helpful links for **Report issues, get help, and share feedback**. +1. Select **Next: JBoss EAP Settings**. ++The following steps show you how to fill out the **JBoss EAP Settings** pane and start the deployment. ++1. Leave the default value **jbossadmin** for **JBoss EAP Admin username**. +1. Fill in a JBoss EAP password for **JBoss EAP password**. Use the same value for **Confirm password**. Write down the value for later use. +1. Leave the default option **No** for **Connect to an existing Red Hat Satellite Server?**. +1. Fill in your RHSM username for **RHSM username**. This is the value you prepared in the prerequisites section. +1. Fill in your RHSM password for **RHSM password**. Use the same value for **Confirm password**. This is the value you prepared in the prerequisites section. +1. Fill in your RHSM pool ID for **RHSM Pool ID with EAP entitlement**. This is the value you prepared in the prerequisites section. +1. Select **Next: Networking**. +1. Select **Next: Database**. +1. Select **Review + create**. Ensure the green **Validation Passed** message appears at the top. If the message doesn't appear, fix any validation problems, then select **Review + create** again. +1. Select **Create**. +1. Track the progress of the deployment on the **Deployment is in progress** page. 
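If you'd rather track the deployment from a terminal than from the portal page, the CLI can report provisioning state. A small sketch; `<myResourceGroupName>` is the resource group you created on the **Basics** pane:

```azurecli
# List the deployments in the resource group with their provisioning state.
az deployment group list \
  --resource-group <myResourceGroupName> \
  --query "[].{name:name, state:properties.provisioningState}" \
  --output table
```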
++Depending on network conditions and other activity in your selected region, the deployment may take up to 6 minutes to complete. After that, you should see the text **Your deployment is complete** displayed on the deployment page. ++## Optional: Verify the functionality of the deployment ++By default, the JBoss EAP Server is deployed on an Azure VM in a dedicated virtual network without public access. If you want to verify the functionality of the deployment by viewing the **Red Hat JBoss Enterprise Application Platform** management console, use the following steps to assign the VM a public IP address for access. ++1. On the deployment page, select **Deployment details** to expand the list of Azure resources deployed. Select the network security group `jbosseap-nsg` to open its details page. +1. Under **Settings**, select **Inbound security rules**. Select **+ Add** to open the **Add inbound security rule** panel for adding a new inbound security rule. +1. Fill in *9990* for **Destination port ranges**. Fill in *Port_jbosseap* for **Name**. Select **Add**. Wait until the security rule is created. +1. Select the **X** icon to close the network security group `jbosseap-nsg` details page. You're switched back to the deployment page. +1. Select the resource ending with `-nic` (with type `Microsoft.Network/networkInterfaces`) to open its details page. +1. Under **Settings**, select **IP configurations**. Select `ipconfig1` from the list of IP configurations to open its configuration details panel. +1. Under **Public IP address**, select **Associate**. Select **Create new** to open the **Add a public IP address** popup. Fill in *jbosseapvm-ip* for **Name**. Select **Static** for **Assignment**. Select **OK**. +1. Select **Save**. Wait until the public IP address is created and the update completes. Select the **X** icon to close the IP configuration page. +1. Copy the value of the public IP address from the **Public IP address** column for `ipconfig1`. For example, `20.232.155.59`. ++ :::image type="content" source="media/jboss-eap-single-server-azure-vm/public-ip-address.png" alt-text="Screenshot of public IP address assigned to the network interface." lightbox="media/jboss-eap-single-server-azure-vm/public-ip-address.png"::: ++1. Paste the public IP address in an internet-connected web browser, append `:9990`, and press **Enter**. You should see the familiar **Red Hat JBoss Enterprise Application Platform** management console sign-in screen, as shown in the following screenshot. ++ :::image type="content" source="media/jboss-eap-single-server-azure-vm/jboss-eap-console-login.png" alt-text="Screenshot of JBoss EAP management console sign-in screen." lightbox="media/jboss-eap-single-server-azure-vm/jboss-eap-console-login.png"::: ++1. Fill in the value of **JBoss EAP Admin username**, which is **jbossadmin**. Fill in the value of **JBoss EAP password** you specified before for **Password**. Select **Sign in**. +1. You should see the familiar **Red Hat JBoss Enterprise Application Platform** management console welcome page as shown in the following screenshot. ++ :::image type="content" source="media/jboss-eap-single-server-azure-vm/jboss-eap-console-welcome.png" alt-text="Screenshot of JBoss EAP management console welcome page." 
lightbox="media/jboss-eap-single-server-azure-vm/jboss-eap-console-welcome.png"::: ++> [!NOTE] +> You can also follow the guide [Connect to environments privately using Azure Bastion host and jumpboxes](/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/architectures/connect-to-environments-privately) and visit the **Red Hat JBoss Enterprise Application Platform** management console with the URL `http://<private-ip-address-of-vm>:9990`. ++## Clean up resources ++To avoid Azure charges, you should clean up unnecessary resources. When you no longer need the JBoss EAP Server deployed on an Azure VM, unregister the JBoss EAP server and remove the Azure resources. ++Run the following command to unregister the JBoss EAP server and VM from Red Hat subscription management. ++```azurecli +az vm run-command invoke \ + --resource-group <resource-group-name> \ + --name <vm-name> \ + --command-id RunShellScript \ + --scripts "sudo subscription-manager unregister" +``` ++Run the following command to remove the resource group, VM, network interface, virtual network, and all related resources. ++```azurecli +az group delete --name <resource-group-name> --yes --no-wait +``` ++## Next steps ++Learn more about migrating JBoss EAP applications to JBoss EAP on Azure VMs by following these links: ++> [!div class="nextstepaction"] +> [Migrate JBoss EAP applications to JBoss EAP on Azure VMs](/azure/developer/java/migration/migrate-jboss-eap-to-jboss-eap-on-azure-vms?toc=/azure/virtual-machines/workloads/oracle/toc.json&bc=/azure/virtual-machines/workloads/oracle/breadcrumb/toc.json) |
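The public IP assignment in the optional verification section can likewise be scripted instead of clicked through. A sketch of the same flow with the Azure CLI; the NIC name is a placeholder, the other names mirror the portal example, and priority 340 is an assumption:

```azurecli
# Create a static public IP, attach it to the VM's NIC (ipconfig1), and open
# the management port 9990 on the jbosseap-nsg network security group.
az network public-ip create --resource-group <resource-group-name> --name jbosseapvm-ip --allocation-method Static
az network nic ip-config update --resource-group <resource-group-name> --nic-name <nic-name> --name ipconfig1 --public-ip-address jbosseapvm-ip
az network nsg rule create --resource-group <resource-group-name> --nsg-name jbosseap-nsg --name Port_jbosseap --priority 340 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 9990
```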
virtual-network | Default Outbound Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/default-outbound-access.md | Title: Default outbound access in Azure description: Learn about default outbound access in Azure. There are multiple ways to turn off default outbound access. The following secti :::image type="content" source="./media/default-outbound-access/private-subnet-portal.png" alt-text="Screenshot of Azure portal showing Private subnet option."::: -* Using CLI, when creating a subnet with [az network vnet subnet create](https://learn.microsoft.com/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-create), use the `--default-outbound` option and choose "false" +* Using PowerShell, when creating a subnet with [New-AzVirtualNetworkSubnetConfig](https://learn.microsoft.com/powershell/module/az.network/new-azvirtualnetworksubnetconfig?view=azps-11.1.0), use the `DefaultOutboundAccess` parameter and set it to `$false` ++* Using CLI, when creating a subnet with [az network vnet subnet create](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-create), use the `--default-outbound` option and set it to `false` * Using an Azure Resource Manager template, set the value of the `defaultOutboundAccess` parameter to `false` |
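Putting the CLI bullet above into a complete command, here is a sketch of creating a subnet with default outbound access turned off; the resource names and address prefix are placeholders:

```azurecli
# Create a subnet with default outbound access disabled (a Private subnet).
az network vnet subnet create \
  --resource-group <myResourceGroupName> \
  --vnet-name <myVNetName> \
  --name <mySubnetName> \
  --address-prefixes 10.0.0.0/24 \
  --default-outbound false
```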