Updates from: 02/21/2024 02:12:00
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Find Help Open Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/find-help-open-support-ticket.md
If you're unable to find answers by using self-help resources, you can open an o
1. Sign in to the [Azure portal](https://portal.azure.com).
-1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
+1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Microsoft Entra tenant from the **Directories + subscriptions** menu. Currently, you can't submit support cases directly from your Azure AD B2C tenant.
1. In the Azure portal, search for and select **Microsoft Entra ID**.
ai-services Luis Migration Authoring Entities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/luis-migration-authoring-entities.md
- Title: Migrate to V3 machine-learning entity
-description: The V3 authoring provides one new entity type, the machine-learning entity, along with the ability to add relationships to the machine-learning entity and other entities or features of the application.
------ Previously updated : 01/19/2024--
-# Migrate to V3 Authoring entity
---
-The V3 authoring provides one new entity type, the machine-learning entity, along with the ability to add relationships to the machine-learning entity and other entities or features of the application. There is currently no date by which migration needs to be completed.
-
-## Entities are decomposable in V3
-
-Entities created with the V3 authoring APIs, either using the [APIs](https://westeurope.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview) or with the portal, allow you to build a layered entity model with a parent and children. The parent is known as the **machine-learning entity** and the children are known as **subentities** of the machine-learning entity.
-
-Each subentity is also a machine-learning entity but with the added configuration options of features.
-
-* **Required features** are rules that guarantee an entity is extracted when it matches a feature. The rule is defined by adding a required feature to the model:
- * [Prebuilt entity](luis-reference-prebuilt-entities.md)
- * [Regular expression entity](reference-entity-regular-expression.md)
- * [List entity](reference-entity-list.md).
-
-## How do these new relationships compare to V2 authoring?
-
-V2 authoring provided hierarchical and composite entities along with roles and features to accomplish this same task. Because the entities, features, and roles were not explicitly related to each other, it was difficult to understand how LUIS implied the relationships during prediction.
-
-With V3, the relationship is explicit and designed by the app authors. This allows you, as the app author, to:
-
-* Visually see how LUIS is predicting these relationships, in the example utterances
-* Test for these relationships either with the [interactive test pane](how-to/train-test.md) or at the endpoint
-* Use these relationships in the client application, via a well-structured, named, nested [.json object](reference-entity-machine-learned-entity.md)
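-
-For illustration, a prediction for an `Order` machine-learning entity with subentities might return a nested JSON object along these lines (the entity, subentity names, and values here are hypothetical, not from a specific app):
-
-```json
-{
-    "Order": [
-        {
-            "Size": ["large"],
-            "Topping": ["mushrooms"]
-        }
-    ]
-}
-```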
-
-## Planning
-
-When you migrate, consider the following in your migration plan:
-
-* Back up your LUIS app, and perform the migration on a separate app. Having a V2 and V3 app available at the same time allows you to validate the changes required and the impact on the prediction results.
-* Capture current prediction success metrics
-* Capture current dashboard information as a snapshot of app status
-* Review existing intents, entities, phrase lists, patterns, and batch tests
-* The following elements can be migrated **without change**:
- * Intents
- * Entities
- * Regular expression entity
- * List entity
- * Features
- * Phrase list
-* The following elements need to be migrated **with changes**:
- * Entities
- * Hierarchical entity
- * Composite entity
- * Roles - roles can only be applied to a machine-learning (parent) entity. Roles can't be applied to subentities
- * Batch tests and patterns that use the hierarchical and composite entities
-
-When you design your migration plan, leave time to review the final machine-learning entities after all hierarchical and composite entities have been migrated. While a straight migration will work, after you make the change and review your batch test results and prediction JSON, the more unified JSON may lead you to reorganize the final information delivered to the client-side app. This is similar to code refactoring and should be treated with the same review process your organization has in place.
-
-If you don't have batch tests in place for your V2 model to migrate to the V3 model, you won't be able to validate how the migration impacts the endpoint prediction results.
-
-## Migrating from V2 entities
-
-As you begin to move to the V3 authoring model, you should consider how to move to the machine-learning entity, and its subentities and features.
-
-The following table notes which entities need to migrate from a V2 to a V3 entity design.
-
-|V2 authoring entity type|V3 authoring entity type|Example|
-|--|--|--|
-|Composite entity|Machine learned entity|[learn more](#migrate-v2-composite-entity)|
-|Hierarchical entity|machine-learning entity's role|[learn more](#migrate-v2-hierarchical-entity)|
-
-## Migrate V2 Composite entity
-
-Each child of the V2 composite should be represented with a subentity of the V3 machine-learning entity. If the composite child is a prebuilt, regular expression, or a list entity, this should be applied as a required feature on the subentity.
-
-Considerations when planning to migrate a composite entity to a machine-learning entity:
-* Child entities can't be used in patterns
-* Child entities are no longer shared
-* Child entities need to be labeled if they used to be non-machine-learned
-
-### Existing features
-
-Any phrase list used to boost words in the composite entity should be applied as a feature to either the machine-learning (parent) entity, the subentity (child) entity, or the intent (if the phrase list only applies to one intent). Plan to add the feature to the entity where it should boost most significantly. Do not add the feature generically to the machine-learning (parent) entity, if it will most significantly boost the prediction of a subentity (child).
-
-### New features
-
-In V3 authoring, add a planning step to evaluate entities as possible features for all the entities and intents.
-
-### Example entity
-
-This entity is an example only. Your own entity migration may require other considerations.
-
-Consider a V2 composite for modifying a pizza `order` that uses:
-* prebuilt datetimeV2 for delivery time
-* phrase list to boost certain words such as pizza, pie, crust, and topping
-* list entity to detect toppings such as mushrooms, olives, pepperoni.
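-
-As a hypothetical sketch (names and values are illustrative only), the toppings list entity used as a required feature might be defined as:
-
-```json
-{
-    "name": "Toppings",
-    "subLists": [
-        { "canonicalForm": "mushrooms", "list": ["mushroom"] },
-        { "canonicalForm": "olives", "list": ["olive", "black olives"] },
-        { "canonicalForm": "pepperoni", "list": [] }
-    ]
-}
-```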
-
-An example utterance for this entity is:
-
-`Change the toppings on my pie to mushrooms and deliver it 30 minutes later`
-
-The following table demonstrates the migration:
-
-|V2 models|V3 models|
-|--|--|
-|Parent - Composite entity named `Order`|Parent - machine-learning entity named `Order`|
-|Child - Prebuilt datetimeV2|* Migrate prebuilt entity to new app.<br>* Add required feature on parent for prebuilt datetimeV2.|
-|Child - list entity for toppings|* Migrate list entity to new app.<br>* Then add a required feature on the parent for the list entity.|
--
-## Migrate V2 Hierarchical entity
-
-In V2 authoring, a hierarchical entity was provided before roles existed in LUIS. Both served the same purpose of extracting entities based on contextual usage. If you have hierarchical entities, you can think of them as simple entities with roles.
-
-In V3 authoring:
-* A role can be applied on the machine-learning (parent) entity.
-* A role can't be applied to any subentities.
-
-This entity is an example only. Your own entity migration may require other considerations.
-
-Consider a V2 hierarchical entity for modifying a pizza `order`:
-* where each child determines either an original topping or the final topping
-
-An example utterance for this entity is:
-
-`Change the topping from mushrooms to olives`
-
-The following table demonstrates the migration:
-
-|V2 models|V3 models|
-|--|--|
-|Parent - Hierarchical entity named `Order`|Parent - machine-learning entity named `Order`|
-|Child - Hierarchical entity with original and final pizza topping|* Add role to `Order` for each topping.|
-
-## API change constraint replaced with required feature
-
-This change was made in May 2020 at the //Build conference and only applies to the v3 authoring APIs where an app is using a constrained feature. If you are migrating from v2 authoring to v3 authoring, or have not used v3 constrained features, skip this section.
-
-**Functionality** - ability to require an existing entity as a feature to another model and only extract that model if the entity is detected. The functionality has not changed but the API and terminology have changed.
-
-|Previous terminology|New terminology|
-|--|--|
-|`constrained feature`<br>`constraint`<br>`instanceOf`|`required feature`<br>`isRequired`|
-
-#### Automatic migration
-
-Starting **June 19, 2020**, you won't be allowed to create constraints programmatically using the previous authoring API that exposed this functionality.
-
-All existing constraint features are automatically migrated to the required feature flag. No programmatic changes are required to your prediction API, and there's no resulting change in prediction accuracy.
-
-#### LUIS portal changes
-
-The LUIS preview portal referenced this functionality as a **constraint**. The current LUIS portal designates this functionality as a **required feature**.
-
-#### Previous authoring API
-
-This functionality was applied in the preview authoring **[Create Entity Child API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5d86cf3c6a25a45529767d77)** as the part of an entity's definition, using the `instanceOf` property of an entity's child:
-
-```json
-{
- "name" : "dayOfWeek",
- "instanceOf": "datetimeV2",
- "children": [
- {
- "name": "dayNumber",
- "instanceOf": "number",
- "children": []
- }
- ]
-}
-```
-
-#### New authoring API
-
-This functionality is now applied with the **[Add entity feature relation API](https://westus.dev.cognitive.microsoft.com/docs/services/luis-programmatic-apis-v3-0-preview/operations/5d9dc1781e38aaec1c375f26)** using the `featureName` and `isRequired` properties. The value of the `featureName` property is the name of the model.
-
-```json
-{
- "featureName": "YOUR-MODEL-NAME-HERE",
- "isRequired" : true
-}
-```
--
-## Next steps
-
-* [Developer resources](developer-reference-resource.md)
ai-services Migrate From Composite Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/LUIS/migrate-from-composite-entity.md
- Title: Upgrade composite entity - LUIS
-description: Upgrade composite entity to machine-learning entity with upgrade process in the LUIS portal.
------ Previously updated : 01/19/2024--
-# Upgrade composite entity to machine-learning entity
---
-Upgrade a composite entity to a machine-learning entity to build an entity that receives more complete predictions, with better decomposability for debugging.
-
-## Current version model restrictions
-
-The upgrade process creates machine-learning entities, based on the existing composite entities found in your app, in a new version of your app. This includes composite entity children and roles. The process also switches the labels in example utterances to use the new entity.
-
-## Upgrade process
-
-The upgrade process:
-* Creates new machine-learning entity for each composite entity.
-* Child entities:
- * If the child entity is used only in the composite entity, it's added only to the machine-learning entity.
- * If child entity is used in composite _and_ as a separate entity (labeled in example utterances), it will be added to the version as an entity and as a subentity to the new machine-learning entity.
- * If the child entity uses a role, each role will be converted into a subentity of the same name.
- * If the child entity is a non-machine-learning entity (regular expression, list entity, or prebuilt entity), a new subentity is created with the same name, and the new subentity has a feature using the non-machine-learning entity with the required feature added.
-* Names are retained but must be unique at the same subentity/sibling level. Refer to [unique naming limits](./luis-limits.md#name-uniqueness).
-* Labels in example utterances are switched to new machine-learning entity with subentities.
-
-Use the following chart to understand how your model changes:
-
-|Old object|New object|Notes|
-|--|--|--|
-|Composite entity|machine-learning entity with structure|Both objects are parent objects.|
-|Composite's child entity is **simple entity**|subentity|Both objects are child objects.|
-|Composite's child entity is **Prebuilt entity** such as Number|subentity with name of Prebuilt entity such as Number, and subentity has _feature_ of Prebuilt Number entity with constraint option set to _true_.|subentity contains feature with constraint at subentity level.|
-|Composite's child entity is **Prebuilt entity** such as Number, and prebuilt entity has a **role**|subentity with name of role, and subentity has feature of Prebuilt Number entity with constraint option set to true.|subentity contains feature with constraint at subentity level.|
-|Role|subentity|The role name becomes the subentity name. The subentity is a direct descendant of the machine-learning entity.|
-
-## Begin upgrade process
-
-Before updating, make sure to:
-
-* Change versions if you are not on the correct version to upgrade
--
-1. Begin the upgrade process from the notification or you can wait until the next scheduled prompt.
-
- > [!div class="mx-imgBorder"]
- > ![Begin upgrade from notifications](./media/update-composite-entity/notification-begin-update.png)
-
-1. On the pop-up, select **Upgrade now**.
-
-1. Review the **What happens when you upgrade** information then select **Continue**.
-
-1. Select the composite entities from the list to upgrade then select **Continue**.
-
-1. You can move any untrained changes from the current version into the upgraded version by selecting the checkbox.
-
-1. Select **Continue** to begin the upgrade process.
-
-1. The progress bar indicates the status of the upgrade process.
-
-1. When the process is done, you are on a new trained version with the new machine-learning entities. Select **Try your new entities** to see the new entity.
-
- If the upgrade or training failed for the new entity, a notification provides more information.
-
-1. On the Entity list page, the new entities are marked with **NEW** next to the type name.
-
-## Next steps
-
-* [Authors and collaborators](luis-how-to-collaborate.md)
ai-studio Configure Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-private-link.md
We have two network isolation aspects. One is the network isolation to access an
You get several Azure AI default resources in your resource group. You need to configure following network isolation configurations. -- Disable public network access flag of Azure AI default resources such as Storage, Key Vault, Container Registry. Azure AI services and Azure AI Search should be public.
+- Disable public network access flag of Azure AI default resources such as Storage, Key Vault, Container Registry.
- Establish private endpoint connection to Azure AI default resource. Note that you need to have blob and file PE for the default storage account. - [Managed identity configurations](#managed-identity-configuration) to allow Azure AI hub resources access your storage account if it's private.
+- Azure AI services and Azure AI Search should be public.
## Prerequisites
See [this documentation](../../machine-learning/how-to-custom-dns.md#find-the-ip
- [Create a project](create-projects.md) - [Learn more about Azure AI Studio](../what-is-ai-studio.md) - [Learn more about Azure AI hub resources](../concepts/ai-resources.md)-- [Troubleshoot secure connectivity to a project](troubleshoot-secure-connection-project.md)
+- [Troubleshoot secure connectivity to a project](troubleshoot-secure-connection-project.md)
aks Cis Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cis-kubernetes.md
Recommendations can have one of the following statuses:
|5.1.12|Minimize access to webhook configuration objects|Not Scored|L1|Depends on Environment| |5.1.13|Minimize access to the service account token creation|Not Scored|L1|Depends on Environment| |5.2|Pod Security Policies||||
-|5.2.1|Ensure that the clsuter has at least one active policy control mechanism in place|Not Scored|L1|Depends on Environment|
+|5.2.1|Ensure that the cluster has at least one active policy control mechanism in place|Not Scored|L1|Depends on Environment|
|5.2.2|Minimize the admission of privileged containers|Not Scored|L1|Depends on Environment| |5.2.3|Minimize the admission of containers wishing to share the host process ID namespace|Scored|L1|Depends on Environment| |5.2.4|Minimize the admission of containers wishing to share the host IPC namespace|Scored|L1|Depends on Environment|
For more information about AKS security, see the following articles:
<!-- INTERNAL LINKS --> [cis-benchmarks]: /compliance/regulatory/offering-CIS-Benchmark
-[security-concepts-aks-apps-clusters]: concepts-security.md
+[security-concepts-aks-apps-clusters]: concepts-security.md
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS) description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster Previously updated : 05/31/2023 Last updated : 02/16/2024
The CSI storage driver support on AKS allows you to natively use:
## Prerequisites - You need the Azure CLI version 2.42 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].-- If the open-source CSI Blob storage driver is installed on your cluster, uninstall it before enabling the Azure Blob storage driver.
+- If the open-source CSI storage driver is installed on your cluster, uninstall it before enabling the Azure storage CSI driver.
- To enforce the Azure Policy for AKS [policy definition][azure-policy-aks-definition] **Kubernetes clusters should use Container Storage Interface(CSI) driver StorageClass**, the Azure Policy add-on needs to be enabled on new and existing clusters. For an existing cluster, review the [Learn Azure Policy for Kubernetes][learn-azure-policy-kubernetes] to enable it. ## Disk encryption supported scenarios
aks Howto Deploy Java Wls App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-wls-app.md
The other values in the outputs are beyond the scope of this article, but are ex
```sql CREATE TABLE COFFEE (ID NUMERIC(19) NOT NULL, NAME VARCHAR(255) NULL, PRICE FLOAT(32) NULL, PRIMARY KEY (ID)); CREATE TABLE SEQUENCE (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT NUMERIC(28) NULL, PRIMARY KEY (SEQ_NAME));
+ INSERT INTO SEQUENCE VALUES ('SEQ_GEN',0);
``` After a successful run, you should see the message **Query succeeded: Affected rows: 0**. If you don't see this message, troubleshoot and resolve the problem before proceeding.
In the previous steps, you created the auxiliary image including models and WDT.
kubectl -n ${WLS_DOMAIN_NS} create secret generic \ ${SECRET_NAME} \
- --from-literal=password='${DB_PASSWORD}' \
- --from-literal=url='${DB_CONNECTION_STRING}' \
- --from-literal=user='${DB_USER}'
+ --from-literal=password="${DB_PASSWORD}" \
+ --from-literal=url="${DB_CONNECTION_STRING}" \
+ --from-literal=user="${DB_USER}"
kubectl -n ${WLS_DOMAIN_NS} label secret \ ${SECRET_NAME} \
Use the following steps to verify the functionality of the deployment by viewing
1. Sign in with the username `weblogic` and the password you entered when deploying WLS from the Azure portal. Recall that this value is `wlsAksCluster2022`.
+1. In the **Domain Structure** box, select **Services**.
+
+1. Under the **Services**, select **Data Sources**.
+
+1. In the **Summary of JDBC Data Sources** panel, select **Monitoring**. Your screen should look similar to the following example. You can see that the state of the data source is **Running** on the managed servers.
+
+ :::image type="content" source="media/howto-deploy-java-wls-app/datasource-state.png" alt-text="Screenshot of data source state." border="false":::
+ 1. In the **Domain Structure** box, select **Deployments**. 1. In the **Deployments** table, there should be one row. The name should be the same value as the `Application` value in your *appmodel.yaml* file. Select the name.
aks Istio Meshconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-meshconfig.md
+
+ Title: Configure Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+description: Configure Istio-based service mesh add-on for Azure Kubernetes Service (preview)
++ Last updated : 02/14/2024+++
+# Configure Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+
+Open-source Istio uses [MeshConfig][istio-meshconfig] to define mesh-wide settings for the Istio service mesh. Istio-based service mesh add-on for AKS builds on top of MeshConfig and classifies different properties as supported, allowed, and blocked.
+
+This article walks through how to configure Istio-based service mesh add-on for Azure Kubernetes Service and the support policy applicable for such configuration.
++
+## Prerequisites
+
+This guide assumes you followed the [documentation][istio-deploy-addon] to enable the Istio add-on on an AKS cluster.
+
+## Set up configuration on cluster
+
+1. Find out which revision of Istio is deployed on the cluster:
+
+ ```bash
+ az aks show -n $CLUSTER -g $RESOURCE_GROUP --query 'serviceMeshProfile'
+ ```
+
+ Output:
+
+ ```
+ {
+ "istio": {
+ "certificateAuthority": null,
+ "components": {
+ "egressGateways": null,
+ "ingressGateways": null
+ },
+ "revisions": [
+ "asm-1-18"
+ ]
+ },
+ "mode": "Istio"
+ }
+ ```
+
+2. Create a ConfigMap with the name `istio-shared-configmap-<asm-revision>` in the `aks-istio-system` namespace. For example, if your cluster is running asm-1-18 revision of mesh, then the ConfigMap needs to be named as `istio-shared-configmap-asm-1-18`. Mesh configuration has to be provided within the data section under mesh.
+
+ Example:
+
+ ```yaml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: istio-shared-configmap-asm-1-18
+ namespace: aks-istio-system
+ data:
+ mesh: |-
+ accessLogFile: /dev/stdout
+ defaultConfig:
+ holdApplicationUntilProxyStarts: true
+ ```
+ The values under `defaultConfig` are mesh-wide settings applied for Envoy sidecar proxy.
+
+> [!CAUTION]
+> A default ConfigMap (for example, `istio-asm-1-18` for revision asm-1-18) is created in `aks-istio-system` namespace on the cluster when the Istio addon is enabled. However, this default ConfigMap gets reconciled by the managed Istio addon and thus users should NOT directly edit this ConfigMap. Instead users should create a revision specific Istio shared ConfigMap (for example `istio-shared-configmap-asm-1-18` for revision asm-1-18) in the aks-istio-system namespace, and then the Istio control plane will merge this with the default ConfigMap, with the default settings taking precedence.
+
+### Mesh configuration and upgrades
+
+When you're performing [canary upgrade for Istio](./istio-upgrade.md), you need to create a separate ConfigMap for the new revision in the `aks-istio-system` namespace **before initiating the canary upgrade**. This way the configuration is available when the new revision's control plane is deployed on the cluster. For example, if you're upgrading the mesh from asm-1-18 to asm-1-19, you need to copy changes over from `istio-shared-configmap-asm-1-18` to create a new ConfigMap called `istio-shared-configmap-asm-1-19` in the `aks-istio-system` namespace.
+
+After the upgrade is completed or rolled back, you can delete the ConfigMap of the revision that was removed from the cluster.
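+
+For example, if asm-1-18's shared ConfigMap enables access logging, the new revision's ConfigMap might look like the following sketch (copy your actual settings from the existing revision's ConfigMap):
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: istio-shared-configmap-asm-1-19
+  namespace: aks-istio-system
+data:
+  mesh: |-
+    accessLogFile: /dev/stdout
+```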
+
+## Allowed, supported, and blocked values
+
+Fields in `MeshConfig` are classified into three categories:
+
+- **Blocked**: Disallowed fields are blocked via addon-managed admission webhooks. The API server immediately returns an error message to the user that the field is disallowed.
+- **Supported**: Supported fields (for example, fields related to access logging) receive support from Azure support.
+- **Allowed**: These fields (such as proxyListenPort or proxyInboundListenPort) are allowed but they aren't covered by Azure support.
+
+Mesh configuration and the list of allowed/supported fields are revision specific, to account for fields being added or removed across revisions. The full list of allowed fields, and the supported and unsupported fields within the allowed list, is provided in the following table. When a new mesh revision is made available, any changes to the allowed and supported classification of fields are noted in this table.
+
+### MeshConfig
+
+| **Field** | **Supported** |
+|--||
+| proxyListenPort | false |
+| proxyInboundListenPort | false |
+| proxyHttpPort | false |
+| connectTimeout | false |
+| tcpKeepAlive | false |
+| defaultConfig | true |
+| outboundTrafficPolicy | true |
+| extensionProviders | true |
+| defaultProviders | true |
+| accessLogFile | true |
+| accessLogFormat | true |
+| accessLogEncoding | true |
+| enableTracing | true |
+| enableEnvoyAccessLogService | true |
+| disableEnvoyListenerLog | true |
+| trustDomain | false |
+| trustDomainAliases | false |
+| caCertificates | false |
+| defaultServiceExportTo | false |
+| defaultVirtualServiceExportTo | false |
+| defaultDestinationRuleExportTo | false |
+| localityLbSetting | false |
+| dnsRefreshRate | false |
+| h2UpgradePolicy | false |
+| enablePrometheusMerge | true |
+| discoverySelectors | true |
+| pathNormalization | false |
+| defaultHttpRetryPolicy | false |
+| serviceSettings | false |
+| meshMTLS | false |
+| tlsDefaults | false |
+
+### ProxyConfig (meshConfig.defaultConfig)
+
+| **Field** | **Supported** |
+|--||
+| tracingServiceName | true |
+| drainDuration | true |
+| statsUdpAddress | false |
+| proxyAdminPort | false |
+| tracing | true |
+| concurrency | true |
+| envoyAccessLogService | true |
+| envoyMetricsService | true |
+| proxyMetadata | false |
+| statusPort | false |
+| extraStatTags | false |
+| proxyStatsMatcher | false |
+| terminationDrainDuration | true |
+| meshId | false |
+| holdApplicationUntilProxyStarts | true |
+| caCertificatesPem | false |
+| privateKeyProvider | false |
+
+Fields present in [open source MeshConfig reference documentation][istio-meshconfig] but not in the above table are blocked. For example, `configSources` is blocked.
+
+> [!CAUTION]
+> **Support scope of configurations:** Mesh configuration allows for extension providers such as self-managed instances of Zipkin or Apache Skywalking to be configured with the Istio addon. However, these extension providers are outside the support scope of the Istio addon. Any issues associated with extension tools are outside the support boundary of the Istio addon.
+
+## Common errors and troubleshooting tips
+
+- Ensure that the MeshConfig is indented with spaces instead of tabs.
+- Ensure that you're only editing the revision specific shared ConfigMap (for example `istio-shared-configmap-asm-1-18`) and not trying to edit the default ConfigMap (for example `istio-asm-1-18`).
+- The ConfigMap must follow the name `istio-shared-configmap-<asm-revision>` and be in the `aks-istio-system` namespace.
+- Ensure that all MeshConfig fields are spelled correctly. If they're unrecognized or if they aren't part of the allowed list, admission control denies such configurations.
+- When performing canary upgrades, [check your revision specific ConfigMaps](#mesh-configuration-and-upgrades) to ensure configurations exist for the revisions deployed on your cluster.
+- Certain `MeshConfig` options such as accessLogging may increase Envoy's resource consumption, and disabling some of these settings may mitigate Istio data plane resource utilization. It's also advisable to use the `discoverySelectors` field in the MeshConfig to help alleviate memory consumption for Istiod and Envoy.
+- If the `concurrency` field in the MeshConfig is misconfigured and set to zero, it causes Envoy to use up all CPU cores. If this field is unset instead, the number of worker threads to run is determined automatically based on CPU requests/limits.
+- [Pod and sidecar race conditions][istio-sidecar-race-condition] in which the application starts before Envoy can be mitigated using the `holdApplicationUntilProxyStarts` field in the MeshConfig.
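+
+As an illustrative sketch of the `discoverySelectors` tip (the `istio-discovery: enabled` label is an assumption, not a fixed convention), the following shared ConfigMap limits the namespaces Istiod watches to those matching the label:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: istio-shared-configmap-asm-1-18
+  namespace: aks-istio-system
+data:
+  mesh: |-
+    discoverySelectors:
+      - matchLabels:
+          istio-discovery: enabled
+```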
++
+[istio-meshconfig]: https://istio.io/latest/docs/reference/config/istio.mesh.v1alpha1/
+[istio-sidecar-race-condition]: https://istio.io/latest/docs/ops/common-problems/injection/#pod-or-containers-start-with-network-issues-if-istio-proxy-is-not-ready
aks Istio Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-upgrade.md
Title: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview)
-description: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview)
+description: Upgrade Istio-based service mesh add-on for Azure Kubernetes Service (preview).
Last updated 05/04/2023
Istio add-on allows upgrading the minor version using [canary upgrade process][i
If the cluster is currently using a supported minor version of Istio, upgrades are only allowed one minor version at a time. If the cluster is using an unsupported version of Istio, you must upgrade to the lowest supported minor version of Istio for that Kubernetes version. After that, upgrades can again be done one minor version at a time.
-The following example illustrates how to upgrade from revision `asm-1-17` to `asm-1-18`. The steps are the same for all minor upgrades.
+The following example illustrates how to upgrade from revision `asm-1-18` to `asm-1-19`. The steps are the same for all minor upgrades.
1. Use the [az aks mesh get-upgrades](/cli/azure/aks/mesh#az-aks-mesh-get-upgrades) command to check which revisions are available for the cluster as upgrade targets:
The following example illustrates how to upgrade from revision `asm-1-17` to `as
If you expect to see a newer revision not returned by this command, you may need to upgrade your AKS cluster first so that it's compatible with the newest revision.
-1. Initiate a canary upgrade from revision `asm-1-17` to `asm-1-18` using [az aks mesh upgrade start](/cli/azure/aks/mesh#az-aks-mesh-upgrade-start):
+1. If you've set up [mesh configuration][meshconfig] for the existing mesh revision on your cluster, you need to create a separate ConfigMap corresponding to the new revision in the `aks-istio-system` namespace **before initiating the canary upgrade** in the next step. This configuration is applicable the moment the new revision's control plane is deployed on cluster. More details can be found [here][meshconfig-canary-upgrade].
+
+1. Initiate a canary upgrade from revision `asm-1-18` to `asm-1-19` using [az aks mesh upgrade start](/cli/azure/aks/mesh#az-aks-mesh-upgrade-start):
```bash az aks mesh upgrade start --resource-group $RESOURCE_GROUP --name $CLUSTER --revision asm-1-18
A canary upgrade means the 1.19 control plane is deployed alongside the 1.18 control plane. They continue to coexist until you either complete or roll back the upgrade.
-1. Verify control plane pods corresponding to both `asm-1-17` and `asm-1-18` exist:
+1. Verify control plane pods corresponding to both `asm-1-18` and `asm-1-19` exist:
* Verify `istiod` pods:
``` NAME READY STATUS RESTARTS AGE
- istiod-asm-1-17-55fccf84c8-dbzlt 1/1 Running 0 58m
- istiod-asm-1-17-55fccf84c8-fg8zh 1/1 Running 0 58m
- istiod-asm-1-18-f85f46bf5-7rwg4 1/1 Running 0 51m
- istiod-asm-1-18-f85f46bf5-8p9qx 1/1 Running 0 51m
+ istiod-asm-1-18-55fccf84c8-dbzlt 1/1 Running 0 58m
+ istiod-asm-1-18-55fccf84c8-fg8zh 1/1 Running 0 58m
+ istiod-asm-1-19-f85f46bf5-7rwg4 1/1 Running 0 51m
+ istiod-asm-1-19-f85f46bf5-8p9qx 1/1 Running 0 51m
``` * If ingress is enabled, verify ingress pods:
``` NAME READY STATUS RESTARTS AGE
- aks-istio-ingressgateway-external-asm-1-17-58f889f99d-qkvq2 1/1 Running 0 59m
- aks-istio-ingressgateway-external-asm-1-17-58f889f99d-vhtd5 1/1 Running 0 58m
- aks-istio-ingressgateway-external-asm-1-18-7466f77bb9-ft9c8 1/1 Running 0 51m
- aks-istio-ingressgateway-external-asm-1-18-7466f77bb9-wcb6s 1/1 Running 0 51m
- aks-istio-ingressgateway-internal-asm-1-17-579c5d8d4b-4cc2l 1/1 Running 0 58m
- aks-istio-ingressgateway-internal-asm-1-17-579c5d8d4b-jjc7m 1/1 Running 0 59m
- aks-istio-ingressgateway-internal-asm-1-18-757d9b5545-g89s4 1/1 Running 0 51m
- aks-istio-ingressgateway-internal-asm-1-18-757d9b5545-krq9w 1/1 Running 0 51m
+ aks-istio-ingressgateway-external-asm-1-18-58f889f99d-qkvq2 1/1 Running 0 59m
+ aks-istio-ingressgateway-external-asm-1-18-58f889f99d-vhtd5 1/1 Running 0 58m
+ aks-istio-ingressgateway-external-asm-1-19-7466f77bb9-ft9c8 1/1 Running 0 51m
+ aks-istio-ingressgateway-external-asm-1-19-7466f77bb9-wcb6s 1/1 Running 0 51m
+ aks-istio-ingressgateway-internal-asm-1-18-579c5d8d4b-4cc2l 1/1 Running 0 58m
+ aks-istio-ingressgateway-internal-asm-1-18-579c5d8d4b-jjc7m 1/1 Running 0 59m
+ aks-istio-ingressgateway-internal-asm-1-19-757d9b5545-g89s4 1/1 Running 0 51m
+ aks-istio-ingressgateway-internal-asm-1-19-757d9b5545-krq9w 1/1 Running 0 51m
``` Observe that ingress gateway pods of both revisions are deployed side-by-side. However, the service and its IP remain immutable.
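The listings above can be reproduced with `kubectl` against the add-on's system namespace, for example:

```shell
# List control plane and ingress gateway pods for both revisions; during a
# canary upgrade, pods for asm-1-18 and asm-1-19 appear side by side.
kubectl get pods -n aks-istio-system
# Filter to just the new revision's pods:
kubectl get pods -n aks-istio-system | grep 'asm-1-19'
```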
1. Relabel the namespace so that any new pods get the Istio sidecar associated with the new revision and its control plane: ```bash
- kubectl label namespace default istio.io/rev=asm-1-18 --overwrite
+ kubectl label namespace default istio.io/rev=asm-1-19 --overwrite
``` Relabeling doesn't affect your workloads until they're restarted.
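A sketch of restarting workloads after relabeling (shown for the `default` namespace; restart per workload as appropriate for your environment):

```shell
# Restart all deployments in the namespace so new pods pick up sidecars
# injected by the asm-1-19 control plane.
kubectl rollout restart deployment -n default
# Confirm the namespace carries the new revision label:
kubectl get namespace default --show-labels | grep 'istio.io/rev=asm-1-19'
```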
* Relabel the namespace to the previous revision ```bash
- kubectl label namespace default istio.io/rev=asm-1-17 --overwrite
+ kubectl label namespace default istio.io/rev=asm-1-18 --overwrite
``` * Roll back the workloads to use the sidecar corresponding to the previous Istio revision by restarting these workloads again:
az aks mesh upgrade rollback --resource-group $RESOURCE_GROUP --name $CLUSTER ```
+1. If [mesh configuration][meshconfig] was set up for the revisions in previous steps, you can now delete the ConfigMap for the revision that was removed from the cluster on completing or rolling back the upgrade.
+ > [!NOTE] > Manually relabeling namespaces when moving them to a new revision can be tedious and error-prone. [Revision tags](https://istio.io/latest/docs/setup/upgrade/canary/#stable-revision-labels) solve this problem. Revision tags are stable identifiers that point to revisions and can be used to avoid relabeling namespaces. Rather than relabeling the namespace, a mesh operator can simply change the tag to point to a new revision. All namespaces labeled with that tag will be updated at the same time. However, note that you still need to restart the workloads to make sure the correct version of `istio-proxy` sidecars are injected. ## Patch version upgrade
-* Istio add-on patch version availability information is published in [AKS weekly release notes][aks-release-notes].
-* Patches are rolled out automatically for istiod and ingress pods as part of these AKS weekly releases, which respect the `default` [planned maintenance window](./planned-maintenance.md) set up for the cluster.
+* Istio add-on patch version availability information is published in [AKS release notes][aks-release-notes].
+* Patches are rolled out automatically for istiod and ingress pods as part of these AKS releases, which respect the `default` [planned maintenance window](./planned-maintenance.md) set up for the cluster.
* Users need to initiate patches to the Istio proxy in their workloads by restarting the pods for reinjection: * Check the version of the Istio proxy intended for new or restarted pods. This version is the same as the version of the istiod and Istio ingress pods after they were patched:
Example output: ```bash
- "image": "mcr.microsoft.com/oss/istio/proxyv2:1.17.2-distroless",
- "image": "mcr.microsoft.com/oss/istio/proxyv2:1.17.2-distroless"
+ "image": "mcr.microsoft.com/oss/istio/proxyv2:1.18.2-distroless",
+ "image": "mcr.microsoft.com/oss/istio/proxyv2:1.18.2-distroless"
``` * Check the Istio proxy image version for all pods in a namespace:
Example output: ```bash
- productpage-v1-979d4d9fc-p4764: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0, mcr.microsoft.com/oss/istio/proxyv2:1.17.1-distroless
+ productpage-v1-979d4d9fc-p4764: docker.io/istio/examples-bookinfo-productpage-v1:1.18.0, mcr.microsoft.com/oss/istio/proxyv2:1.18.1-distroless
``` * To trigger reinjection, restart the workloads. For example:
Example output: ```bash
- productpage-v1-979d4d9fc-p4764: docker.io/istio/examples-bookinfo-productpage-v1:1.17.0, mcr.microsoft.com/oss/istio/proxyv2:1.17.2-distroless
+ productpage-v1-979d4d9fc-p4764: docker.io/istio/examples-bookinfo-productpage-v1:1.18.0, mcr.microsoft.com/oss/istio/proxyv2:1.18.2-distroless
``` [aks-release-notes]: https://github.com/Azure/AKS/releases [istio-canary-upstream]: https://istio.io/latest/docs/setup/upgrade/canary/
+[meshconfig]: ./istio-meshconfig.md
+[meshconfig-canary-upgrade]: ./istio-meshconfig.md#mesh-configuration-and-upgrades
aks Private Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/private-clusters.md
az aks create \
--enable-private-cluster \ --network-plugin azure \ --vnet-subnet-id <subnet-id> \
- --docker-bridge-address 172.17.0.1/16 \
--dns-service-ip 10.2.0.10 \ --service-cidr 10.2.0.0/24 ```
-> [!NOTE]
-> If the Docker bridge address CIDR *172.17.0.1/16* clashes with the subnet CIDR, change the Docker bridge address.
- ## Use custom domains If you want to configure custom domains that can only be resolved internally, see [Use custom domains][use-custom-domains].
app-service How To Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-migrate.md
Title: Use the migration feature to migrate your App Service Environment to App Service Environment v3
-description: Learn how to migrate your App Service Environment to App Service Environment v3 by using the migration feature.
+ Title: Use the in-place migration feature to migrate your App Service Environment to App Service Environment v3
+description: Learn how to migrate your App Service Environment to App Service Environment v3 by using the in-place migration feature.
Previously updated : 1/16/2024 Last updated : 2/12/2024 zone_pivot_groups: app-service-cli-portal
-# Use the migration feature to migrate App Service Environment v1 and v2 to App Service Environment v3
+# Use the in-place migration feature to migrate App Service Environment v1 and v2 to App Service Environment v3
-You can automatically migrate App Service Environment v1 and v2 to [App Service Environment v3](overview.md) by using the migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the migration feature](migrate.md).
+> [!NOTE]
+> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side by side migration feature, see [Migrate to App Service Environment v3 by using the side by side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md).
+>
+
+You can automatically migrate App Service Environment v1 and v2 to [App Service Environment v3](overview.md) by using the in-place migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the in-place migration feature](migrate.md).
> [!IMPORTANT] > We recommend that you use this feature for development environments before migrating any production environments, to avoid unexpected problems. Please provide any feedback related to this article or the feature by using the buttons at the bottom of the page.
+>
## Prerequisites
-Ensure that you understand how migrating to App Service Environment v3 affects your applications. Review the [migration process](migrate.md#overview-of-the-migration-process-using-the-migration-feature) to understand the process timeline and where and when you need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which can answer some of your questions.
+Ensure that you understand how migrating to App Service Environment v3 affects your applications. Review the [migration process](migrate.md#overview-of-the-migration-process-using-the-in-place-migration-feature) to understand the process timeline and where and when you need to get involved. Also review the [FAQs](migrate.md#frequently-asked-questions), which can answer some of your questions.
Ensure that there are no locks on your virtual network, resource group, resource, or subscription. Locks block platform operations during migration. Ensure that no Azure policies are blocking actions that are required for the migration, including subnet modifications and Azure App Service resource creations. Policies that block resource modifications and creations can cause migration to get stuck or fail.
+Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. If you need to scale your environment after the migration, you can do so once the migration is complete.
+ ::: zone pivot="experience-azcli"
-We recommend that you use the [Azure portal](how-to-migrate.md?pivots=experience-azp) for the migration experience. If you decide to use the [Azure CLI](/cli/azure/) for the migration, follow the steps described here in order and as written, because you're making Azure REST API calls. We recommend that you use the Azure CLI to make these API calls. For information about other methods, see [Azure REST API reference](/rest/api/azure/).
+We recommend that you use the [Azure portal](how-to-migrate.md?pivots=experience-azp) for the in-place migration experience. If you decide to use the [Azure CLI](/cli/azure/) for the migration, follow the steps described here in order and as written, because you're making Azure REST API calls. We recommend that you use the Azure CLI to make these API calls. For information about other methods, see [Azure REST API reference](/rest/api/azure/).
For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use [Azure Cloud Shell](https://shell.azure.com/).
ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --quer
## 2. Validate that migration is supported
-The following command checks whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the potential error messages that you can get. If your environment [isn't supported for migration](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
+The following command checks whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. See the [troubleshooting](migrate.md#troubleshooting) section for descriptions of the potential error messages that you can get. If your environment [isn't supported for migration using the in-place migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the in-place migration feature, see the [manual migration options](migration-alternatives.md).
```azurecli az rest --method post --uri "${ASE_ID}/migrate?api-version=2021-02-01&phase=validation"
If your existing App Service Environment uses a custom domain suffix, you need t
> [!NOTE] > If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment's new outbound IP addresses that were generated in step 3.
+>
If your migration doesn't include a custom domain suffix and you aren't enabling zone redundancy, you can move on to migration.
On the **Migration** page, the platform validates if migration is supported for
If your environment isn't supported for migration, a banner appears at the top of the page and includes an error message with a reason. For descriptions of the error messages that can appear if you aren't eligible for migration, see [Troubleshooting](migrate.md#troubleshooting).
-If your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state, you can't use the migration feature. If your environment [isn't supported for migration with the migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the migration feature, see the [manual migration options](migration-alternatives.md).
+If your App Service Environment isn't supported for migration at this time or your environment is in an unhealthy or suspended state, you can't use the migration feature. If your environment [isn't supported for migration with the in-place migration feature](migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the in-place migration feature, see the [manual migration options](migration-alternatives.md).
:::image type="content" source="./media/migration/migration-not-supported.png" alt-text="Screenshot that shows an example portal message that says the migration feature doesn't support the App Service Environment.":::
app-service How To Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-side-by-side-migrate.md
+
+ Title: Use the side by side migration feature to migrate your App Service Environment v2 to App Service Environment v3
+description: Learn how to migrate your App Service Environment v2 to App Service Environment v3 by using the side by side migration feature.
+++ Last updated : 2/15/2024+
+# zone_pivot_groups: app-service-cli-portal
+
+# Use the side by side migration feature to migrate App Service Environment v2 to App Service Environment v3 (Preview)
+
+> [!NOTE]
+> The migration feature described in this article is used for side by side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**.
+>
+> If you're looking for information on the in-place migration feature, see [Migrate to App Service Environment v3 by using the in-place migration feature](migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md).
+>
+
+You can automatically migrate App Service Environment v2 to [App Service Environment v3](overview.md) by using the side by side migration feature. To learn more about the migration process and to see if your App Service Environment supports migration at this time, see the [overview of the side by side migration feature](side-by-side-migrate.md).
+
+> [!IMPORTANT]
+> We recommend that you use this feature for development environments before migrating any production environments, to avoid unexpected problems. Please provide any feedback related to this article or the feature by using the buttons at the bottom of the page.
+>
+
+## Prerequisites
+
+Ensure that you understand how migrating to App Service Environment v3 affects your applications. Review the [migration process](side-by-side-migrate.md#overview-of-the-migration-process-using-the-side-by-side-migration-feature) to understand the process timeline and where and when you need to get involved. Also review the [FAQs](side-by-side-migrate.md#frequently-asked-questions), which can answer some of your questions.
+
+Ensure that there are no locks on your virtual network, resource groups, resources, or subscription. Locks block platform operations during migration.
+
+Ensure that no Azure policies are blocking actions that are required for the migration, including subnet modifications and Azure App Service resource creations. Policies that block resource modifications and creations can cause migration to get stuck or fail.
+
+Since your App Service Environment v3 is in a different subnet in your virtual network, you need to ensure that you have an available subnet in your virtual network that meets the [subnet requirements for App Service Environment v3](./networking.md#subnet-requirements). The subnet you select must also be able to communicate with the subnet that your existing App Service Environment is in. Ensure there's nothing blocking communication between the two subnets. If you don't have an available subnet, you need to create one before migrating. Creating a new subnet might involve increasing your virtual network address space. For more information, see [Create a virtual network and subnet](../../virtual-network/manage-virtual-network.md).
+
+Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration. If you need to scale your environment after the migration, you can do so once the migration is complete.
+
+Follow the steps described here in order and as written, because you're making Azure REST API calls. We recommend that you use the Azure CLI to make these API calls. For information about other methods, see [Azure REST API reference](/rest/api/azure/).
+
+For this guide, [install the Azure CLI](/cli/azure/install-azure-cli) or use [Azure Cloud Shell](https://shell.azure.com/).
+
+## 1. Select the subnet for your new App Service Environment v3
+
+Select a subnet in your virtual network for your new App Service Environment v3 that meets the [subnet requirements for App Service Environment v3](./networking.md#subnet-requirements). Note the name of the subnet you select. This subnet must be different from the subnet your existing App Service Environment is in.
+
+## 2. Get your App Service Environment ID
+
+Run the following commands to get your App Service Environment ID and store it as an environment variable. Replace the placeholders for the name and resource groups with your values for the App Service Environment that you want to migrate. `ASE_RG` and `VNET_RG` are the same if your virtual network and App Service Environment are in the same resource group.
+
+```azurecli
+ASE_NAME=<Your-App-Service-Environment-name>
+ASE_RG=<Your-ASE-Resource-Group>
+VNET_RG=<Your-VNet-Resource-Group>
+ASE_ID=$(az appservice ase show --name $ASE_NAME --resource-group $ASE_RG --query id --output tsv)
+```
+
+## 3. Validate migration is supported
+
+The following command checks whether your App Service Environment is supported for migration. If you receive an error or if your App Service Environment is in an unhealthy or suspended state, you can't migrate at this time. See the [troubleshooting](side-by-side-migrate.md#troubleshooting) section for descriptions of the potential error messages that you can get. If your environment [isn't supported for migration using the side by side migration feature](side-by-side-migrate.md#supported-scenarios) or you want to migrate to App Service Environment v3 without using the side by side migration feature, see the [manual migration options](migration-alternatives.md).
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=Validation&api-version=2022-03-01"
+```
+
+If there are no errors, your migration is supported, and you can continue to the next step.
+
+## 4. Generate outbound IP addresses for your new App Service Environment v3
+
+Create a file called *zoneredundancy.json* with the following details for your region and zone redundancy selection.
+
+```json
+{
+ "location":"<region>",
+ "Properties": {
+ "zoneRedundant": "<true/false>"
+ }
+}
+```
+
+You can make your new App Service Environment v3 zone redundant if your existing environment is in a [region that supports zone redundancy](./overview.md#regions). Zone redundancy can be configured by setting the `zoneRedundant` property to `true`. Zone redundancy is an optional configuration. This configuration can only be set during the creation of your new App Service Environment v3 and can't be removed at a later time.
+
+Run the following command to create new outbound IP addresses. This step takes about 15 minutes to complete. Don't scale or make changes to your existing App Service Environment during this time.
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=PreMigration&api-version=2022-03-01" --body @zoneredundancy.json
+```
+
+Run the following command to check the status of this step:
+
+```azurecli
+az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.status
+```
+
+If the step is in progress, you get a status of `Migrating`. After you get a status of `Ready`, run the following command to view your new outbound IPs. If you don't see the new IPs immediately, wait a few minutes and try again.
+
+```azurecli
+az rest --method get --uri "${ASE_ID}?api-version=2022-03-01"
+```
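If you prefer not to poll manually, a hypothetical convenience loop can watch the status until it reaches `Ready` (checks every 60 seconds; press Ctrl+C to stop early):

```shell
# Poll the App Service Environment status until the IP generation step
# completes. ASE_ID is assumed to be set from the earlier steps.
while true; do
  STATUS=$(az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" \
    --query properties.status --output tsv)
  echo "status: ${STATUS}"
  if [ "${STATUS}" = "Ready" ]; then
    break
  fi
  sleep 60
done
```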
+
+## 5. Update dependent resources with new outbound IPs
+
+By using the new IPs, update any of your resources or networking components to ensure that your new environment functions as intended after migration is complete. It's your responsibility to make any necessary updates.
+
+This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3. These changes include the port change for Azure Load Balancer, which now uses port 80. Don't migrate until you complete this step.
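For example, if a network security group restricts traffic by source address, a hypothetical update might allowlist the new IPs (the NSG name, rule name, and addresses below are placeholders, not values from this article):

```shell
# Allowlist the new outbound IPs on an existing NSG rule. Replace the
# placeholder names and use the IPs returned by the previous step.
NEW_OUTBOUND_IPS="20.0.0.4 20.0.0.5"
az network nsg rule update \
  --resource-group $VNET_RG \
  --nsg-name <nsg-name> \
  --name <rule-name> \
  --source-address-prefixes $NEW_OUTBOUND_IPS
```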
+
+## 6. Delegate your App Service Environment subnet
+
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Previous versions didn't require this delegation. You need to confirm that your subnet is delegated properly and update the delegation (if necessary) before migrating. You can update the delegation either by running the following command or by going to the subnet in the [Azure portal](https://portal.azure.com).
+
+```azurecli
+az network vnet subnet update --resource-group $VNET_RG --name <subnet-name> --vnet-name <vnet-name> --delegations Microsoft.Web/hostingEnvironments
+```
+
+## 7. Confirm there are no locks on the virtual network
+
+Virtual network locks block platform operations during migration. If your virtual network has locks, you need to remove them before migrating. If necessary, you can add back the locks after migration is complete.
+
+Locks can exist at three scopes: subscription, resource group, and resource. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. If you have locks applied at the subscription, resource group, or resource scope, you need to remove them before the migration. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
+
+Use the following command to check if your virtual network has any locks:
+
+```azurecli
+az lock list --resource-group $VNET_RG --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks
+```
+
+Delete any existing locks by using the following command:
+
+```azurecli
+az lock delete --resource-group $VNET_RG --name <lock-name> --resource <vnet-name> --resource-type Microsoft.Network/virtualNetworks
+```
+
+For related commands to check if your subscription or resource group has locks, see the [Azure CLI reference for locks](../../azure-resource-manager/management/lock-resources.md#azure-cli).
+
+## 8. Prepare your configurations
+
+If your existing App Service Environment uses a custom domain suffix, you can [configure one for your new App Service Environment v3 resource during the migration process](./side-by-side-migrate.md#add-a-custom-domain-suffix-optional). Configuring a custom domain suffix is optional. If your App Service Environment v2 has a custom domain suffix and you don't want to use it on your new App Service Environment v3, skip this step. If you previously didn't have a custom domain suffix but want one, you can configure one at this point or at any time once migration is complete. For more information on App Service Environment v3 custom domain suffixes, including requirements, step-by-step instructions, and best practices, see [Custom domain suffix for App Service Environments](./how-to-custom-domain-suffix.md).
+
+> [!NOTE]
+> If you're configuring a custom domain suffix, when you're adding the network permissions on your Azure key vault, be sure that your key vault allows access from your App Service Environment's new outbound IP addresses that were generated in step 4.
+>
+
+To set these configurations, including identifying the subnet you selected earlier, create another file called *parameters.json* with the following details based on your scenario. Be sure to use the new subnet that you selected for your new App Service Environment v3. Don't include the properties for a custom domain suffix if this feature doesn't apply to your migration. Set the `zoneRedundant` property to the same value that you used in the outbound IP generation step. **You must use the same value for zone redundancy in both steps.**
+
+If you're migrating without a custom domain suffix, use this code:
+
+```json
+{
+ "Properties": {
+ "VirtualNetwork": {
+ "Id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>/subnets/<subnet-name>"
+ },
+ "zoneRedundant": "<true/false>"
+ }
+}
+```
+
+If you're using a user assigned managed identity for your custom domain suffix configuration, use this code:
+
+```json
+{
+ "Properties": {
+ "VirtualNetwork": {
+ "Id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>/subnets/<subnet-name>"
+ },
+ "zoneRedundant": "<true/false>",
+ "customDnsSuffixConfiguration": {
+ "dnsSuffix": "internal-contoso.com",
+ "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",
+ "keyVaultReferenceIdentity": "/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/asev3-migration/providers/Microsoft.ManagedIdentity/userAssignedIdentities/ase-managed-identity"
+ }
+ }
+}
+```
+
+If you're using a system assigned managed identity for your custom domain suffix configuration, use this code:
+
+```json
+{
+ "properties": {
+ "VirtualNetwork": {
+ "Id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network-name>/subnets/<subnet-name>"
+ },
+ "zoneRedundant": "<true/false>",
+ "customDnsSuffixConfiguration": {
+ "dnsSuffix": "internal-contoso.com",
+ "certificateUrl": "https://contoso.vault.azure.net/secrets/myCertificate",
+ "keyVaultReferenceIdentity": "SystemAssigned"
+ }
+ }
+}
+```
+
+## 9. Migrate to App Service Environment v3 and check status
+
+After you complete all of the preceding steps, you can start the migration. Make sure that you understand the [implications of migration](side-by-side-migrate.md#migrate-to-app-service-environment-v3).
+
+This step takes three to six hours to complete. During that time, there's no application downtime. Scaling, deployments, and modifications to your existing App Service Environment are blocked during this step.
+
+Run the following command to start the migration:
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=HybridDeployment&api-version=2022-03-01" --body @parameters.json
+```
+
+Run the following command to check the status of your migration:
+
+```azurecli
+az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties.subStatus
+```
+
+After you get a status of `Ready`, migration is done, and you have an App Service Environment v3 resource. Your apps are now running in your new environment as well as in your old environment.
+
+Get the details of your new environment by running the following command or by going to the [Azure portal](https://portal.azure.com).
+
+```azurecli
+az appservice ase show --name $ASE_NAME --resource-group $ASE_RG
+```
+
+## 10. Get the inbound IP address for your new App Service Environment v3 and update dependent resources
+
+You have two App Service Environments at this stage in the migration process. Your apps are running in both environments. You need to update any dependent resources to use the new inbound IP address for your new App Service Environment v3. For internal-facing (ILB) App Service Environments, you need to update your private DNS zones to point to the new inbound IP address.
+
+You can get the inbound IP address for your new App Service Environment v3 by running the following command.
+
+```azurecli
+az rest --method get --uri "${ASE_ID}?api-version=2022-03-01"
+```
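For an ILB environment, a hypothetical sketch of pointing a private DNS A record at the new inbound IP follows; the zone name and record name below are placeholders, and you also need to remove the record for the old IP:

```shell
# Add an A record for the new App Service Environment v3 inbound IP.
# Replace the placeholders with your zone and record-set names.
NEW_INBOUND_IP=10.0.0.10   # example value; use the IP returned above
az network private-dns record-set a add-record \
  --resource-group $ASE_RG \
  --zone-name <private-dns-zone-name> \
  --record-set-name "*" \
  --ipv4-address $NEW_INBOUND_IP
```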
+
+## 11. Redirect customer traffic and complete migration
+
+This step is your opportunity to test and validate your new App Service Environment v3. Once you confirm your apps are working as expected, you can redirect customer traffic to your new environment by running the following command. This command also deletes your old environment.
+
+```azurecli
+az rest --method post --uri "${ASE_ID}/NoDowntimeMigrate?phase=DnsChange&api-version=2022-03-01"
+```
+
+If you find any issues or decide at this point that you no longer want to proceed with the migration, contact support to revert the migration. Don't run the above command if you need to revert the migration. For more information, see [Revert migration](side-by-side-migrate.md#redirect-customer-traffic-and-complete-migration).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Use an App Service Environment v3 resource](using.md)
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 networking](networking.md)
+
+> [!div class="nextstepaction"]
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
Title: Migrate to App Service Environment v3 by using the migration feature
-description: Overview of the migration feature for migration to App Service Environment v3
+ Title: Migrate to App Service Environment v3 by using the in-place migration feature
+description: Overview of the in-place migration feature for migration to App Service Environment v3.
Previously updated : 01/30/2024 Last updated : 02/15/2024
-# Migration to App Service Environment v3 using the migration feature
+# Migration to App Service Environment v3 using the in-place migration feature
-App Service can now automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+> [!NOTE]
+> The migration feature described in this article is used for in-place (same subnet) automated migration of App Service Environment v1 and v2 to App Service Environment v3. If you're looking for information on the side by side migration feature, see [Migrate to App Service Environment v3 by using the side by side migration feature](side-by-side-migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md).
+>
+
+App Service can automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). There are different migration options. Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case. App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+
+The in-place migration feature automates your migration to App Service Environment v3 by upgrading your existing App Service Environment in the same subnet. This migration option is best for customers who want to migrate to App Service Environment v3 with minimal changes to their networking configurations and can support about one hour of application downtime. If you can't support downtime, see the [side by side migration feature](side-by-side-migrate.md) or the [manual migration options](migration-alternatives.md).
> [!IMPORTANT]
> It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
App Service can now automate migration of your App Service Environment v1 and v2
## Supported scenarios
-At this time, the migration feature doesn't support migrations to App Service Environment v3 in the following regions:
+At this time, the in-place migration feature doesn't support migrations to App Service Environment v3 in the following regions:
### Microsoft Azure operated by 21Vianet

- China East 2
- China North 2
-The following App Service Environment configurations can be migrated using the migration feature. The table gives the App Service Environment v3 configuration when using the migration feature based on your existing App Service Environment. All supported App Service Environments can be migrated to a [zone redundant App Service Environment v3](../../availability-zones/migrate-app-service-environment.md) using the migration feature as long as the environment is [in a region that supports zone redundancy](./overview.md#regions). You can [configure zone redundancy](#choose-your-app-service-environment-v3-configurations) during the migration process.
+The following App Service Environment configurations can be migrated using the in-place migration feature. The table gives the App Service Environment v3 configuration when using the in-place migration feature based on your existing App Service Environment. All supported App Service Environments can be migrated to a [zone redundant App Service Environment v3](../../availability-zones/migrate-app-service-environment.md) using the in-place migration feature as long as the environment is [in a region that supports zone redundancy](./overview.md#regions). You can [configure zone redundancy](#choose-your-app-service-environment-v3-configurations) during the migration process.
|Configuration |App Service Environment v3 Configuration |
|--|--|
If you want your new App Service Environment v3 to use a custom domain suffix an
You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
-## Migration feature limitations
+## In-place migration feature limitations
-The following are limitations when using the migration feature:
+The following are limitations when using the in-place migration feature:
- Your new App Service Environment v3 is in the existing subnet that was used for your old environment.
- You can't change the region your App Service Environment is located in.
App Service Environment v3 doesn't support the following features that you can b
- Configuring an IP-based TLS/SSL binding with your apps.
- App Service Environment v3 doesn't fall back to Azure DNS if your configured custom DNS servers in the virtual network aren't able to resolve a given name. If this behavior is required, ensure that you have a forwarder to a public DNS or include Azure DNS in the list of custom DNS servers.
-The migration feature doesn't support the following scenarios. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into one of these categories.
+The in-place migration feature doesn't support the following scenarios. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into one of these categories.
-- App Service Environment v1 in a [Classic VNet](/previous-versions/azure/virtual-network/create-virtual-network-classic)
+- App Service Environment v1 in a [Classic virtual network](/previous-versions/azure/virtual-network/create-virtual-network-classic)
- ELB App Service Environment v2 with IP SSL addresses
- ELB App Service Environment v1 with IP SSL addresses
- App Service Environment in a region not listed in the supported regions
-The App Service platform reviews your App Service Environment to confirm migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
+The App Service platform reviews your App Service Environment to confirm in-place migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the in-place migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
> [!NOTE]
> App Service Environment v3 doesn't support IP SSL. If you use IP SSL, you must remove all IP SSL bindings before migrating to App Service Environment v3. The migration feature will support your environment once all IP SSL bindings are removed.
If your App Service Environment doesn't pass the validation checks or you try to
|Error message |Description |Recommendation |
|--|--|--|
-|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
-|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to be available in your region. |
-|Migration cannot be called on this ASE, please contact support for help migrating. |Support needs to be engaged for migrating this App Service Environment. This issue is potentially due to custom settings used by this environment. |Engage support to resolve your issue. |
+|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic VNets can't migrate using the in-place migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
+|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the in-place migration feature to be available in your region. |
+|Migration cannot be called on this ASE, please contact support for help migrating. |Support needs to be engaged for migrating this App Service Environment. This issue is potentially due to custom settings used by this environment. |Open a support case to engage support to resolve your issue. |
|Migrate cannot be called if IP SSL is enabled on any of the sites.|App Service Environments that have sites with IP SSL enabled can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
|Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-migrate.md). |
|Migration to ASEv3 is not allowed for this ASE. |You can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
If your App Service Environment doesn't pass the validation checks or you try to
|App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You can migrate once these operations are complete. |
|Migrate is not available for this subscription.|Support needs to be engaged for migrating this App Service Environment.|Open a support case to engage support to resolve your issue.|
|Your InteralLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
-|Migration is invalid. Your ASE needs to be upgraded to the latest build to ensure successful migration. We will upgrade your ASE now. Please try migrating again in few hours once platform upgrade has finished. |Your App Service Environment isn't on the minimum build required for migration. An upgrade is started. Your App Service Environment isn't impacted, but you can't scale or make changes to your App Service Environment while the upgrade is in progress. You can't migrate until the upgrade finishes. |Wait until the upgrade finishes and then migrate. |
+|Migration is invalid. Your ASE needs to be upgraded to the latest build to ensure successful migration. We will upgrade your ASE now. Please try migrating again in few hours once platform upgrade has finished. |Your App Service Environment isn't on the minimum build required for migration. An upgrade is started. Your App Service Environment won't be impacted, but you won't be able to scale or make changes to your App Service Environment while the upgrade is in progress. You won't be able to migrate until the upgrade finishes. |Wait until the upgrade finishes and then migrate. |
-## Overview of the migration process using the migration feature
+## Overview of the migration process using the in-place migration feature
-Migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what happens during these steps and how your environment and apps are impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-migrate.md).
+In-place migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what happens during these steps and how your environment and apps are impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-migrate.md).
### Generate IP addresses for your new App Service Environment v3
When completed, you'll be given the new IPs that your future App Service Environ
### Update dependent resources with new IPs
-Once the new IPs are created, you have the new default outbound to the internet public addresses. In preparation for the migration, you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs. For ELB App Service Environment, you also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.** This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer, which now uses port 80.
+Once the new IPs are created, you have the new default outbound to the internet public addresses. In preparation for the migration, you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs. For ELB App Service Environment, you also have the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.** This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer health probe, which now uses port 80.
### Delegate your App Service Environment subnet
-App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration can't succeed if the App Service Environment's subnet isn't delegated or it's delegated to a different resource.
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration can't succeed if the App Service Environment's subnet isn't delegated or you delegate it to a different resource.
### Acknowledge instance size changes
-Your App Service plans are converted from Isolated to the corresponding Isolated v2 SKU as part of the migration. For example, I2 is converted to I2v2. Your apps may be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You have the opportunity to scale your environment as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
+Your App Service plans are converted from Isolated to the corresponding Isolated v2 tier as part of the migration. For example, I2 is converted to I2v2. Your apps might be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You have the opportunity to scale your environment as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
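The conversion is mechanical: each Isolated SKU gains a `v2` suffix (I1 becomes I1v2, I2 becomes I2v2, I3 becomes I3v2). A throwaway sketch of the rule, with the standard command for scaling down afterward noted in a comment (the plan name there is a placeholder, not from this article):

```shell
# Illustrative only; the platform performs this conversion for you during migration.
converted_sku() {
  echo "${1}v2"
}

converted_sku I2   # prints I2v2

# After migration, scaling down is an ordinary plan update, for example:
#   az appservice plan update --name <plan-name> --resource-group $ASE_RG --sku I1v2
```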
### Ensure there are no locks on your resources
Azure Policy can be used to deny resource creation and modification to certain p
### Choose your App Service Environment v3 configurations
-Your App Service Environment v3 can be deployed across availability zones in the regions that support it. This architecture is known as [zone redundancy](../../availability-zones/migrate-app-service-environment.md). Zone redundancy can only be configured during App Service Environment creation. If you want your new App Service Environment v3 to be zone redundant, enable the configuration during the migration process. Any App Service Environment that is using the migration feature to migrate can be configured as zone redundant as long as you're using a [region that supports zone redundancy for App Service Environment v3](./overview.md#regions). If you're existing environment is using a region that doesn't support zone redundancy, the configuration option is disabled and you can't configure it. The migration feature doesn't support changing regions. If you'd like to use a different region, use one of the [manual migration options](migration-alternatives.md).
+Your App Service Environment v3 can be deployed across availability zones in the regions that support it. This architecture is known as [zone redundancy](../../availability-zones/migrate-app-service-environment.md). Zone redundancy can only be configured during App Service Environment creation. If you want your new App Service Environment v3 to be zone redundant, enable the configuration during the migration process. Any App Service Environment that is using the in-place migration feature to migrate can be configured as zone redundant as long as you're using a [region that supports zone redundancy for App Service Environment v3](./overview.md#regions). If your existing environment is in a region that doesn't support zone redundancy, the configuration option is disabled and you can't configure it. The in-place migration feature doesn't support changing regions. If you'd like to use a different region, use one of the [manual migration options](migration-alternatives.md).
> [!NOTE]
> Enabling zone redundancy can lead to additional charges. Review the [zone redundancy pricing model](../../availability-zones/migrate-app-service-environment.md#pricing) for more information.
If your migration includes a custom domain suffix, for App Service Environment v
After completing the previous steps, you should continue with migration as soon as possible.
-Migration requires a three to six hour service window for App Service Environment v2 to v3 migrations. Up to a six hour service window is required depending on environment size for v1 to v3 migrations. The service window can be extended in rare cases where manual intervention by the service team is required. During migration, scaling and environment configurations are blocked and the following events occur:
+> [!IMPORTANT]
+> Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration.
+>
+
+Migration requires a three to six hour service window for App Service Environment v2 to v3 migrations. Up to a six hour service window is required depending on environment size for v1 to v3 migrations. The service window might be extended in rare cases where manual intervention by the service team is required. During migration, scaling and environment configurations are blocked and the following events occur:
- The existing App Service Environment is shut down and replaced by the new App Service Environment v3.
-- All App Service plans in the App Service Environment are converted from the Isolated to Isolated v2 SKU.
+- All App Service plans in the App Service Environment are converted from the Isolated to Isolated v2 tier.
- All of the apps that are on your App Service Environment are temporarily down. **You should expect about one hour of downtime during this period**.
- - If you can't support downtime, see [migration-alternatives](migration-alternatives.md#migrate-manually).
+ - If you can't support downtime, see the [side by side migration feature](side-by-side-migrate.md) or the [manual migration options](migration-alternatives.md#migrate-manually).
- The public addresses that are used by the App Service Environment change to the IPs generated during the IP generation step.
-The following statuses are available during the migration process:
-
-|Status |Description |
-||-|
-|Validating and preparing the migration. |The platform is validating migration support and performing necessary checks. |
-|Deploying App Service Environment v3 infrastructure. |Your new App Service Environment v3 infrastructure is provisioning. |
-|Waiting for infrastructure to complete. |The platform is validating your new infrastructure and performing necessary checks. |
-|Setting up networking. Migration downtime period has started. Applications are not accessible. |The platform is deleting your old infrastructure and moving all of your apps to your new App Service Environment v3. Your apps are down and aren't accepting traffic. |
-|Running post migration validations. |The platform is performing necessary checks to ensure the migration succeeded. |
-|Finalizing migration. |The platform is finalizing the migration. |
- As in the IP generation step, you can't scale, modify your App Service Environment, or deploy apps to it during this process. When migration is complete, the apps that were on the old App Service Environment are running on the new App Service Environment v3.
-> [!NOTE]
-> Due to the conversion of App Service plans from Isolated to Isolated v2, your apps may be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You'll have the opportunity to [scale your environment](../manage-scale-up.md) as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
->
- ## Pricing
-There's no cost to migrate your App Service Environment. You stop being charged for your previous App Service Environment as soon as it shuts down during the migration process, and you begin getting charged for your new App Service Environment v3 as soon as it's deployed. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing).
-
-When you migrate to App Service Environment v3 from previous versions, there are scenarios that you should consider that can potentially reduce your monthly cost. In addition to the following scenarios, consider [reservations](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances) and [savings plans](../../cost-management-billing/savings-plan/savings-plan-compute-overview.md) to further reduce your costs.
+There's no cost to migrate your App Service Environment. When you use the in-place migration feature, you stop being charged for your previous App Service Environment as soon as it shuts down during the migration process. You begin getting charged for your new App Service Environment v3 as soon as it gets deployed. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing).
-### Scale down your App Service plans
-
-The App Service plan SKUs available for App Service Environment v3 run on the Isolated v2 (Iv2) tier. The number of cores and amount of RAM are effectively doubled per corresponding tier compared the Isolated tier. When you migrate, your App Service plans are converted to the corresponding tier. For example, your I2 instances are converted to I2v2. While I2 has two cores and 7-GB RAM, I2v2 has four cores and 16-GB RAM. If you expect your capacity requirements to stay the same, you're over-provisioned and paying for compute and memory you're not using. For this scenario, you can scale down your I2v2 instance to I1v2 and end up with a similar number of cores and RAM that you had previously.
+When you migrate to App Service Environment v3 from previous versions, there are scenarios that can potentially reduce your monthly cost. Consider [reservations](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances) and [savings plans](../../cost-management-billing/savings-plan/savings-plan-compute-overview.md) to further reduce your costs. For information on cost saving opportunities, see [Cost saving opportunities after upgrading to App Service Environment v3](upgrade-to-asev3.md#cost-saving-opportunities-after-upgrading-to-app-service-environment-v3).
> [!NOTE]
-> All scenarios are calculated using costs based on Linux $USD pricing in East US. The payment option is set to monthly. Estimates are based on the prices applicable on the day the estimate was created. Actual total estimates may vary. For the most up-to-date estimates, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
->
-
-To demonstrate the cost saving opportunity for this scenario, use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the monthly savings as a result of scaling down your App Service plans. For this example, your App Service Environment v2 has 1 I2 instance. You require two cores and 7-GB RAM. You're using pay-as-you-go pricing. On App Service Environment v2, your monthly payment is the following.
-
-[Stamp fee + 1(I2) = $991.34 + $416.10 = $1,407.44](https://azure.com/e/014bf22b3e88439dba350866a472a41a)
-
-When you migrate this App Service Environment using the migration feature, your new App Service Environment v3 has 1 I2v2 instance, which means you have four cores and 16-GB RAM. If you don't change anything, your monthly payment is the following.
-
-[1(I2v2) = $563.56](https://azure.com/e/17946ea2c4db483d882526ba515a6771)
-
-Your monthly cost is reduced, but you don't need that much compute and capacity. You scale down your instance to I1v2 and your monthly cost is reduced even further.
-
-[1(I1v2) = $281.78](https://azure.com/e/9d481c3af3cd407d975017c2b8158bbd)
-
-### Break even point
-
-In most cases, migrating to App Service Environment v3 allows for cost saving opportunities. However, cost savings may not always be possible, especially if you're required to maintain a large number of small instances.
-
-To demonstrate this scenario, you have an App Service Environment v2 with a single I1 instance. Your monthly cost is:
-
-[Stamp fee + 1(I1) = $991.34 + $208.05 = **$1,199.39**](https://azure.com/e/ac89a70062a240e1b990304052d49fad)
-
-If you migrate this environment to App Service Environment v3, your monthly cost is:
-
-[1(I1v2) = **$281.78**](https://azure.com/e/9d481c3af3cd407d975017c2b8158bbd)
-
-This change is a significant cost reduction, but you're over-provisioned since you have double the cores and RAM, which you may not need. This excess isn't an issue for this scenario since the new environment is cheaper. However, when you increase your I1 instances in a single App Service Environment, you see how migrating to App Service Environment v3 can increase your monthly cost.
-
-For this scenario, your App Service Environment v2 has 14 I1 instances. Your monthly cost is:
-
-[Stamp fee + 14(I1) = $991.34 + $2,912.70 = **$3,904.04**](https://azure.com/e/bd1dce4b5c8f4d6d807ed3c4ae78fcae)
-
-When you migrate this environment to App Service Environment v3, your monthly cost is:
-
-[14(I1v2) = **$3,944.92**](https://azure.com/e/e0f1ebacf937479ba073a9c32cb2452f)
-
-Your App Service Environment v3 is now more expensive than your App Service Environment v2. As you start add more I1 instances, and therefore need more I1v2 instances when you migrate, the difference in price becomes more significant. If this scenario is a requirement for your environment, you may need to plan for an increase in your monthly cost. The following graph visually depicts the point where App Service Environment v3 becomes more expensive than App Service Environment v2 for this specific scenario.
-
-> [!NOTE]
-> This calculation was done with Linux $USD prices in East US. Break even points will vary due to price variances in the different regions. For an estimate that reflects your situation, see [the Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+> Due to the conversion of App Service plans from Isolated to Isolated v2, your apps may be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You'll have the opportunity to [scale your environment](../manage-scale-up.md) as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
>
+### Scale down your App Service plans
-For more scenarios on cost changes and savings opportunities with App Service Environment v3, see [Estimate your cost savings by migrating to App Service Environment v3](https://azure.github.io/AppService/2023/03/02/App-service-environment-v3-pricing.html).
+The App Service plan SKUs available for App Service Environment v3 run on the Isolated v2 (Iv2) tier. The number of cores and amount of RAM are effectively doubled per corresponding tier compared to the Isolated tier. When you migrate, your App Service plans are converted to the corresponding tier. For example, your I2 instances are converted to I2v2. While I2 has two cores and 7-GB RAM, I2v2 has four cores and 16-GB RAM. If you expect your capacity requirements to stay the same, you're over-provisioned and paying for compute and memory you're not using. For this scenario, you can scale down your I2v2 instance to I1v2 and end up with a similar number of cores and amount of RAM to what you had previously.
## Frequently asked questions - **What if migrating my App Service Environment is not currently supported?**
- You can't migrate using the migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
+ You can't migrate using the in-place migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
+- **How do I choose which migration option is right for me?**
+ Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case.
+- **How do I know if I should use the in-place migration feature?**
+ The in-place migration feature is best for customers who want to migrate to App Service Environment v3 with minimal changes to their networking configurations and can support about one hour of application downtime. If you can't support downtime, see the [side by side migration feature](side-by-side-migrate.md) or the [manual migration options](migration-alternatives.md). The in-place migration feature creates your App Service Environment v3 in the same subnet as your existing environment and uses the same networking infrastructure. You might have to account for the inbound and outbound IP address changes if you have any dependencies on these specific IPs.
- **Will I experience downtime during the migration?**
- Yes, you should expect about one hour of downtime during the three to six hour service window during the migration step, so plan accordingly. If downtime isn't an option for you, see the [manual migration options](migration-alternatives.md).
+ Yes, you should expect about one hour of downtime during the three to six hour service window during the migration step, so plan accordingly. If you have a different App Service Environment that you can point traffic to while you migrate using the in-place migration feature, you can eliminate application downtime. If you don't have another App Service Environment and you can't support downtime, see the [side by side migration feature](side-by-side-migrate.md) or the [manual migration options](migration-alternatives.md).
- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?** No, all of your apps running on the old environment are automatically migrated to the new environment and run like before. No user input is needed. - **What if my App Service Environment has a custom domain suffix?**
- The migration feature supports this [migration scenario](#supported-scenarios). You can migrate using a manual method if you don't want to use the migration feature. You can configure your [custom domain suffix](./how-to-custom-domain-suffix.md) when creating your App Service Environment v3 or any time after.
+ The in-place migration feature supports this [migration scenario](#supported-scenarios).
- **What if my App Service Environment is zone pinned?** Zone pinned App Service Environment v2 is now a supported scenario for migration using the migration feature. App Service Environment v3 doesn't support zone pinning. When migrating to App Service Environment v3, you can choose to configure zone redundancy or not. - **What if my App Service Environment has IP SSL addresses?**
- IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the migration feature, once you remove all IP SSL bindings, you pass that validation check and can proceed with the automated migration.
+ IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the in-place migration feature, once you remove all IP SSL bindings, you pass that validation check and can proceed with the automated migration.
- **What properties of my App Service Environment will change?** After migration, you're on App Service Environment v3, so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. For an ILB App Service Environment, you keep the same ILB IP address. For an internet facing App Service Environment, the public IP address and the outbound IP address change. Note that for an ELB App Service Environment, there was previously a single IP for both inbound and outbound traffic. For App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md). - **What happens if migration fails or there is an unexpected issue during the migration?**
- If there's an unexpected issue, support teams are on hand. It's recommended to migrate dev environments before touching any production environments.
+ If there's an unexpected issue, support teams are on hand. You should migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads.
- **What happens to my old App Service Environment?**
- If you decide to migrate an App Service Environment using the migration feature, the old environment gets shutdown, deleted, and all of your apps are migrated to a new environment. Your old environment is no longer accessible. A rollback to the old environment isn't possible.
+ If you decide to migrate an App Service Environment using the in-place migration feature, the old environment gets shutdown, deleted, and all of your apps are migrated to a new environment. Your old environment is no longer accessible. A rollback to the old environment isn't possible.
- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
- After 31 August 2024, if you haven't migrated to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
+ After 31 August 2024, if you don't migrate to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running, or save or back up any resources or data that you need to maintain.
## Next steps > [!div class="nextstepaction"]
-> [Migrate your App Service Environment to App Service Environment v3](how-to-migrate.md)
-
-> [!div class="nextstepaction"]
-> [Manually migrate to App Service Environment v3](migration-alternatives.md)
+> [Migrate your App Service Environment to App Service Environment v3 using the in-place migration feature](how-to-migrate.md)
> [!div class="nextstepaction"] > [App Service Environment v3 Networking](networking.md)
app-service Migration Alternatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migration-alternatives.md
Title: Migrate to App Service Environment v3
description: Learn how to migrate your applications to App Service Environment v3. Previously updated : 01/30/2024 Last updated : 02/12/2024 # Migrate to App Service Environment v3
-If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs.
+> [!NOTE]
+> There are two automated migration features available to help you upgrade to App Service Environment v3. To learn more about those features and for help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). Consider one of the automated options for a quicker path to [App Service Environment v3](overview.md).
+>
-The App Service Environment v3 [migration feature](migrate.md) provides an automated migration path to App Service Environment v3. Consider using the migration feature if your environment falls into one of the [supported scenarios](migrate.md#supported-scenarios).
+If you're currently using App Service Environment v1 or v2, you have the opportunity to migrate your workloads to [App Service Environment v3](overview.md). App Service Environment v3 has [advantages and feature differences](overview.md#feature-differences) that provide enhanced support for your workloads and can reduce overall costs. Consider using one of the [automated migration features](upgrade-to-asev3.md) if your environment meets the criteria described in the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree).
-If your App Service Environment [isn't supported for the migration feature](migrate.md#migration-feature-limitations), you must use one of the manual methods to migrate to App Service Environment v3.
+If your App Service Environment isn't supported for the migration features, you must use one of the manual methods to migrate to App Service Environment v3.
## Prerequisites Scenario: You have an app that runs on App Service Environment v1 or App Service Environment v2, and you need that app to run on App Service Environment v3.
-For any migration method that doesn't use the [migration feature](migrate.md), you need to [create the App Service Environment v3 resource](creation.md) and a new subnet by using the method of your choice.
+For any migration method that doesn't use the automated migration features, you need to [create the App Service Environment v3 resource](creation.md) and a new subnet by using the method of your choice.
[Networking changes](networking.md) between App Service Environment v1/v2 and App Service Environment v3 involve new (and for internet-facing environments, additional) IP addresses. You need to update any infrastructure that relies on these IPs. Be sure to account for inbound dependency changes, such as the Azure Load Balancer port.
You can [deploy ARM templates](../deploy-complex-application-predictably.md) by
## Migrate manually
-The [migration feature](migrate.md) automates the migration to App Service Environment v3 and transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If your apps can't have any downtime, we recommend that you use one of the manual options to re-create your apps in App Service Environment v3.
+The [in-place migration feature](migrate.md) automates the migration to App Service Environment v3 and transfers all of your apps to the new environment. There's about one hour of downtime during this migration. If your apps can't have any downtime, we recommend the [side by side migration feature](side-by-side-migrate.md), which is a zero-downtime migration option because the new environment is created in a different subnet. If you choose not to use the side by side migration feature either, you can use one of the manual options to re-create your apps in App Service Environment v3.
You can distribute traffic between your old and new environments by using [Application Gateway](../networking/app-gateway-with-service-endpoints.md). If you're using an internal load balancer (ILB) App Service Environment, [create an Azure Application Gateway instance](integrate-with-application-gateway.md) with an extra back-end pool to distribute traffic between your environments. For information about ILB App Service Environments and internet-facing App Service Environments, see [Application Gateway integration](../overview-app-gateway-integration.md).
After your migration and any testing with your new environment are complete, del
## Frequently asked questions
+- **How do I know if I should migrate to App Service Environment v3 using one of the manual options?**
+ For help deciding which migration option is right for you, see the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). If your environment meets the criteria it describes, consider one of the automated migration features for a quicker path to [App Service Environment v3](overview.md). Manual migration is recommended if you need to move your apps to your new environment gradually and validate throughout the whole process.
- **Will I experience downtime during the migration?** Downtime is dependent on your migration process. If you have a different App Service Environment that you can point traffic to while you migrate, or if you can use a different subnet to create your new environment, you won't have downtime. If you must use the same subnet, there's downtime while you delete the old environment, create the App Service Environment v3 resource, create the new App Service plans, re-create the apps, and update any resources that use the new IP addresses. - **Do I need to change anything about my apps to get them to run on App Service Environment v3?**
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
+
+ Title: Migrate to App Service Environment v3 by using the side by side migration feature
+description: Overview of the side by side migration feature for migration to App Service Environment v3.
++ Last updated : 2/15/2024+++
+# Migration to App Service Environment v3 using the side by side migration feature (Preview)
+
+> [!NOTE]
+> The migration feature described in this article is used for side by side (different subnet) automated migration of App Service Environment v2 to App Service Environment v3 and is currently **in preview**.
+>
+> If you're looking for information on the in-place migration feature, see [Migrate to App Service Environment v3 by using the in-place migration feature](migrate.md). If you're looking for information on manual migration options, see [Manual migration options](migration-alternatives.md). For help deciding which migration option is right for you, see [Migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree). For more information on App Service Environment v3, see [App Service Environment v3 overview](overview.md).
+>
+
+App Service can automate migration of your App Service Environment v1 and v2 to an [App Service Environment v3](overview.md). There are different migration options. Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case. App Service Environment v3 provides [advantages and feature differences](overview.md#feature-differences) over earlier versions. Make sure to review the [supported features](overview.md#feature-differences) of App Service Environment v3 before migrating to reduce the risk of an unexpected application issue.
+
+The side by side migration feature automates your migration to App Service Environment v3. The side by side migration feature creates a new App Service Environment v3 with all of your apps in a different subnet. Your existing App Service Environment isn't deleted until you initiate its deletion at the end of the migration process. Because of this process, there's a rollback option if you need to cancel your migration. This migration option is best for customers who want to migrate to App Service Environment v3 with zero downtime and can support using a different subnet for their new environment. If you need to use the same subnet and can support about one hour of application downtime, see the [in-place migration feature](migrate.md). For manual migration options that allow you to migrate at your own pace, see [manual migration options](migration-alternatives.md).
+
+> [!IMPORTANT]
+> It is recommended to use this feature for dev environments first before migrating any production environments to ensure there are no unexpected issues. Please provide any feedback related to this article or the feature using the buttons at the bottom of the page.
+>
+
+## Supported scenarios
+
+At this time, the side by side migration feature supports migrations to App Service Environment v3 in the following regions:
+
+### Azure Public
+
+- East Asia
+- West Central US
+
+The following App Service Environment configurations can be migrated using the side by side migration feature. The table shows the App Service Environment v3 configuration you receive when you use the side by side migration feature, based on your existing App Service Environment.
+
+|Configuration |App Service Environment v3 Configuration |
+||--|
+|[Internal Load Balancer (ILB)](create-ilb-ase.md) App Service Environment v2 |ILB App Service Environment v3 |
+|[External (ELB/internet facing with public IP)](create-external-ase.md) App Service Environment v2 |ELB App Service Environment v3 |
+|ILB App Service Environment v2 with a custom domain suffix |ILB App Service Environment v3 (custom domain suffix is optional) |
+
+App Service Environment v3 can be deployed as [zone redundant](../../availability-zones/migrate-app-service-environment.md). Zone redundancy can be enabled as long as your App Service Environment v3 is [in a region that supports zone redundancy](./overview.md#regions).
+
+If you want your new App Service Environment v3 to use a custom domain suffix and you aren't currently using one, you can configure a custom domain suffix during the migration set-up or at any time after migration is complete. For more information, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md). If your existing environment has a custom domain suffix and you no longer want to use it, don't configure a custom domain suffix during the migration set-up.
+
+## Side by side migration feature limitations
+
+The following are limitations when using the side by side migration feature:
+
+- Your new App Service Environment v3 is in a different subnet but the same virtual network as your existing environment.
+- You can't change the region your App Service Environment is located in.
+- An ELB App Service Environment can't be migrated to an ILB App Service Environment v3, and vice versa.
+
+App Service Environment v3 doesn't support the following features that you might be using with your current App Service Environment v2.
+
+- Configuring an IP-based TLS/SSL binding with your apps.
+- App Service Environment v3 doesn't fall back to Azure DNS if your configured custom DNS servers in the virtual network aren't able to resolve a given name. If this behavior is required, ensure that you have a forwarder to a public DNS or include Azure DNS in the list of custom DNS servers.
+
+The side by side migration feature doesn't support the following scenarios. See the [manual migration options](migration-alternatives.md) if your App Service Environment falls into one of these categories.
+
+- App Service Environment v1
+ - You can find the version of your App Service Environment by navigating to your App Service Environment in the [Azure portal](https://portal.azure.com) and selecting **Configuration** under **Settings** on the left-hand side. You can also use [Azure Resource Explorer](https://resources.azure.com/) and review the value of the `kind` property for your App Service Environment.
+ - If you have an App Service Environment v1, you can migrate using the [in-place migration feature](migrate.md) or one of the [manual migration options](migration-alternatives.md).
+- ELB App Service Environment v2 with IP SSL addresses
+- [Zone pinned](zone-redundancy.md) App Service Environment v2
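As a rough illustration of the version check described in the list above, a shell helper can classify an environment from its `kind` property. The `ASEV1`/`ASEV2`/`ASEV3` values and the `az resource show` query shown in the comment are assumptions to confirm against what the portal or Azure Resource Explorer reports for your environment.

```shell
# Hypothetical helper: classify an App Service Environment version from its
# `kind` value. The ASEV1/ASEV2/ASEV3 strings are an assumption for
# illustration; verify against your environment's actual kind property.
ase_version_from_kind() {
  case "$1" in
    ASEV3) echo "v3" ;;
    ASEV2) echo "v2" ;;
    ASEV1) echo "v1" ;;
    *)     echo "unknown" ;;
  esac
}

# Example lookup of the kind property (not executed here):
#   az resource show --ids <ase-resource-id> --query kind --output tsv
```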
+
+The App Service platform reviews your App Service Environment to confirm side by side migration support. If your scenario doesn't pass all validation checks, you can't migrate at this time using the side by side migration feature. If your environment is in an unhealthy or suspended state, you can't migrate until you make the needed updates.
+
+> [!NOTE]
+> App Service Environment v3 doesn't support IP SSL. If you use IP SSL, you must remove all IP SSL bindings before migrating to App Service Environment v3. The migration feature will support your environment once all IP SSL bindings are removed.
+>
+
+### Troubleshooting
+
+If your App Service Environment doesn't pass the validation checks or you try to perform a migration step in the incorrect order, you see one of the following error messages:
+
+|Error message |Description |Recommendation |
+|||-|
+|Migrate can only be called on an ASE in ARM VNET and this ASE is in Classic VNET. |App Service Environments in Classic virtual networks can't migrate using the side by side migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). |
+|ASEv3 Migration is not yet ready. |The underlying infrastructure isn't ready to support App Service Environment v3. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the side by side migration feature to be available in your region. |
+|Cannot enable zone redundancy for this ASE. |The region the App Service Environment is in doesn't support zone redundancy. |If you need to enable zone redundancy, use one of the manual migration options to migrate to a [region that supports zone redundancy](overview.md#regions). |
+|Migrate cannot be called on this custom DNS suffix ASE at this time. |Custom domain suffix migration is blocked. |Open a support case to engage support to resolve your issue. |
+|Zone redundant ASE migration cannot be called at this time. |Zone redundant App Service Environment migration is blocked. |Open a support case to engage support to resolve your issue. |
+|Migrate cannot be called on ASEv2 that is zone-pinned. |App Service Environment v2 that's zone pinned can't be migrated using the side by side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Existing revert migration operation ongoing, please try again later. |A previous migration attempt is being reverted. |Wait until the revert that's in progress completes before attempting to start migration again. |
+|Properties.VirtualNetwork.Id should contain the subnet resource ID. |The error appears if you attempt to migrate without providing a new subnet for the placement of your App Service Environment v3. |Ensure you follow the guidance and complete the step to identify the subnet you'll use for your App Service Environment v3. |
+|Unable to move to `<requested phase>` from the current phase `<previous phase>` of No Downtime Migration. |This error appears if you attempt to do a migration step in the incorrect order. |Ensure you follow the migration steps in order. |
+|Failed to start revert operation on ASE in hybrid state, please try again later. |This error appears if you try to revert the migration but something goes wrong. This error doesn't affect either your old or your new environment. |Open a support case to engage support to resolve your issue. |
+|This ASE cannot be migrated without downtime. |This error appears if you try to use the side by side migration feature on an App Service Environment v1. |The side by side migration feature doesn't support App Service Environment v1. Migrate using the [in-place migration feature](migrate.md) or one of the [manual migration options](migration-alternatives.md). |
+|Migrate is not available for this subscription. |Support needs to be engaged for migrating this App Service Environment.|Open a support case to engage support to resolve your issue.|
+|Zone redundant migration cannot be called since the IP addresses created during pre-migrate are not zone redundant. |This error appears if you attempt a zone redundant migration but didn't create zone redundant IPs during the IP generation step. |Open a support case to engage support if you need to enable zone redundancy. Otherwise, you can migrate without enabling zone redundancy. |
+|Migrate cannot be called if IP SSL is enabled on any of the sites. |App Service Environments that have sites with IP SSL enabled can't be migrated using the side by side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, you can disable the IP SSL on all sites in the App Service Environment and attempt migration again. |
+|Cannot migrate within the same subnet. |The error appears if you specify the same subnet that your current environment is in for placement of your App Service Environment v3. |You must specify a different subnet for your App Service Environment v3. If you need to use the same subnet, migrate using the [in-place migration feature](migrate.md). |
+|Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) is met. |Remove unneeded environments or contact support to review your options. |
+|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade is initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. |
+|App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You can migrate once these operations are complete. |
+|Your InternalLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the side by side migration feature at this time. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. |
+|Migration is invalid. Your ASE needs to be upgraded to the latest build to ensure successful migration. We will upgrade your ASE now. Please try migrating again in few hours once platform upgrade has finished. |Your App Service Environment isn't on the minimum build required for migration. An upgrade is started. Your App Service Environment won't be impacted, but you won't be able to scale or make changes to your App Service Environment while the upgrade is in progress. You won't be able to migrate until the upgrade finishes. |Wait until the upgrade finishes and then migrate. |
+|Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](how-to-side-by-side-migrate.md). |
+
+## Overview of the migration process using the side by side migration feature
+
+Side by side migration consists of a series of steps that must be followed in order. Key points are given for a subset of the steps. It's important to understand what happens during these steps and how your environment and apps are impacted. After reviewing the following information and when you're ready to migrate, follow the [step-by-step guide](how-to-side-by-side-migrate.md).
+
+### Select and prepare the subnet for your new App Service Environment v3
+
+The platform creates your new App Service Environment v3 in a different subnet than your existing App Service Environment. You need to select a subnet that meets the following requirements:
+
+- The subnet must be in the same virtual network, and therefore region, as your existing App Service Environment.
+ - If your virtual network doesn't have an available subnet, you need to create one. You might need to increase the address space of your virtual network to create a new subnet. For more information, see [Create a virtual network](../../virtual-network/quick-create-portal.md).
+- The subnet must be able to communicate with the subnet your existing App Service Environment is in. Ensure there aren't network security groups or other network configurations that would prevent communication between the subnets.
+- The subnet must have a single delegation of `Microsoft.Web/hostingEnvironments`.
+- The subnet must have enough available IP addresses to support your new App Service Environment v3. The number of IP addresses needed depends on the number of instances you want to use for your new App Service Environment v3. For more information, see [App Service Environment v3 networking](networking.md#addresses).
+- The subnet must not have any locks applied to it. If there are locks, they must be removed before migration. The locks can be re-added if needed once migration is complete. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
+- There must not be any Azure Policies blocking migration or related actions. If there are policies that block the creation of App Service Environments or the modification of subnets, they must be removed before migration. The policies can be re-added if needed once migration is complete. For more information on Azure Policy, see [Azure Policy overview](../../governance/policy/overview.md).
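To sanity check the address-space requirement in the list above, a quick shell sketch can estimate the usable IPs in a candidate subnet. The fixed fact here is that Azure reserves five addresses in every subnet; the helper itself is illustrative.

```shell
# Sketch: estimate usable addresses in a candidate subnet. Azure reserves five
# IP addresses in every subnet, so a /24 leaves 251 usable. Compare the result
# with the address needs in the App Service Environment v3 networking article.
usable_ips() {
  prefix="$1"                       # CIDR prefix length, for example 24 for a /24
  total=$(( 1 << (32 - prefix) ))   # 2^(32 - prefix) addresses in the block
  echo $(( total - 5 ))             # minus Azure's five reserved addresses
}
```

For example, `usable_ips 24` prints 251 and `usable_ips 27` prints 27.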
+
+### Generate outbound IP addresses for your new App Service Environment v3
+
+The platform creates [the new outbound IP addresses](networking.md#addresses). While these IPs are being created, activity with your existing App Service Environment isn't interrupted; however, you can't scale or make changes to your existing environment. This process takes about 15 minutes to complete.
+
+When completed, you'll be given the new outbound IPs that your future App Service Environment v3 uses. These new IPs have no effect on your existing environment. The IPs used by your existing environment continue to be used until you redirect customer traffic and complete the migration in the final step.
+
+You receive the new inbound IP address once migration is complete but before you make the [DNS change to redirect customer traffic to your new App Service Environment v3](#redirect-customer-traffic-and-complete-migration). You don't get the inbound IP at this point in the process because the inbound IP is dependent on the subnet you select for the new environment. You have a chance to update any resources that are dependent on the new inbound IP before you redirect traffic to your new App Service Environment v3.
+
+This step is also where you decide if you want to enable zone redundancy for your new App Service Environment v3. Zone redundancy can be enabled as long as your App Service Environment v3 is [in a region that supports zone redundancy](./overview.md#regions).
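Once the new outbound IPs are generated, you can inspect the addresses associated with an App Service Environment from the CLI. A sketch, assuming the `az appservice ase list-addresses` command is available in your CLI version and using placeholder resource names:

```shell
# Hypothetical names; substitute your App Service Environment and resource group.
az appservice ase list-addresses \
  --name my-ase \
  --resource-group my-rg \
  --output json
```

The exact fields returned vary by environment version and configuration, so review the full JSON output rather than relying on a specific query path.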
+
+### Update dependent resources with new outbound IPs
+
+The new outbound IPs are created and given to you before you start the actual migration. The new default outbound to the internet public addresses are given so you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs before completing the migration. **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.** This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer health probe, which now uses port 80.
+
+### Delegate your App Service Environment subnet
+
+App Service Environment v3 requires the subnet it's in to have a single delegation of `Microsoft.Web/hostingEnvironments`. Migration can't succeed if the App Service Environment's subnet isn't delegated or you delegate it to a different resource. Ensure that the subnet you select for your new App Service Environment v3 has a single delegation of `Microsoft.Web/hostingEnvironments`.
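If the delegation isn't already in place, it can be added from the CLI. A sketch with placeholder resource names; only the `Microsoft.Web/hostingEnvironments` value is fixed:

```shell
# Hypothetical names; substitute your own resource group, virtual network,
# and the subnet you selected for the new App Service Environment v3.
az network vnet subnet update \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name ase-v3-subnet \
  --delegations Microsoft.Web/hostingEnvironments
```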
+
+### Acknowledge instance size changes
+
+Your App Service plans are created with the corresponding Isolated v2 SKU as part of the migration. For example, I2 plans correspond with I2v2. Your apps might be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You have the opportunity to scale your environment as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
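The Isolated-to-Isolated v2 mapping can be summarized in a small lookup. The per-size specs below reflect the published tiers, where cores and memory roughly double at each corresponding size:

```shell
# Map an Isolated SKU to its Isolated v2 counterpart with published specs.
specs() {
  case "$1" in
    I1) echo "I1 (1 core, 3.5 GB) -> I1v2 (2 cores, 8 GB)" ;;
    I2) echo "I2 (2 cores, 7 GB) -> I2v2 (4 cores, 16 GB)" ;;
    I3) echo "I3 (4 cores, 14 GB) -> I3v2 (8 cores, 32 GB)" ;;
    *)  echo "unknown SKU: $1" >&2; return 1 ;;
  esac
}

specs I2
```

For example, `specs I2` prints `I2 (2 cores, 7 GB) -> I2v2 (4 cores, 16 GB)`, matching the example above; if the doubled capacity exceeds your needs, scale down after migration.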
+
+### Ensure there are no locks on your resources
+
+Virtual network locks block platform operations during migration. If your virtual network has locks, you need to remove them before migrating. The locks can be readded if needed once migration is complete. Locks can exist at three different scopes: subscription, resource group, and resource. When you apply a lock at a parent scope, all resources within that scope inherit the same lock. If you have locks applied at the subscription, resource group, or resource scope, they need to be removed before the migration. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
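Before migrating, you can enumerate and remove such locks from the CLI. A sketch with placeholder names:

```shell
# Hypothetical resource group; list any locks that could block migration.
az lock list --resource-group my-ase-rg --output table

# Remove a specific lock by name; it can be re-created after migration.
az lock delete --name my-lock --resource-group my-ase-rg
```

Because locks inherit downward, also check the subscription scope (`az lock list` with no resource group lists subscription-level locks).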
+
+### Ensure there are no Azure Policies blocking migration
+
+Azure Policy can be used to deny resource creation and modification to certain principals. If you have a policy that blocks the creation of App Service Environments or the modification of subnets, you need to remove it before migrating. The policy can be readded if needed once migration is complete. For more information on Azure Policy, see [Azure Policy overview](../../governance/policy/overview.md).
+
+### Add a custom domain suffix (optional)
+
+If your existing App Service Environment uses a custom domain suffix, you can configure a custom domain suffix for your new App Service Environment v3. Custom domain suffix on App Service Environment v3 is implemented differently than on App Service Environment v2. You need to provide the custom domain name, managed identity, and certificate, which must be stored in Azure Key Vault. For more information on App Service Environment v3 custom domain suffix including requirements, step-by-step instructions, and best practices, see [Configure custom domain suffix for App Service Environment](./how-to-custom-domain-suffix.md). Configuring a custom domain suffix is optional. If your App Service Environment v2 has a custom domain suffix and you don't want to use it on your new App Service Environment v3, don't configure a custom domain suffix during the migration set-up.
+
+### Migrate to App Service Environment v3
+
+After completing the previous steps, you should continue with migration as soon as possible.
+
+There's no application downtime during the migration, but as in the outbound IP generation step, you can't scale, modify your existing App Service Environment, or deploy apps to it during this process.
+
+> [!IMPORTANT]
+> Since scaling is blocked during the migration, you should scale your environment to the desired size before starting the migration.
+>
+
+Side by side migration requires a three-to-six-hour service window for App Service Environment v2 to v3 migrations. During migration, scaling and environment configurations are blocked, and the following events occur:
+
+- The new App Service Environment v3 is created in the subnet you selected.
+- Your new App Service plans are created in the new App Service Environment v3 with the corresponding Isolated v2 tier.
+- Your apps are created in the new App Service Environment v3.
+
+When this step completes, your application traffic is still going to your old App Service Environment and the IPs that were assigned to it. However, you also now have an App Service Environment v3 with all of your apps.
+
+### Get the inbound IP address for your new App Service Environment v3 and update dependent resources
+
+You get the new inbound IP address that you can use to set up new endpoints with services like [Traffic Manager](../../traffic-manager/traffic-manager-overview.md) or [Azure Front Door](../../frontdoor/front-door-overview.md). Don't move on to the next step until you account for this change. There's downtime if you don't update dependent resources with the new inbound IP. **It's your responsibility to update any and all resources that are impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.**
+
+### Redirect customer traffic and complete migration
+
+The final step is to redirect traffic to your new App Service Environment v3 and complete the migration. The platform does this change for you, but only when you initiate it. Before you do this step, you should review your new App Service Environment v3 and perform any needed testing to validate that it's functioning as intended. You can do this review using the IPs associated with your App Service Environment v3 from the IP generation steps. Once you're ready to redirect traffic, you can complete the final step of the migration. This step updates internal DNS records to point to the load balancer IP address of your new App Service Environment v3. Changes are effective immediately. This step also shuts down your old App Service Environment and deletes it. Your new App Service Environment v3 is now your production environment.
+
+> [!IMPORTANT]
+> During the preview, in some cases there may be up to 20 minutes of downtime when you complete the final step of the migration. This downtime is due to the DNS change. The downtime is expected to be removed once the feature is generally available. If you have a requirement for zero downtime, you should wait until the side by side migration feature is generally available. During preview, however, you can still use the side by side migration feature to migrate your dev environments to App Service Environment v3 to learn about the migration process and see how it impacts your workloads.
+>
+
+If you discover any issues with your new App Service Environment v3, don't run the command to redirect customer traffic. This command also initiates the deletion of your App Service Environment v2. If you find an issue, you can revert all changes and return to your old App Service Environment v2. The revert process takes 3 to 6 hours to complete. There's no downtime associated with this process. Once the revert process completes, your old App Service Environment is back online and your new App Service Environment v3 is deleted. You can then attempt the migration again once you resolve any issues.
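One way to perform that pre-redirect validation is to pin DNS for a single request so it resolves to the new environment, for example with `curl --resolve` (the hostname and IP below are placeholders):

```shell
# Hypothetical hostname and new inbound IP; substitute your own values.
# --resolve overrides DNS for this request only, so you can test the new
# App Service Environment v3 before changing any public DNS records.
curl -I --resolve contoso.example.com:443:203.0.113.10 \
  https://contoso.example.com/
```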
+
+## Pricing
+
+There's no cost to migrate your App Service Environment. However, you're billed for both your App Service Environment v2 and your new App Service Environment v3 once you start the migration process. You stop being charged for your old App Service Environment v2 when you complete the final migration step where your DNS is updated and the old environment gets deleted. You should complete your validation as quickly as possible to prevent excess charges from accumulating. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing).
+
+When you migrate to App Service Environment v3 from previous versions, there are scenarios that you should consider that can potentially reduce your monthly cost. Consider [reservations](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances) and [savings plans](../../cost-management-billing/savings-plan/savings-plan-compute-overview.md) to further reduce your costs. For information on cost saving opportunities, see [Cost saving opportunities after upgrading to App Service Environment v3](upgrade-to-asev3.md#cost-saving-opportunities-after-upgrading-to-app-service-environment-v3).
+
+> [!NOTE]
+> Due to the differences between the Isolated to Isolated v2 pricing tiers, your apps may be over-provisioned after the migration since the Isolated v2 tier has more memory and CPU per corresponding instance size. You'll have the opportunity to [scale your environment](../manage-scale-up.md) as needed once migration is complete. For more information, review the [SKU details](https://azure.microsoft.com/pricing/details/app-service/windows/).
+>
+
+### Scale down your App Service plans
+
+The App Service plan SKUs available for App Service Environment v3 run on the Isolated v2 (Iv2) tier. The number of cores and amount of RAM are effectively doubled per corresponding tier compared to the Isolated tier. When you migrate, your App Service plans are converted to the corresponding tier. For example, your I2 instances are converted to I2v2. While I2 has two cores and 7-GB RAM, I2v2 has four cores and 16-GB RAM. If you expect your capacity requirements to stay the same, you're over-provisioned and paying for compute and memory you're not using. For this scenario, you can scale down your I2v2 instance to I1v2 and end up with a similar number of cores and amount of RAM to what you had previously.
+
+## Frequently asked questions
+
+- **What if migrating my App Service Environment is not currently supported?**
+ You can't migrate using the side by side migration feature at this time. If you have an unsupported environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
+- **How do I choose which migration option is right for me?**
+ Review the [migration path decision tree](upgrade-to-asev3.md#migration-path-decision-tree) to decide which option is best for your use case.
+- **How do I know if I should use the side by side migration feature?**
+ The side by side migration feature is best for customers who want to migrate to App Service Environment v3 but can't support application downtime. Since a new subnet is used for your new environment, there are networking considerations to be aware of, including new IPs. If you can support downtime, see the [in-place migration feature](migrate.md), which results in minimal configuration changes, or the [manual migration options](migration-alternatives.md). The in-place migration feature creates your App Service Environment v3 in the same subnet as your existing environment and uses the same networking infrastructure.
+- **Will I experience downtime during the migration?**
+ No, there's no downtime during the side by side migration process. Your apps continue to run on your existing App Service Environment until you complete the final step of the migration where DNS changes are effective immediately. Once you complete the final step, your old App Service Environment is shut down and deleted. Your new App Service Environment v3 is now your production environment.
+- **Will I need to do anything to my apps after the migration to get them running on the new App Service Environment?**
+ No, all of your apps running on the old environment are automatically migrated to the new environment and run like before. No user input is needed.
+- **What if my App Service Environment has a custom domain suffix?**
+ The side by side migration feature supports this [migration scenario](#supported-scenarios).
+- **What if my App Service Environment is zone pinned?**
+ The side by side migration feature doesn't support this [migration scenario](#supported-scenarios) at this time. If you have a zone pinned App Service Environment and want to migrate immediately, see the [manual migration options](migration-alternatives.md).
+- **What if my App Service Environment has IP SSL addresses?**
+ IP SSL isn't supported on App Service Environment v3. You must remove all IP SSL bindings before migrating using the migration feature or one of the manual options. If you intend to use the side by side migration feature, once you remove all IP SSL bindings, you pass that validation check and can proceed with the automated migration.
+- **What properties of my App Service Environment will change?**
+  You're on App Service Environment v3, so be sure to review the [features and feature differences](overview.md#feature-differences) compared to previous versions. Both your inbound and outbound IPs change when you use the side by side migration feature. Note that for an ELB App Service Environment, there was previously a single IP for both inbound and outbound traffic; on App Service Environment v3, they're separate. For more information, see [App Service Environment v3 networking](networking.md#addresses). For a full comparison of the App Service Environment versions, see [App Service Environment version comparison](version-comparison.md).
+- **What happens if migration fails or there is an unexpected issue during the migration?**
+ If there's an unexpected issue, support teams are on hand. We recommend that you migrate dev environments before touching any production environments to learn about the migration process and see how it impacts your workloads. With the side by side migration feature, you can revert all changes if there's any issues.
+- **What happens to my old App Service Environment?**
+  If you decide to migrate an App Service Environment using the side by side migration feature, your old environment is used up until the final step in the migration process. Once you complete the final step, the old environment and all of the apps hosted on it get shut down and deleted. Your old environment is no longer accessible. A rollback to the old environment at this point isn't possible.
+- **What will happen to my App Service Environment v1/v2 resources after 31 August 2024?**
+ After 31 August 2024, if you don't migrate to App Service Environment v3, your App Service Environment v1/v2s and the apps deployed in them will no longer be available. App Service Environment v1/v2 is hosted on App Service scale units running on [Cloud Services (classic)](../../cloud-services/cloud-services-choose-me.md) architecture that will be [retired on 31 August 2024](https://azure.microsoft.com/updates/cloud-services-retirement-announcement/). Because of this, [App Service Environment v1/v2 will no longer be available after that date](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). Migrate to App Service Environment v3 to keep your apps running or save or back up any resources or data that you need to maintain.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Migrate your App Service Environment to App Service Environment v3 using the side by side migration feature](how-to-side-by-side-migrate.md)
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 Networking](networking.md)
+
+> [!div class="nextstepaction"]
+> [Using an App Service Environment v3](using.md)
+
+> [!div class="nextstepaction"]
+> [Custom domain suffix](./how-to-custom-domain-suffix.md)
app-service Upgrade To Asev3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/upgrade-to-asev3.md
description: Take the first steps toward upgrading to App Service Environment v3
Previously updated : 2/2/2024 Last updated : 2/20/2024 # Upgrade to App Service Environment v3
Last updated 2/2/2024
> As of [29 January 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-2/), you can no longer create new App Service Environment v1 and v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. >
-This page is your one-stop shop for guidance and resources to help you upgrade successfully with minimal downtime. Follow the guidance to plan and complete your upgrade as soon as possible. This page will be updated with the latest information as it becomes available.
+This page is your one-stop shop for guidance and resources to help you upgrade successfully with minimal downtime. Follow the guidance to plan and complete your upgrade as soon as possible. This page is updated with the latest information as it becomes available.
## Upgrade steps |Step|Action|Resources| |-|||
-|**1**|**Pre-flight check**|Determine if your environment meets the prerequisites to automate your upgrade using the migration feature.<br><br>- [Automated upgrade using the migration feature](migrate.md)<br><br>If not, you can upgrade manually.<br><br>- [Manual migration](migration-alternatives.md)|
-|**2**|**Migrate**|Based on results of your review, either upgrade using the migration feature or follow the manual steps.<br><br>- [Use the automated migration feature](how-to-migrate.md)<br>- [Migrate manually](migration-alternatives.md)|
-|**3**|**Testing and troubleshooting**|Upgrading using the automated migration feature requires a 3-6 hour service window. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
+|**1**|**Pre-flight check**|Determine if your environment meets the prerequisites to automate your upgrade using one of the automated migration features. Decide whether an in-place or side by side migration is right for your use case.<br><br>- [Migration path decision tree](#migration-path-decision-tree)<br>- [Automated upgrade using the in-place migration feature](migrate.md)<br>- [Automated upgrade using the side by side migration feature](side-by-side-migrate.md)<br><br>If not, you can upgrade manually.<br><br>- [Manual migration](migration-alternatives.md)|
+|**2**|**Migrate**|Based on results of your review, either upgrade using one of the automated migration features or follow the manual steps.<br><br>- [Use the in-place automated migration feature](how-to-migrate.md)<br>- [Use the side by side automated migration feature](how-to-side-by-side-migrate.md)<br>- [Migrate manually](migration-alternatives.md)|
+|**3**|**Testing and troubleshooting**|Upgrading using one of the automated migration features requires a 3-6 hour service window. If you use the side by side migration feature, you have the opportunity to [test and validate your App Service Environment v3](side-by-side-migrate.md#redirect-customer-traffic-and-complete-migration) before completing the upgrade. Support teams are monitoring upgrades to ensure success. If you have a support plan and you need technical help, create a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).|
|**4**|**Optimize your App Service plans**|Once your upgrade is complete, you can optimize the App Service plans for additional benefits.<br><br>Review the autoselected Isolated v2 SKU sizes and scale up or scale down your App Service plans as needed.<br><br>- [Scale down your App Service plans](../manage-scale-up.md)<br>- [App Service Environment post-migration scaling guidance](migrate.md#pricing)<br><br>Explore reserved instance pricing, savings plans, and check out the pricing estimates if needed.<br><br>- [App Service pricing page](https://azure.microsoft.com/pricing/details/app-service/windows/)<br>- [How reservation discounts apply to Isolated v2 instances](../../cost-management-billing/reservations/reservation-discount-app-service.md#how-reservation-discounts-apply-to-isolated-v2-instances)<br>- [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator)| |**5**|**Learn more**|On-demand: [Learn Live webinar with Azure FastTrack Architects](https://www.youtube.com/watch?v=lI9TK_v-dkg&ab_channel=MicrosoftDeveloper).<br><br>Need more help? [Submit a request](https://cxp.azure.com/nominationportal/nominationform/fasttrack) to contact FastTrack.<br><br>[Frequently asked questions](migrate.md#frequently-asked-questions)<br><br>[Community support](https://aka.ms/asev1v2retirement)|
App Service Environment v3 is the latest version of App Service Environment. It'
- [App Service Environment version comparison](version-comparison.md) - [Feature differences](overview.md#feature-differences)
+### What tooling is available to help with the upgrade to App Service Environment v3?
+
+There are two automated migration features available to help you upgrade to App Service Environment v3.
+
+- **In-place migration feature** migrates your App Service Environment to App Service Environment v3 in-place. In-place means that your App Service Environment v3 replaces your existing App Service Environment in the same subnet. There's application downtime during the migration because a subnet can only have a single App Service Environment at a given time. For more information about this feature, see [Automated upgrade using the in-place migration feature](migrate.md).
+- **Side by side migration feature** creates a new App Service Environment v3 in a different subnet that you choose and recreates all of your App Service plans and apps in that new environment. Your existing environment is up and running during the entire migration. Once the new App Service Environment v3 is ready, you can redirect traffic to the new environment and complete the migration. There's no application downtime during the migration. For more information about this feature, see [Automated upgrade using the side by side migration feature](side-by-side-migrate.md).
+- **Manual migration options** are available if you can't use the automated migration features. For more information about these options, see [Migration alternatives](migration-alternatives.md).
+
+### Migration path decision tree
+
+Use the following decision tree to determine which migration path is right for you.
++
+### Cost saving opportunities after upgrading to App Service Environment v3
+
+The App Service plan SKUs available for App Service Environment v3 run on the Isolated v2 (Iv2) tier. The number of cores and amount of RAM are effectively doubled per corresponding tier compared to the Isolated tier. When you migrate, your App Service plans are converted to the corresponding tier. For example, your I2 instances are converted to I2v2. While I2 has two cores and 7-GB RAM, I2v2 has four cores and 16-GB RAM. If you expect your capacity requirements to stay the same, you're over-provisioned and paying for compute and memory you're not using. For this scenario, you can scale down your I2v2 instance to I1v2 and end up with a similar number of cores and amount of RAM to what you had previously.
+
+> [!NOTE]
+> All scenarios are calculated using costs based on Linux $USD pricing in East US. The payment option is set to monthly. Estimates are based on the prices applicable on the day the estimate was created. Actual total estimates may vary. For the most up-to-date estimates, see the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+>
+
+To demonstrate the cost saving opportunity for this scenario, use the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to estimate the monthly savings as a result of scaling down your App Service plans. For this example, your App Service Environment v2 has 1 I2 instance. You require two cores and 7-GB RAM. You're using pay-as-you-go pricing. On App Service Environment v2, your monthly payment is the following.
+
+[Stamp fee + 1(I2) = $991.34 + $416.10 = $1,407.44](https://azure.com/e/014bf22b3e88439dba350866a472a41a)
+
+When you migrate this App Service Environment using the migration feature, your new App Service Environment v3 has 1 I2v2 instance, which means you have four cores and 16-GB RAM. If you don't change anything, your monthly payment is the following.
+
+[1(I2v2) = $563.56](https://azure.com/e/17946ea2c4db483d882526ba515a6771)
+
+Your monthly cost is reduced, but you don't need that much compute and capacity. You scale down your instance to I1v2 and your monthly cost is reduced even further.
+
+[1(I1v2) = $281.78](https://azure.com/e/9d481c3af3cd407d975017c2b8158bbd)
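The figures above can be reproduced with quick shell arithmetic, using the prices quoted in this example:

```shell
# App Service Environment v2: stamp fee + one I2 instance.
awk 'BEGIN { printf "v2 monthly: %.2f\n", 991.34 + 416.10 }'

# Monthly savings after migrating (I2v2), and after scaling down (I1v2).
awk 'BEGIN { printf "savings at I2v2: %.2f\n", 1407.44 - 563.56 }'
awk 'BEGIN { printf "savings at I1v2: %.2f\n", 1407.44 - 281.78 }'
```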
+
+#### Break even point
+
+In most cases, migrating to App Service Environment v3 allows for cost saving opportunities. However, cost savings might not always be possible, especially if you're required to maintain a large number of small instances.
+
+To demonstrate this scenario, you have an App Service Environment v2 with a single I1 instance. Your monthly cost is:
+
+[Stamp fee + 1(I1) = $991.34 + $208.05 = **$1,199.39**](https://azure.com/e/ac89a70062a240e1b990304052d49fad)
+
+If you migrate this environment to App Service Environment v3, your monthly cost is:
+
+[1(I1v2) = **$281.78**](https://azure.com/e/9d481c3af3cd407d975017c2b8158bbd)
+
+This change is a significant cost reduction, but you're over-provisioned since you have double the cores and RAM, which you might not need. This excess isn't an issue for this scenario since the new environment is cheaper. However, as you increase the number of I1 instances in a single App Service Environment, you see how migrating to App Service Environment v3 can increase your monthly cost.
+
+For this scenario, your App Service Environment v2 has 14 I1 instances. Your monthly cost is:
+
+[Stamp fee + 14(I1) = $991.34 + $2,912.70 = **$3,904.04**](https://azure.com/e/bd1dce4b5c8f4d6d807ed3c4ae78fcae)
+
+When you migrate this environment to App Service Environment v3, your monthly cost is:
+
+[14(I1v2) = **$3,944.92**](https://azure.com/e/e0f1ebacf937479ba073a9c32cb2452f)
+
+Your App Service Environment v3 is now more expensive than your App Service Environment v2. As you add more I1 instances, and therefore need more I1v2 instances when you migrate, the difference in price becomes more significant. If this scenario is a requirement for your environment, you might need to plan for an increase in your monthly cost. The following graph visually depicts the point where App Service Environment v3 becomes more expensive than App Service Environment v2 for this specific scenario.
+
+> [!NOTE]
+> This calculation was done with Linux $USD prices in East US. Break even points will vary due to price variances in the different regions. For an estimate that reflects your situation, see [the Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
+>
++
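The crossover point for this scenario can also be computed directly: App Service Environment v2 costs `stamp + n × I1` per month while v3 costs `n × I1v2`, so the two are equal at `n = stamp / (I1v2 − I1)`. Using the example's prices:

```shell
awk 'BEGIN {
  stamp = 991.34; i1 = 208.05; i1v2 = 281.78
  # v2: stamp + n*i1 ; v3: n*i1v2 ; equal when n = stamp / (i1v2 - i1)
  printf "break-even at %.1f I1 instances\n", stamp / (i1v2 - i1)
}'
```

At 13 instances, App Service Environment v3 is still cheaper; from 14 instances on, it costs more, which matches the 14-instance figures above.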
+For more scenarios on cost changes and savings opportunities with App Service Environment v3, see [Estimate your cost savings by migrating to App Service Environment v3](https://azure.github.io/AppService/2023/03/02/App-service-environment-v3-pricing.html).
+ ## We want your feedback! Got 2 minutes? We'd love to hear about your upgrade experience in this quick, anonymous poll. You'll help us learn and improve.
Got 2 minutes? We'd love to hear about your upgrade experience in this quick, an
> [!div class="nextstepaction"] > [Learn about App Service Environment v3](overview.md)+
+> [!div class="nextstepaction"]
+> [Migration to App Service Environment v3 using the in-place migration feature](migrate.md)
+
+> [!div class="nextstepaction"]
+> [Migration to App Service Environment v3 using the side by side migration feature](side-by-side-migrate.md)
+
+> [!div class="nextstepaction"]
+> [Manually migrate to App Service Environment v3](migration-alternatives.md)
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
There's a new version of App Service Environment that is easier to use and runs
|Network watcher or NSG flow logs to monitor traffic |Yes |Yes |Yes | |Subnet delegation |Not required |Not required |[Must be delegated to `Microsoft.Web/hostingEnvironments`](networking.md#subnet-requirements) | |Subnet size|An App Service Environment v1 with no App Service plans uses 12 addresses before you create an app. If you use an ILB App Service Environment v1, then it uses 13 addresses before you create an app. As you scale out, infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances. |An App Service Environment v2 with no App Service plans uses 12 addresses before you create an app. If you use an ILB App Service Environment v2, then it uses 13 addresses before you create an app. As you scale out, infrastructure roles are added at every multiple of 15 and 20 of your App Service plan instances. |Any particular subnet has five addresses reserved for management purposes. In addition to the management addresses, App Service Environment v3 dynamically scales the supporting infrastructure, and uses between 4 and 27 addresses, depending on the configuration and load. You can use the remaining addresses for instances in the App Service plan. The minimal size of your subnet can be a /27 address space (32 addresses). |
-|DNS fallback |Azure DNS |Azure DNS |[Ensure that you have a forwarder to a public DNS or include Azure DNS in the list of custom DNS servers](migrate.md#migration-feature-limitations) |
+|DNS fallback |Azure DNS |Azure DNS |[Ensure that you have a forwarder to a public DNS or include Azure DNS in the list of custom DNS servers](migrate.md#in-place-migration-feature-limitations) |
### Scaling
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
Title: Run Azure Automation runbooks on a Hybrid Runbook Worker
description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 11/21/2023 Last updated : 02/20/2024
You can configure a Windows Hybrid Runbook Worker to run only signed runbooks.
> Once you've configured a Hybrid Runbook Worker to run only signed runbooks, unsigned runbooks fail to execute on the worker. > [!NOTE]
-> PowerShell 7.x does not support signed runbooks for Windows and Linux Hybrid Runbook Worker.
+> PowerShell 7.x does not support signed runbooks for Windows and Linux Hybrid Runbook Worker.
+ ### Create signing certificate
You will perform the following steps to complete this configuration:
* Sign a runbook > [!NOTE]
-> PowerShell 7.x does not support signed runbooks for Windows and Linux Hybrid Runbook Worker.
+> - PowerShell 7.x does not support signed runbooks for agent-based Windows and agent-based Linux Hybrid Runbook Worker.
+> - Signed PowerShell and Python runbooks aren't supported in extension-based Linux Hybrid Workers.
+ ### Create a GPG keyring and keypair
+> [!NOTE]
+> Creating a GPG keyring and keypair applies only to agent-based Hybrid Runbook Workers.
+ To create the GPG keyring and keypair, use the Hybrid Runbook Worker [nxautomation account](automation-runbook-execution.md#log-analytics-agent-for-linux). 1. Use the sudo application to sign in as the **nxautomation** account.
azure-app-configuration Enable Dynamic Configuration Dotnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-dotnet-core.md
Last updated 07/11/2023
-#Customer intent: I want to dynamically update my app to use the latest configuration data in App Configuration.
+#Customer intent: I want to dynamically update my .NET app to use the latest configuration data in App Configuration.
# Tutorial: Use dynamic configuration in a .NET app
azure-app-configuration Quickstart Dotnet App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-app.md
In this quickstart, a .NET Framework console app is used as an example, but the
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/). - An App Configuration store. [Create a store](./quickstart-azure-app-configuration-create.md#create-an-app-configuration-store).-- [Visual Studio](https://visualstudio.microsoft.com/vs)
+- [Visual Studio](https://visualstudio.microsoft.com/downloads)
- [.NET Framework 4.7.2 or later](https://dotnet.microsoft.com/download/dotnet-framework) ## Add a key-value
azure-app-configuration Quickstart Dotnet Core App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-dotnet-core-app.md
You use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to create a
export ConnectionString='connection-string-of-your-app-configuration-store' ```
- Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it's set properly.
- ### [Linux](#tab/linux) If you use Linux, run the following command:
You use the [.NET command-line interface (CLI)](/dotnet/core/tools/) to create a
export ConnectionString='connection-string-of-your-app-configuration-store' ```
- Restart the command prompt to allow the change to take effect. Print the value of the environment variable to validate that it's set properly.
- 1. Run the following command to build the console app:
azure-arc Deliver Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/deliver-extended-security-updates.md
Title: Deliver Extended Security Updates for Windows Server 2012 description: Learn how to deliver Extended Security Updates for Windows Server 2012. Previously updated : 12/13/2023 Last updated : 02/20/2024
After you provision an ESU license, you need to specify the SKU (Standard or Dat
> The provisioning of ESU licenses requires you to attest to their SA or SPLA coverage. >
-The **Licenses** tab displays Azure Arc WS2012 licenses that are available. From here you can select an existing license to apply or create a new license.
+The **Licenses** tab displays Azure Arc WS2012 licenses that are available. From here, you can select an existing license to apply or create a new license.
:::image type="content" source="media/deliver-extended-security-updates/extended-security-updates-licenses.png" alt-text="Screenshot showing existing licenses." lightbox="media/deliver-extended-security-updates/extended-security-updates-licenses.png":::
To enroll Azure Arc-enabled servers eligible for ESUs at no additional cost, fol
1. Link the tagged license (created for the production environment with cores only for the production environment servers) to your tagged non-production Azure Arc-enabled Windows Server 2012 and Windows Server 2012 R2 machines. **Do not license cores for these servers or create a new ESU license for only these servers.**
-This linking will not trigger a compliance violation or enforcement block, allowing you to extend the application of a license beyond its provisioned cores. The expectation is that the license only includes cores for production and billed servers. Any additional cores will be charged and result in over-billing.
+This linking won't trigger a compliance violation or enforcement block, allowing you to extend the application of a license beyond its provisioned cores. The expectation is that the license only includes cores for production and billed servers. Any additional cores will be charged and result in over-billing.
> [!IMPORTANT] > Adding these tags to your license will NOT make the license free or reduce the number of license cores that are chargeable. These tags allow you to link your Azure machines to existing licenses that are already configured with payable cores without needing to create any new licenses or add additional cores to your free machines.
This linking will not trigger a compliance violation or enforcement block, allow
## Upgrading from Windows Server 2012/2012 R2
-When upgrading a Windows Server 2012/2012R machine to Windows Server 2016 or above, it's not necessary to remove the Connected Machine agent from the machine. The new operating system will be visible for the machine in Azure within a few minutes of upgrade completion. Upgraded machines no longer require ESUs and are no longer eligible for them. Any ESU license associated with the machine is not automatically unlinked from the machine. See [Unlink a license](api-extended-security-updates.md#unlink-a-license) for instructions on doing so manually.
+When upgrading a Windows Server 2012/2012 R2 machine to Windows Server 2016 or above, it's not necessary to remove the Connected Machine agent from the machine. The new operating system will be visible for the machine in Azure within a few minutes of upgrade completion. Upgraded machines no longer require ESUs and are no longer eligible for them. Any ESU license associated with the machine isn't automatically unlinked from the machine. See [Unlink a license](api-extended-security-updates.md#unlink-a-license) for instructions on doing so manually.
-<!--
+## Assess WS2012 ESU patch status
-There are some scenarios in which you may be eligible to receive Extended Security Updates patches at no additional cost. Two of these scenarios supported by Azure Arc include the following:
--- [Dev/Test (Visual Studio)](/azure/devtest/offer/overview-what-is-devtest-offer-visual-studio)-- Disaster Recovery ([Entitled benefit DR instances from Software Assurance](https://www.microsoft.com/en-us/licensing/licensing-programs/software-assurance-by-benefits) or subscription only)-
-To qualify for these scenarios, you must have:
-
-1. Provisioned and activated a WS2012 Arc ESU License intended to be linked to regular Azure Arc-enabled servers running in production environments (i.e., normally billed ESU scenarios). This license should be provisioned only for billable cores, not cores that are eligible for free Extended Security Updates.
-
-1. Onboarded your Windows Server 2012 and Windows Server 2012 R2 machines to Azure Arc-enabled servers for the purpose of Dev/Test with Visual Studio subscriptions or Disaster Recovery
-
-To enroll Azure Arc-enabled servers eligible for ESUs at no additional cost, follow these steps to tag and link:
-
-1. Tag both the WS2012 Arc ESU License and the Azure Arc-enabled server with one of the following name-value pairs, corresponding to the appropriate exception:
-
- 1. Name: “ESU Usage”; Value: “WS2012 VISUAL STUDIO DEV TEST”
- 1. Name: “ESU Usage”; Value: “WS2012 DISASTER RECOVERY”
-
- In the case that you're using the ESU License for multiple exception scenarios, mark the license with the tag: Name: “ESU Usage”; Value: “WS2012 MULTIPURPOSE”
-
-1. Link the tagged license to your tagged Azure Arc-enabled Windows Server 2012 and Windows Server 2012 R2 machines. **Do not license cores for these servers**.
-
- This linking will not trigger a compliance violation or enforcement block, allowing you to extend the application of a license beyond its provisioned cores. The expectation is that the license only includes cores for production and billed servers. Any additional cores will be charged and result in over-billing.
-
-> [!NOTE]
-> The usage of these exception scenarios will be available for auditing purposes and abuse of these exceptions may result in recusal of WS2012 ESU privileges.
->
->
+To detect whether your Azure Arc-enabled servers are patched with the most recent Windows Server 2012/R2 Extended Security Updates, you can use the Azure Policy [Extended Security Updates should be installed on Windows Server 2012 Arc machines-Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetail.ReactView/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14b4e776-9fab-44b0-b53f-38d2458ea8be/version~/null/scopes~/%5B%22%2Fsubscriptions%2F4fabcc63-0ec0-4708-8a98-04b990085bf8%22%5D). This Azure Policy, powered by Machine Configuration, identifies if the server has received the most recent ESU Patches. This is observable from the Guest Assignment and Azure Policy Compliance views built into Azure portal.
azure-arc Troubleshoot Extended Security Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md
Title: How to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 01/19/2024 Last updated : 02/20/2024
If you're unable to enable this service offering, review the resource providers
- **Microsoft.Storage:** Enabling this resource provider is important for managing storage resources, which may be relevant for hybrid and on-premises scenarios.
-## ESU patches issues
+## ESU patch issues
+
+### ESU patch status
+
+To detect whether your Azure Arc-enabled servers are patched with the most recent Windows Server 2012/R2 Extended Security Updates, use Azure Update Manager or the Azure Policy [Extended Security Updates should be installed on Windows Server 2012 Arc machines-Microsoft Azure](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetail.ReactView/id/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F14b4e776-9fab-44b0-b53f-38d2458ea8be/version~/null/scopes~/%5B%22%2Fsubscriptions%2F4fabcc63-0ec0-4708-8a98-04b990085bf8%22%5D), which checks whether the most recent WS2012 ESU patches have been received. Both of these options are available at no additional cost for Azure Arc-enabled servers enrolled in WS2012 ESUs enabled by Azure Arc.
### ESU prerequisites
azure-functions Functions Bindings Timer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-timer.md
Title: Timer trigger for Azure Functions
description: Understand how to use timer triggers in Azure Functions. ms.assetid: d2f013d1-f458-42ae-baf8-1810138118ac Previously updated : 03/06/2023 Last updated : 02/19/2024 ms.devlang: csharp # ms.devlang: csharp, java, javascript, powershell, python
import azure.functions as func
app = func.FunctionApp() @app.function_name(name="mytimer")
-@app.schedule(schedule="0 */5 * * * *",
+@app.timer_trigger(schedule="0 */5 * * * *",
arg_name="mytimer", run_on_startup=True) def test_function(mytimer: func.TimerRequest) -> None:
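The schedule string in the diff above is an NCRONTAB expression with six fields, `{second} {minute} {hour} {day} {month} {day-of-week}`, so `0 */5 * * * *` fires at second 0 of every fifth minute. A minimal sketch of that interpretation, supporting only the `*`, `*/n`, and literal-integer forms rather than the full NCRONTAB grammar:

```python
def matches_minute_field(field: str, minute: int) -> bool:
    """Evaluate one NCRONTAB field against a minute value.
    Supports only '*', '*/n', and a literal integer -- a simplified subset,
    not the full NCRONTAB grammar."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return minute % int(field[2:]) == 0
    return minute == int(field)

schedule = "0 */5 * * * *"  # {second} {minute} {hour} {day} {month} {day-of-week}
second_field, minute_field = schedule.split()[:2]

# Fires at second 0 of minute 10, but not during minute 7:
fires_at_10 = second_field == "0" and matches_minute_field(minute_field, 10)
fires_at_07 = second_field == "0" and matches_minute_field(minute_field, 7)
print(fires_at_10, fires_at_07)  # True False
```

Either decorator spelling in the diff takes the same schedule string; only the decorator name changed in the v2 Python programming model.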
azure-government Azure Services In Fedramp Auditscope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compliance/azure-services-in-fedramp-auditscope.md
This article provides a detailed list of Azure, Dynamics 365, Microsoft 365, and
| [Azure Database for MySQL](../../mysql/index.yml) | &#x2705; | &#x2705; | | [Azure Database for PostgreSQL](../../postgresql/index.yml) | &#x2705; | &#x2705; | | [Azure Databricks](/azure/databricks/) **&ast;&ast;** | &#x2705; | &#x2705; |
+| [Azure Fluid Relay](../../azure-fluid-relay/index.yml) | &#x2705; | &#x2705; |
| [Azure for Education](https://azureforeducation.microsoft.com/) | &#x2705; | &#x2705; | | [Azure Information Protection](/azure/information-protection/) | &#x2705; | &#x2705; | | [Azure Kubernetes Service (AKS)](../../aks/index.yml) | &#x2705; | &#x2705; |
azure-monitor Azure Monitor Agent Data Collection Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-data-collection-endpoint.md
Azure Virtual network service tags can be used to define network access controls
| Azure Commercial |`<log-analytics-workspace-id>`.ods.opinsights.azure.com |Ingest logs data |Port 443 |Outbound|Yes | 1234a123-aa1a-123a-aaa1-a1a345aa6789.ods.opinsights.azure.com | Azure Commercial | management.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes | - | | Azure Commercial | `<virtual-machine-region-name>`.monitoring.azure.com | Only needed if sending time series data (metrics) to Azure Monitor [Custom metrics](../essentials/metrics-custom-overview.md) database | Port 443 | Outbound | Yes | westus2.monitoring.azure.com |
+| Azure Commercial | `<data-collection-endpoint>`.`<virtual-machine-region-name>`.ingest.monitor.azure.com | Only needed if sending data to Log Analytics [Custom Logs](./data-collection-text-log.md) table | Port 443 | Outbound | Yes | 275test-01li.eastus2euap-1.canary.ingest.monitor.azure.com |
| Azure Government | Replace '.com' above with '.us' | Same as above | Same as above | Same as above| Same as above | | Microsoft Azure operated by 21Vianet | Replace '.com' above with '.cn' | Same as above | Same as above | Same as above| Same as above |
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
We strongly recommended to always update to the latest version, or opt in to the
## Version details | Release Date | Release notes | Windows | Linux | |:|:|:|:|
-| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 won't install on Arc enabled servers. Fix is coming in 1.29.6.</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fix Error messages logged intended for mdsd.err went to mdsd.warn instead in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Syslog time zones incorrect: AMA now uses machine current time when AMA receives an event to populate the TimeGenerated field. The previous behavior parsed the time zone from the Syslog event which caused incorrect times if a device sent an event from a time zone different than the AMA collector machine.</li></ul> | 1.23.0 | 1.29.5 |
+| January 2024 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.5 won't install on Arc-enabled servers. **This issue was fixed in 1.29.6.**</li></ul>**Windows**<ul><li>Added support for Transport Layer Security 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use the same filter. Feature will be redeployed once memory leak is fixed.</li><li>Improved ETW event throughput rate</li></ul>**Linux**<ul><li>Fixed error messages intended for mdsd.err being logged to mdsd.warn instead (1.29.4 only). Likely error messages: "Exception while uploading to Gig-LA : ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Syslog time zones incorrect: AMA now uses machine current time when AMA receives an event to populate the TimeGenerated field. The previous behavior parsed the time zone from the Syslog event, which caused incorrect times if a device sent an event from a time zone different than the AMA collector machine.</li><li>Reduced noise generated by AMA's use of semanage when SELinux is enabled</li></ul> | 1.23.0 | 1.29.5, 1.29.6 |
| December 2023 |**Known Issues**<ul><li>The agent extension code size is beyond the deployment limit set by Arc, thus 1.29.4 won't install on Arc enabled servers. Fix is coming in 1.29.6.</li><li>Multiple IIS subscriptions causes a memory leak. feature reverted in 1.23.0.</ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing fluentbit exe to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS V1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from DCR Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in RHEL 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4| | October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Cleanup files and folders for inactive tenants in multitenant mode</li><li>AMA installer won't install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog 
reliability</li><li>Support Palo Alto CEF logs where hostname is followed by 2 spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11| | September 2023| **Windows** <ul><li>Fix issue with high CPU usage due to excessive Windows Event Logs subscription reset</li><li>Reduce fluentbit resource usage by limiting tracked files older than 3 days and limiting logging to errors only</li><li>Fix race-condition where resource_id is unavailable when agent is restarted</li><li>Fix race-condition when vm-extension provision agent (also known as GuestAgent) is issuing a disable-vm-extension command to AMA.</li><li>Update MetricExtension version to 2.2023.721.1630</li><li>Update Troubleshooter to v1.5.14 </li></ul>|1.20.0| None |
azure-monitor Azure Monitor Agent Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md
The following features and services now have an Azure Monitor Agent version (som
| : | : | : | : | | [VM insights, Service Map, and Dependency agent](../vm/vminsights-overview.md) | Migrate to Azure Monitor Agent | Generally Available | [Enable VM Insights](../vm/vminsights-enable-overview.md) | | [Container insights](../containers/container-insights-overview.md) | Migrate to Azure Monitor Agent | **Linux**: Generally available<br>**Windows**:Public preview | [Enable Container Insights](../containers/container-insights-onboard.md) |
-| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public Preview | See [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). |
+| [Microsoft Sentinel](../../sentinel/overview.md) | Migrate to Azure Monitor Agent | Public Preview | See [AMA migration for Microsoft Sentinel](../../sentinel/ama-migrate.md). Only CEF and Firewall collection remain for GA status |
| [Change Tracking and Inventory](../../automation/change-tracking/overview-monitoring-agent.md) | Migrate to Azure Monitor Agent | Generally Available | [Migration guidance from Change Tracking and inventory using Log Analytics to Change Tracking and inventory using Azure Monitoring Agent version](../../automation/change-tracking/guidance-migration-log-analytics-monitoring-agent.md) | | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally Available | [Monitor network connectivity using Azure Monitor agent with connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) | | Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally Available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) |
azure-monitor Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection.md
Last updated 11/01/2023
-# Data collection in Azure Monitor
+# Data collection in Azure Monitor
Azure Monitor has a [common data platform](../data-platform.md) that consolidates data from a variety of sources. Currently, different sources of data for Azure Monitor use different methods to deliver their data, and each typically requires different types of configuration. Get a description of the most common data sources at [Sources of monitoring data for Azure Monitor](../data-sources.md). Azure Monitor is implementing a new [ETL](/azure/architecture/data-guide/relational-data/etl)-like data collection pipeline that improves on legacy data collection methods. This process uses a common data ingestion pipeline for all data sources and provides a standard method of configuration that's more manageable and scalable than current methods. Specific advantages of the new data collection include the following:
See [Data collection transformations in Azure Monitor](data-collection-transform
The following sections describe the data collection scenarios that are currently supported using DCR and the new data ingestion pipeline. ### Azure Monitor agent+
+>[!IMPORTANT]
+>The Log Analytics agent is on a **deprecation path** and won't be supported after **August 31, 2024**. Any new data centers brought online after January 1, 2024 will not support the Log Analytics agent. If you use the Log Analytics agent to ingest data to Azure Monitor, [migrate to the new Azure Monitor agent](../agents/azure-monitor-agent-migration.md) prior to that date.
+>
The diagram below shows data collection for the [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) running on a virtual machine. In this scenario, the DCR specifies events and performance data to collect from the agent machine, a transformation to filter and modify the data after it's collected, and a Log Analytics workspace to send the transformed data. To implement this scenario, you create an association between the DCR and the agent. One agent can be associated with multiple DCRs, and one DCR can be associated with multiple agents. :::image type="content" source="media/data-collection-transformations/transformation-azure-monitor-agent.png" lightbox="media/data-collection-transformations/transformation-azure-monitor-agent.png" alt-text="Diagram showing data collection for Azure Monitor agent." border="false":::
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
Tables related to Application Insights resources also keep data for 90 days at n
## Pricing model
-The charge for maintaining archived logs is calculated based on the volume of data you archive, in GB, and the number of days for which you archive the data.
+The charge for maintaining archived logs is calculated based on the volume of data you archive, in GB, and the number of days for which you archive the data. Log data that has `_IsBillable == false` is not subject to retention or archive charges.
For more information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
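The pricing sentence above reduces to a simple product: archived volume in GB, times the number of days archived, times a per-GB rate. A sketch of that arithmetic; the rate below is hypothetical and illustrative only, not an actual Azure price, so consult the pricing page for real figures:

```python
RATE_PER_GB_PER_MONTH = 0.02  # hypothetical illustrative rate, not an actual Azure price

def archive_charge(gb: float, days: int,
                   rate_per_gb_month: float = RATE_PER_GB_PER_MONTH) -> float:
    """Pro-rate a monthly per-GB archive rate over the number of archived days."""
    return gb * (days / 30) * rate_per_gb_month

# 500 GB kept in the archive tier for 90 days at the assumed rate:
print(round(archive_charge(500, 90), 2))  # 30.0
```

Data with `_IsBillable == false` would simply be excluded from the `gb` figure before the calculation.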
azure-monitor Log Standard Columns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-standard-columns.md
Use these `union withsource = tt *` queries sparingly as scans across data types
## \_IsBillable
-The **\_IsBillable** column specifies whether ingested data is billable. Data with **\_IsBillable** equal to `false` are collected for free and not billed to your Azure account.
+The **\_IsBillable** column specifies whether ingested data is considered billable. Data with **\_IsBillable** equal to `false` does not incur data ingestion, retention or archive charges.
### Examples To get a list of computers sending billed data types, use the following query:
azure-netapp-files Understand Path Lengths https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/understand-path-lengths.md
Using `\\?\Z:` instead allows access and supports longer file paths.
### Workaround if the max path length cannot be increased
-If the max path length can't be enabled in the Windows environment or the Windows client versions are too low, there's a workaround. You can mount the SMB share deeper into the directory structure can reduce the queried path length.
+If the max path length can't be enabled in the Windows environment or the Windows client versions are too low, there's a workaround. You can mount the SMB share deeper into the directory structure and reduce the queried path length.
For example, rather than mapping `\\NAS-SHARE\AzureNetAppFiles` to `Z:`, map `\\NAS-SHARE\AzureNetAppFiles\folder1\folder2\folder3\folder4` to `Z:`.
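The saving can be sketched by comparing the path length the client has to handle in each case; the folder names mirror the example above, and the trailing file name is hypothetical:

```python
# Full relative tail under the share; the file name is a made-up example.
deep_tail = r"\folder1\folder2\folder3\folder4\reports\2024\quarterly-summary.xlsx"

# Mapped at the share root, the client resolves the whole relative path:
mapped_at_root = "Z:" + deep_tail

# Mapped four folders deep, only the tail past folder4 remains:
mapped_deep = "Z:" + r"\reports\2024\quarterly-summary.xlsx"

print(len(mapped_at_root), len(mapped_deep))
assert len(mapped_deep) < len(mapped_at_root)
```

The file's absolute location on the share is unchanged; only the client-side path that counts against the length limit shrinks.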
Rather than the name being too long, the error actually results from the charact
## Next steps
-* [Understand volume languages](understand-volume-languages.md)
+* [Understand volume languages](understand-volume-languages.md)
azure-resource-manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md
Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 02/14/2024 Last updated : 02/16/2024 # What is Azure Resource Manager?
The resource group location is where Azure Resource Manager stores metadata for
When the resource group's region is unavailable, Azure Resource Manager is unable to update your resource's metadata and blocks your write calls. By colocating your resource and resource group region, you reduce the risk of region unavailability because your resources and metadata exist in one region instead of multiple regions.
+## Resolve concurrent operations
+
+When two or more operations try to update the same resource at the same time, Azure Resource Manager detects the conflict and permits only one operation to complete successfully. Azure Resource Manager blocks the other operations and returns an error.
+
+Concurrent resource updates can cause unexpected results. This resolution ensures that your updates are deterministic and reliable. You know the status of your resources and avoid any inconsistency or data loss.
+
+Suppose you have two requests (A and B) that try to update the same resource at the same time. If request A finishes before request B, request A succeeds and request B fails. Request B returns the 409 error. After getting that error code, you can get the updated status of the resource and determine if you want to resend request B.
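A minimal sketch of that retry pattern, using a fake client in place of Azure Resource Manager; `FakeArm` and `update_with_retry` are illustrative stand-ins, not SDK APIs:

```python
CONFLICT = 409
OK = 200

class FakeArm:
    """Illustrative stand-in for Azure Resource Manager: the first write
    conflicts (409), subsequent writes succeed (200)."""
    def __init__(self):
        self.calls = 0

    def put(self, resource, body):
        self.calls += 1
        return CONFLICT if self.calls == 1 else OK

def update_with_retry(arm, resource, body, max_attempts=3):
    """Resend a conflicted update, up to max_attempts times."""
    status = CONFLICT
    for _ in range(max_attempts):
        status = arm.put(resource, body)
        if status != CONFLICT:
            break
        # On 409, a real client would GET the resource here to inspect its
        # updated state and decide whether the update is still needed.
    return status

arm = FakeArm()
print(update_with_retry(arm, "myResource", {"tags": {"env": "test"}}))  # 200 on the second attempt
```

The key step is the re-read between attempts: blindly resending could overwrite the change that won the first race.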
+ ## Next steps * To learn about limits that are applied across Azure services, see [Azure subscription and service limits, quotas, and constraints](azure-subscription-service-limits.md).
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Azure VMware Solution monitors the following conditions on the host:
## Backup and restore
-Azure VMware Solution private cloud vCenter Server, NSX-T Data Center, and HCX Manager (if enabled) configurations are on a daily backup schedule. Backups are kept for three days. If you need to restore from a backup, open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration.
+Azure VMware Solution private cloud vCenter Server, NSX-T Data Center, and HCX Manager (if enabled) configurations are on a daily backup schedule. Open a [support request](https://rc.portal.azure.com/#create/Microsoft.Support) in the Azure portal to request restoration.
+
+> [!NOTE]
+> Restorations are intended for catastrophic situations only.
Azure VMware Solution continuously monitors the health of both the physical underlay and the VMware Solution components. When Azure VMware Solution detects a failure, it takes action to repair the failed components.
azure-vmware Configure Identity Source Vcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-identity-source-vcenter.md
To verify that the certificate is valid:
1. Select **OK**.
-To export the certificate:
+#### To export the certificate
1. In the Certificates console, right-click the LDAPS certificate and select **All Tasks** > **Export**. The Certificate Export Wizard opens. Select **Next**. 1. In the **Export Private Key** section, select **No, do not export the private key**, and then select **Next**.
To remove all existing external identity sources at once, run the Remove-Externa
> [!WARNING] > If you don't provide a value for **DomainName**, all external identity sources are removed. Run the cmdlet Update-IdentitySourceCredential only after the password is rotated in the domain controller.
+## Renew existing certificates for LDAPS identity source
+
+1. Renew the existing certificates in your domain controllers.
+
+1. Optional: If the certificates are stored in the default domain controllers, leave the SSLCertificatesSasUrl parameter blank; the new certificates are downloaded from the default domain controllers and updated in vCenter automatically. If you choose not to use the default method, [export the certificate for LDAPS authentication](#to-export-the-certificate) and [upload the LDAPS certificate to blob storage and generate an SAS URL](#upload-the-ldaps-certificate-to-blob-storage-and-generate-an-sas-url-optional). Save the SAS URL for the next step.
+
+1. Select **Run command** > **Packages** > **Update-IdentitySourceCertificates**.
+
+1. Provide the required values and the new SAS URL (optional), and then select **Run**.
+
+ | **Field** | **Value** |
+ | | |
+ | **DomainName*** | The FQDN of the domain, for example **avslab.local**. |
+ | **SSLCertificatesSasUrl (optional)** | A comma-delimited list of SAS URIs to certificates for authentication. Ensure read permissions are included. To generate a SAS URL, place the certificates in any storage account blob, then right-click the certificate and generate SAS. If no value is provided for this field, the certificates are downloaded from the default domain controllers. |
+
+1. Check **Notifications** or the **Run Execution Status** pane to see the progress.
+ ## Related content - [Create a storage policy](configure-storage-policy.md)
backup Blob Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/blob-backup-support-matrix.md
Title: Support matrix for Azure Blobs backup description: Provides a summary of support settings and limitations when backing up Azure Blobs. Previously updated : 02/20/2023- Last updated : 09/18/2023+
communication-services Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/guest/capabilities.md
In this article, you will learn which capabilities are supported for Teams exter
| | Receive information of call being locally recorded | ✔️ | | | Manage Teams transcription | ❌ | | | Receive information of call being transcribed | ✔️ |
-| | Manage Teams closed captions | ❌ |
+| | Manage Teams closed captions | ✔️ |
| | Support for compliance recording | ✔️ | | | [Azure Communication Services recording](../../voice-video-calling/call-recording.md) | ❌ | | Engagement | Raise and lower hand | ✔️ | | | Indicate other participants' raised and lowered hands | ✔️ |
-| | Trigger reactions | ❌ |
-| | Indicate other participants' reactions | ❌ |
+| | Trigger reactions | ✔️ |
+| | Indicate other participants' reactions | ✔️ |
| Integrations | Control Teams third-party applications | ❌ |
-| | Receive PowerPoint Live stream | ❌ |
+| | Receive PowerPoint Live stream | ✔️ |
| | Receive Whiteboard stream | ❌ | | | Interact with a poll | ❌ | | | Interact with a Q&A | ❌ | | | Interact with a OneNote | ❌ | | | Manage SpeakerCoach | ❌ | | | [Include participant in Teams meeting attendance report](https://support.microsoft.com/office/view-and-download-meeting-attendance-reports-in-teams-ae7cf170-530c-47d3-84c1-3aedac74d310) | ✔️ |
-| Accessibility | Receive closed captions | ❌ |
+| Accessibility | Receive Teams closed captions | ✔️ |
| | Communication access real-time translation (CART) | ❌ | | | Language interpretation | ❌ | | Advanced call routing | Does meeting dial-out honor forwarding rules | ✔️ |
communication-services Phone Number Management For Canada https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-canada.md
More details on eligible subscription types are as follows:
|Canada| |Denmark| |France|
-|Germany|
|Ireland| |Italy|
-|Japan|
|Netherlands| |Puerto Rico| |Spain|
communication-services Phone Number Management For United Kingdom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-kingdom.md
More details on eligible subscription types are as follows:
|Canada| |Denmark| |France|
-|Germany|
|Ireland| |Italy|
-|Japan|
|Netherlands| |Puerto Rico| |Spain|
communication-services Phone Number Management For United States https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/phone-number-management-for-united-states.md
More details on eligible subscription types are as follows:
|Canada| |Denmark| |France|
-|Germany|
|Ireland| |Italy|
-|Japan|
|Netherlands| |Puerto Rico| |Spain|
communication-services Calling Sdk Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/calling-sdk-features.md
Previously updated : 06/30/2021 Last updated : 02/24/2024
# Calling SDK overview
-Azure Communication Services allows end-user browsers, apps, and services to drive voice and video communication. This page focuses on Calling client SDK, which can be embedded in websites and native applications. This page provides detailed descriptions of Calling client features such as platform and browser support information. Services programmatically manage and access calls using the [Call Automation APIs](../call-automation/call-automation.md). The [Rooms API](../rooms/room-concept.md) is an optional Azure Communication Services API that adds additional features to a voice or video call, such as roles and permissions.
+Azure Communication Services allows end-user browsers, apps, and services to drive voice and video communication. This page focuses on Calling client SDK, which can be embedded in websites and native applications. This page provides detailed descriptions of Calling client features such as platform and browser support information. Services programmatically manage and access calls using the [Call Automation APIs](../call-automation/call-automation.md). The [Rooms API](../rooms/room-concept.md) is an optional Azure Communication Services API that adds additional features to a voice or video call, such as roles and permissions.
[!INCLUDE [Survey Request](../../includes/survey-request.md)]
The Azure Communication Services Calling SDK supports the following streaming co
| **Maximum # of incoming remote streams that can be rendered simultaneously** | 9 videos + 1 screen sharing on desktop browsers*, 4 videos + 1 screen sharing on web mobile browsers | 9 videos + 1 screen sharing | \* Starting from Azure Communication Services Web Calling SDK version [1.16.3](https://github.com/Azure/Communication/blob/master/releasenotes/acs-javascript-calling-library-release-notes.md#1163-stable-2023-08-24)
-While the Calling SDK does not enforce these limits, your users might experience performance degradation if they're exceeded. Use the API of [Optimal Video Count](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#remote-video-quality) to determine how many current incoming video streams your web environment can support.
+While the Calling SDK doesn't enforce these limits, your users might experience performance degradation if they're exceeded. Use the [Optimal Video Count](../../how-tos/calling-sdk/manage-video.md?pivots=platform-web#remote-video-quality) API to determine how many concurrent incoming video streams your web environment can support.
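A minimal sketch in plain JavaScript (SDK calls omitted; `optimalVideoCount` here is a stand-in for the value the Optimal Video Count feature reports) of capping how many remote streams an app renders:

```javascript
// Hypothetical helper: pick which remote streams to render given the
// optimal video count reported by the SDK (stand-in value here).
function selectStreamsToRender(remoteStreams, optimalVideoCount) {
  // Render at most `optimalVideoCount` streams; this sketch prioritizes
  // the first ones (a real app might prioritize active speakers instead).
  return remoteStreams.slice(0, Math.max(0, optimalVideoCount));
}

const streams = ["alice", "bob", "carol", "dan"];
console.log(selectStreamsToRender(streams, 2)); // first two streams only
```

In a real app you would re-run this selection whenever the SDK raises its optimal-video-count-changed event.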
-## Calling SDK timeouts
+## Supported video resolutions
+The Azure Communication Services Calling SDK supports up to the following video resolutions:
+
+| Maximum video resolution | WebJS | iOS | Android | Windows |
+| - | -- | -- | - | - |
+| **Receiving video** | 1080P | 1080P | 1080P | 1080P |
+| **Sending video** | 720P | 720P | 720P | 1080P |
+The resolution can vary depending on the number of participants on a call, the amount of bandwidth available to the client, and other overall call parameters.
+
+## Calling SDK timeouts
The following timeouts apply to the Communication Services Calling SDKs: | Action | Timeout in seconds |
communication-services Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/network-requirements.md
Previously updated : 09/12/2023 Last updated : 02/19/2024
The quality of real-time media over IP is significantly affected by the quality
* **Latency**. The time it takes to get an IP packet from point A to point B on the network. This network propagation delay is determined by the physical distance between the two points and any other overhead incurred by the devices that your traffic flows through. Latency is measured as one-way or round-trip time (RTT). * **Packet loss**. A percentage of packets that are lost in a specific window of time. Packet loss directly affects audio qualityΓÇöfrom small, individual lost packets having almost no impact to back-to-back burst losses that cause complete audio cut-out.
-* **Inter-packet arrival jitter, also known as jitter**. The average change in delay between successive packets. Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering that a participant will notice its effects.
+* **Inter-packet arrival jitter, also known as jitter**. The average change in delay between successive packets. Communication Services can adapt to some levels of jitter through buffering. It's only when the jitter exceeds the buffering that a participant notices its effects.
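The jitter definition above can be sketched as a simple computation — a rough simplification (mean absolute change in delay between successive packets) rather than the full RFC 3550 estimator:

```python
def mean_jitter(delays_ms):
    """Mean absolute change in one-way delay between successive packets.

    A simplified illustration of inter-packet arrival jitter; real
    implementations (for example, RFC 3550) use a smoothed estimator.
    """
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Hypothetical one-way packet delays in milliseconds.
print(mean_jitter([40, 42, 41, 45, 44]))  # → 2.0
```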
## Network bandwidth
The following bandwidth requirements are for the JavaScript SDKs.
|500 Kbps|Peer-to-peer quality video calling 360 pixels at 30 FPS| |1.2 Mbps|Peer-to-peer HD-quality video calling with resolution of HD 720 pixels at 30 FPS| |500 Kbps|Group video calling 360 pixels at 30 FPS|
-|1.2 Mbps|HD group video calling with resolution of HD 720 pixels at 30 FPS|
+|1.2 Mbps|HD group video calling with resolution of HD 720 pixels at 30 FPS|
+|1.5 Mbps|Peer-to-peer HD-quality video calling with resolution of HD 1080 pixels at 30 FPS |
The following bandwidth requirements are for the native Windows, Android, and iOS SDKs.
The following bandwidth requirements are for the native Windows, Android, and iO
## Firewall configuration
-Communication Services connections require internet connectivity to specific ports and IP addresses to deliver high-quality multimedia experiences. Without access to these ports and IP addresses, Communication Services will not work properly. The list of IP ranges and allow listed domains that need to be enabled are:
+Communication Services connections require internet connectivity to specific ports and IP addresses to deliver high-quality multimedia experiences. Without access to these ports and IP addresses, Communication Services won't work properly. The list of IP ranges and allow listed domains that need to be enabled are:
| Category | IP ranges or FQDN | Ports | | :-- | :-- | :-- |
Communication Services connections require internet connectivity to specific por
| Signaling, telemetry, registration| *.skype.com, *.microsoft.com, *.azure.net, *.azure.com, *.office.com| TCP 443, 80 |
-The endpoints below should be reachable for U.S. Government GCC High customers only
+The endpoints below should be reachable for U.S. Government GCC High customers only.
| Category | IP ranges or FQDN | Ports | | :-- | :-- | :-- |
communication-services Simulcast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/simulcast.md
Title: Azure Communication Services Simulcast
-description: Overview of Simulcast - how sending multiple video quality streams helps overall call quality
--
+description: Overview of Simulcast - how sending multiple video quality streams helps overall call quality.
++ - Previously updated : 11/21/2022+ Last updated : 02/19/2024 + # What is Simulcast?
Simulcast streaming from a web endpoint supports a maximum of two video qualities.
## Available video resolutions When streaming with simulcast, there are no set resolutions for high or low quality simulcast video streams. Instead, based on many different variables, either a single or multiple video streams are delivered. If every subscriber to video requests and is capable of receiving the maximum resolution that the publisher can provide, only that maximum resolution is sent. The following resolutions are supported:
+- 1080P
- 720p - 540p - 360p - 240p-- 180p
+- 180p
communication-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/managed-identity.md
public async Task CreateResourceWithSystemAssignedManagedIdentity()
DataLocation = "UnitedStates", Identity = identity };
- var communicationServiceLro = await collection.GetCommunicationServiceResources().CreateOrUpdateAsync(WaitUntil.Completed, communicationServiceName, data);
+ var communicationServiceLro = await collection.CreateOrUpdateAsync(WaitUntil.Completed, communicationServiceName, data);
var resource = communicationServiceLro.Value; } ```
communication-services Get Phone Number https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/telephony/get-phone-number.md
Title: Quickstart - Get and manage phone numbers using Azure Communication Services
-description: Learn how to manage phone numbers using Azure Communication Services
+description: Learn how to manage phone numbers using Azure Communication Services.
-zone_pivot_groups: acs-azcli-azp-java-net-python-csharp-js
+zone_pivot_groups: acs-azcli-azp-azpnew-java-net-python-csharp-js
# Quickstart: Get and manage phone numbers
zone_pivot_groups: acs-azcli-azp-java-net-python-csharp-js
[!INCLUDE [Azure portal](./includes/phone-numbers-portal.md)] ::: zone-end + ::: zone pivot="programming-language-csharp" [!INCLUDE [Azure portal](./includes/phone-numbers-net.md)] ::: zone-end
zone_pivot_groups: acs-azcli-azp-java-net-python-csharp-js
Common Questions and Issues: -- When a phone number is released, the phone number will not be released or able to be repurchased until the end of the billing cycle.
+- When a phone number is released, the phone number shows up in your ACS resource in the Azure portal until the end of the billing cycle. It also can't be repurchased until the end of the billing cycle.
-- When a Communication Services resource is deleted, the phone numbers associated with that resource will be automatically released at the same time.
+- When a Communication Services resource is deleted, the phone numbers associated with that resource are automatically released at the same time.
## Next steps
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
Confidential VMs support the following VM sizes:
### OS support Confidential VMs support the following OS options:
-| Linux | Windows | Windows |
-||--|-|
-| **Ubuntu** | **Windows 11** | **Windows Server Datacenter** |
-| 20.04 <span class="pill purple">LTS</span> (SEV-SNP Only) | 22H2 Pro | 2019 |
-| 22.04 <span class="pill purple">LTS</span> | 22H2 Pro <span class="pill red">ZH-CN</span> | 2019 Server Core |
-| | 22H2 Pro N | |
-| **RHEL** | 22H2 Enterprise | 2022 |
-| 9.2 <span class="pill purple">Tech Preview (SEV-SNP Only)</span> | 22H2 Enterprise N | 2022 Server Core |
-| 9.3 (SEV-SNP Only) | 22H2 Enterprise Multi-session | 2022 Azure Edition |
-| | | 2022 Azure Edition Core |
+| Linux | Windows Client | Windows Server |
+||--|-|
+| **Ubuntu** | **Windows 11** | **Windows Server Datacenter** |
+| 20.04 <span class="pill purple">LTS</span> (AMD SEV-SNP Only) | 22H2 Pro | 2019 Server Core |
+| 22.04 <span class="pill purple">LTS</span> | 22H2 Pro <span class="pill red">ZH-CN</span> | |
+| | 22H2 Pro N | 2022 Server Core |
+| **RHEL** | 22H2 Enterprise | 2022 Azure Edition |
+| 9.3 <span class="pill purple">(AMD SEV-SNP Only)</span> | 22H2 Enterprise N | 2022 Azure Edition Core |
+| [9.3 <span class="pill purple">Preview (Intel TDX Only)](https://aka.ms/tdx-rhel-93-preview)</span> | 22H2 Enterprise Multi-session | |
+| | | |
+| **SUSE** | | |
+| [15 SP5 <span class="pill purple">Tech Preview (Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)</span> | | |
+| [15 SP5 for SAP <span class="pill purple">Tech Preview (Intel TDX, AMD SEV-SNP)](https://aka.ms/cvm-sles-preview)</span> | | |
### Regions
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
When creating or updating your Azure Cosmos DB account using Azure Resource Mana
This section includes frequently asked questions about role-based access control and Azure Cosmos DB.
-### Which Azure Cosmos DB APIs support role-based access control?
+### Which Azure Cosmos DB APIs support data-plane role-based access control?
-The API for NoSQL is supported. Support for the API for MongoDB is in preview.
+As of now, only the NoSQL API is supported.
### Is it possible to manage role definitions and role assignments from the Azure portal?
cost-management-billing Customize Cost Analysis Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/customize-cost-analysis-views.md
If you'd like to view a running total of charges on either a daily or monthly ba
If you'd like to view the total for the entire period (no granularity), select **None**. Selecting no granularity is helpful when grouping costs by a specific attribute in either a chart or table.
+| Granularity | Description |
+|-|-|
+| None | Shows the total cost for the entire date range. |
+| Daily | Shows cost per day (UTC). |
+| Monthly | Shows cost per calendar month (UTC). |
+| Accumulated | Shows the running total for each day including the total of all previous days in the selected date range. |
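To make the relationship between the granularities concrete, here's a minimal sketch (with hypothetical daily costs) showing that accumulated granularity is the running total of the daily values, and that its last entry equals the period total shown with no granularity:

```python
from itertools import accumulate

# Hypothetical daily costs (USD) for a five-day date range.
daily_costs = [12.0, 8.5, 0.0, 15.25, 9.75]

# "Accumulated" granularity: each day includes the total of all previous days.
accumulated = list(accumulate(daily_costs))

# The final accumulated value equals the period total ("None" granularity).
print(accumulated)      # → [12.0, 20.5, 20.5, 35.75, 45.5]
print(accumulated[-1])  # → 45.5
```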
+ ## Visualize costs in a chart Cost analysis supports the following chart types:
You can view the full dataset for any view. Whichever selections or filters that
## Next steps -- Learn about [Saving and sharing customized views](save-share-views.md).
+- Learn about [Saving and sharing customized views](save-share-views.md).
data-factory Concepts Integration Runtime Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-integration-runtime-performance.md
If your data flow has many joins and lookups, you may want to use a **memory opt
## Cluster size
-Data flows distribute the data processing over different nodes in a Spark cluster to perform operations in parallel. A Spark cluster with more cores increases the number of nodes in the compute environment. More nodes increase the processing power of the data flow. Increasing the size of the cluster is often an easy way to reduce the processing time.
+Data flows distribute the data processing over different cores in a Spark cluster to perform operations in parallel. A Spark cluster with more cores provides more processing power for the data flow, so increasing the size of the cluster is often an easy way to reduce the processing time.
-The default cluster size is four driver nodes and four worker nodes (small). As you process more data, larger clusters are recommended. Below are the possible sizing options:
+The default cluster size is four driver cores and four worker cores (small). As you process more data, larger clusters are recommended. Below are the possible sizing options:
-| Worker Nodes | Driver Nodes | Total Nodes | Notes |
+| Worker Cores | Driver Cores | Total Cores | Notes |
| | | -- | -- | | 4 | 4 | 8 | Small | | 8 | 8 | 16 | Medium |
The default cluster size is four driver nodes and four worker nodes (small). As
Data flows are priced at vcore-hours, meaning that both cluster size and execution time factor into the cost. As you scale up, your cluster cost per minute increases, but your overall time decreases. > [!TIP]
-> There is a ceiling on how much the size of a cluster affects the performance of a data flow. Depending on the size of your data, there is a point where increasing the size of a cluster will stop improving performance. For example, If you have more nodes than partitions of data, adding additional nodes won't help.
+> There is a ceiling on how much the size of a cluster affects the performance of a data flow. Depending on the size of your data, there is a point where increasing the size of a cluster stops improving performance. For example, if you have more cores than partitions of data, adding additional cores won't help.
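To illustrate the vcore-hours trade-off (with a hypothetical per-vcore-hour rate, not actual Azure pricing): if doubling the total cores halves the runtime, the cost is unchanged; if runtime improves less than proportionally, the cost goes up:

```python
# Illustrative only: hypothetical per-vcore-hour rate, not actual
# Azure Data Factory pricing.
RATE_PER_VCORE_HOUR = 0.25  # assumed price in USD

def data_flow_cost(total_cores: int, runtime_minutes: float) -> float:
    """Cost = total cores x runtime in hours x rate per vcore-hour."""
    return total_cores * (runtime_minutes / 60) * RATE_PER_VCORE_HOUR

small = data_flow_cost(8, 60)    # 8 total cores running for 60 minutes
medium = data_flow_cost(16, 30)  # 16 total cores, runtime halved
print(small, medium)  # → 2.0 2.0 (same cost, half the wall-clock time)
```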
A best practice is to start small and scale up to meet your performance needs. ## Custom shuffle partition
defender-for-cloud Azure Devops Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/azure-devops-extension.md
If you don't have access to install the extension, you must request access from
# command: 'run' | 'pre-job' | 'post-job'. Optional. The command to run. Default: run # config: string. Optional. A file path to an MSDO configuration file ('*.gdnconfig'). # policy: 'azuredevops' | 'microsoft' | 'none'. Optional. The name of a well-known Microsoft policy. If no configuration file or list of tools is provided, the policy may instruct MSDO which tools to run. Default: azuredevops.
- # categories: string. Optional. A comma-separated list of analyzer categories to run. Values: 'secrets', 'code', 'artifacts', 'IaC', 'containers. Example: 'IaC,secrets'. Defaults to all.
+ # categories: string. Optional. A comma-separated list of analyzer categories to run. Values: 'code', 'artifacts', 'IaC', 'containers'. Example: 'IaC, containers'. Defaults to all.
# languages: string. Optional. A comma-separated list of languages to analyze. Example: 'javascript,typescript'. Defaults to all. # tools: string. Optional. A comma-separated list of analyzer tools to run. Values: 'bandit', 'binskim', 'eslint', 'templateanalyzer', 'terrascan', 'trivy'. # break: boolean. Optional. If true, will fail this build step if any error level results are found. Default: false.
defender-for-cloud Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/github-action.md
Microsoft Security DevOps uses the following Open Source tools:
with: # config: string. Optional. A file path to an MSDO configuration file ('*.gdnconfig'). # policy: 'GitHub' | 'microsoft' | 'none'. Optional. The name of a well-known Microsoft policy. If no configuration file or list of tools is provided, the policy may instruct MSDO which tools to run. Default: GitHub.
- # categories: string. Optional. A comma-separated list of analyzer categories to run. Values: 'secrets', 'code', 'artifacts', 'IaC', 'containers. Example: 'IaC,secrets'. Defaults to all.
+ # categories: string. Optional. A comma-separated list of analyzer categories to run. Values: 'code', 'artifacts', 'IaC', 'containers'. Example: 'IaC, containers'. Defaults to all.
# languages: string. Optional. A comma-separated list of languages to analyze. Example: 'javascript,typescript'. Defaults to all. # tools: string. Optional. A comma-separated list of analyzer tools to run. Values: 'bandit', 'binskim', 'eslint', 'templateanalyzer', 'terrascan', 'trivy'.
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
If you're looking for items older than six months, you can find them in the [Arc
|Date | Update | |-|-|
+| February 20 | [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) |
| February 18| [Open Container Initiative (OCI) image format specification support](#open-container-initiative-oci-image-format-specification-support) | | February 13 | [AWS container vulnerability assessment powered by Trivy retired](#aws-container-vulnerability-assessment-powered-by-trivy-retired) | | February 8 | [Recommendations released for preview: four recommendations for Azure Stack HCI resource type](#recommendations-released-for-preview-four-recommendations-for-azure-stack-hci-resource-type) |
-### Open Container Initiative (OCI) image format specification support
+### New version of Defender Agent for Defender for Containers
+
+February 20, 2024
+
+[A new version](/azure/aks/supported-kubernetes-versions#aks-kubernetes-release-calendar) of the [Defender Agent for Defender for Containers](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure) is available. It includes performance and security improvements, support for both AMD64 and ARM64 arch nodes (Linux only), and uses [Inspektor Gadget](https://www.inspektor-gadget.io/) as the process collection agent instead of Sysdig. The new version is only supported on Linux kernel versions 5.4 and higher, so if you have older versions of the Linux kernel, you need to upgrade. Support for ARM 64 is only available from AKS V1.29 and above. For more information, see [Supported host operating systems](support-matrix-defender-for-containers.md#supported-host-operating-systems).
+
+### Open Container Initiative (OCI) image format specification support
February 18, 2024 The [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification is now supported by vulnerability assessment, powered by Microsoft Defender Vulnerability Management for AWS, Azure & GCP clouds. - ### AWS container vulnerability assessment powered by Trivy retired February 13, 2024
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important upcoming changes description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan. Previously updated : 02/18/2024 Last updated : 02/20/2024 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you can find them in the [What's
| Planned change | Announcement date | Estimated date for change | |--|--|--|
+| [Update recommendations to align with Azure AI Services resources](#update-recommendations-to-align-with-azure-ai-services-resources) | February 20, 2024 | February 28, 2024 |
| [Deprecation of data recommendation](#deprecation-of-data-recommendation) | February 12, 2024 | March 14, 2024 | | [Decommissioning of Microsoft.SecurityDevOps resource provider](#decommissioning-of-microsoftsecuritydevops-resource-provider) | February 5, 2024 | March 6, 2024 | | [Changes in endpoint protection recommendations](#changes-in-endpoint-protection-recommendations) | February 1, 2024 | February 28, 2024 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecation of two recommendations related to PCI](#deprecation-of-two-recommendations-related-to-pci) |January 14, 2024 | February 2024 | | [Defender for Servers built-in vulnerability assessment (Qualys) retirement path](#defender-for-servers-built-in-vulnerability-assessment-qualys-retirement-path) | January 9, 2024 | May 2024 | | [Retirement of the Defender for Cloud Containers Vulnerability Assessment powered by Qualys](#retirement-of-the-defender-for-cloud-containers-vulnerability-assessment-powered-by-qualys) | January 9, 2023 | March 2024 |
-| [New version of Defender Agent for Defender for Containers](#new-version-of-defender-agent-for-defender-for-containers) | January 4, 2024 | February 2024 |
| [Upcoming change for the Defender for CloudΓÇÖs multicloud network requirements](#upcoming-change-for-the-defender-for-clouds-multicloud-network-requirements) | January 3, 2024 | May 2024 | | [Deprecation of two DevOps security recommendations](#deprecation-of-two-devops-security-recommendations) | November 30, 2023 | January 2024 | | [Consolidation of Defender for Cloud's Service Level 2 names](#consolidation-of-defender-for-clouds-service-level-2-names) | November 1, 2023 | December 2023 |
If you're looking for the latest release notes, you can find them in the [What's
| [Deprecating two security incidents](#deprecating-two-security-incidents) | | November 2023 | | [Defender for Cloud plan and strategy for the Log Analytics agent deprecation](#defender-for-cloud-plan-and-strategy-for-the-log-analytics-agent-deprecation) | | August 2024 |
+## Update recommendations to align with Azure AI Services resources
+
+**Announcement date: February 20, 2024**
+
+**Estimated date of change: February 28, 2024**
+
+The Azure AI Services category (formerly known as Cognitive Services) is adding new resource types. As a result, the following recommendations and related policy are set to be updated to comply with the new Azure AI Services naming format and align with the relevant resources.
+
+| Current Recommendation | Updated Recommendation |
+| - | - |
+| Cognitive Services accounts should restrict network access | [Azure AI Services resources should restrict network access](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/f738efb8-005f-680d-3d43-b3db762d6243) |
+| Cognitive Services accounts should have local authentication methods disabled | [Azure AI Services resources should have key access disabled (disable local authentication)](https://ms.portal.azure.com/#view/Microsoft_Azure_Security/GenericRecommendationDetailsBlade/assessmentKey/13b10b36-aa99-4db6-b00c-dcf87c4761e6) |
+
+See the [list of security recommendations](recommendations-reference.md).
+ ## Deprecation of data recommendation **Announcement date: February 12, 2024**
Customers that are still using the API version **2022-09-01-preview** under `Mic
Customers currently using Defender for Cloud DevOps security from Azure portal won't be impacted.
-For details on the new API version, see [Microsoft Defender for Cloud REST APIs](/rest/api/defenderforcloud/operation-groups?view=rest-defenderforcloud-2023-09-01-preview).
+For details on the new API version, see [Microsoft Defender for Cloud REST APIs](/rest/api/defenderforcloud/operation-groups).
## Changes in endpoint protection recommendations
For more information about transitioning to our new container vulnerability asse
For common questions about the transition to Microsoft Defender Vulnerability Management, see [Common questions about the Microsoft Defender Vulnerability Management solution](common-questions-microsoft-defender-vulnerability-management.md).
-## New version of Defender Agent for Defender for Containers
-
-**Announcement date: January 4, 2024**
-
-**Estimated date for change: February 2024**
-
-A new version of the [Defender Agent for Defender for Containers](tutorial-enable-containers-azure.md#deploy-the-defender-agent-in-azure) will be released in February 2024. It includes performance and security improvements, support for both AMD64 and ARM64 arch nodes (Linux only), and uses [Inspektor Gadget](https://www.inspektor-gadget.io/) as the process collection agent instead of Sysdig. The new version is only supported on Linux kernel versions 5.4 and higher, so if you have older versions of the Linux kernel, you'll need to upgrade. For more information, see [Supported host operating systems](support-matrix-defender-for-containers.md#supported-host-operating-systems).
- ## Upcoming change for the Defender for CloudΓÇÖs multicloud network requirements **Announcement date: January 3, 2024**
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
To create and configure a dev center in Azure Deployment Environments by using t
|**Resource group**|Either use an existing resource group or select **Create new** and enter a name for the resource group.| |**Name**|Enter a name for the dev center.| |**Location**|Select the location or region where you want to create the dev center.|
+ |**Attach a quick start catalog**|Clear the **Dev box customization tasks** checkbox. </br> Clear the **Azure deployment environment definitions** checkbox.|
1. Select **Review + Create**.
dev-box Concept Dev Box Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/concept-dev-box-concepts.md
A dev center is a collection of [Projects](#project) that require similar settin
[Azure Deployment Environments](../deployment-environments/concept-environments-key-concepts.md#dev-centers) also uses dev centers to organize resources. An organization can use the same dev center for both services.
+## Catalogs
+
+The Dev Box quick start catalog contains tasks and scripts that you can use to configure your dev box during the final stage of the creation process. Microsoft provides a [*quick start* catalog](https://github.com/microsoft/devcenter-catalog) that contains a set of sample tasks. You can attach the quick start catalog to a dev center to make these tasks available to all the projects associated with the dev center. You can modify the sample tasks to suit your needs, and you can create your own catalog of tasks.
+
+To learn how to create reusable customization tasks, see [Create reusable dev box customizations](./how-to-customize-dev-box-setup-tasks.md).
+ ## Project In Dev Box, a project represents a team or business function within the organization. Each project is a collection of [pools](#dev-box-pool), and each pool represents a region or workload. When you associate a project with a dev center, all the settings at the dev center level are applied to the project automatically.
When you're creating a network connection, you must choose the Active Directory
To learn more about native Microsoft Entra join and Microsoft Entra hybrid join, see [Plan your Microsoft Entra device deployment](../active-directory/devices/plan-device-deployment.md). - ## Azure regions for Dev Box Before setting up Dev Box, you need to choose the best regions for your organization.
dev-box How To Customize Dev Box Setup Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-customize-dev-box-setup-tasks.md
+
+ Title: Customize your dev box with setup tasks
+
+description: Customize your dev box by using a catalog of setup tasks and a configuration file to install software, configure settings, and more.
++++ Last updated : 02/14/2024+
+#customer intent: As a platform engineer, I want to be able to complete configuration tasks on my dev boxes, so that my developers have the environment they need as soon as they start using their dev box.
+++
+# Create reusable dev box customizations
+
+In this article, you learn how to customize dev boxes by using a catalog of setup tasks and a configuration file to install software, configure settings, and more. These tasks are applied to the new dev box in the final stage of the creation process. Microsoft Dev Box customization is a config-as-code approach to customizing dev boxes. You can add other settings and software without having to create a custom virtual machine (VM) image.
+
+By using customizations, you can automate common setup steps, save time, and reduce the chance of configuration errors. Some example setup tasks include:
+
+- Installing software with the WinGet or Chocolatey package managers.
+- Setting OS settings like enabling Windows Features.
+- Configuring applications like installing Visual Studio extensions.
+
+You can implement customizations in stages, building from a simple but functional configuration to an automated process. The stages are as follows:
+
+1. [Create a customized dev box by using an example configuration file](#create-a-customized-dev-box-by-using-an-example-configuration-file)
+1. [Write a configuration file](#write-a-configuration-file)
+1. [Share a configuration file from a code repository](#share-a-configuration-file-from-a-code-repository)
+1. [Define new tasks in a catalog](#define-new-tasks-in-a-catalog)
+
+> [!IMPORTANT]
+> Customizations in Microsoft Dev Box are currently in PREVIEW.
+> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+
+### Team-specific customization scenarios
+
+Customizations are useful wherever you need to configure settings, install software, add extensions, or set common OS settings like enabling Windows Features on your dev boxes during the final stage of creation. Development team leads can use customizations to preconfigure the software required for their specific teams, authoring configuration files that apply only the setup tasks relevant to them. This method lets developers create their own dev boxes that best fit their work, without needing to ask IT for changes or wait for the engineering team to create a custom VM image.
+
+### What are tasks?
+
+A task performs a specific action, like installing software. Each task consists of one or more PowerShell scripts, along with a *task.yaml* file that provides parameters and defines how the scripts run. You can also include a PowerShell command in the task.yaml file. You can store a collection of curated setup tasks in a catalog attached to your dev center, with each task in a separate folder. Dev Box supports using a GitHub repository or an Azure DevOps repository as a catalog, and scans a specified folder of the catalog recursively to find task definitions.
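As an illustrative sketch only, a task definition might look like the following. The field names here are assumptions, not the authoritative schema; see the quick start catalog for real task definitions.

```yaml
# Hypothetical task.yaml: field names are illustrative only. The
# authoritative schema is defined by the tasks in the quick start catalog.
name: choco
description: Installs a package with the Chocolatey package manager
parameters:
  package:
    type: string
    description: Name of the Chocolatey package to install
command: ".\\choco.ps1 -Package {{package}}"
```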
+
+Microsoft provides a quick start catalog to help you get started with customizations. It includes a default set of tasks that define common setup tasks:
+
+- Installing software with the WinGet or Chocolatey package managers
+- Cloning a repository by using git-clone
+- Configuring applications like installing Visual Studio extensions
+- Running PowerShell scripts
+
+The following example shows a catalog with choco, git-clone, install-vs-extension, and PowerShell tasks defined. Notice that each folder contains a task.yaml file and at least one PowerShell script. The task.yaml file defines the input parameters and scripts needed to reference the task from configuration files.
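As an illustration, that layout can be scaffolded in a shell like this (the task and file names are hypothetical):

```shell
# Hypothetical catalog layout: each task lives in its own folder containing
# a task.yaml definition and at least one PowerShell script.
mkdir -p Tasks/choco Tasks/git-clone Tasks/powershell
touch Tasks/choco/task.yaml Tasks/choco/choco.ps1
touch Tasks/git-clone/task.yaml Tasks/git-clone/git-clone.ps1
touch Tasks/powershell/task.yaml Tasks/powershell/powershell.ps1
# List the scaffolded files
find Tasks -type f | sort
```

Dev Box scans the specified catalog folder recursively, so nested task folders like these are all discovered.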
++
+### What is a configuration file?
+
+Dev Box customizations use a YAML-formatted file to specify a list of tasks to apply from the catalog when creating a new dev box. Each configuration file includes one or more `tasks` entries, which identify a catalog task and provide parameters like the name of the software to install. The configuration file is then made available to the developers creating new dev boxes. The following example uses a `winget` task to install Visual Studio Code, and a `git-clone` task to clone a repository.
+
+```yaml
+# From https://github.com/microsoft/devcenter-examples
+$schema: 1.0
+tasks:
+ - name: winget
+ parameters:
+ package: Microsoft.VisualStudioCode
+ runAsUser: true
+ - name: git-clone
+ description: Clone this repository into C:\Workspaces
+ parameters:
+ repositoryUrl: https://github.com/OrchardCMS/OrchardCore.git
+ directory: C:\Workspaces
+```
+
+### Permissions required to configure Microsoft Dev Box for customizations
+
+To perform the actions required to create and apply customizations to a dev box, you need certain permissions. The following table describes the actions and permissions or roles you need to configure customizations.
+
+|Action |Permission / Role |
+|||
+|Attach a catalog to a dev center |Platform engineer with Contributor permission to the dev center. |
+|Use the developer portal to upload and apply a YAML file during dev box creation | Dev Box User |
+|Create a configuration file | Anyone can create a configuration file. |
+|Add tasks to a catalog | Permission to add to the repository hosting the catalog. |
+
+## Prerequisites
+
+To complete the steps in this article, you must have a [dev center configured with a dev box definition, dev box pool, and dev box project](./quickstart-configure-dev-box-service.md).
+
+## Create a customized dev box by using an example configuration file
+
+Use the default quick start catalog and an example configuration file to get started with customizations.
+
+### Attach the quick start catalog
+
+Attaching a catalog with customization tasks to a dev center means you can create a dev box in that dev center and reference the customization tasks from that catalog. Microsoft provides a sample repository on GitHub with a standard set of default tasks to help you get started, known as the [*quick start catalog*](https://github.com/microsoft/devcenter-catalog).
+
+To attach the quick start catalog to the dev center:
+
+1. Sign in to the [Azure portal](https://portal.azure.com), and navigate to your dev center.
+1. In the left menu under **Environment configuration**, select **Catalogs**, and then select **Add**.
+1. In **Add catalog**, select **Dev box customization tasks** as the quick start catalog. Then, select **Add**.
+1. On the **Catalogs** page, verify that your catalog appears.
+
+ :::image type="content" source="media/how-to-customize-dev-box-setup-tasks/add-quick-start-catalog.png" alt-text="Screenshot of the Azure portal showing the Add catalog pane with Microsoft's quick start catalog and Dev box customization tasks highlighted.":::
+
+ If the connection is successful, the **Status** is displayed as **Sync successful**.
+
+### Create your customized dev box
+
+Now that you have a catalog that defines the tasks your developers can use, you can reference those tasks from a configuration file and create a customized dev box.
+
+1. Download an [example YAML configuration file from the samples repository](https://aka.ms/devbox/customizations/samplefile). This example configuration installs Visual Studio Code, and clones the OrchardCore .NET web app repository to your dev box.
+1. Sign in to the [Microsoft Dev Box developer portal](https://aka.ms/devbox-portal).
+1. Select **New** > **Dev Box**.
+1. In **Add a dev box**, enter the following values:
+
+ | Setting | Value |
+ |||
+ | **Name** | Enter a name for your dev box. Dev box names must be unique within a project. |
+ | **Project** | Select a project from the dropdown list. |
    | **Dev box pool** | Select a pool from the dropdown list, which includes all the dev box pools for that project. Choose a dev box pool near you for the lowest latency.|
+ | **Uploaded customization files** | Select **Upload a customization file** and upload the configuration file you downloaded in step 1. |
+
+ :::image type="content" source="media/how-to-customize-dev-box-setup-tasks/developer-portal-customization-upload.png" alt-text="Screenshot showing the dev box customization options in the developer portal with Uploaded customization files highlighted." lightbox="media/how-to-customize-dev-box-setup-tasks/developer-portal-customization-upload.png":::
+
+1. Select **Create**.
+
+When the creation process is complete, the new dev box has Visual Studio Code installed and the OrchardCore repository cloned.
+
+For more examples, see the [dev center examples repository on GitHub](https://github.com/microsoft/devcenter-examples).
++
+## Write a configuration file
+
+You can define new tasks to apply to your dev boxes by creating your own configuration file. You can test your configuration file in Visual Studio Code and make any required changes without the need to create a separate dev box for each test.
+
+Before you can create and test your own configuration file, a catalog that contains tasks must be attached to the dev center. You can use a Visual Studio Code extension to discover the tasks in the attached catalog.
+
+1. Create a dev box (or use an existing dev box) for testing.
+1. On the test dev box, install Visual Studio Code and then install the [Dev Box v1.2.2 VS Code extension](https://aka.ms/devbox/preview/customizations/vsc-extension).
+1. Download an [example yaml configuration file](https://aka.ms/devbox/customizations/samplefile) from the samples repository and open it in Visual Studio Code.
+1. Discover tasks available in the catalog by using the command palette. From **View** > **Command Palette**, select **Dev Box: List available tasks for this dev box**.
+
+ :::image type="content" source="media/how-to-customize-dev-box-setup-tasks/dev-box-command-list-tasks.png" alt-text="Screenshot of Visual Studio Code showing the command palette with Dev Box List available tasks for this dev box highlighted." lightbox="media/how-to-customize-dev-box-setup-tasks/dev-box-command-list-tasks.png":::
+
+1. Test the configuration file in Visual Studio Code. From **View** > **Command Palette**, select **Dev Box: Apply customizations tasks**.
+
+ :::image type="content" source="media/how-to-customize-dev-box-setup-tasks/dev-box-command-apply-tasks.png" alt-text="Screenshot of Visual Studio Code showing the command palette with Dev Box Apply customizations tasks highlighted." lightbox="media/how-to-customize-dev-box-setup-tasks/dev-box-command-apply-tasks.png":::
+
+1. The configuration file runs immediately, applying the specified tasks to your test dev box. Inspect the changes and check the Visual Studio Code terminal for any errors or warnings generated during the task execution.
+1. When the configuration file runs successfully, share it with developers to upload when they create a new dev box.
+
+> [!NOTE]
+> The ability to create and upload a file isn't a security risk; the uploaded file can only apply settings defined in the catalog attached to the dev center. If a task isn't defined there, the developer gets an error saying the task isn't defined.
++
+## Share a configuration file from a code repository
+
+Make your configuration file readily available to your developers by naming it *workload.yaml* and uploading it to a repository they can access, usually their coding repository. When you create a dev box, you specify the repository URL, and the configuration file is cloned along with the rest of the repository. Dev Box searches the repository for a file named workload.yaml and, if one is found, performs the tasks listed. This approach provides a seamless way to apply customizations to a dev box.
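As a sketch, a minimal *workload.yaml* follows the same shape as the example configuration shown earlier (the package name is illustrative):

```yaml
# Illustrative workload.yaml: references tasks defined in the catalog
# attached to the dev center.
$schema: 1.0
tasks:
  - name: winget
    parameters:
      package: Microsoft.VisualStudioCode
      runAsUser: true
```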
+
+1. Create a configuration file named *workload.yaml*.
+1. Add the configuration file to the root of a private Azure DevOps repository with your code and commit it.
+1. Sign in to the [Microsoft Dev Box developer portal](https://aka.ms/devbox-portal).
+1. Select **New** > **Dev Box**.
+1. In **Add a dev box**, enter the following values:
+
+ | Setting | Value |
+ |||
+ | **Name** | Enter a name for your dev box. Dev box names must be unique within a project. |
+ | **Project** | Select a project from the dropdown list. |
    | **Dev box pool** | Select a pool from the dropdown list, which includes all the dev box pools for that project. Choose a dev box pool near you for the lowest latency.|
+ | **Repository clone URL** | Enter the URL for the repository that contains the configuration file and your code. |
+
+ :::image type="content" source="media/how-to-customize-dev-box-setup-tasks/developer-portal-customization-clone.png" alt-text="Screenshot showing the dev box customization options in the developer portal with Repository clone URL highlighted." lightbox="media/how-to-customize-dev-box-setup-tasks/developer-portal-customization-clone.png":::
+
+1. Select **Create**.
+
+The new dev box has the repository cloned, and all instructions from the configuration file applied.
+
+## Define new tasks in a catalog
+
+Creating new tasks in a catalog allows you to create customizations tailored to your development teams and add guardrails around the configurations that are possible.
+
+1. Create a repository to store your tasks.
+
+ Optionally, you can make a copy of the [quick start catalog](https://github.com/microsoft/devcenter-catalog) in your own repository to use as a starting point.
+
+1. Create tasks in your repository by modifying existing PowerShell scripts, or creating new scripts.
+
+ To get started with creating tasks, you can use the examples given in the [dev center examples repository on GitHub](https://github.com/microsoft/devcenter-examples) and [PowerShell documentation](/powershell/).
+
+1. [Attach your repository to your dev center as a catalog](/azure/deployment-environments/how-to-configure-catalog?tabs=DevOpsRepoMSI).
+
+1. Create a configuration file for those tasks by following the steps in [Write a configuration file](#write-a-configuration-file).
+
+## Related content
+
+- [Add and configure a catalog from GitHub or Azure DevOps](/azure/deployment-environments/how-to-configure-catalog?tabs=DevOpsRepoMSI)
+- [Accelerate developer onboarding with the configuration-as-code customization in Microsoft Dev Box](https://techcommunity.microsoft.com/t5/azure-developer-community-blog/accelerate-developer-onboarding-with-the-configuration-as-code/ba-p/4062416)
+
dev-box How To Manage Dev Center https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-center.md
To create a dev center in the Azure portal:
| **ResourceGroup** | Select an existing resource group, or select **Create new** and then enter a name for the new resource group. | | **Name** | Enter a name for your dev center. | | **Location** | Select the location or region where you want the dev center to be created. |
+ | **Attach a quick start catalog** | Clear both checkboxes. |
- :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center-basics.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a dev center." lightbox="./media/how-to-manage-dev-center/create-dev-center-basics.png":::
+ :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center-basics-not-selected.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a dev center." lightbox="./media/how-to-manage-dev-center/create-dev-center-basics-not-selected.png":::
For a list of the currently supported Azure locations with capacity, see [Frequently asked questions about Microsoft Dev Box](https://aka.ms/devbox_acom).
+ The Dev Box quick start catalog contains tasks and scripts that you can use to configure your dev box during the final stage of the creation process. You can attach a quick start catalog to a dev center later. For more information, see [Create reusable dev box customizations](./how-to-customize-dev-box-setup-tasks.md).
+ 1. (Optional) On the **Tags** tab, enter a name/value pair that you want to assign. :::image type="content" source="./media/how-to-manage-dev-center/create-dev-center-tags.png" alt-text="Screenshot that shows the Tags tab on the page for creating a dev center." lightbox="./media/how-to-manage-dev-center/create-dev-center-tags.png":::
dev-box Overview What Is Microsoft Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/overview-what-is-microsoft-dev-box.md
After a developer team lead is assigned the DevCenter Project Admin role, they c
- Create dev box pools and add appropriate dev box definitions. - Control costs by using auto-stop schedules.
+- Use a configuration script that invokes setup tasks from a catalog attached to the dev center. The setup tasks execute during the creation of a dev box to install and customize software specific to the project.
### Developer scenarios
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
Use the following steps to create a dev center so you can manage your dev box re
| **ResourceGroup** | Select an existing resource group, or select **Create new** and then enter a name for the new resource group. | | **Name** | Enter a name for your dev center. | | **Location** | Select the location or region where you want the dev center to be created. |
+ | **Attach a quick start catalog** | Clear both checkboxes. |
- :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center-basics.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a dev center." lightbox="./media/quickstart-configure-dev-box-service/create-dev-center-basics.png":::
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-center-not-selected.png" alt-text="Screenshot that shows the Basics tab on the pane for creating a dev center." lightbox="./media/quickstart-configure-dev-box-service/create-dev-center-not-selected.png":::
For a list of the currently supported Azure locations with capacity, see [Frequently asked questions about Microsoft Dev Box](https://aka.ms/devbox_acom).
dev-box Quickstart Create Dev Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-create-dev-box.md
To create a dev box in the Microsoft Dev Box developer portal:
1. Select **Add a dev box**.
- :::image type="content" source="./media/quickstart-create-dev-box/welcome-to-developer-portal.png" alt-text="Screenshot of the developer portal and the button for adding a dev box.":::
+ :::image type="content" source="./media/quickstart-create-dev-box/welcome-to-developer-portal.png" alt-text="Screenshot of the developer portal and the button for adding a dev box." lightbox="./media/quickstart-create-dev-box/welcome-to-developer-portal.png":::
1. In **Add a dev box**, enter the following values:
To create a dev box in the Microsoft Dev Box developer portal:
| **Name** | Enter a name for your dev box. Dev box names must be unique within a project. | | **Project** | Select a project from the dropdown list. | | **Dev box pool** | Select a pool from the dropdown list, which includes all the dev box pools for that project. Choose a dev box pool near you for the lowest latency.|
+ | **Repository clone URL** | Leave blank. |
+ | **Uploaded customization files** | Leave blank. |
- :::image type="content" source="./media/quickstart-create-dev-box/create-dev-box.png" alt-text="Screenshot of the dialog for adding a dev box.":::
+ :::image type="content" source="./media/quickstart-create-dev-box/developer-portal-create-dev-box.png" alt-text="Screenshot of the dialog for adding a dev box." lightbox="./media/quickstart-create-dev-box/developer-portal-create-dev-box.png":::
After you make your selections, the page shows you the following information:
To create a dev box in the Microsoft Dev Box developer portal:
> [!Note] > If you encounter a vCPU quota error with a *QuotaExceeded* message, ask your administrator to [request an increased quota limit](/azure/dev-box/how-to-request-quota-increase). If your admin can't increase the quota limit at this time, try selecting another pool with a region close to your location.
- :::image type="content" source="./media/quickstart-create-dev-box/dev-box-tile-creating.png" alt-text="Screenshot of the developer portal that shows the dev box card with a status of Creating.":::
+ :::image type="content" source="./media/quickstart-create-dev-box/dev-box-tile-creating.png" alt-text="Screenshot of the developer portal that shows the dev box card with a status of Creating." lightbox="./media/quickstart-create-dev-box/dev-box-tile-creating.png":::
[!INCLUDE [dev box runs on creation note](./includes/note-dev-box-runs-on-creation.md)]
To connect to a dev box by using the browser:
1. Select **Open in browser**.
- :::image type="content" source="./media/quickstart-create-dev-box/dev-portal-open-in-browser.png" alt-text="Screenshot of dev box card that shows the option for opening in a browser.":::
+ :::image type="content" source="./media/quickstart-create-dev-box/dev-portal-open-in-browser.png" alt-text="Screenshot of dev box card that shows the option for opening in a browser." lightbox="./media/quickstart-create-dev-box/dev-portal-open-in-browser.png":::
A new tab opens with a Remote Desktop session through which you can use your dev box. Use a work or school account to sign in to your dev box, not a personal Microsoft account.
dms Known Issues Azure Sql Migration Azure Data Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md
This article provides a list of known issues and troubleshooting steps associate
- **Recommendation**: To debug this issue, you can try pinging your Azure Blob Storage URL from your SQL Server on Azure VM target and confirm if you have a connectivity problem. To solve this issue, you have to allow the Azure IP addresses configured in your DNS server. For more information, see [Troubleshoot Azure Private Endpoint connectivity problems](/azure/private-link/troubleshoot-private-endpoint-connectivity)
+## Azure Database Migration Service naming rules
+
+If your Database Migration Service deployment fails with the error "Service name 'x_y_z' is not valid", you need to follow the Azure Database Migration Service naming rules. Because Azure Database Migration Service uses Azure Data Factory for its compute, it follows the same [naming rules](https://learn.microsoft.com/azure/data-factory/naming-rules) as Azure Data Factory.
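As a hedged sketch, you can pre-check a candidate service name locally before provisioning. The regex below approximates the Data Factory rules (alphanumerics and hyphens only, starting and ending with an alphanumeric); `is_valid_dms_name` is a hypothetical helper, not part of any CLI.

```shell
# Approximate local check of a Database Migration Service name against the
# Data Factory naming pattern. Not authoritative; see the naming rules doc.
is_valid_dms_name() {
  [[ "$1" =~ ^[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$ ]]
}

is_valid_dms_name "my-dms-service" && echo "valid"
is_valid_dms_name "x_y_z" || echo "invalid: underscores aren't allowed"
```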
+ ## Azure SQL Database limitations
+
+ Migrating to Azure SQL Database by using the Azure SQL extension for Azure Data Studio has the following limitations:
energy-data-services How To Deploy Osdu Admin Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-deploy-osdu-admin-ui.md
+
+ Title: Deploy OSDU Admin UI on top of Azure Data Manager for Energy
+description: Learn how to deploy the OSDU Admin UI on top of your Azure Data Manager for Energy instance.
+++++ Last updated : 02/15/2024+
+# Deploy OSDU Admin UI on top of Azure Data Manager for Energy
+
+This guide shows you how to deploy the OSDU Admin UI on top of your Azure Data Manager for Energy instance.
+
+The OSDU Admin UI enables platform administrators to manage the Azure Data Manager for Energy data partition you connect it to. Management tasks include managing entitlements (users and groups), legal tags, schemas, and reference data, and viewing objects and visualizing them on a map.
+
+## Prerequisites
+- Install [Visual Studio Code with Dev Containers](https://code.visualstudio.com/docs/devcontainers/tutorial). Although it's possible to deploy the OSDU Admin UI from your local computer using either Linux or Windows WSL, we recommend using a Dev Container to eliminate potential conflicts of tooling versions and environments.
+- Provision an [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md).
+- Add the App Registration permissions to enable Admin UI to function properly:
+ - [Application.Read.All](/graph/permissions-reference#applicationreadall)
+ - [User.Read](/graph/permissions-reference#userread)
+ - [User.Read.All](/graph/permissions-reference#userreadall)
+
+ :::image type="content" source="media/how-to-deploy-osdu-admin-ui/app-permission-1.png" alt-text="Screenshot that shows applications read all permission.":::
+
+ :::image type="content" source="media/how-to-deploy-osdu-admin-ui/app-permission-2.png" alt-text="Screenshot that shows user read all permission.":::
+
+## Environment setup
+1. Use the Dev Container in Visual Studio Code to deploy the OSDU Admin UI and avoid conflicts with tooling on your local machine.
+2. Select **Open** to clone the repository.
+
+ [![Open in Remote - Containers](https://img.shields.io/static/v1?style=for-the-badge&label=Remote%20-%20Containers&message=Open&color=blue&logo=visualstudiocode)](https://vscode.dev/redirect?url=vscode://ms-vscode-remote.remote-containers/cloneInVolume?url=https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/admin-ui-totalenergies)
+
+3. Accept the cloning prompt.
+
+ :::image type="content" source="media/how-to-deploy-osdu-admin-ui/clone-the-repository.png" alt-text="Screenshot that shows cloning the repository.":::
+
+4. When prompted for a container configuration template,
+ 1. Select [Ubuntu](https://github.com/devcontainers/templates/tree/main/src/ubuntu).
+ 2. Accept the default version.
+ 3. Add the [Azure CLI](https://github.com/devcontainers/features/tree/main/src/azure-cli) feature.
+
+ ![Screenshot that shows option selection.](./media/how-to-deploy-osdu-admin-ui/option-selection.png)
+
+5. After a few minutes, the dev container is running.
+
+ :::image type="content" source="media/how-to-deploy-osdu-admin-ui/running-devcontainer.png" alt-text="Screenshot that shows running devcontainer.":::
+
+6. Open the terminal.
+
+ :::image type="content" source="media/how-to-deploy-osdu-admin-ui/open-terminal.png" alt-text="Screenshot that shows opening terminal.":::
+
+7. Install NVM, Node.js, npm, and Angular CLI by executing the command in the bash terminal.
+
+ ```bash
+ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash && \
+ export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
+ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" && \
+ nvm install 14.17.3 && \
+ export NG_CLI_ANALYTICS=false && \
+ npm install -g @angular/cli@13.3.9 && \
+ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+ ```
+
+ :::image type="content" source="media/how-to-deploy-osdu-admin-ui/install-screen.png" alt-text="Screenshot that shows installation.":::
+
+8. Sign in to the Azure CLI by running the following command in the terminal.
+ ```azurecli-interactive
+ az login
+ ```
+
+9. Enter your credentials on the sign-in screen. Upon success, you see a confirmation message.
+
+ :::image type="content" source="media/how-to-deploy-osdu-admin-ui/login.png" alt-text="Screenshot that shows successful login.":::
+
+
+## Configure environment variables
+1. Fetch the `client-id` (authAppId), `resource-group`, `subscription-id`, and `location` of your Azure Data Manager for Energy instance.
+
+ ![Screenshot that shows how to fetch location and resource group.](./media/how-to-deploy-osdu-admin-ui/location-resource-group.png)
+
+2. Fetch the subscription ID (the `id` value) by running the following command in the terminal.
+ ```azurecli-interactive
+ az account show
+ ```
+
+3. If this ID isn't the same as the `subscription-id` of the Azure Data Manager for Energy instance, you need to change the subscription.
+ ```azurecli-interactive
+ az account set --subscription <subscription-id>
+ ```
+
+4. Enter the required environment variables on the terminal.
+ ```bash
+ export CLIENT_ID="<client-id>" ## App Registration to be used by OSDU Admin UI, usually the client ID used to provision ADME
+ export TENANT_ID="<tenant-id>" ## Tenant ID
+ export ADME_URL="<adme-url>" ## Remove www or https from the text
+ export DATA_PARTITION="<partition>"
+ export WEBSITE_NAME="<storage-name>" ## Unique name of the storage account or static web app that will be generated
+ export RESOURCE_GROUP="<resource-group>" ## Name of resource group
+ export LOCATION="<location>" ## Azure region to deploy to, i.e. "westeurope"
+ ```
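For example, you can derive `ADME_URL` from the full instance URI with shell parameter expansion (the URL shown is a placeholder):

```shell
# Normalize an ADME URL by stripping the scheme and any www prefix, since
# ADME_URL must not contain "https://" or "www". Example value only.
RAW_URL="https://contoso.energy.azure.com"
ADME_URL="${RAW_URL#https://}"
ADME_URL="${ADME_URL#www.}"
echo "$ADME_URL"   # prints contoso.energy.azure.com
```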
+
+## Deploy storage account
+1. Create resource group. Skip this step if the resource group exists already.
+ ```azurecli-interactive
+ az group create \
+ --name $RESOURCE_GROUP \
+ --location $LOCATION
+ ```
+
+1. Create storage account.
+ ```azurecli-interactive
+ az storage account create \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --name $WEBSITE_NAME \
+ --sku Standard_LRS \
+ --public-network-access Enabled \
+ --allow-blob-public-access true
+ ```
+
+1. Configure the static website.
+ ```azurecli-interactive
+ az storage blob service-properties update \
+ --account-name $WEBSITE_NAME \
+ --static-website \
+ --404-document index.html \
+ --index-document index.html
+ ```
+
+1. Fetch the redirect URI.
+ ```azurecli-interactive
+ export REDIRECT_URI=$(az storage account show --resource-group $RESOURCE_GROUP --name $WEBSITE_NAME --query "primaryEndpoints.web") && \
+ echo "Redirect URL: $REDIRECT_URI"
+ ```
+
+1. Get the App Registration's Single-page Application (SPA) section.
+ ```azurecli-interactive
+ echo "https://ms.portal.azure.com/#view/Microsoft_AAD_RegisteredApps/ApplicationMenuBlade/~/Authentication/appId/$CLIENT_ID/isMSAApp~/false"
+ ```
+
+1. Open the link you got from the above result in the browser and add the `REDIRECT_URI`.
+
+ ![Screenshot showing redirect URIs of an App Registration.](./media/how-to-deploy-osdu-admin-ui/app-uri-config.png)
+
+## Build and deploy the web app
+
+1. Navigate to the `OSDUApp` folder.
+ ```bash
+ cd OSDUApp/
+ ```
+2. Install the dependencies.
+ ```bash
+ npm install
+ ```
+3. Modify the parameters in the config file located at `/src/config/config.json`.
+ ```json
+ {
+    "mapboxKey": "key", // Optional. Access token from Mapbox.com, used for the map visualization feature.
+ ...
+ "data_partition": "<adme_data_partition>", // ADME Data Partition ID (i.e. opendes)
+ "idp": {
+ ...
+ "tenant_id": "<tenant_id>", // Entra ID tenant ID
+ "client_id": "<client_id>", // App Registration ID to use for the admin UI, usually the same as the ADME App Registration ID, i.e. "6ee7e0d6-0641-4b29-a283-541c5d00655a"
+ "redirect_uri": "<https://storageaccount.zXX.web.core.windows.net/>", // This is the website URL ($REDIRECT_URI)
+ "scope": "<client_id>/.default" // Scope of the ADME instance, i.e. "6ee7e0d6-0641-4b29-a283-541c5d00655a/.default"
+ },
+    "api_endpoints": { // Replace contoso.energy.azure.com with your ADME_URL (without https:// or www) in all the API endpoints below.
+ "entitlement_endpoint": "https://contoso.energy.azure.com/api/",
+ "storage_endpoint": "https://contoso.energy.azure.com/api/",
+ "search_endpoint": "https://contoso.energy.azure.com/api/",
+ "legal_endpoint": "https://contoso.energy.azure.com/api/",
+ "schema_endpoint": "https://contoso.energy.azure.com/api/",
+ "osdu_connector_api_endpoint":"osdu_connector", // Optional. API endpoint of the OSDU Connector API*
+ "file_endpoint": "https://contoso.energy.azure.com/api/",
+ "graphAPI_endpoint": "https://graph.microsoft.com/v1.0/",
+ "workflow_endpoint": "https://contoso.energy.azure.com/api/"
+ }
+ ...
+ }
+ ```
+
+ \* [OSDU Connector API](https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/connector-api-totalenergies) is built as an interface between consumers and OSDU APIs, wrapping some chained API calls and objects. Currently, it manages all operations and actions on project and scenario objects.
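The endpoint comment in `api_endpoints` above can be sketched as a small shell helper that derives the endpoint value from a full ADME URL (variable names here are illustrative; you still paste the result into `config.json` by hand):

```bash
# Strip the scheme and an optional leading "www." from the ADME URL, then
# build the value to use for each *_endpoint field.
ADME_URL="https://www.contoso.energy.azure.com"   # example input
ADME_HOST="${ADME_URL#*://}"    # drop "https://"
ADME_HOST="${ADME_HOST#www.}"   # drop a leading "www." if present
API_ENDPOINT="https://${ADME_HOST}/api/"
echo "$API_ENDPOINT"
```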
+
+4. If you couldn't grant the app permissions in the prerequisite step because of subscription constraints, remove `User.ReadBasic.All` and `Application.Read.All` from `src/config/environments/environment.ts`. Without these permissions, the Admin UI can't resolve the object IDs (OIDs) of users and applications into user names and application names.
+
+ :::image type="content" source="media/how-to-deploy-osdu-admin-ui/graph-permission.png" alt-text="Screenshot that shows graph permissions.":::
+
+5. Build the web UI.
+ ```bash
+ ng build
+ ```
+
+6. Upload the build to the storage account.
+ ```azurecli-interactive
+ az storage blob upload-batch \
+ --account-name $WEBSITE_NAME \
+ --source ./dist/OSDUApp \
+ --destination '$web' \
+ --overwrite
+ ```
+
+7. Fetch the website URL.
+ ```bash
+ echo $REDIRECT_URI
+ ```
+
+8. Open the Website URL in the browser and validate that it's working correctly and connected to the correct Azure Data Manager for Energy instance.
+
+## References
+
+For more information about the OSDU Admin UI, see [OSDU GitLab](https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/admin-ui-totalenergies).<br>
+For other deployment methods (Terraform or Azure DevOps pipeline), see [OSDU Admin UI DevOps](https://community.opengroup.org/osdu/ui/admin-ui-group/admin-ui-totalenergies/admin-ui-totalenergies/-/tree/main/OSDUApp/devops/azure).
expressroute Expressroute Locations Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations-providers.md
The following table shows connectivity locations and the service providers for e
| **Paris2** | [Equinix](https://www.equinix.com/data-centers/europe-colocation/france-colocation/paris-data-centers/pa4) | 1 | France Central | Supported | Equinix<br/>InterCloud<br/>Orange |
| **Perth** | [NextDC P1](https://www.nextdc.com/data-centres/p1-perth-data-centre) | 2 | n/a | Supported | Equinix<br/>Megaport<br/>NextDC |
| **Phoenix** | [EdgeConneX PHX01](https://www.cyrusone.com/data-centers/north-america/arizona/phx1-phx8-phoenix) | 1 | West US 3 | Supported | Cox Business Cloud Port<br/>CenturyLink Cloud Connect<br/>DE-CIX<br/>Megaport<br/>Zayo |
-| **Phoenix2** | [PhoenixNAP](https://phoenixnap.com/) | 1 | West US 3 | Supported | n/a |
+| **Phoenix2** | [PhoenixNAP](https://phoenixnap.com/) | 1 | West US 3 | Supported | |
| **Portland** | [EdgeConnex POR01](https://www.edgeconnex.com/locations/north-america/portland-or/) | 1 | West US 2 | Supported | |
| **Pune** | [STT GDC Pune DC1](https://www.sttelemediagdc.in/our-data-centres-in-india) | 2 | Central India | Supported | Airtel<br/>Lightstorm<br/>Tata Communications |
| **Quebec City** | [Vantage](https://vantage-dc.com/data_centers/quebec-city-data-center-campus/) | 1 | Canada East | Supported | Bell Canada<br/>Equinix<br/>Megaport<br/>RISQ<br/>Telus |
firewall Deploy Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-ps.md
description: In this article, you learn how to deploy and configure Azure Firewa
Previously updated : 08/02/2022 Last updated : 02/20/2024
$wsn = Get-AzVirtualNetworkSubnetConfig -Name Workload-SN -VirtualNetwork $test
$NIC01 = New-AzNetworkInterface -Name Srv-Work -ResourceGroupName Test-FW-RG -Location "East us" -Subnet $wsn #Define the virtual machine
+$SecurePassword = ConvertTo-SecureString "<choose a password>" -AsPlainText -Force
+$Credential = New-Object System.Management.Automation.PSCredential ("<choose a user name>", $SecurePassword);
$VirtualMachine = New-AzVMConfig -VMName Srv-Work -VMSize "Standard_DS2"
-$VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName Srv-Work -ProvisionVMAgent -EnableAutoUpdate
+$VirtualMachine = Set-AzVMOperatingSystem -VM $VirtualMachine -Windows -ComputerName Srv-Work -ProvisionVMAgent -EnableAutoUpdate -Credential $Credential
$VirtualMachine = Add-AzVMNetworkInterface -VM $VirtualMachine -Id $NIC01.Id $VirtualMachine = Set-AzVMSourceImage -VM $VirtualMachine -PublisherName 'MicrosoftWindowsServer' -Offer 'WindowsServer' -Skus '2019-Datacenter' -Version latest
governance Remediation Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/concepts/remediation-options.md
delivers configurations automatically or on-demand.
There are three available assignment types when guest assignments are created. The property is available as a parameter of machine configuration definitions that support `DeployIfNotExists`.
+**The [assignmentType][05] property is case sensitive.**
+
| Assignment type | Behavior |
| - | - |
| `Audit` | Report on the state of the machine, but don't make changes. |
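As a hedged illustration of the casing requirement (the configuration name and version below are hypothetical, and the exact shape should be checked against the guest configuration assignment schema), an assignment that auto-corrects drift carries the value spelled exactly like this:

```json
{
  "properties": {
    "guestConfiguration": {
      "name": "MyConfiguration",
      "version": "1.0.0",
      "assignmentType": "ApplyAndAutoCorrect"
    }
  }
}
```

A value such as `applyAndAutoCorrect` (lowercase first letter) wouldn't match any of the supported assignment types.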
governance Create Policy Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/how-to/create-policy-definition.md
Parameters of the `New-GuestConfigurationPolicy` cmdlet:
- **Path**: Destination path where policy definitions are created. - **Platform**: Target platform (Windows/Linux) for machine configuration policy and content package.-- **Mode**: (`ApplyAndMonitor`, `ApplyAndAutoCorrect`, `Audit`) choose if the policy should audit
+- **Mode**: (case sensitive: `ApplyAndMonitor`, `ApplyAndAutoCorrect`, `Audit`) choose if the policy should audit
or deploy the configuration. The default is `Audit`.
- **Tag** adds one or more tag filters to the policy definition.
- **Category** sets the category metadata field in the policy definition.
specified path:
$PolicyConfig2 = @{
PolicyId = '_My GUID_'
ContentUri = $contentUri
- DisplayName = 'My audit policy'
- Description = 'My audit policy'
+ DisplayName = 'My deployment policy'
+ Description = 'My deployment policy'
Path = './policies/deployIfNotExists.json'
Platform = 'Windows'
PolicyVersion = 1.0.0
hdinsight Hbase Troubleshoot Timeouts Hbase Hbck https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-timeouts-hbase-hbck.md
Title: Timeouts with 'hbase hbck' command in Azure HDInsight
description: Time out issue with 'hbase hbck' command when fixing region assignments Previously updated : 01/31/2023 Last updated : 02/20/2024 # Scenario: Timeouts with 'hbase hbck' command in Azure HDInsight
hdinsight Apache Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-ssl-encryption-authentication.md
description: Set up TLS encryption for communication between Kafka clients and K
Previously updated : 02/16/2023 Last updated : 02/20/2024 # Set up TLS encryption and authentication for Non ESP Apache Kafka cluster in Azure HDInsight
These steps are detailed in the following code snippets.
keytool -keystore kafka.client.keystore.jks -alias CARoot -import -file ca-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
```
-1. Create the file `client-ssl-auth.properties` on client machine (hn1) . It should have the following lines:
+1. Create the file `client-ssl-auth.properties` on client machine (hn1). It should have the following lines:
```config
security.protocol=SSL
The details of each step are given.
keytool -keystore kafka.client.keystore.jks -import -file client-signed-cert -storepass "MyClientPassword123" -keypass "MyClientPassword123" -noprompt
```
-1. Create a file `client-ssl-auth.properties` on client machine (hn1) . It should have the following lines:
+1. Create a file `client-ssl-auth.properties` on client machine (hn1). It should have the following lines:
```config
security.protocol=SSL
hdinsight Rest Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/rest-proxy.md
description: Learn how to do Apache Kafka operations using a Kafka REST proxy on
Previously updated : 02/17/2023 Last updated : 02/20/2024 # Interact with Apache Kafka clusters in Azure HDInsight using a REST proxy
The steps use the Azure portal. For an example using Azure CLI, see [Create Apac
1. During the Kafka cluster creation workflow, in the **Security + networking** tab, check the **Enable Kafka REST proxy** option.
- :::image type="content" source="./media/rest-proxy/azure-portal-cluster-security-networking-kafka-rest.png" alt-text="Screenshot shows the Create H D Insight cluster page with Security + networking selected." border="true":::
+ :::image type="content" source="./media/rest-proxy/azure-portal-cluster-security-networking-kafka-rest.png" alt-text="Screenshot shows the Create HDInsight cluster page with Security + networking selected." border="true":::
1. Click **Select Security Group**. From the list of security groups, select the security group that you want to have access to the REST proxy. You can use the search box to find the appropriate security group. Click the **Select** button at the bottom.
- :::image type="content" source="./media/rest-proxy/azure-portal-cluster-security-networking-kafka-rest2.png" alt-text="Screenshot shows the Create H D Insight cluster page with the option to select a security group." border="true":::
+ :::image type="content" source="./media/rest-proxy/azure-portal-cluster-security-networking-kafka-rest2.png" alt-text="Screenshot shows the Create HDInsight cluster page with the option to select a security group." border="true":::
1. Complete the remaining steps to create your cluster as described in [Create Apache Kafka cluster in Azure HDInsight using Azure portal](./apache-kafka-get-started.md).
hdinsight Open Source Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/open-source-software.md
Title: Open-source software support in Azure HDInsight
description: Microsoft Azure provides a general level of support for open-source technologies. Previously updated : 02/18/2023 Last updated : 02/20/2024 # Open-source software support in Azure HDInsight
hdinsight Transport Layer Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/transport-layer-security.md
Title: Transport layer security in Azure HDInsight
description: Transport layer security (TLS) and secure sockets layer (SSL) are cryptographic protocols that provide communications security over a computer network. Previously updated : 10/16/2023 Last updated : 02/20/2024 # Transport layer security in Azure HDInsight
hdinsight Troubleshoot Oozie https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/troubleshoot-oozie.md
Title: Troubleshoot Apache Oozie in Azure HDInsight
description: Troubleshoot certain Apache Oozie errors in Azure HDInsight. Previously updated : 01/31/2023 Last updated : 02/20/2024 # Troubleshoot Apache Oozie in Azure HDInsight
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
Previously updated : 10/25/2023 Last updated : 02/20/2024 # Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute resource, to train my machine learning models.
In this article, learn how to connect to Azure data storage services with Azure
- An Azure Machine Learning workspace.

> [!NOTE]
-> Azure Machine Learning datastores do **not** create the underlying storage account resources. Instead, they link an **existing** storage account for Azure Machine Learning use. Azure Machine Learning datastores are not required for this. If you have access to the underlying data, you can use storage URIs directly.
+> Azure Machine Learning datastores do **not** create the underlying storage account resources. Instead, they link an **existing** storage account for Azure Machine Learning use. Datastores aren't required: if you have access to the underlying data, you can use storage URIs directly.
## Create an Azure Blob datastore
ml_client.create_or_update(store)
``` # [CLI: Identity-based access](#tab/cli-identity-based-access)
-Create the following YAML file (be sure to update the appropriate values):
+Create the following YAML file (make sure you update the appropriate values):
```yaml # my_blob_datastore.yml
az ml datastore create --file my_blob_datastore.yml
``` # [CLI: Account key](#tab/cli-account-key)
-Create the following YAML file (be sure to update the appropriate values):
+Create this YAML file (make sure you update the appropriate values):
```yaml # my_blob_datastore.yml
az ml datastore create --file my_blob_datastore.yml
``` # [CLI: SAS](#tab/cli-sas)
-Create the following YAML file (be sure to update the appropriate values):
+Create this YAML file (make sure you update the appropriate values):
```yaml # my_blob_datastore.yml
ml_client.create_or_update(store)
``` # [CLI: Identity-based access](#tab/cli-adls-identity-based-access)
-Create the following YAML file (updating the values):
+Create this YAML file (updating the values):
```yaml # my_adls_datastore.yml
az ml datastore create --file my_adls_datastore.yml
``` # [CLI: Service principal](#tab/cli-adls-sp)
-Create the following YAML file (updating the values):
+Create this YAML file (updating the values):
```yaml # my_adls_datastore.yml
ml_client.create_or_update(store)
``` # [CLI: Account key](#tab/cli-azfiles-account-key)
-Create the following YAML file (updating the values):
+Create this YAML file (updating the values):
```yaml # my_files_datastore.yml
az ml datastore create --file my_files_datastore.yml
``` # [CLI: SAS](#tab/cli-azfiles-sas)
-Create the following YAML file (updating the values):
+Create this YAML file (updating the values):
```yaml # my_files_datastore.yml
ml_client.create_or_update(store)
``` # [CLI: Identity-based access](#tab/cli-adlsgen1-identity-based-access)
-Create the following YAML file (updating the values):
+Create this YAML file (updating the values):
```yaml # my_adls_datastore.yml
az ml datastore create --file my_adls_datastore.yml
``` # [CLI: Service principal](#tab/cli-adlsgen1-sp)
-Create the following YAML file (updating the values):
+Create this YAML file (updating the values):
```yaml # my_adls_datastore.yml
az ml datastore create --file my_adls_datastore.yml
## Create a OneLake (Microsoft Fabric) datastore (preview)
-This section describes the creation of a OneLake datastore using various options. The OneLake datastore is part of Microsoft Fabric. At this time, Azure Machine Learning supports connecting to Microsoft Fabric Lakehouse artifacts that includes folders/ files and Amazon S3 shortcuts. For more information about Lakehouse, see [What is a lakehouse in Microsoft Fabric](/fabric/data-engineering/lakehouse-overview).
+This section describes various options to create a OneLake datastore. The OneLake datastore is part of Microsoft Fabric. At this time, Azure Machine Learning supports connection to Microsoft Fabric Lakehouse artifacts that include folders and files, and Amazon S3 shortcuts. For more information about Lakehouse, visit [What is a lakehouse in Microsoft Fabric](/fabric/data-engineering/lakehouse-overview).
-To create a OneLake datastore, you need
+OneLake datastore creation requires:
- Endpoint
- Fabric workspace name or GUID
In your Microsoft Fabric instance, you can find the workspace information as sho
:::image type="content" source="media/how-to-datastore/fabric-workspace.png" alt-text="Screenshot that shows Fabric Workspace details in Microsoft Fabric UI." lightbox="./media/how-to-datastore/fabric-workspace.png"::: #### OneLake endpoint
-In your Microsoft Fabric instance, you can find the endpoint information as shown in this screenshot:
+This screenshot shows how you can find endpoint information in your Microsoft Fabric instance:
:::image type="content" source="media/how-to-datastore/fabric-endpoint.png" alt-text="Screenshot that shows Fabric endpoint details in Microsoft Fabric UI." lightbox="./media/how-to-datastore/fabric-endpoint.png"::: #### OneLake artifact name
-In your Microsoft Fabric instance, you can find the artifact information as shown in this screenshot. You can use either a GUID value, or a "friendly name" to create an Azure Machine Learning OneLake datastore, as shown in this screenshot:
+This screenshot shows how you can find the artifact information in your Microsoft Fabric instance. The screenshot also shows how you can either use a GUID value or a "friendly name" to create an Azure Machine Learning OneLake datastore:
:::image type="content" source="media/how-to-datastore/fabric-lakehouse.png" alt-text="Screenshot showing how to get Fabric LH artifact details in Microsoft Fabric UI." lightbox="./media/how-to-datastore/fabric-lakehouse.png":::
az ml datastore create --file my_onelakesp_datastore.yml
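For reference, a OneLake datastore definition passed to `az ml datastore create` might look like the following YAML sketch. All names and the endpoint value are illustrative assumptions; verify the field names against the datastore YAML schema reference:

```yaml
# my_onelake_datastore.yml (hypothetical values)
type: one_lake
name: onelake_example_datastore
description: Datastore pointing to a Microsoft Fabric lakehouse.
one_lake_workspace_name: "<workspace GUID or friendly name>"
endpoint: "msit-onelake.dfs.fabric.microsoft.com"
artifact:
  type: lake_house
  name: "<artifact GUID or friendly name>.Lakehouse"
```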
- [Access data in a job](how-to-read-write-data-v2.md#access-data-in-a-job)
- [Create and manage data assets](how-to-create-data-assets.md#create-and-manage-data-assets)
- [Import data assets (preview)](how-to-import-data-assets.md#import-data-assets-preview)
-- [Data administration](how-to-administrate-data-authentication.md#data-administration)
+- [Data administration](how-to-administrate-data-authentication.md#data-administration)
machine-learning How To Monitor Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md
There are three logs that can be enabled for online endpoints:
| Created | Created container image-fetcher
| Created | Created container inference-server
| Created | Created container model-mount
- | Unhealthy | Liveness probe failed: \<FAILURE\_CONTENT\>
- | Unhealthy | Readiness probe failed: \<FAILURE\_CONTENT\>
+ | LivenessProbeFailed | Liveness probe failed: \<FAILURE\_CONTENT\>
+ | ReadinessProbeFailed | Readiness probe failed: \<FAILURE\_CONTENT\>
| Started | Started container image-fetcher
| Started | Started container inference-server
| Started | Started container model-mount
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-synapsesparkstep.md
Title: Use Apache Spark in a machine learning pipeline (deprecated)
-description: Link your Azure Synapse Analytics workspace to your Azure machine learning pipeline to use Apache Spark for data manipulation.
+description: Link your Azure Synapse Analytics workspace to your Azure Machine Learning pipeline to use Apache Spark for data manipulation.
Previously updated : 11/28/2022 Last updated : 02/20/2024 #Customer intent: As a user of both Azure Machine Learning pipelines and Azure Synapse Analytics, I'd like to use Apache Spark for the data preparation of my pipeline
# How to use Apache Spark (powered by Azure Synapse Analytics) in your machine learning pipeline (deprecated) - [!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)] > [!WARNING]
-> The Azure Synapse Analytics integration with Azure Machine Learning available in Python SDK v1 is deprecated. Users can continue using Synapse workspace registered with Azure Machine Learning as a linked service. However, a new Synapse workspace can no longer be registered with Azure Machine Learning as a linked service. We recommend using Managed (Automatic) Synapse compute and attached Synapse Spark pools available in CLI v2 and Python SDK v2. Please see [https://aka.ms/aml-spark](https://aka.ms/aml-spark) for more details.
+> The Azure Synapse Analytics integration with Azure Machine Learning available in Python SDK v1 is deprecated. Users can still use a Synapse workspace registered with Azure Machine Learning as a linked service. However, a new Synapse workspace can no longer be registered with Azure Machine Learning as a linked service. We recommend using Managed (Automatic) Synapse compute and attached Synapse Spark pools, available in CLI v2 and Python SDK v2. Visit [https://aka.ms/aml-spark](https://aka.ms/aml-spark) for more information.
-In this article, you'll learn how to use Apache Spark pools powered by Azure Synapse Analytics as the compute target for a data preparation step in an Azure Machine Learning pipeline. You'll learn how a single pipeline can use compute resources suited for the specific step, such as data preparation or training. You'll see how data is prepared for the Spark step and how it's passed to the next step.
+In this article, you learn how to use Apache Spark pools, powered by Azure Synapse Analytics, as the compute target for a data preparation step in an Azure Machine Learning pipeline. You learn how a single pipeline can use compute resources suited to each specific step, such as data preparation or training. You also see how data is prepared for the Spark step and how it's passed to the next step.
## Prerequisites
-* Create an [Azure Machine Learning workspace](../quickstart-create-resources.md) to hold all your pipeline resources.
+* Create an [Azure Machine Learning workspace](../quickstart-create-resources.md) to hold all your pipeline resources
-* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](../concept-compute-instance.md) with the SDK already installed.
+* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](../concept-compute-instance.md) with the SDK already installed
-* Create an Azure Synapse Analytics workspace and Apache Spark pool (see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](../../synapse-analytics/quickstart-create-apache-spark-pool-studio.md)).
+* Create an Azure Synapse Analytics workspace and Apache Spark pool (see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](../../synapse-analytics/quickstart-create-apache-spark-pool-studio.md))
-## Link your Azure Machine Learning workspace and Azure Synapse Analytics workspace
+## Link your Azure Machine Learning workspace and Azure Synapse Analytics workspace
-You create and administer your Apache Spark pools in an Azure Synapse Analytics workspace. To integrate an Apache Spark pool with an Azure Machine Learning workspace, you must [link to the Azure Synapse Analytics workspace](how-to-link-synapse-ml-workspaces.md).
+You create and administer your Apache Spark pools in an Azure Synapse Analytics workspace. To integrate an Apache Spark pool with an Azure Machine Learning workspace, you must [link to the Azure Synapse Analytics workspace](how-to-link-synapse-ml-workspaces.md).
-Once your Azure Machine Learning workspace and your Azure Synapse Analytics workspaces are linked, you can attach an Apache Spark pool via
+Once you link your Azure Machine Learning workspace and your Azure Synapse Analytics workspace, you can attach an Apache Spark pool with:
* [Azure Machine Learning studio](how-to-link-synapse-ml-workspaces.md#attach-a-pool-via-the-studio)
-* Python SDK ([as elaborated below](#attach-your-apache-spark-pool-as-a-compute-target-for-azure-machine-learning))
-* Azure Resource Manager (ARM) template (see this [Example ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-linkedservice-create/azuredeploy.json)).
- * You can use the command line to follow the ARM template, add the linked service, and attach the Apache Spark pool with the following code:
+* Python SDK ([as explained later](#attach-your-apache-spark-pool-as-a-compute-target-for-azure-machine-learning))
+* Azure Resource Manager (ARM) template (see this [Example ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-linkedservice-create/azuredeploy.json)).
+ * You can use the command line to follow the ARM template, add the linked service, and attach the Apache Spark pool with this code:
```azurecli
az deployment group create --name --resource-group <rg_name> --template-file "azuredeploy.json" --parameters @"azuredeploy.parameters.json"
```

> [!Important]
-> To link to the Azure Synapse Analytics workspace successfully, you must have the Owner role in the Azure Synapse Analytics workspace resource. Check your access in the Azure portal.
+> To successfully link to the Azure Synapse Analytics workspace, you must have the Owner role in the Azure Synapse Analytics workspace resource. Check your access in the Azure portal.
+>
+> The linked service will get a system-assigned managed identity (SAI) at creation time. You must assign this link service SAI the "Synapse Apache Spark administrator" role from Synapse Studio, so that it can submit the Spark job (see [How to manage Synapse RBAC role assignments in Synapse Studio](../../synapse-analytics/security/how-to-manage-synapse-rbac-role-assignments.md)).
>
-> The linked service will get a system-assigned managed identity (SAI) when you create it. You must assign this link service SAI the "Synapse Apache Spark administrator" role from Synapse Studio so that it can submit the Spark job (see [How to manage Synapse RBAC role assignments in Synapse Studio](../../synapse-analytics/security/how-to-manage-synapse-rbac-role-assignments.md)).
->
-> You must also give the user of the Azure Machine Learning workspace the role "Contributor" from Azure portal of resource management.
+> You must also give the user of the Azure Machine Learning workspace the "Contributor" role, from Azure portal of resource management.
## Retrieve the link between your Azure Synapse Analytics workspace and your Azure Machine Learning workspace
-You can retrieve linked services in your workspace with code such as:
+This code shows how to retrieve linked services in your workspace:
```python
from azureml.core import Workspace, LinkedService, SynapseWorkspaceLinkedServiceConfiguration
for service in LinkedService.list(ws) :
linked_service = LinkedService.get(ws, 'synapselink1')
```
-First, `Workspace.from_config()` accesses your Azure Machine Learning workspace using the configuration in `config.json` (see [Create a workspace configuration file](how-to-configure-environment.md)). Then, the code prints all of the linked services available in the Workspace. Finally, `LinkedService.get()` retrieves a linked service named `'synapselink1'`.
+First, `Workspace.from_config()` accesses your Azure Machine Learning workspace with the configuration in `config.json` (see [Create a workspace configuration file](how-to-configure-environment.md)). Then, the code prints all of the linked services available in the workspace. Finally, `LinkedService.get()` retrieves a linked service named `'synapselink1'`.
## Attach your Apache Spark pool as a compute target for Azure Machine Learning
+To use your Apache Spark pool to power a step in your machine learning pipeline, you must attach it as a `ComputeTarget` for the pipeline step, as shown in this code.
+To use your Apache spark pool to power a step in your machine learning pipeline, you must attach it as a `ComputeTarget` for the pipeline step, as shown in this code.
```python
from azureml.core.compute import SynapseCompute, ComputeTarget
synapse_compute=ComputeTarget.attach(
synapse_compute.wait_for_completion()
```
-The first step is to configure the `SynapseCompute`. The `linked_service` argument is the `LinkedService` object you created or retrieved in the previous step. The `type` argument must be `SynapseSpark`. The `pool_name` argument in `SynapseCompute.attach_configuration()` must match that of an existing pool in your Azure Synapse Analytics workspace. For more information on creating an Apache spark pool in the Azure Synapse Analytics workspace, see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](../../synapse-analytics/quickstart-create-apache-spark-pool-studio.md). The type of `attach_config` is `ComputeTargetAttachConfiguration`.
+The first step configures the `SynapseCompute`. The `linked_service` argument is the `LinkedService` object you created or retrieved in the previous step. The `type` argument must be `SynapseSpark`. The `pool_name` argument in `SynapseCompute.attach_configuration()` must match that of an existing pool in your Azure Synapse Analytics workspace. For more information about creating an Apache Spark pool in the Azure Synapse Analytics workspace, see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](../../synapse-analytics/quickstart-create-apache-spark-pool-studio.md). The `attach_config` type is `ComputeTargetAttachConfiguration`.
-Once the configuration is created, you create a machine learning `ComputeTarget` by passing in the `Workspace`, `ComputeTargetAttachConfiguration`, and the name by which you'd like to refer to the compute within the machine learning workspace. The call to `ComputeTarget.attach()` is asynchronous, so the sample blocks until the call completes.
+After you create the configuration, create a machine learning `ComputeTarget` by passing in the `Workspace`, `ComputeTargetAttachConfiguration`, and the name by which you'd like to refer to the compute within the machine learning workspace. The call to `ComputeTarget.attach()` is asynchronous, so the sample blocks until the call completes.
## Create a `SynapseSparkStep` that uses the linked Apache Spark pool
-The sample notebook [Spark job on Apache spark pool](https://github.com/azure/machinelearningnotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_job_on_synapse_spark_pool.ipynb) defines a simple machine learning pipeline. First, the notebook defines a data preparation step powered by the `synapse_compute` defined in the previous step. Then, the notebook defines a training step powered by a compute target better suited for training. The sample notebook uses the Titanic survival database to demonstrate data input and output; it doesn't actually clean the data or make a predictive model. Since there's no real training in this sample, the training step uses an inexpensive, CPU-based compute resource.
+The sample notebook [Spark job on Apache spark pool](https://github.com/azure/machinelearningnotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_job_on_synapse_spark_pool.ipynb) defines a simple machine learning pipeline. First, the notebook defines a data preparation step, powered by the `synapse_compute` defined in the previous step. Then, the notebook defines a training step powered by a compute target more appropriate for training. The sample notebook uses the Titanic survival database to show data input and output; it doesn't actually clean the data or make a predictive model. Since this sample doesn't really involve training, the training step uses an inexpensive, CPU-based compute resource.
-Data flows into a machine learning pipeline by way of `DatasetConsumptionConfig` objects, which can hold tabular data or sets of files. The data often comes from files in blob storage in a workspace's datastore. The following code shows some typical code for creating input for a machine learning pipeline:
+Data flows into a machine learning pipeline through `DatasetConsumptionConfig` objects, which can hold tabular data or sets of files. The data often comes from files in blob storage in a workspace datastore. This code shows typical code that creates input for a machine learning pipeline:
```python
from azureml.core import Dataset
titanic_file_dataset = Dataset.File.from_files(path=[(datastore, file_name)])
step1_input2 = titanic_file_dataset.as_named_input("file_input").as_hdfs() ```
-The above code assumes that the file `Titanic.csv` is in blob storage. The code shows how to read the file as a `TabularDataset` and as a `FileDataset`. This code is for demonstration purposes only, as it would be confusing to duplicate inputs or to interpret a single data source as both a table-containing resource and just as a file.
+That code assumes that the file `Titanic.csv` is in blob storage. The code shows how to read the file as a `TabularDataset` and as a `FileDataset`. This code is for demonstration purposes only, because it would become confusing to duplicate inputs or to interpret a single data source as both a table-containing resource and strictly as a file.
> [!IMPORTANT]
-> In order to use a `FileDataset` as input, your `azureml-core` version must be at least `1.20.0`. How to specify this using the `Environment` class is discussed below.
+> To use a `FileDataset` as input, your `azureml-core` version must be at least `1.20.0`. You can specify this with the `Environment` class, as discussed later.
-When a step completes, you may choose to store output data using code similar to:
+When a step completes, you can choose to store the output data, as shown in this code sample:
```python
from azureml.data import HDFSOutputDatasetConfig

step1_output = HDFSOutputDatasetConfig(destination=(datastore, "test")).register_on_complete(name="registered_dataset")
```
-In this case, the data would be stored in the `datastore` in a file called `test` and would be available within the machine learning workspace as a `Dataset` with the name `registered_dataset`.
+Here, the `datastore` would store the data in a file named `test`. The data would be available within the machine learning workspace as a `Dataset` with the name `registered_dataset`.
-In addition to data, a pipeline step may have per-step Python dependencies. Individual `SynapseSparkStep` objects can specify their precise Azure Synapse Apache Spark configuration, as well. This is shown in the following code, which specifies that the `azureml-core` package version must be at least `1.20.0`. (As mentioned previously, this requirement for `azureml-core` is needed to use a `FileDataset` as an input.)
+In addition to data, a pipeline step can have per-step Python dependencies. Individual `SynapseSparkStep` objects can specify their precise Azure Synapse Apache Spark configuration as well. To show this, the following code sample specifies that the `azureml-core` package version must be at least `1.20.0`. As mentioned previously, this requirement for `azureml-core` is needed to use a `FileDataset` as an input.
```python
from azureml.core.environment import Environment

step_1 = SynapseSparkStep(name = 'synapse-spark',
                          environment = env)
```
-The above code specifies a single step in the Azure machine learning pipeline. This step's `environment` specifies a specific `azureml-core` version and could add other conda or pip dependencies as necessary.
+This code specifies a single step in the Azure Machine Learning pipeline. The `environment` value of this code sets a specific `azureml-core` version, and the code can add other conda or pip dependencies as needed.
-The `SynapseSparkStep` will zip and upload from the local computer the subdirectory `./code`. That directory will be recreated on the compute server and the step will run the file `dataprep.py` from that directory. The `inputs` and `outputs` of that step are the `step1_input1`, `step1_input2`, and `step1_output` objects previously discussed. The easiest way to access those values within the `dataprep.py` script is to associate them with named `arguments`.
+The `SynapseSparkStep` zips and uploads the `./code` subdirectory from the local computer. That directory is recreated on the compute server, and the step runs the file `dataprep.py` from that directory. The `inputs` and `outputs` of that step are the `step1_input1`, `step1_input2`, and `step1_output` objects discussed earlier. The easiest way to access those values within the `dataprep.py` script is to associate them with named `arguments`.
The next set of arguments to the `SynapseSparkStep` constructor controls Apache Spark. The `compute_target` is the `'link1-spark01'` that we attached as a compute target previously. The other parameters specify the memory and cores we'd like to use.
-The sample notebook uses the following code for `dataprep.py`:
+The sample notebook uses this code for `dataprep.py`:
```python
import os

sdf.coalesce(1).write\
    .csv(args.output_dir)
```
-This "data preparation" script doesn't do any real data transformation, but illustrates how to retrieve data, convert it to a spark dataframe, and how to do some basic Apache Spark manipulation. You can find the output in Azure Machine Learning Studio by opening the child job, choosing the **Outputs + logs** tab, and opening the `logs/azureml/driver/stdout` file, as shown in the following figure.
+This "data preparation" script doesn't do any real data transformation, but it shows how to retrieve data, convert it to a Spark dataframe, and how to do some basic Apache Spark manipulation. To find the output in Azure Machine Learning studio, open the child job, choose the **Outputs + logs** tab, and open the `logs/azureml/driver/stdout` file, as shown in this screenshot:
:::image type="content" source="media/how-to-use-synapsesparkstep/synapsesparkstep-stdout.png" alt-text="Screenshot of Studio showing stdout tab of child job":::

## Use the `SynapseSparkStep` in a pipeline
-The following example uses the output from the `SynapseSparkStep` created in the [previous section](#create-a-synapsesparkstep-that-uses-the-linked-apache-spark-pool). Other steps in the pipeline may have their own unique environments and run on different compute resources appropriate to the task at hand. The sample notebook runs the "training step" on a small CPU cluster:
+The next example uses the output from the `SynapseSparkStep` created in the [previous section](#create-a-synapsesparkstep-that-uses-the-linked-apache-spark-pool). Other steps in the pipeline might have their own unique environments and run on different compute resources appropriate to the task at hand. The sample notebook runs the "training step" on a small CPU cluster:
```python
from azureml.core.compute import AmlCompute

step_2 = PythonScriptStep(script_name="train.py",
                          allow_reuse=False)
```
-The code above creates the new compute resource if necessary. Then, the `step1_output` result is converted to input for the training step. The `as_download()` option means that the data will be moved onto the compute resource, resulting in faster access. If the data was so large that it wouldn't fit on the local compute hard drive, you would use the `as_mount()` option to stream the data via the FUSE filesystem. The `compute_target` of this second step is `'cpucluster'`, not the `'link1-spark01'` resource you used in the data preparation step. This step uses a simple program `train.py` instead of the `dataprep.py` you used in the previous step. You can see the details of `train.py` in the sample notebook.
+This code creates the new compute resource if necessary. Then, the `step1_output` result is converted to input for the training step. The `as_download()` option means that the data is moved onto the compute resource, resulting in faster access. If the data was so large that it wouldn't fit on the local compute hard drive, you'd need to use the `as_mount()` option to stream the data with the FUSE filesystem. The `compute_target` of this second step is `'cpucluster'`, not the `'link1-spark01'` resource you used in the data preparation step. This step uses a simple program `train.py` instead of the `dataprep.py` you used in the previous step. You can see the details of `train.py` in the sample notebook.
-Once you've defined all of your steps, you can create and run your pipeline.
+After you define all of your steps, you can create and run your pipeline.
```python
from azureml.pipeline.core import Pipeline

pipeline = Pipeline(workspace=ws, steps=[step_1, step_2])
pipeline_run = pipeline.submit('synapse-pipeline', regenerate_outputs=True)
```
-The above code creates a pipeline consisting of the data preparation step on Apache Spark pools powered by Azure Synapse Analytics (`step_1`) and the training step (`step_2`). Azure calculates the execution graph by examining the data dependencies between the steps. In this case, there's only a straightforward dependency that `step2_input` necessarily requires `step1_output`.
+This code creates a pipeline consisting of the data preparation step on Apache Spark pools, powered by Azure Synapse Analytics (`step_1`) and the training step (`step_2`). Azure examines the data dependencies between the steps to calculate the execution graph. In this case, there's only a straightforward dependency that `step2_input` necessarily requires `step1_output`.
-The call to `pipeline.submit` creates, if necessary, an Experiment called `synapse-pipeline` and asynchronously begins a Job within it. Individual steps within the pipeline are run as Child Jobs of this main job and can be monitored and reviewed in the Experiments page of Studio.
+The `pipeline.submit` call creates, if necessary, an Experiment named `synapse-pipeline`, and asynchronously starts a Job within it. Individual steps within the pipeline run as Child Jobs of this main job, and you can monitor and review them on the Experiments page of Studio.
## Next steps
-* [Publish and track machine learning pipelines](how-to-deploy-pipelines.md)
+* [Publish and track machine learning pipelines](how-to-deploy-pipelines.md)
* [Monitor Azure Machine Learning](../monitor-azure-machine-learning.md)
-* [Use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md)
+* [Use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md)
managed-instance-apache-cassandra Dba Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/dba-commands.md
The `cassandra-reset-password` command lets a user change their password for the
```azurecli-interactive
az managed-cassandra cluster invoke-command --resource-group <rg> --cluster-name <cluster> --host <ip of data node> --command-name cassandra-reset-password --arguments password="<password>"
```
+> [!IMPORTANT]
+> The password is URL encoded (UTF-8) when it is passed into this command, meaning the following rules apply:
+>
+> * The alphanumeric characters "a" through "z", "A" through "Z", and "0" through "9" remain the same.
+> * The special characters ".", "-", "*", and "_" remain the same.
+> * The space character " " is converted into a plus sign "+".
+> * All other characters are unsafe and are first converted into one or more bytes using an encoding scheme. Each byte is then represented by the three-character string "%xy", where xy is the two-digit hexadecimal representation of the byte.
The `cassandra-reset-auth-replication` command lets a user change the schema for the Cassandra user. Separate the datacenter names with spaces.

```azurecli-interactive
az managed-cassandra cluster invoke-command --resource-group <rg> --cluster-name <cluster> --host <ip of data node> --command-name cassandra-reset-auth-replication --arguments password="<datacenters>"
```
+> [!IMPORTANT]
+> The datacenters are URL encoded (UTF-8) when they are passed into this command, meaning the following rules apply:
+>
+> * The alphanumeric characters "a" through "z", "A" through "Z", and "0" through "9" remain the same.
+> * The special characters ".", "-", "*", and "_" remain the same.
+> * The space character " " is converted into a plus sign "+".
+> * All other characters are unsafe and are first converted into one or more bytes using an encoding scheme. Each byte is then represented by the three-character string "%xy", where xy is the two-digit hexadecimal representation of the byte.
The `sstable-tree` command lets a user see their sstables.

```azurecli-interactive
az managed-cassandra cluster invoke-command --resource-group <rg> --cluster-name <cluster> --host <ip of data node> --command-name sstable-tree
```
mariadb Whats Happening To Mariadb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/whats-happening-to-mariadb.md
A. Unfortunately, we don't plan to support Azure Database for MariaDB beyond the
**Q. How do I manage my reserved instances for MariaDB?**
-A. Since MariaDB service is on deprecation path you will not be able to purchase new MariaDB reserved instances. For any existing reserved instances, you will continue to use the benefits of your reserved instances until the September, 1 2025 when MariaDB service will no longer be available.
+A. Since the MariaDB service is on the deprecation path, you can't purchase new MariaDB reserved instances. For any existing reserved instances, you continue to receive their benefits until September 19, 2025, when the MariaDB service will no longer be available. You can exchange your existing MariaDB reservations for MySQL reservations.
**Q. After the Azure Database for MariaDB retirement announcement, what if I still need to create a new MariaDB server to meet my business needs?**
migrate Migrate Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix.md
Europe | North Europe or West Europe
France | France Central Germany | Germany West Central India | Central India or South India
+Italy | Italy North
Japan | Japan East or Japan West Jio India | Jio India West Korea | Korea Central
mysql February 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/february-2024.md
We're pleased to announce the February 2024 maintenance for Azure Database for MySQL Flexible Server. This maintenance mainly focuses on known issue fixes, underlying OS upgrades, and vulnerability patching.

## Engine version changes
-There will be no engine version changes in this maintenance update.
+ - All existing 5.7.42 engine version servers upgrade to engine version 5.7.44.
+ - All existing 8.0.34 engine version servers upgrade to engine version 8.0.35.
+
+To check your engine version, run the `SELECT VERSION();` command at the MySQL prompt.
## Features

There will be no new features in this maintenance update.
mysql Migrate External Mysql Import Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-external-mysql-import-cli.md
- mode-api ms.devlang: azurecli
-# Migrate MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server using Azure Database for MySQL Import CLI
+# Migrate MySQL on-premises or Virtual Machine (VM) workload to Azure Database for MySQL - Flexible Server using Azure Database for MySQL Import CLI (Public Preview)
-Azure Database for MySQL Import enables you to migrate your MySQL on-premises or Virtual Machine (VM) workload seamlessly to Azure Database for MySQL - Flexible Server. It uses a user-provided physical backup file and restores the source server's physical data files to the target server offering a simple and fast migration path. Post Import operation, you can take advantage of the benefits of Flexible Server, including better price & performance, granular control over database configuration, and custom maintenance windows.
+Azure Database for MySQL Import for external migrations (Public Preview) enables you to migrate your MySQL on-premises or Virtual Machine (VM) workload seamlessly to Azure Database for MySQL - Flexible Server. It uses a user-provided physical backup file and restores the source server's physical data files to the target server, offering a simple and fast migration path. After the import operation, you can take advantage of the benefits of Flexible Server, including better price and performance, granular control over database configuration, and custom maintenance windows.
Based on user-inputs, it takes up the responsibility of provisioning your target Flexible Server and then restoring the user-provided physical backup of the source server stored in the Azure Blob storage account to the target Flexible Server instance.
nat-gateway Troubleshoot Nat Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/troubleshoot-nat-connectivity.md
Title: Troubleshoot Azure NAT Gateway connectivity
-description: Troubleshoot connectivity issues with a NAT gateway.
+
+description: Learn how to troubleshoot connectivity issues and possible causes and solutions for Azure NAT Gateway.
Previously updated : 04/24/2023 Last updated : 02/20/2024
+#Customer intent: As a customer, I want to troubleshoot and resolve common outbound connectivity issues with my NAT gateway, and learn best practices for designing applications to use outbound connections efficiently.
# Troubleshoot Azure NAT Gateway connectivity
This article provides guidance on how to troubleshoot and resolve common outboun
You observe a drop in the datapath availability of NAT gateway, which coincides with connection failures.

**Possible causes**
-* SNAT port exhaustion
-* Reached simultaneous SNAT connection limits
-* Connection timeouts
-**How to troubleshoot**
-* Check the [datapath availability metric](/azure/nat-gateway/nat-metrics#datapath-availability) to assess the health of the NAT gateway.
-* Check the SNAT Connection Count metric and [split the connection state](/azure/nat-gateway/nat-metrics#snat-connection-count) by attempted and failed connections. More than zero failed connections may indicate SNAT port exhaustion or reaching the SNAT connection count limit of NAT gateway.
-* Verify the [Total SNAT Connection Count metric](/azure/nat-gateway/nat-metrics#total-snat-connection-count) to ensure it is within limits. NAT gateway supports 50,000 simultaneous connections per IP address to a unique destination and up to 2 million connections in total. For more information, see [NAT Gateway Performance](/azure/nat-gateway/nat-gateway-resource#performance).
+* Source Network Address Translation (SNAT) port exhaustion.
+
+* Simultaneous SNAT connection limits.
+
+* Connection timeouts.
+
+**Troubleshooting steps**
+
+* Assess the health of the NAT gateway by checking the [datapath availability metric](/azure/nat-gateway/nat-metrics#datapath-availability).
+
+* Check the SNAT Connection Count metric and [split the connection state](/azure/nat-gateway/nat-metrics#snat-connection-count) by attempted and failed connections. One or more failed connections can indicate SNAT port exhaustion or reaching the SNAT connection count limit of NAT gateway.
+
+* Ensure the connection count metric is within limits by verifying the [Total SNAT Connection Count metric](/azure/nat-gateway/nat-metrics#total-snat-connection-count). NAT gateway supports 50,000 simultaneous connections per IP address to a unique destination and up to 2 million connections in total. For more information, see [NAT Gateway Performance](/azure/nat-gateway/nat-gateway-resource#performance).
+ * Check the [dropped packets metric](/azure/nat-gateway/nat-metrics#dropped-packets) for any packet drops that align with connection failures or high connection volume.
-* Adjust the [TCP idle timeout timer](./nat-gateway-resource.md#tcp-idle-timeout) settings as needed. An idle timeout timer set higher than the default (4 minutes) holds on to flows longer, and can create [extra pressure on SNAT port inventory](./nat-gateway-resource.md#timers).
+
+* Adjust the [Transmission Control Protocol (TCP) idle timeout timer](./nat-gateway-resource.md#tcp-idle-timeout) settings as needed. An idle timeout timer set higher than the default (4 minutes) holds on to flows longer, and can create [extra pressure on SNAT port inventory](./nat-gateway-resource.md#timers).
### Possible solutions for SNAT port exhaustion or hitting simultaneous connection limits

* Add public IP addresses to your NAT gateway up to a total of 16 to scale your outbound connectivity. Each public IP provides 64,512 SNAT ports and supports up to 50,000 simultaneous connections per unique destination endpoint for NAT gateway.

* Distribute your application environment across multiple subnets and provide a NAT gateway resource for each subnet.
-* Reduce the [TCP idle timeout timer](./nat-gateway-resource.md#idle-timeout-timers) to a lower value to free up SNAT port inventory earlier. The TCP idle timeout timer can't be set lower than 4 minutes.
+
+* Free up SNAT port inventory by reducing the [TCP idle timeout timer](./nat-gateway-resource.md#idle-timeout-timers) to a lower value. The TCP idle timeout timer can't be set lower than 4 minutes.
* Consider **[asynchronous polling patterns](/azure/architecture/patterns/async-request-reply)** to free up connection resources for other operations.

* Make connections to Azure PaaS services over the Azure backbone using [Private Link](/azure/private-link/private-link-overview). Private Link frees up SNAT ports for outbound connections to the internet.

* If your investigation is inconclusive, open a support case to [further troubleshoot](#more-troubleshooting-guidance).

>[!NOTE]
You observe a drop in the datapath availability of NAT gateway, which coincides
Use TCP keepalives or application layer keepalives to refresh idle flows and reset the idle timeout timer. For examples, see [.NET examples](/dotnet/api/system.net.servicepoint.settcpkeepalive).
-TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an ACK packet. The idle timeout timer is then reset on both sides of the connection.
+TCP keepalives only need to be enabled from one side of a connection in order to keep a connection alive from both sides. When a TCP keepalive is sent from one side of a connection, the other side automatically sends an acknowledge (ACK) packet. The idle timeout timer is then reset on both sides of the connection.
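As an illustrative sketch (the tuning constants are Linux-specific socket options, and the interval values are assumptions rather than service requirements), this Python snippet enables TCP keepalives on one side of a connection so probes refresh the flow well inside NAT gateway's 4-minute idle timeout:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Turn on TCP keepalive for this side of the connection; the peer
# answers each probe with an ACK, resetting idle timers on both sides.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: start probing after 120 s of idleness, probe
# every 30 s, give up after 4 failed probes. Illustrative values only;
# keep the first probe below your idle timeout (4 minutes by default).
if hasattr(socket, "TCP_KEEPIDLE"):
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 120)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 30)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 4)
```

On Windows and macOS the per-socket tuning constants differ, but `SO_KEEPALIVE` itself is portable.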
>[!Note] >Increasing the TCP idle timeout is a last resort and may not resolve the root cause of the issue. A long timeout can introduce delay and cause unnecessary low-rate failures when timeout expires.
-### Possible solutions for UDP connection timeouts
+### Possible solutions for User Datagram Protocol (UDP) connection timeouts
UDP idle timeout timers are set to 4 minutes and aren't configurable. Enable UDP keepalives for both directions in a connection flow to maintain long connections. When a UDP keepalive is enabled, it's only active for one direction in a connection. The connection can still go idle and time out on the other side of a connection. To prevent a UDP connection from idle time-out, UDP keepalives should be enabled for both directions in a connection flow.
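The both-directions requirement can be sketched with plain sockets. This is a minimal loopback illustration (the loopback addresses and 1-byte payload are assumptions for demonstration; in practice each peer sends on a timer shorter than the 4-minute idle timeout): each side sends its own keepalive datagram so neither direction of the flow goes idle.

```python
import socket

# Two UDP peers on the loopback interface stand in for the two
# sides of a connection flow through NAT gateway.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)

# Each peer sends its own keepalive; a keepalive from only one side
# leaves the reverse direction free to idle out.
a.sendto(b"\x00", b.getsockname())
b.sendto(b"\x00", a.getsockname())

print(b.recvfrom(16)[0] == b"\x00")  # b saw a's keepalive
print(a.recvfrom(16)[0] == b"\x00")  # a saw b's keepalive
a.close()
b.close()
```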
The datapath availability of NAT gateway drops but no failed connections are obs
Transient drop in datapath availability caused by noise in the datapath.
-**How to troubleshoot**
+**Troubleshooting steps**
-If you observe no impact on your outbound connectivity but see a drop in datapath availability, NAT gateway may be picking up noise in the datapath that shows as a transient drop.
+If you notice a drop in datapath availability without any effect on your outbound connectivity, it could be due to NAT gateway picking up transient noise in the datapath.
### Recommended alert setup
Set up an [alert for datapath availability drops](/azure/nat-gateway/nat-metrics
You observe no outbound connectivity on your NAT gateway.

**Possible causes**
-* NAT gateway misconfiguration
+
+* NAT gateway misconfiguration.
+ * Internet traffic is redirected away from NAT gateway and force-tunneled to a virtual appliance or to an on-premises destination with a VPN or ExpressRoute.
-* NSG rules are configured that block internet traffic
-* NAT gateway datapath availability is degraded
-* DNS misconfiguration
-**How to troubleshoot**
-* Check that NAT gateway is configured with at least one public IP address or prefix and attached to a subnet. NAT gateway isn't operational without a public IP and subnet attached. For more information, see [NAT gateway configuration basics](/azure/nat-gateway/troubleshoot-nat#nat-gateway-configuration-basics).
-* Check the routing table of the subnet attached to NAT gateway. Any 0.0.0.0/0 traffic being force-tunneled to an NVA, ExpressRoute, or VPN Gateway will take priority over NAT gateway. For more information, see [how Azure selects a route](/azure/virtual-network/virtual-networks-udr-overview#how-azure-selects-a-route).
-* Check if there are any NSG rules configured for the NIC on your virtual machine that blocks internet access.
+* Network Security Group (NSG) rules are configured that block internet traffic.
+
+* NAT gateway datapath availability is degraded.
+
+* Domain Name System (DNS) misconfiguration.
+
+**Troubleshooting steps**
+
+* Check that NAT gateway is configured with at least one public IP address or prefix and attached to a subnet. NAT gateway isn't operational until a public IP and a subnet are attached. For more information, see [NAT gateway configuration basics](/azure/nat-gateway/troubleshoot-nat#nat-gateway-configuration-basics).
+
+* Check the routing table of the subnet attached to NAT gateway. Any 0.0.0.0/0 traffic being force-tunneled to a Network Virtual Appliance (NVA), ExpressRoute, or VPN Gateway will take priority over NAT gateway. For more information, see [how Azure selects a route](/azure/virtual-network/virtual-networks-udr-overview#how-azure-selects-a-route).
+
+* Check if there are any NSG rules configured for the network interface on your virtual machine that blocks internet access.
* Check if the datapath availability of NAT gateway is in a degraded state. Refer to [connection failure troubleshooting guidance](#datapath-availability-drop-on-nat-gateway-with-connection-failures) if NAT gateway is in a degraded state.

* Check your DNS settings if DNS isn't resolving properly.

### Possible solutions for loss of outbound connectivity

* Attach a public IP address or prefix to NAT gateway. Also make sure that NAT gateway is attached to subnets from the same virtual network. [Validate that NAT gateway can connect outbound](/azure/nat-gateway/troubleshoot-nat#how-to-validate-connectivity).
-* Carefully consider your traffic routing requirements before making any changes to traffic routes for your virtual network. UDRs that send 0.0.0.0/0 traffic to a virtual appliance or virtual network gateway override NAT gateway. See [custom routes](/azure/virtual-network/virtual-networks-udr-overview#custom-routes) to learn more about how custom routes impact the routing of network traffic. To explore options for updating your traffic routes on your subnet routing table, see:
- * [Add a custom route](/azure/virtual-network/manage-route-table#create-a-route)
- * [Change a route](/azure/virtual-network/manage-route-table#change-a-route)
- * [Delete a route](/azure/virtual-network/manage-route-table#delete-a-route)
+
+* Carefully consider your traffic routing requirements before making any changes to traffic routes for your virtual network. User Defined Routes (UDRs) that send 0.0.0.0/0 traffic to a virtual appliance or virtual network gateway override NAT gateway. See [custom routes](/azure/virtual-network/virtual-networks-udr-overview#custom-routes) to learn more about how custom routes affect the routing of network traffic.
+
+ To explore options for updating your traffic routes on your subnet routing table, see:
+
+ * [Add a custom route](/azure/virtual-network/manage-route-table#create-a-route)
+
+ * [Change a route](/azure/virtual-network/manage-route-table#change-a-route)
+
+ * [Delete a route](/azure/virtual-network/manage-route-table#delete-a-route)
+ * Update NSG security rules that block internet access for any of your VMs. For more information, see [manage network security groups](/azure/virtual-network/manage-network-security-group?tabs=network-security-group-portal).
-* DNS not resolving correctly can happen for many reasons. Refer to the [DNS troubleshooting guide](/azure/dns/dns-troubleshoot) to help investigate why DNS resolution may be failing.
+
+* DNS not resolving correctly can happen for many reasons. Refer to the [DNS troubleshooting guide](/azure/dns/dns-troubleshoot) to help investigate why DNS resolution is failing.
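Before changing DNS server settings, it can help to confirm from the affected machine whether a name resolves at all. A minimal sketch (the hostname argument is a placeholder for the endpoint you're trying to reach):

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the OS resolver can turn hostname into an address."""
    try:
        socket.getaddrinfo(hostname, None)
        return True
    except socket.gaierror:
        return False

# "localhost" should always resolve; your target endpoint may not.
print(resolves("localhost"))
```

If this check fails for your destination, the problem is name resolution rather than NAT gateway itself.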
## NAT gateway public IP isn't used to connect outbound
You observe no outbound connectivity on your NAT gateway.
NAT gateway is deployed in your Azure virtual network but unexpected IP addresses are used for outbound connections.

**Possible causes**
-* NAT gateway misconfiguration
-* Active connection with another Azure outbound connectivity method such as Azure Load balancer or instance-level public IPs on virtual machines. Active connection flows continue to use the old public IP address that was assigned when the connection was established. When NAT gateway is deployed, new connections start using NAT gateway right away.
-* Private IPs are used to connect to Azure services by service endpoints or Private Link
-* Connections to storage accounts come from the same region as the VM you're making a connection from.
+
+* NAT gateway misconfiguration.
+
+* Active connection with another Azure outbound connectivity method such as Azure Load balancer or instance-level public IPs on virtual machines. Active connection flows continue to use the previous public IP address that was assigned when the connection was established. When NAT gateway is deployed, new connections start using NAT gateway right away.
+
+* Private IPs are used to connect to Azure services by service endpoints or Private Link.
+
+* Connections to storage accounts come from the same region as the virtual machine you're making a connection from.
* Internet traffic is being redirected away from NAT gateway and force-tunneled to an NVA or firewall.

**How to troubleshoot**

* Check that your NAT gateway has at least one public IP address or prefix associated and at least one subnet.
-* Check if any previous outbound connectivity method that you previously used before deploying NAT gateway, like a public Load balancer, is still deployed.
+
+* Verify if any previous outbound connectivity method, such as a public Load balancer, is still active after deploying NAT gateway.
* Check if connections being made to other Azure services are coming from a private IP address in your Azure virtual network.

* Check if you have [Private Link](/azure/private-link/manage-private-endpoint?tabs=manage-private-link-powershell#manage-private-endpoint-connections-on-azure-paas-resources) or [service endpoints](../virtual-network/virtual-network-service-endpoints-overview.md#logging-and-troubleshooting) enabled for connecting to other Azure services.
-* If connecting to Azure storage, check if your VM is in the same region as Azure storage.
-* Check if the public IP address being used to make connections is coming from another Azure service in your Azure virtual network, such as an NVA.
+
+* Ensure that your virtual machine is located in the same region as the Azure storage when making a storage connection.
++
+* Verify if the public IP address used for connections is originating from another Azure service within your Azure virtual network, such as a Network Virtual Appliance (NVA).
+ ### Possible solutions for NAT gateway public IP not used to connect outbound
-* Attach a public IP address or prefix to NAT gateway. Also make sure that NAT gateway is attached to subnets from the same virtual network. [Validate that NAT gateway can connect outbound](/azure/nat-gateway/troubleshoot-nat#how-to-validate-connectivity).
+
+* Attach a public IP address or prefix to NAT gateway. Ensure that NAT gateway is attached to subnets from the same virtual network. [Validate that NAT gateway can connect outbound](/azure/nat-gateway/troubleshoot-nat#how-to-validate-connectivity).
+ * Test and resolve issues with VMs holding on to old SNAT IP addresses from another outbound connectivity method by:
- * Ensure you establish a new connection and that existing connections aren't being reused in the OS or that the browser is caching the connections. For example, when using curl in PowerShell, make sure to specify the -DisableKeepalive parameter to force a new connection. If you're using a browser, connections may also be pooled.
- * It isn't necessary to reboot a virtual machine in a subnet configured to NAT gateway. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state is flushed, all connections begin using the NAT gateway resource's IP address(es). This behavior is a side effect of the virtual machine reboot and not an indicator that a reboot is required.
+
+ * Ensure you establish a new connection and that existing connections aren't being reused in the OS or that the browser is caching the connections. For example, when using curl in PowerShell, make sure to specify the -DisableKeepalive parameter to force a new connection. If you're using a browser, connections can also be pooled.
+
+ * It isn't necessary to reboot a virtual machine in a subnet configured to NAT gateway. However, if a virtual machine is rebooted, the connection state is flushed. When the connection state is flushed, all connections begin using the NAT gateway resource's IP address or addresses. This behavior is a side effect of the virtual machine reboot and not an indicator that a reboot is required.
+ * If you're still having trouble, [open a support case](#more-troubleshooting-guidance) for further troubleshooting.
+
+ * Custom routes directing 0.0.0.0/0 traffic to an NVA take precedence over NAT gateway for routing traffic to the internet. To have NAT gateway route traffic to the internet instead of the NVA, [remove the custom route](/azure/virtual-network/manage-route-table#delete-a-route) for 0.0.0.0/0 traffic going to the virtual appliance. The 0.0.0.0/0 traffic resumes using the default route to the internet and NAT gateway is used instead.
+
+> [!IMPORTANT]
-> Consider the routing requirements of your cloud architecture before making any changes to how traffic is routed.
+> Before making any changes to how traffic routes, carefully consider the routing requirements of your cloud architecture.
* Services deployed in the same region as an Azure storage account use private Azure IP addresses for communication. You can't restrict access to specific Azure services based on their public outbound IP address range. For more information, see [restrictions for IP network rules](/azure/storage/common/storage-network-security?tabs=azure-portal#restrictions-for-ip-network-rules).
-* Private Link and service endpoints use the private IP addresses of virtual machine instances in your virtual network to connect to Azure platform services instead of the public IP of NAT gateway. It's recommended to use Private Link to connect to other Azure services over the Azure backbone instead of over the internet with NAT gateway.
+* Private Link and service endpoints use the private IP addresses of virtual machine instances in your virtual network to connect to Azure platform services instead of the public IP of NAT gateway. Use Private Link to connect to other Azure services over the Azure backbone instead of over the internet with NAT gateway.
+
+>[!NOTE]
+>Private Link is the recommended option over service endpoints for private access to Azure-hosted services.
NAT gateway is deployed in your Azure virtual network but unexpected IP addresse
NAT gateway connections to internet destinations fail or time out.

**Possible causes**
+
+ * Firewall or other traffic management components at the destination.
+
+ * API rate limiting imposed by the destination side.
+
+ * Volumetric DDoS mitigations or transport layer traffic shaping.
-* The destination endpoint responds with fragmented packets
+
+* The destination endpoint responds with fragmented packets.
**How to troubleshoot**
+
+ * Validate connectivity to an endpoint in the same region or elsewhere for comparison.
+
+ * Conduct packet capture from source and destination sides.
+
+ * Look at [packet count](/azure/nat-gateway/nat-metrics#packets) at the source and the destination (if available) to determine how many connection attempts were made.
+
+ * Look at [dropped packets](/azure/nat-gateway/nat-metrics#dropped-packets) to see how many packets were dropped by NAT gateway.
+
+ * Analyze the packets. TCP packets with fragmented IP protocol packets indicate IP fragmentation. **NAT gateway does not support IP fragmentation**, so connections with fragmented packets fail.
-* Check that the NAT gateway public IP is allow listed at partner destinations with Firewalls or other traffic management components
+
+* Ensure that the NAT gateway public IP is listed as allowed at partner destinations with firewalls or other traffic management components.
### Possible solutions for connection failures at internet destination
-* Verify if NAT gateway public IP is allow listed at destination.
+
+* Verify NAT gateway public IP is listed as allowed at the destination.
+ * If you're running high-volume or high-transaction-rate tests, check whether reducing the rate reduces the occurrence of failures.
-* If changing rate impacts the rate of failures, check if API rate limits, or other constraints on the destination side might have been reached.
+
+* If reducing the rate of connections decreases the occurrence of failures, check if the destination reached its API rate limits or other constraints.
## Connection failures at FTP server for active or passive mode
NAT gateway connections to internet destinations fail or time out.
You see connection failures at an FTP server when using NAT gateway with active or passive FTP mode.

**Possible causes**
+
+ * Active FTP mode is enabled.
+
+ * Passive FTP mode is enabled and NAT gateway is using more than one public IP address.

### Possible solution for Active FTP mode
An alternative solution to active FTP mode is to use passive FTP mode instead. H
### Possible solution for Passive FTP mode
-In passive FTP mode, the client establishes connections on both the command and data channels. The client requests that the server listen on a port rather than try to establish a connection back to the client.
+In passive FTP mode, the client establishes connections on both the command and data channels. The client requests that the server answer on a port instead of trying to establish a connection back to the client.
-Outbound Passive FTP may not work for NAT gateway with multiple public IP addresses, depending on your FTP server configuration. When a NAT gateway with multiple public IP addresses sends traffic outbound, it randomly selects one of its public IP addresses for the source IP address. FTP may fail when data and control channels use different source IP addresses, depending on your FTP server configuration.
+Outbound Passive FTP doesn't work for NAT gateway with multiple public IP addresses, depending on your FTP server configuration. When a NAT gateway with multiple public IP addresses sends traffic outbound, it randomly selects one of its public IP addresses for the source IP address. FTP fails when data and control channels use different source IP addresses, depending on your FTP server configuration.
To prevent possible passive FTP connection failures, do the following steps:

1. Check that your NAT gateway is attached to a single public IP address rather than multiple IP addresses or a prefix.
-2. Make sure that the passive port range from your NAT gateway is allowed to pass any firewalls that may be at the destination endpoint.
+
+2. Make sure that the passive port range from your NAT gateway is allowed to pass any firewalls at the destination endpoint.
>[!NOTE]
>Reducing the number of public IP addresses on your NAT gateway reduces the SNAT port inventory available for making outbound connections and may increase the risk of SNAT port exhaustion. Consider your SNAT connectivity needs before removing public IP addresses from NAT gateway.
To prevent possible passive FTP connection failures, do the following steps:
**Scenario**
-Unable to connect outbound with NAT gateway on port 25 for SMTP traffic.
+Unable to connect outbound with NAT gateway on port 25 for Simple Mail Transfer Protocol (SMTP) traffic.
**Cause**
The Azure platform blocks outbound SMTP connections on TCP port 25 for deployed
### Recommended guidance for sending email
-It's recommended you use authenticated SMTP relay services to send email from Azure VMs or from Azure App Service. For more information, see [troubleshoot outbound SMTP connectivity problems](/azure/virtual-network/troubleshoot-outbound-smtp-connectivity).
+Use an authenticated SMTP relay service to send email from Azure VMs or from Azure App Service. For more information, see [troubleshoot outbound SMTP connectivity problems](/azure/virtual-network/troubleshoot-outbound-smtp-connectivity).
## More troubleshooting guidance

### Extra network captures
-If your investigation is inconclusive, open a support case for further troubleshooting and collect the following information for a quicker resolution. Choose a single virtual machine in your NAT gateway configured subnet to perform the following tests:
+If your investigation is inconclusive, open a support case for further troubleshooting and collect the following information for a quicker resolution. Choose a single virtual machine in your NAT gateway configured subnet and perform the following tests:
-* Use **`ps ping`** from one of the backend VMs within the virtual network to test the probe port response (example: **`ps ping 10.0.0.4:3389`**) and record results.
+* Test the probe port response using **PsPing** from one of the backend VMs within the virtual network and record results (example: **`psping 10.0.0.4:3389`**).
-* If no response is received in these ping tests, run a simultaneous Netsh trace on the backend VM, and the virtual network test VM while you run PsPing then stop the Netsh trace.
+* If no response is received in these ping tests, run a simultaneous `netsh` trace on the backend virtual machine, and the virtual network test virtual machine while you run PsPing then stop the `netsh` trace.
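The TCP connect test that PsPing performs can be approximated with a short, hypothetical Python sketch for quick checks from a VM. PsPing remains the recommended tool; the local listener below is only a stand-in target for demonstration:

```python
import socket

def probe_tcp_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True if the port answers, False on refusal/timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demonstration against a local listener (stands in for a backend VM's probe port).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # OS picks a free port
listener.listen(1)
open_port = listener.getsockname()[1]
print(probe_tcp_port("127.0.0.1", open_port))   # True: the port is listening
listener.close()
```

Record the result for each probe, just as you would with PsPing output.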
## Outbound connectivity best practices
-Azure monitors and operates its infrastructure with great care. However, transient failures can still occur from deployed applications, there's no guarantee that transmissions are lossless. NAT gateway is the preferred option to connect outbound from Azure deployments in order to ensure highly reliable and resilient outbound connectivity. In addition to using NAT gateway to connect outbound, use the guidance later in the article for how to ensure that applications are using connections efficiently.
+Azure monitors and operates its infrastructure with great care. However, transient failures can still occur from deployed applications, and there's no guarantee of lossless transmissions. NAT gateway is the preferred option for establishing highly reliable and resilient outbound connectivity from Azure deployments. For optimizing application connection efficiency, refer to the guidance later in the article.
-### Modify the application to use connection pooling
+### Use connection pooling
When you pool your connections, you avoid opening new network connections for calls to the same address and port. You can implement a connection pooling scheme in your application where requests are internally distributed across a fixed set of connections and reused when possible. This setup constrains the number of SNAT ports in use and creates a predictable environment. Connection pooling helps reduce latency and resource utilization and ultimately improve the performance of your applications. To learn more on pooling HTTP connections, see [Pool HTTP connections](/aspnet/core/performance/performance-best-practices#pool-http-connections-with-httpclientfactory) with HttpClientFactory.
-### Modify the application to reuse connections
+### Reuse connections
Rather than generating individual, atomic TCP connections for each request, configure your application to reuse connections. Connection reuse results in more performant TCP transactions and is especially relevant for protocols like HTTP/1.1, where connection reuse is the default. This reuse applies to other protocols that use HTTP as their transport such as REST.
-### Modify the application to use less aggressive retry logic
+### Use less aggressive retry logic
When SNAT ports are exhausted or application failures occur, aggressive or brute force retries without delay and back-off logic cause exhaustion to occur or persist. You can reduce demand for SNAT ports by using a less aggressive retry logic.
-Depending on the configured idle timeout, if retries are too aggressive, connections may not have enough time to close and release SNAT ports for reuse.
+Depending on the configured idle timeout, if retries are too aggressive, connections don't have enough time to close and release SNAT ports for reuse.
For extra guidance and examples, see [Retry pattern](../app-service/troubleshoot-intermittent-outbound-connection-errors.md).
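A hedged sketch of such retry logic with exponential backoff and jitter (the function name, delays, and caps below are illustrative, not taken from the linked guidance):

```python
import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=0.5, max_delay=8.0):
    """Run op(); on failure wait base_delay * 2**attempt (capped, jittered) before retrying."""
    for attempt in range(max_attempts):
        try:
            return op()
        except Exception:
            if attempt == max_attempts - 1:
                raise                       # out of attempts: surface the error
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.0))   # jitter avoids synchronized retries

# Demonstration: an operation that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.01))   # "ok" after two backoff waits
```

The growing delay gives idle connections time to close and release their SNAT ports before the next attempt.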
openshift Concepts Egress Lockdown https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/concepts-egress-lockdown.md
Depending on whether egress lockdown is enabled or disabled, you'll see one of t
- `Egress Lockdown Feature Enabled`
- `Egress Lockdown Feature Disabled`
+## Relation to storage lockdown
+
+Storage lockdown is another feature of Azure Red Hat OpenShift that enhances cluster security. Storage accounts created with the cluster are configured to restrict any public access. Exceptions are added for the Azure Red Hat OpenShift Resource Provisioner subnets as well as the subnet of the egress lockdown gateway.
+Cluster components that utilize this storage, for example, OpenShift Image Registry, rely on egress lockdown functionality instead of accessing the storage accounts directly.
+
## Next steps

For more information on controlling egress traffic on your Azure Red Hat OpenShift cluster, see [Control egress traffic for your Azure Red Hat OpenShift (ARO) cluster (preview)](howto-restrict-egress.md).
postgresql Best Practices Seamless Migration Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/best-practices-seamless-migration-single-to-flexible.md
A good place to begin is the quickstart to [Create an Azure Database for Postgre
To get an idea of the downtime required for migrating your server, we strongly recommend taking a **PITR (point in time restore)** of your single server and running it against the single to flex migration tool. Monitoring the **PITR** migration gives a good estimate of the required downtime. Additionally, if Read replicas (RR) or High Availability (HA) is used, they should be enabled or provisioned **after** the migration is complete. When the migration starts, there's a lot of data copied to the target. If HA or RR is enabled, every transaction has to be acknowledged and it increases the lag between the primary and the backups. The lag in turn impacts cost in terms of extra storage and time required to complete the migration and hence should be avoided. This precaution ensures that the migration process completes seamlessly.
-## Set up Online migration parameters
+## Set up Online migration (preview) parameters
-> [!NOTE]
-> For Online migrations using Single servers running PostgreSQL 9.5 and 9.6, we explicitly have to allow replication connection. To enable that, add a firewall entry to allowlist connection from target. Make sure the firewall rule name has `_replrule` suffix. The suffic isn't required for Single servers running PostgreSQL 10 and 11. **Online migrations preview** is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> [!NOTE]
+> For Online migrations using Single servers running PostgreSQL 9.5 and 9.6, we explicitly have to allow replication connections. To enable that, add a firewall entry to allowlist connections from the target. Make sure the firewall rule name has the `_replrule` suffix. The suffix isn't required for Single servers running PostgreSQL 10 and 11. **Online migrations preview** is currently available in all public clouds and in China regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
The following table lists the different tools available for performing the migra
| pg_dump and pg_restore | Offline | - Tried and tested tool that has been in use for a long time<br />- Suited for databases smaller than 10 GB | - Need prior knowledge of setting up and using this tool<br />- Slow when compared to other tools<br />- Significant downtime to your application. |

> [!NOTE]
-> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. **Online migrations preview** is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. **Online migrations preview** is currently available in all public clouds and in China regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="media\concepts-single-to-flexible\online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="media\concepts-single-to-flexible\online-migration-feature-switch.png":::
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
Note these important points for the command response:
- The migration moves to the `Succeeded` state as soon as the `Migrating Data` substate finishes successfully. If there's a problem at the `Migrating Data` substate, the migration moves into a `Failed` state.

> [!NOTE]
-> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. **Online migrations preview** is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+> The Single to Flex Migration tool is available in all Azure regions and currently supports **Offline** migrations. **Online migrations preview** is currently available in all public clouds and in China regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
postgresql How To Migrate Single To Flexible Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-portal.md
[!INCLUDE[applies-to-postgres-single-flexible-server](../includes/applies-to-postgresql-single-flexible-server.md)]

>[!NOTE]
-> Before you begin, it is highly recommended to go through some of the [best practices to sensure a seamless migration experience](best-practices-seamless-migration-single-to-flexible.md)
+> Before you begin, it is highly recommended to go through some of the [best practices to ensure a seamless migration experience](best-practices-seamless-migration-single-to-flexible.md)
You can migrate an instance of Azure Database for PostgreSQL – Single Server to Azure Database for PostgreSQL – Flexible Server by using the Azure portal. In this tutorial, we perform migration of a sample database from an Azure Database for PostgreSQL single server to a PostgreSQL flexible server using the Azure portal.
The first tab is **Setup**. Just in case you missed it, allowlist necessary exte
It's always a good practice to choose **Validate** or **Validate and Migrate** option to perform pre-migration validations before running the migration. To learn more about the pre-migration validation refer to this [documentation](./concepts-single-to-flexible.md#pre-migration-validations).
-**Migration mode** gives you the option to pick the mode for the migration. **Offline** is the default option. **Online migrations preview** is currently available in France Central, Germany West Central, North Europe, South Africa North, UAE North, all regions across Asia, Australia, UK and public US regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
+**Migration mode** gives you the option to pick the mode for the migration. **Offline** is the default option. **Online migrations preview** is currently available in all public clouds and in China regions. In other regions, Online migration can be enabled by the user at a subscription-level by registering for the **Online PostgreSQL migrations to Azure PostgreSQL Flexible server** preview feature as shown in the image.
:::image type="content" source="./media/concepts-single-to-flexible/online-migration-feature-switch.png" alt-text="Screenshot of online PostgreSQL migrations to Azure PostgreSQL Flexible server." lightbox="./media/concepts-single-to-flexible/online-migration-feature-switch.png":::
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
The following regions currently support availability zones:
| US Gov Virginia | West Europe | | | China North 3 |
| West US 2 | Sweden Central | | | |
| West US 3 | Switzerland North | | | |
-||Poland Central ||||
+| Mexico Central* | Poland Central ||||
+
\* To learn more about availability zones and available services support in these regions, contact your Microsoft sales or customer representative.

For the upcoming regions that will support availability zones, see [Azure geographies](https://azure.microsoft.com/global-infrastructure/geographies/).
reliability Reliability App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-app-service.md
Title: Reliability in Azure App Service description: Find out about reliability in Azure App Service -+
reliability Reliability Image Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-image-builder.md
Title: Reliability in Azure Image Builder description: Find out about reliability in Azure Image Builder -+
reliability Reliability Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-load-balancer.md
Title: Reliability in Azure Load Balancer description: Find out about reliability in Azure Load Balancer -+
reliability Reliability Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-traffic-manager.md
Title: Reliability in Azure Traffic Manager description: Learn about reliability in Azure Traffic Manager. -+
reliability Reliability Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machine-scale-sets.md
Title: Reliability in Azure Virtual Machine Scale Sets description: Learn about reliability in Azure Virtual Machine Scale Sets. -+
reliability Reliability Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-virtual-machines.md
Title: Reliability in Azure Virtual Machines description: Find out about reliability in Azure Virtual Machines -+
search Search Howto Index Azure Data Lake Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-azure-data-lake-storage.md
- ignite-2023
 Previously updated : 03/22/2023
 Last updated : 02/19/2024

# Index data from Azure Data Lake Storage Gen2
In a [search index](search-what-is-an-index.md), add fields to accept the conten
1. [Create or update an index](/rest/api/searchservice/create-index) to define search fields that will store blob content and metadata:

   ```http
- POST https://[service name].search.windows.net/indexes?api-version=2020-06-30
{ "name" : "my-search-index", "fields": [
In a [search index](search-what-is-an-index.md), add fields to accept the conten
{ "name": "content", "type": "Edm.String", "searchable": true, "filterable": false }, { "name": "metadata_storage_name", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true }, { "name": "metadata_storage_size", "type": "Edm.Int64", "searchable": false, "filterable": true, "sortable": true },
- { "name": "metadata_storage_content_type", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true },
+ { "name": "metadata_storage_content_type", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true }
]
- }
}
```
Once the index and data source have been created, you're ready to create the ind
1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index:

   ```http
- POST https://[service name].search.windows.net/indexers?api-version=2020-06-30
{
- "name" : "my-adlsgen2-indexer,
+ "name" : "my-adlsgen2-indexer",
"dataSourceName" : "my-adlsgen2-datasource", "targetIndexName" : "my-search-index", "parameters": {
Once the index and data source have been created, you're ready to create the ind
"maxFailedItems": null, "maxFailedItemsPerBatch": null, "base64EncodeKeys": null,
- "configuration:" {
+ "configuration": {
"indexedFileNameExtensions" : ".pdf,.docx", "excludedFileNameExtensions" : ".png,.jpeg", "dataToExtract": "contentAndMetadata",
An indexer runs automatically when it's created. You can prevent this by setting
To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request:

```http
-GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2023-11-01
Content-Type: application/json
api-key: [admin key]
```
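For example, the status request above could be assembled in Python as follows (the service name, indexer name, and admin key are placeholders; this is a sketch using the standard library, not an official SDK call):

```python
import json
import urllib.request

def build_indexer_status_request(service: str, indexer: str, api_key: str,
                                 api_version: str = "2023-11-01") -> urllib.request.Request:
    """Build the Get Indexer Status request shown above."""
    url = (f"https://{service}.search.windows.net/indexers/"
           f"{indexer}/status?api-version={api_version}")
    return urllib.request.Request(url, headers={
        "Content-Type": "application/json",
        "api-key": api_key,
    })

req = build_indexer_status_request("myservice", "myindexer", "<admin key>")
print(req.full_url)
# https://myservice.search.windows.net/indexers/myindexer/status?api-version=2023-11-01

# Sending it requires a real service and key:
# with urllib.request.urlopen(req) as resp:
#     status = json.load(resp)
```

The JSON response can then be inspected for `lastResult.status`, `itemsProcessed`, and `itemsFailed` as shown in the sample output below.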
The response includes status and the number of items processed. It should look s
"lastResult": { "status":"success", "errorMessage":null,
- "startTime":"2022-02-21T00:23:24.957Z",
- "endTime":"2022-02-21T00:36:47.752Z",
+ "startTime":"2024-02-21T00:23:24.957Z",
+ "endTime":"2024-02-21T00:36:47.752Z",
"errors":[], "itemsProcessed":1599501, "itemsFailed":0,
The response includes status and the number of items processed. It should look s
{ "status":"success", "errorMessage":null,
- "startTime":"2022-02-21T00:23:24.957Z",
- "endTime":"2022-02-21T00:36:47.752Z",
+ "startTime":"2024-02-21T00:23:24.957Z",
+ "endTime":"2024-02-21T00:36:47.752Z",
"errors":[], "itemsProcessed":1599501, "itemsFailed":0,
Execution history contains up to 50 of the most recently completed executions, w
Errors that commonly occur during indexing include unsupported content types, missing content, or oversized blobs.
-By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an audio file). You could use the "excludedFileNameExtensions" parameter to skip certain content types. However, you might want to indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
+By default, the blob indexer stops as soon as it encounters a blob with an unsupported content type (for example, an audio file). You could use the "excludedFileNameExtensions" parameter to skip certain content types. However, you might want indexing to proceed even if errors occur, and then debug individual documents later. For more information about indexer errors, see [Indexer troubleshooting guidance](search-indexer-troubleshooting.md) and [Indexer errors and warnings](cognitive-search-common-errors-warnings.md).
There are five indexer properties that control the indexer's response when errors occur.

```http
-PUT /indexers/[indexer name]?api-version=2020-06-30
+PUT /indexers/[indexer name]?api-version=2023-11-01
{ "parameters" : { "maxFailedItems" : 10,
PUT /indexers/[indexer name]?api-version=2020-06-30
"failOnUnsupportedContentType" : false, "failOnUnprocessableDocument" : false, "indexStorageMetadataOnlyForOversizedDocuments": false
+ }
}
}
```
search Search Howto Index Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-mysql.md
Last updated 06/10/2022
# Index data from Azure Database for MySQL
-> [!IMPORTANT]
+> [!IMPORTANT]
> MySQL support is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Use a [preview REST API](search-api-preview.md) (2020-06-30-preview or later) to index your content. There is currently no portal support.
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Database for MySQL and makes it searchable in Azure AI Search.
-
-This article supplements [Creating indexers in Azure AI Search](search-howto-create-indexers.md) with information that's specific to indexing files in Azure Database for MySQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers:
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from Azure Database for MySQL and makes it searchable in Azure AI Search. Inputs to the indexer are your rows, in a single table or view. Output is a search index with searchable content in individual fields.
-- Create a data source-- Create an index-- Create an indexer
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to indexing from Azure Database for MySQL. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
When configured to include a high water mark and soft deletion, the indexer takes all changes, uploads, and deletes for your MySQL database. It reflects these changes in your search index. Data extraction occurs when you submit the Create Indexer request.
The data source definition specifies the data to index, credentials, and policie
1. [Create or Update Data Source](/rest/api/searchservice/create-data-source) specifies the definition. Be sure to use a preview REST API version (2020-06-30-Preview or later) when creating the data source. ```http
- POST https://[search service name].search.windows.net/datasources?api-version=2020-06-30-Preview
- Content-Type: application/json
- api-key: [admin key]
-
{ "name" : "hotel-mysql-ds", "description" : "[Description of MySQL data source]",
In a [search index](search-what-is-an-index.md), add search index fields that co
{ "name": "City", "type": "Edm.String", "searchable": false, "filterable": true, "sortable": true }, { "name": "Description", "type": "Edm.String", "searchable": false, "filterable": false, "sortable": false } ]
+}
``` If the primary key in the source table matches the document key (in this case, "ID"), the indexer imports the primary key as the document key.
Once the index and data source have been created, you're ready to create the ind
1. [Create or update an indexer](/rest/api/searchservice/create-indexer) by giving it a name and referencing the data source and target index: ```http
- POST https://[search service name].search.windows.net/indexers?api-version=2020-06-30
-
{ "name" : "hotels-mysql-idxr", "dataSourceName" : "hotels-mysql-ds",
An indexer runs automatically when it's created. You can prevent it from running
To monitor the indexer status and execution history, send a [Get Indexer Status](/rest/api/searchservice/get-indexer-status) request: ```http
-GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2020-06-30
+GET https://myservice.search.windows.net/indexers/myindexer/status?api-version=2023-11-01
Content-Type: application/json api-key: [admin key] ```
The response includes status and the number of items processed. It should look s
"lastResult": { "status":"success", "errorMessage":null,
- "startTime":"2022-02-21T00:23:24.957Z",
- "endTime":"2022-02-21T00:36:47.752Z",
+ "startTime":"2024-02-21T00:23:24.957Z",
+ "endTime":"2024-02-21T00:36:47.752Z",
"errors":[], "itemsProcessed":1599501, "itemsFailed":0,
The response includes status and the number of items processed. It should look s
{ "status":"success", "errorMessage":null,
- "startTime":"2022-02-21T00:23:24.957Z",
- "endTime":"2022-02-21T00:36:47.752Z",
+ "startTime":"2024-02-21T00:23:24.957Z",
+ "endTime":"2024-02-21T00:36:47.752Z",
"errors":[], "itemsProcessed":1599501, "itemsFailed":0,
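A caller can compute the run duration from the timestamps above. The following is an illustrative Python sketch over the sample payload, not part of any Azure SDK:

```python
import json
from datetime import datetime

# Sample payload mirroring the fields shown above (abbreviated).
payload = json.loads("""
{"lastResult": {"status": "success",
  "startTime": "2024-02-21T00:23:24.957Z",
  "endTime": "2024-02-21T00:36:47.752Z",
  "itemsProcessed": 1599501, "itemsFailed": 0}}
""")

last = payload["lastResult"]
# Parse the UTC timestamps ("Z" suffix) into aware datetimes.
parse = lambda s: datetime.strptime(s.replace("Z", "+0000"), "%Y-%m-%dT%H:%M:%S.%f%z")
elapsed = parse(last["endTime"]) - parse(last["startTime"])
print(last["status"], elapsed, last["itemsFailed"])
```

A monitoring script could alert when `itemsFailed` is nonzero or `elapsed` exceeds an expected window.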
In your MySQL database, the high water mark column must meet the following requi
The following example shows a [data source definition](#define-the-data-source) with a change detection policy: ```http
-POST https://[search service name].search.windows.net/datasources?api-version=2020-06-30-Preview
-Content-Type: application/json
-api-key: [admin key]
- {
- "name" : "[Data source name]",
- "type" : "mysql",
- "credentials" : { "connectionString" : "[connection string]" },
- "container" : { "name" : "[table or view name]" },
- "dataChangeDetectionPolicy" : {
- "@odata.type" : "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
- "highWaterMarkColumnName" : "[last_updated column name]"
- }
+{
+ "name" : "[Data source name]",
+ "type" : "mysql",
+ "credentials" : { "connectionString" : "[connection string]" },
+ "container" : { "name" : "[table or view name]" },
+ "dataChangeDetectionPolicy" : {
+ "@odata.type" : "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
+ "highWaterMarkColumnName" : "[last_updated column name]"
}
+}
``` > [!IMPORTANT]
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/troubleshoot-shared-private-link-resources.md
- ignite-2023 Previously updated : 02/22/2023 Last updated : 02/20/2024 # Troubleshoot issues with Shared Private Links in Azure AI Search A shared private link allows Azure AI Search to make secure outbound connections over a private endpoint when accessing customer resources in a virtual network. This article can help you resolve errors that might occur.
-Creating a shared private link is search service control plane operation. You can [create a shared private link](search-indexer-howto-access-private.md) using either the portal or a [Management REST API](/rest/api/searchmanagement/shared-private-link-resources/create-or-update). During provisioning, the state of the request is "Updating". After the operation completes successfully, status is "Succeeded". A private endpoint to the resource, along with any DNS zones and mappings, is created. This endpoint is used exclusively by your search service instance and is managed through Azure AI Search.
+Creating a shared private link is a search service control plane operation. You can [create a shared private link](search-indexer-howto-access-private.md) using either the portal or a [Management REST API](/rest/api/searchmanagement/shared-private-link-resources/create-or-update). During provisioning, the state of the request is "Updating". After the operation completes successfully, status is "Succeeded". A private endpoint to the resource, along with any DNS zones and mappings, is created. This endpoint is used exclusively by your search service instance and is managed through Azure AI Search.
![Steps involved in creating shared private link resources ](media\troubleshoot-shared-private-link-resources\shared-private-link-states.png)
site-recovery Azure To Azure How To Enable Zone To Zone Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-enable-zone-to-zone-disaster-recovery.md
Previously updated : 10/09/2023 Last updated : 02/20/2024
If you want to move VMs to an availability zone in a different region, review [t
Support for zone-to-zone disaster recovery is currently limited to the following regions:
-| Americas | Europe | Middle East | Africa | APAC |
-|--|--|-|--|--|
-| Canada Central | UK South | Qatar Central | South Africa North | Southeast Asia |
-| US Gov Virginia | West Europe | | | East Asia |
-| Central US | North Europe | UAE North | | Japan East |
-| South Central US | Germany West Central | | | Korea Central |
-| East US | Norway East | | | Australia East |
-| East US 2 | France Central | | | Central India |
-| West US 2 | Switzerland North | | | China North 3 |
-| West US 3 | Sweden Central (managed access) | | | |
-| Brazil South | Poland Central | | | |
-| | Italy North | | | |
+| Americas | Europe | Middle East | Africa | Asia Pacific |
+||||||
+| Brazil South | France Central | Israel Central | South Africa North | Australia East |
+| Canada Central | Germany West Central | Qatar Central | | Central India |
+| Central US | Italy North | UAE North | | China North 3 |
+| East US | North Europe | | | East Asia |
+| East US 2 | Norway East | | | Japan East |
+| South Central US | Poland Central | | | Korea Central |
+| US Gov Virginia | Sweden Central | | | Southeast Asia |
+| West US 2 | Switzerland North | | | |
+| West US 3 | UK South | | | |
+|| West Europe ||||
When you use zone-to-zone disaster recovery, Site Recovery doesn't move or store data out of the region in which it's deployed. You can select a Recovery Services vault from a different region if you want one. The Recovery Services vault contains metadata but no actual customer data.
+Learn more about [currently supported availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+ > [!Note] > Zone-to-zone disaster recovery isn't supported for VMs that have managed disks via zone-redundant storage (ZRS).
site-recovery Azure To Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md
This article summarizes support and prerequisites for disaster recovery of Azure
## Region support
-Azure Site Recovery allows you to perform global disaster recovery. You can replicate and recover VMs between any two Azure regions in the world. If you have concerns around data sovereignty, you may choose to limit replication within your specific geographic cluster. The various geographic clusters are as follows:
-
-**Geographic cluster** | **Azure regions**
| --
-America | Canada East, Canada Central, South Central US, West Central US, East US, East US 2, West US, West US 2, West US 3, Central US, North Central US
-Europe | UK West, UK South, North Europe, West Europe, South Africa West, South Africa North, Norway East, France Central, Switzerland North, Germany West Central, UAE North (UAE is treated as part of the Europe geo cluster)
-Asia | South India, Central India, West India, Southeast Asia, East Asia, Japan East, Japan West, Korea Central, Korea South, Qatar Central
-JIO | JIO India West<br/><br/>Replication can't be done between JIO and non-JIO regions for Virtual Machines present in JIO subscriptions. This is because JIO subscriptions can have resources only in JIO regions.
-Australia | Australia East, Australia Southeast, Australia Central, Australia Central 2
-Azure Government | US GOV Virginia, US GOV Iowa, US GOV Arizona, US GOV Texas, US DOD East, US DOD Central
-Germany | Germany Central, Germany Northeast
-China | China East, China North, China North2, China East2
-Brazil | Brazil South
-Restricted Regions reserved for in-country/region disaster recovery |Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers, UAE Central for UAE North customers.<br/><br/> To use restricted regions as your primary or recovery region, get yourselves allowlisted by raising a request [here](/troubleshoot/azure/general/region-access-request-process) for both source and target subscriptions.
-
->[!NOTE]
->
+Azure Site Recovery allows you to perform global disaster recovery. You can replicate and recover VMs between any two Azure regions in the world. If you have concerns around data sovereignty, you may choose to limit replication within your specific geographic cluster.
++
+For details on the supported geographic clusters, see [Azure products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=site-recovery&regions=all).
+
+> [!NOTE]
+> - **Support for restricted regions reserved for in-country/region disaster recovery:** Switzerland West reserved for Switzerland North, France South reserved for France Central, Norway West for Norway East customers, JIO India Central for JIO India West customers, Brazil Southeast for Brazil South customers, South Africa West for South Africa North customers, Germany North for Germany West Central customers, UAE Central for UAE North customers.<br/><br/> To use restricted regions as your primary or recovery region, get your source and target subscriptions allowlisted by raising a request [here](/troubleshoot/azure/general/region-access-request-process).
+> <br>
> - For **Brazil South**, you can replicate and fail over to these regions: Brazil Southeast, South Central US, West Central US, East US, East US 2, West US, West US 2, and North Central US.
> - Brazil South can only be used as a source region from which VMs can replicate using Site Recovery. It can't act as a target region. Note that if you fail over from Brazil South as a source region to a target, failback to Brazil South from the target region is supported. Brazil Southeast can only be used as a target region.
> - If the region in which you want to create a vault doesn't show, make sure your subscription has access to create resources in that region.
> - If you can't see a region within a geographic cluster when you enable replication, make sure your subscription has permissions to create VMs in that region.

## Cache storage

This table summarizes support for the cache storage account used by Site Recovery during replication.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Replication appliance / Configuration server** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
+[Rollup 71](https://support.microsoft.com/topic/update-rollup-71-for-azure-site-recovery-kb5035688-4df258c7-7143-43e7-9aa5-afeef9c26e1a) | 9.59.6930.1 | NA | 9.59.6930.1 | NA | NA
[Rollup 70](https://support.microsoft.com/topic/e94901f6-7624-4bb4-8d43-12483d2e1d50) | 9.57.6920.1 | 9.57.6911.1 / NA | 9.57.6911.1 | 5.23.1204.5 (VMware) | 2.0.9263.0 (VMware) [Rollup 69](https://support.microsoft.com/topic/update-rollup-69-for-azure-site-recovery-kb5033791-a41c2400-0079-4f93-b4a4-366660d0a30d) | NA | 9.56.6879.1 / NA | 9.56.6879.1 | 5.23.1101.10 (VMware) | 2.0.9263.0 (VMware) [Rollup 68](https://support.microsoft.com/topic/a81c2d22-792b-4cde-bae5-dc7df93a7810) | 9.55.6765.1 | 9.55.6765.1 / 5.1.8095.0 | 9.55.6765.1 | 5.23.0720.4 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9261.0 (VMware) & 2.0.9260.0 (Hyper-V) [Rollup 67](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | 9.54.6682.1 | 9.54.6682.1 / 5.1.8095.0 | 9.54.6682.1 | 5.23.0428.1 (VMware) & 5.1.8095.0 (Hyper-V) | 2.0.9261.0 (VMware) & 2.0.9260.0 (Hyper-V)
-[Rollup 66](https://support.microsoft.com/en-us/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 9.53.6615.1 | 9.53.6615.1 / 5.1.8095.0 | 9.53.6615.1 | 5.1.8103.0 (Modernized VMware), 5.1.8095.0 (Hyper-V) & 5.23.0210.5 (Classic VMware) | 2.0.9260.0
[Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (February 2024)
+
+### Update Rollup 71
+
+> [!Note]
+> - Version 9.59 has been released only for the Classic VMware/Physical to Azure scenario.
+> - Versions 9.58 and 9.59 haven't been released for the Azure to Azure and Modernized VMware to Azure replication scenarios.
+
+[Update rollup 71](https://support.microsoft.com/topic/update-rollup-71-for-azure-site-recovery-kb5035688-4df258c7-7143-43e7-9aa5-afeef9c26e1a) provides the following updates:
+
+**Update** | **Details**
+ |
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | No fixes added.
+**Azure VM disaster recovery** | No improvements added.
+**VMware VM/physical disaster recovery to Azure** | No improvements added.
+ ## Updates (December 2023)
site-recovery Vmware Azure Enable Replication Added Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-enable-replication-added-disk.md
+
+ Title: Enable replication for an added VMware virtual machine disk in Azure Site Recovery
+description: This article describes how to enable replication for a disk added to a VMware virtual machine that's enabled for disaster recovery with Azure Site Recovery
+++ Last updated : 02/23/2024++++
+# Enable replication for a disk added to a VMware virtual machine
+
+This article describes how to use [Azure Site Recovery](site-recovery-overview.md) to enable replication for data disks that are newly added to a VMware virtual machine that already has disaster recovery to an Azure region enabled.
+
+Enabling replication for a disk that you add to a virtual machine is now also supported for VMware virtual machines.
+
+**When you add a new disk to a VMware virtual machine that is replicating to an Azure region, the following occurs:**
+- Replication health for the virtual machine shows a warning, and a note in the portal informs you that one or more disks are available for protection.
+- If you enable protection for the added disks, the warning disappears after the initial replication of the disks.
+- If you choose not to enable replication for a disk, you can dismiss the warning.
+
+ ![Screenshot of `Enable replication` for an added disk.](./media/vmware-azure-enable-replication-added-disk/post-add-disk.png)
+
+## Before you start
+
+This article assumes that you've already set up disaster recovery for the VMware virtual machine to which you're adding the disk. If you haven't, follow the [VMware to Azure disaster recovery tutorial](vmware-azure-set-up-replication-tutorial-modernized.md).
+
+## Enable replication for an added disk
+
+To enable replication for an added disk, do the following:
+
+1. In the vault > **Replicated Items**, select the virtual machine to which you added the disk.
+2. In the protected item, select **Disks** > **Data disks**, and then select the data disk for which you want to enable replication (these disks have a **Not protected** status).
+
+
+ > [!NOTE]
+ > If the enable replication operation for this disk fails, you can resolve the issues and retry the operation.
+
+3. In **Disk Details**, select **Enable replication**.
+
+ ![Screenshot of the disk to enable replication.](./media/vmware-azure-enable-replication-added-disk/enable-replication.png)
+
+4. Confirm **Enable Replication**.
+
+ ![Screenshot of confirming enable replication for added disk.](./media/vmware-azure-enable-replication-added-disk/confirm-enable-replication.png)
++
+After the enable replication job runs and the initial replication finishes, the replication health warning for the disk is removed.
+
+## Next steps
+
+[Learn more](site-recovery-test-failover-to-azure.md) about running a test failover.
+
storage Storage Use Azurite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-use-azurite.md
export AZURITE_ACCOUNTS="account1:key1:key2;account2:key1:key2"
Azurite refreshes custom account names and keys from the environment variable every minute by default. With this feature, you can dynamically rotate the account key, or add new storage accounts without restarting Azurite. > [!NOTE]
-> The default `devstoreaccount1` storage account is disabled when you set custom storage accounts.
+> The default `devstoreaccount1` storage account is disabled when you set custom storage accounts. If you want to continue using `devstoreaccount1` after enabling custom storage accounts, you need to add it to the list of custom accounts and keys in the `AZURITE_ACCOUNTS` environment variable.
The account keys must be a base64 encoded string.
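If you generate the value programmatically, the base64 requirement is easy to miss. Here's a minimal Python sketch; the account name and raw key strings are placeholders, not real credentials:

```python
import base64

def azurite_accounts(accounts: dict) -> str:
    """Build an AZURITE_ACCOUNTS value from raw (unencoded) key strings.
    Illustrative helper; account names and keys here are placeholders."""
    b64 = lambda s: base64.b64encode(s.encode()).decode()
    return ";".join(
        f"{name}:{b64(key1)}:{b64(key2)}" for name, (key1, key2) in accounts.items()
    )

value = azurite_accounts({"account1": ("raw-key-1", "raw-key-2")})
print(value)  # set this string as the AZURITE_ACCOUNTS environment variable
```

Each account contributes a `name:key1:key2` segment, and segments are joined with semicolons, matching the format shown above.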
storage Storage Files How To Mount Nfs Shares https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-how-to-mount-nfs-shares.md
You have now mounted your NFS share.
If you want the NFS file share to automatically mount every time the Linux server or VM boots, create a record in the **/etc/fstab** file for your Azure file share. Replace `YourStorageAccountName` and `FileShareName` with your information. ```bash
-<YourStorageAccountName>.file.core.windows.net:/<YourStorageAccountName>/<FileShareName> /media/<YourStorageAccountName>/<FileShareName> nfs vers=4,minorversion=1,sec=sys 0 0
+<YourStorageAccountName>.file.core.windows.net:/<YourStorageAccountName>/<FileShareName> /media/<YourStorageAccountName>/<FileShareName> nfs _netdev,nofail,vers=4,minorversion=1,sec=sys 0 0
``` For more information, enter the command `man fstab` from the Linux command line.
stream-analytics Kafka Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md
Previously updated : 01/18/2024 Last updated : 02/20/2024 # Kafka output from Azure Stream Analytics (Preview)
Last updated 01/18/2024
Azure Stream Analytics allows you to connect directly to Kafka clusters as a producer to output data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The ASA Kafka output is backward compatible and supports all versions with the latest client release starting from version 0.10. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions. Supported compression types are None, Gzip, Snappy, LZ4, and Zstd.
+## Steps
+This article shows how to set up Kafka as an output from Azure Stream Analytics. There are six steps:
+
+1. Create an Azure Stream Analytics job.
+2. Configure your Azure Stream Analytics job to use managed identity if you are using the mTLS or SASL_SSL security protocols.
+3. Configure Azure Key Vault if you are using the mTLS or SASL_SSL security protocols.
+4. Upload certificates as secrets into Azure Key Vault.
+5. Grant Azure Stream Analytics permissions to access the uploaded certificate.
+6. Configure Kafka output in your Azure Stream Analytics job.
+
+> [!NOTE]
+> Depending on how your Kafka cluster is configured and the type of Kafka cluster you are using, some of the above steps may not apply to you. For example, if you are using Confluent Cloud Kafka, you don't need to upload a certificate to use the Kafka connector. If your Kafka cluster is inside a virtual network (VNET) or behind a firewall, you may have to configure your Azure Stream Analytics job to access your Kafka topic using a private link or a dedicated networking configuration.
++ ## Configuration The following table lists the property names and their description for creating a Kafka output:
The following table lists the property names and their description for creating
| Partition key | Azure Stream Analytics assigns partitions using round partitioning. | | Kafka event compression type | The compression type used for outgoing data streams, such as Gzip, Snappy, Lz4, Zstd, or None. | ++ ## Authentication and encryption You can use four types of security protocols to connect to your Kafka clusters:
stream-analytics Stream Analytics Define Kafka Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md
Previously updated : 01/18/2024 Last updated : 02/20/2024 # Stream data from Kafka into Azure Stream Analytics (Preview)
The following are the major use cases:
Azure Stream Analytics lets you connect directly to Kafka clusters to ingest data. The solution is low code and entirely managed by the Azure Stream Analytics team at Microsoft, allowing it to meet business compliance standards. The ASA Kafka input is backward compatible and supports all versions with the latest client release starting from version 0.10. Users can connect to Kafka clusters inside a VNET and Kafka clusters with a public endpoint, depending on the configurations. The configuration relies on existing Kafka configuration conventions. Supported compression types are None, Gzip, Snappy, LZ4, and Zstd. +
+## Steps
+This article shows how to set up Kafka as an input source for Azure Stream Analytics. There are six steps:
+
+1. Create an Azure Stream Analytics job.
+2. Configure your Azure Stream Analytics job to use managed identity if you are using mTLS or SASL_SSl security protocols.
+3. Configure Azure Key vault if you are using mTLS or SASL_SSl security protocols.
+4. Upload certificates as secrets into Azure Key vault.
+5. Grant Azure Stream Analytics permissions to access the uploaded certificate.
+6. Configure Kafka input in your Azure Stream Analytics job.
+
+> [!NOTE]
+> Depending on how your Kafka cluster is configured and the type of Kafka cluster you are using, some of the above steps may not apply to you. For example, if you are using Confluent Cloud Kafka, you don't need to upload a certificate to use the Kafka connector. If your Kafka cluster is inside a virtual network (VNET) or behind a firewall, you may have to configure your Azure Stream Analytics job to access your Kafka topic using a private link or a dedicated networking configuration.
++ ## Configuration The following table lists the property names and their description for creating a Kafka Input:
The following table lists the property names and their description for creating
| Security Protocol | How you want to connect to your Kafka cluster. Azure Stream Analytics supports mTLS, SASL_SSL, SASL_PLAINTEXT or None. | | Event Serialization format | The serialization format (JSON, CSV, Avro, Parquet, Protobuf) of the incoming data stream. | + ## Authentication and encryption
Use the following steps to grant special permissions to your stream analytics jo
### VNET integration
-If your Kafka is inside a virtual network (VNET) or behind a firewall, you must configure your Azure Stream Analytics job to access your Kafka topic.
+If your Kafka cluster is inside a virtual network (VNET) or behind a firewall, you may have to configure your Azure Stream Analytics job to access your Kafka topic using a private link or a dedicated networking configuration.
Visit the [Run your Azure Stream Analytics job in an Azure Virtual Network documentation](../stream-analytics/run-job-in-virtual-network.md) for more information.
update-manager Prerequsite For Schedule Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/prerequsite-for-schedule-patching.md
Title: Configure schedule patching on Azure VMs for business continuity
description: The article describes the new prerequisites to configure scheduled patching to ensure business continuity in Azure Update Manager. Previously updated : 02/03/2024 Last updated : 02/20/2024
In some instances, when you remove the schedule from a VM, there's a possibility
All VMs in a common [availability set](../virtual-machines/availability-set-overview.md) aren't updated concurrently.
-VMs in a common availability set are updated within Update Domain boundaries. VMs across multiple Update Domains aren't updated concurrently.
+VMs in a common availability set are updated within Update Domain boundaries. VMs across multiple Update Domains aren't updated concurrently.
+
+In scenarios where machines from the same availability set are being patched at the same time in different schedules, it's likely that they might not get patched or could potentially fail if the maintenance window is exceeded. To avoid this, we recommend that you either increase the maintenance window or split the machines belonging to the same availability set across multiple schedules at different times.
## Find VMs with associated schedules
virtual-machines Disks Shared https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-shared.md
description: Learn about sharing Azure managed disks across multiple Linux VMs.
Previously updated : 07/12/2023 Last updated : 02/20/2024
The following formulas explain how the performance attributes can be set, since
- DiskIOPSReadWrite (Read/write disk IOPS): - Has a baseline minimum IOPS of 100, for disks 100 GiB and smaller. - For disks larger than 100 GiB, the baseline minimum IOPS you can set increases by 1 per GiB. So the lowest you can set DiskIOPSReadWrite for a 101 GiB disk is 101 IOPS.
- - The maximum you can set this attribute is determined by the size of your disk, the formula is 300 * GiB, up to a maximum of 160,000.
+ - The maximum you can set this attribute is determined by the size of your disk, the formula is 300 * GiB, up to a maximum of 400,000.
- DiskMB/sReadWrite (Read/write disk throughput) - The minimum throughput (MB/s) of this attribute is determined by your IOPS, the formula is 4 KiB per second per IOPS. So if you had 101 IOPS, the minimum MB/s you can set is 1.
- - The maximum you can set this attribute is determined by the amount of IOPS you set, the formula is 256 KiB per second per IOPS, up to a maximum of 4,000 MB/s.
+ - The maximum you can set this attribute is determined by the amount of IOPS you set, the formula is 256 KiB per second per IOPS, up to a maximum of 10,000 MB/s.
- DiskIOPSReadOnly (Read-only disk IOPS) - The minimum baseline IOPS for this attribute is 100. For DiskIOPSReadOnly, the baseline doesn't increase with disk size.
- - The maximum you can set this attribute is determined by the size of your disk, the formula is 300 * GiB, up to a maximum of 160,000.
+ - The maximum you can set this attribute is determined by the size of your disk, the formula is 300 * GiB, up to a maximum of 400,000.
- DiskMB/sReadOnly (Read-only disk throughput) - The minimum throughput (MB/s) for this attribute is 1. For DiskMB/sReadOnly, the baseline doesn't increase with IOPS.
- - The maximum you can set this attribute is determined by the amount of IOPS you set, the formula is 256 KiB per second per IOPS, up to a maximum of 4,000 MB/s.
+ - The maximum you can set this attribute is determined by the amount of IOPS you set, the formula is 256 KiB per second per IOPS, up to a maximum of 10,000 MB/s.
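The formulas above can be sketched in code. This is an illustrative Python helper based on the stated limits, not part of any Azure SDK:

```python
def disk_iops_rw_bounds(size_gib: int) -> tuple[int, int]:
    """Min/max settable DiskIOPSReadWrite for a shared disk of size_gib GiB."""
    # Baseline minimum is 100 IOPS for disks of 100 GiB and smaller;
    # above 100 GiB it rises by 1 IOPS per GiB (so a 101 GiB disk -> 101 IOPS).
    min_iops = 100 if size_gib <= 100 else size_gib
    # Maximum is 300 IOPS per GiB, capped at 400,000.
    max_iops = min(300 * size_gib, 400_000)
    return min_iops, max_iops

def disk_iops_ro_bounds(size_gib: int) -> tuple[int, int]:
    """Min/max settable DiskIOPSReadOnly; the 100-IOPS baseline doesn't grow with size."""
    return 100, min(300 * size_gib, 400_000)
```

For example, a 101 GiB disk yields a settable read/write range of 101 to 30,300 IOPS, while a 4,096 GiB disk hits the 400,000 IOPS cap.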
#### Examples
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 02/07/2024 Last updated : 02/20/2024
The following table provides a comparison of the five disk types to help you dec
| **Disk type** | SSD | SSD |SSD | SSD | HDD | | **Scenario** | IO-intensive workloads such as [SAP HANA](workloads/sap/hana-vm-operations-storage.md), top tier databases (for example, SQL, Oracle), and other transaction-heavy workloads. | Production and performance-sensitive workloads that consistently require low latency and high IOPS and throughput | Production and performance sensitive workloads | Web servers, lightly used enterprise applications and dev/test | Backup, non-critical, infrequent access | | **Max disk size** | 65,536 GiB | 65,536 GiB |32,767 GiB | 32,767 GiB | 32,767 GiB |
-| **Max throughput** | 4,000 MB/s | 1,200 MB/s | 900 MB/s | 750 MB/s | 500 MB/s |
-| **Max IOPS** | 160,000 | 80,000 | 20,000 | 6,000 | 2,000, 3,000* |
+| **Max throughput** | 10,000 MB/s | 1,200 MB/s | 900 MB/s | 750 MB/s | 500 MB/s |
+| **Max IOPS** | 400,000 | 80,000 | 20,000 | 6,000 | 2,000, 3,000* |
| **Usable as OS Disk?** | No | No | Yes | Yes | Yes | \* Only applies to disks with performance plus (preview) enabled.
The following table provides a comparison of disk sizes and performance caps to
|8 |2,400 |600 | |16 |4,800 |1,200 | |32 |9,600 |2,400 |
-|64 |19,200 |4,000 |
-|128 |38,400 |4,000 |
-|256 |76,800 |4,000 |
-|512 |153,600 |4,000 |
-|1,024-65,536 (sizes in this range increasing in increments of 1 TiB) |160,000 |4,000 |
+|64 |19,200 |4,900 |
+|128 |38,400 |9,800 |
+|256 |76,800 |10,000 |
+|512 |153,600 |10,000 |
+|1,024-65,536 (sizes in this range increasing in increments of 1 TiB) |400,000 |10,000 |
### Ultra disk performance
Ultra disks are designed to provide low sub millisecond latencies and provisione
### Ultra disk IOPS
-Ultra disks support IOPS limits of 300 IOPS/GiB, up to a maximum of 160,000 IOPS per disk. To achieve the target IOPS for the disk, ensure that the selected disk IOPS are less than the VM IOPS limit.
-
-The current maximum limit for IOPS for a single VM in generally available sizes is 80,000. Ultra disks with greater IOPS can be used as shared disks to support multiple VMs.
+Ultra disks support IOPS limits of 300 IOPS/GiB, up to a maximum of 400,000 IOPS per disk. To achieve the target IOPS for the disk, ensure that the selected disk IOPS are less than the VM IOPS limit. Ultra disks with greater IOPS can be used as shared disks to support multiple VMs.
The minimum guaranteed IOPS per disk are 1 IOPS/GiB, with an overall baseline minimum of 100 IOPS. For example, if you provisioned a 4-GiB ultra disk, the minimum IOPS for that disk is 100, instead of four.
For more information about IOPS, see [Virtual machine and disk performance](disk
### Ultra disk throughput
-The throughput limit of a single ultra disk is 256-kB/s for each provisioned IOPS, up to a maximum of 4000 MB/s per disk (where MB/s = 10^6 Bytes per second). The minimum guaranteed throughput per disk is 4kB/s for each provisioned IOPS, with an overall baseline minimum of 1 MB/s.
+The throughput limit of a single ultra disk is 256-kB/s for each provisioned IOPS, up to a maximum of 10,000 MB/s per disk (where MB/s = 10^6 Bytes per second). The minimum guaranteed throughput per disk is 4kB/s for each provisioned IOPS, with an overall baseline minimum of 1 MB/s.
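The per-IOPS throughput math can be checked with a small sketch (illustrative only, using this article's 10^6-bytes-per-second definition of MB/s):

```python
KIB = 1024   # KiB in bytes
MB = 10**6   # this article defines MB/s as 10^6 bytes per second

def ultra_throughput_bounds_mbps(provisioned_iops: int) -> tuple[float, float]:
    """Min/max settable ultra disk throughput for a given provisioned IOPS."""
    # Minimum: 4 KiB/s per provisioned IOPS, with an overall floor of 1 MB/s.
    min_mbps = max(1.0, provisioned_iops * 4 * KIB / MB)
    # Maximum: 256 KiB/s per provisioned IOPS, capped at 10,000 MB/s.
    max_mbps = min(provisioned_iops * 256 * KIB / MB, 10_000.0)
    return min_mbps, max_mbps
```

For instance, 100 provisioned IOPS allows roughly 1 to 26 MB/s, while 160,000 IOPS already exceeds the 10,000 MB/s ceiling.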
You can adjust ultra disk IOPS and throughput performance at runtime without detaching the disk from the virtual machine. After a performance resize operation has been issued on a disk, it can take up to an hour for the change to take effect. Up to four performance resize operations are permitted during a 24-hour window.